
Developer Dashboard

Problem Statement

Design a developer dashboard for a payment platform like Stripe that provides real-time transaction monitoring, API logs, usage analytics, and account management. The dashboard should support both test and live modes with clear separation.


Requirements

Functional Requirements

  • Real-time transaction monitoring
  • API request logs with search and filtering
  • Usage analytics and metrics
  • Webhook management and debugging
  • API key management
  • Account and team settings
  • Test mode vs live mode toggle
  • Event timeline and debugging tools
  • Billing and usage reports

Non-Functional Requirements

  • Latency: Dashboard loads in < 2 seconds
  • Real-time: Transaction updates within 5 seconds
  • Scalability: Support 100K+ concurrent dashboard users
  • Availability: 99.9% uptime
  • Data retention: Logs retained for 30 days (configurable)

High-Level Architecture

[Figure: Developer Dashboard Architecture]


Core Features

1. Home Dashboard

[Figure: Home Dashboard]

2. Payments View

[Figure: Payments View]

3. API Logs

[Figure: API Logs]

4. Webhooks Management

[Figure: Webhooks Management]


Data Models

API Request Log

CREATE TABLE api_request_logs (
    id                  UUID NOT NULL,
    request_id          VARCHAR(50) NOT NULL,   -- Unique per request (a UNIQUE constraint must include the partition key)
    account_id          UUID NOT NULL,

    -- Request details
    method              VARCHAR(10) NOT NULL,
    path                VARCHAR(500) NOT NULL,
    query_params        JSONB,
    request_headers     JSONB,
    request_body        TEXT,

    -- Authentication
    api_key_id          UUID,
    api_key_type        VARCHAR(20),        -- secret, publishable
    ip_address          INET,

    -- Response details
    status_code         INT,
    response_headers    JSONB,
    response_body       TEXT,               -- Truncated if large
    error_type          VARCHAR(50),
    error_message       TEXT,

    -- Timing
    started_at          TIMESTAMP NOT NULL,
    duration_ms         INT,

    -- Mode
    livemode            BOOLEAN NOT NULL,

    -- Versioning
    api_version         VARCHAR(20),

    -- Idempotency
    idempotency_key     VARCHAR(255),
    idempotent_replayed BOOLEAN DEFAULT false,

    -- Partition key must be part of the primary key
    PRIMARY KEY (id, started_at)
)
PARTITION BY RANGE (started_at);  -- Partitioned by time for efficient queries and retention

CREATE INDEX idx_logs_account_time ON api_request_logs(account_id, started_at DESC);
CREATE INDEX idx_logs_request_id ON api_request_logs(request_id);
CREATE INDEX idx_logs_status ON api_request_logs(account_id, status_code, started_at DESC);
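Writes to this table happen off the request path so logging never adds latency to API calls. A minimal sketch of an async writer, assuming Spring's JdbcTemplate and an ApiRequestLog POJO mirroring the columns above (the ApiLogWriter name, queue size, flush interval, and the truncated column list are illustrative):

// Hypothetical async writer: API servers enqueue log entries and a background
// worker flushes them in batches (requires @EnableScheduling).
@Component
public class ApiLogWriter {

    private final BlockingQueue<ApiRequestLog> queue = new LinkedBlockingQueue<>(100_000);
    private final JdbcTemplate jdbcTemplate;

    public ApiLogWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Called from the API gateway / request filter after the response has been sent
    public void record(ApiRequestLog log) {
        queue.offer(log);   // Drop on overflow rather than slow down API traffic
    }

    @Scheduled(fixedDelay = 1000)
    public void flush() {
        List<ApiRequestLog> batch = new ArrayList<>();
        queue.drainTo(batch, 5_000);
        if (batch.isEmpty()) return;

        jdbcTemplate.batchUpdate(
            "INSERT INTO api_request_logs (id, request_id, account_id, method, path, " +
            "status_code, started_at, duration_ms, livemode) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
            batch.stream()
                .map(l -> new Object[]{ l.getId(), l.getRequestId(), l.getAccountId(), l.getMethod(),
                        l.getPath(), l.getStatusCode(), l.getStartedAt(), l.getDurationMs(), l.isLivemode() })
                .toList());
    }
}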

Analytics Aggregates (ClickHouse)

CREATE TABLE payment_metrics (
    account_id          UUID,
    date                Date,
    hour                UInt8,
    livemode            Bool,

    -- Volume metrics
    transaction_count   UInt64,
    gross_volume        Decimal(18, 2),
    net_volume          Decimal(18, 2),
    fee_volume          Decimal(18, 2),

    -- Status breakdown
    succeeded_count     UInt64,
    failed_count        UInt64,
    pending_count       UInt64,
    refunded_count      UInt64,

    -- Performance
    avg_response_time   Float32,
    p95_response_time   Float32,

    -- Currency
    currency            LowCardinality(String)
)
ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY (account_id, livemode, date, hour, currency);
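SummingMergeTree collapses rows that share the same ORDER BY key at merge time, so the write path can simply insert one row per payment event and let ClickHouse roll them up into hourly totals. A hedged sketch of that rollup, assuming a Kafka stream of payment events and a thin ClickHouseClient wrapper with an insert helper (PaymentEvent, the topic name, and the insert method are illustrative):

// Hypothetical rollup consumer: each payment event becomes one row keyed by
// (account_id, livemode, date, hour, currency); ClickHouse sums the metric
// columns of rows sharing that key, so no read-modify-write is needed.
@Component
public class PaymentMetricsRollup {

    private final ClickHouseClient clickHouse;   // Same thin wrapper used by AnalyticsService below

    public PaymentMetricsRollup(ClickHouseClient clickHouse) {
        this.clickHouse = clickHouse;
    }

    @KafkaListener(topics = "payment-events")
    public void onPaymentEvent(PaymentEvent event) {
        clickHouse.insert("payment_metrics", Map.ofEntries(
            Map.entry("account_id", event.getAccountId()),
            Map.entry("livemode", event.isLivemode()),
            Map.entry("date", event.getCreatedAt().toLocalDate()),
            Map.entry("hour", event.getCreatedAt().getHour()),
            Map.entry("currency", event.getCurrency()),
            Map.entry("transaction_count", 1),
            Map.entry("gross_volume", event.getAmount()),
            Map.entry("succeeded_count", event.isSucceeded() ? 1 : 0),
            Map.entry("failed_count", event.isFailed() ? 1 : 0)
        ));
    }
}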

Real-Time Updates

WebSocket Architecture

[Figure: Real-Time Event Flow]

WebSocket Server

// Registered with the WebSocket container via Spring's ServerEndpointExporter
@Component
@ServerEndpoint("/ws/dashboard/{accountId}")
public class DashboardWebSocketHandler {

    private final Map<String, Set<Session>> accountSessions = new ConcurrentHashMap<>();
    private final DashboardAuthService authService;
    private final ObjectMapper objectMapper;

    public DashboardWebSocketHandler(DashboardAuthService authService, ObjectMapper objectMapper) {
        this.authService = authService;
        this.objectMapper = objectMapper;
    }

    @OnOpen
    public void onConnect(Session session, @PathParam("accountId") String accountId) throws IOException {
        // Authenticate the connection using the token passed as a query parameter
        String token = session.getRequestParameterMap().get("token").get(0);
        if (!authService.validateDashboardToken(token, accountId)) {
            session.close(new CloseReason(CloseReason.CloseCodes.VIOLATED_POLICY, "Unauthorized"));
            return;
        }

        // Register the session under the account so events can be fanned out to it later
        accountSessions.computeIfAbsent(accountId, k -> ConcurrentHashMap.newKeySet())
            .add(session);
    }

    @OnClose
    public void onDisconnect(Session session, @PathParam("accountId") String accountId) {
        Set<Session> sessions = accountSessions.get(accountId);
        if (sessions != null) {
            sessions.remove(session);
        }
    }

    // Called when an event occurs; fans it out to every open session for the account
    public void broadcastToAccount(String accountId, DashboardEvent event) {
        Set<Session> sessions = accountSessions.get(accountId);
        if (sessions == null || sessions.isEmpty()) {
            return;
        }
        try {
            String message = objectMapper.writeValueAsString(event);
            for (Session session : sessions) {
                session.getAsyncRemote().sendText(message);
            }
        } catch (JsonProcessingException e) {
            // Serialization failures are dropped; the dashboard can always refetch via REST
        }
    }
}
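With 100K+ concurrent dashboard users, WebSocket sessions are spread across many server instances, so the payment pipeline should publish events to a shared channel rather than call a specific handler instance. A sketch using Redis pub/sub via Spring Data Redis; the dashboard-events channel name and the DashboardEvent.getAccountId() accessor are assumptions:

// Each WebSocket server instance subscribes to a shared Redis channel and forwards
// events to whatever sessions it happens to hold for that account.
@Component
public class DashboardEventSubscriber implements MessageListener {

    private final DashboardWebSocketHandler wsHandler;
    private final ObjectMapper objectMapper;

    public DashboardEventSubscriber(DashboardWebSocketHandler wsHandler,
                                    ObjectMapper objectMapper,
                                    RedisMessageListenerContainer container) {
        this.wsHandler = wsHandler;
        this.objectMapper = objectMapper;
        container.addMessageListener(this, new ChannelTopic("dashboard-events"));
    }

    @Override
    public void onMessage(Message message, byte[] pattern) {
        try {
            DashboardEvent event = objectMapper.readValue(message.getBody(), DashboardEvent.class);
            wsHandler.broadcastToAccount(event.getAccountId(), event);
        } catch (Exception e) {
            // Malformed events are skipped; clients reconcile via REST on reconnect
        }
    }
}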

Search & Filtering

Log Search (Elasticsearch)

@Service
public class LogSearchService {

    private final ElasticsearchClient esClient;

    public SearchResult searchLogs(LogSearchRequest request) throws IOException {
        BoolQuery.Builder query = new BoolQuery.Builder();

        // Must match account
        query.must(TermQuery.of(t -> t.field("account_id").value(request.getAccountId()))._toQuery());

        // Must match livemode
        query.must(TermQuery.of(t -> t.field("livemode").value(request.isLiveMode()))._toQuery());

        // Filter by date range
        query.filter(RangeQuery.of(r -> r
            .field("started_at")
            .gte(JsonData.of(request.getStartDate()))
            .lte(JsonData.of(request.getEndDate()))
        )._toQuery());

        // Optional filters
        if (request.getStatus() != null) {
            query.filter(TermQuery.of(t -> t.field("status_code").value(request.getStatus()))._toQuery());
        }

        if (request.getMethod() != null) {
            query.filter(TermQuery.of(t -> t.field("method").value(request.getMethod()))._toQuery());
        }

        if (request.getPath() != null) {
            query.filter(WildcardQuery.of(w -> w.field("path").value(request.getPath() + "*"))._toQuery());
        }

        // Full-text search on request/response body
        if (request.getSearchText() != null) {
            query.must(MultiMatchQuery.of(m -> m
                .query(request.getSearchText())
                .fields("request_id", "request_body", "response_body", "error_message")
            )._toQuery());
        }

        SearchResponse<ApiRequestLog> response = esClient.search(s -> s
            .index("api-logs-*")
            .query(query.build()._toQuery())
            .sort(SortOptions.of(so -> so.field(f -> f.field("started_at").order(SortOrder.Desc))))
            .from(request.getOffset())
            .size(request.getLimit()),
            ApiRequestLog.class
        );

        return SearchResult.from(response);
    }
}
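The api-logs-* index pattern implies time-based indices, which keeps the 30-day retention cheap: old indices are simply deleted (or rotated by ILM). A minimal sketch of the write side using the same esClient; the daily naming scheme is an assumption:

// Index each log into a dated index (e.g. api-logs-2024.05.20) so retention is
// just dropping indices older than the configured window.
public void indexLog(ApiRequestLog log) throws IOException {
    String index = "api-logs-" + DateTimeFormatter.ofPattern("yyyy.MM.dd")
            .format(log.getStartedAt().toLocalDate());

    esClient.index(i -> i
        .index(index)
        .id(log.getRequestId())
        .document(log));
}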

Analytics Service

Metrics Aggregation

@Service
public class AnalyticsService {

    private final ClickHouseClient clickHouse;

    public DashboardMetrics getDashboardMetrics(String accountId, DateRange range, boolean liveMode) {
        // Volume metrics
        String volumeQuery = """
            SELECT
                sum(transaction_count) as total_transactions,
                sum(gross_volume) as gross_volume,
                sum(succeeded_count) as succeeded,
                sum(failed_count) as failed,
                sum(succeeded_count) / sum(transaction_count) as success_rate
            FROM payment_metrics
            WHERE account_id = ?
            AND date BETWEEN ? AND ?
            AND livemode = ?
            """;

        // Time series for chart
        String timeSeriesQuery = """
            SELECT
                date,
                sum(gross_volume) as volume,
                sum(transaction_count) as count
            FROM payment_metrics
            WHERE account_id = ?
            AND date BETWEEN ? AND ?
            AND livemode = ?
            GROUP BY date
            ORDER BY date
            """;

        // Execute queries and combine results
        VolumeMetrics volume = clickHouse.queryOne(volumeQuery, VolumeMetrics.class,
            accountId, range.getStart(), range.getEnd(), liveMode);

        List<TimeSeriesPoint> timeSeries = clickHouse.queryList(timeSeriesQuery, TimeSeriesPoint.class,
            accountId, range.getStart(), range.getEnd(), liveMode);

        return DashboardMetrics.builder()
            .volume(volume)
            .timeSeries(timeSeries)
            .build();
    }
}

Test Mode vs Live Mode

Mode Switching

[Figure: Test Mode / Live Mode toggle]
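Test and live data belong to the same account but never mix: every record (payments, logs, metrics) carries the livemode flag, API keys are issued as separate test/live pairs, and the dashboard toggle only changes which value is injected into every query. A minimal sketch of that strict filtering, assuming a request-scoped mode resolved from the session (ModeContext, the session attribute name, and the interceptor are illustrative):

// Hypothetical thread-local holder for the current dashboard mode
public final class ModeContext {
    private static final ThreadLocal<Boolean> LIVEMODE = ThreadLocal.withInitial(() -> false);
    public static void set(boolean livemode) { LIVEMODE.set(livemode); }
    public static boolean get()              { return LIVEMODE.get(); }
    public static void clear()               { LIVEMODE.remove(); }
}

// Resolves the mode once per request so every repository query can append
// "AND livemode = ?"; test data can never leak into live views.
@Component
public class ModeInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        // "livemode" is persisted per user session and toggled from the dashboard UI
        Object mode = request.getSession().getAttribute("livemode");
        ModeContext.set(Boolean.TRUE.equals(mode));   // Default to test mode if unset
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        ModeContext.clear();   // Avoid leaking mode across pooled threads
    }
}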


Authentication & Authorization

Dashboard Authentication

@Component
public class DashboardAuthService {

    private final SessionStore sessionStore;
    private final UserService userService;
    private final AccountService accountService;

    // Dashboard uses session-based auth (not API keys)
    public AuthResult authenticate(String sessionToken) {
        Session session = sessionStore.get(sessionToken);

        if (session == null || session.isExpired()) {
            throw new UnauthorizedException("Session expired");
        }

        User user = userService.getUser(session.getUserId());
        Account account = accountService.getAccount(session.getAccountId());

        return AuthResult.builder()
            .user(user)
            .account(account)
            .permissions(user.getPermissions())
            .build();
    }

    // RBAC for dashboard features
    public boolean hasPermission(User user, Permission permission) {
        return user.getRoles().stream()
            .flatMap(role -> role.getPermissions().stream())
            .anyMatch(p -> p == permission || p == Permission.ADMIN);
    }
}

public enum Permission {
    VIEW_PAYMENTS,
    MANAGE_PAYMENTS,        // Refunds, captures
    VIEW_CUSTOMERS,
    MANAGE_CUSTOMERS,
    VIEW_API_LOGS,
    MANAGE_API_KEYS,
    MANAGE_WEBHOOKS,
    VIEW_ANALYTICS,
    MANAGE_TEAM,
    ADMIN
}
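As a usage example, a sensitive dashboard action can combine authenticate() and hasPermission() like this (the endpoint path, header name, ForbiddenException, and paymentService are illustrative):

// Illustrative guard: issuing a refund from the dashboard requires MANAGE_PAYMENTS
// (hasPermission treats ADMIN as a superset).
@PostMapping("/v1/dashboard/payments/{paymentId}/refund")
public RefundResponse refund(@PathVariable String paymentId,
                             @RequestHeader("X-Session-Token") String sessionToken) {
    AuthResult auth = dashboardAuthService.authenticate(sessionToken);
    if (!dashboardAuthService.hasPermission(auth.getUser(), Permission.MANAGE_PAYMENTS)) {
        throw new ForbiddenException("Missing permission: MANAGE_PAYMENTS");
    }
    return paymentService.refund(paymentId);
}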

Performance Optimization

Caching Strategy

@Service
public class DashboardCacheService {

    private final AnalyticsService analyticsService;
    private final ApiKeyService apiKeyService;

    public DashboardCacheService(AnalyticsService analyticsService, ApiKeyService apiKeyService) {
        this.analyticsService = analyticsService;
        this.apiKeyService = apiKeyService;
    }

    // Cache dashboard summary (short TTL, frequently accessed);
    // TTLs are configured per cache on the Redis cache manager, as shown below
    @Cacheable(value = "dashboard-summary", key = "#accountId + ':' + #liveMode")
    public DashboardSummary getDashboardSummary(String accountId, boolean liveMode) {
        return analyticsService.computeSummary(accountId, liveMode);
    }

    // Cache API key list (medium TTL, less frequent changes)
    @Cacheable(value = "api-keys", key = "#accountId")
    public List<ApiKey> getApiKeys(String accountId) {
        return apiKeyService.listKeys(accountId);
    }

    // Invalidate on changes
    @CacheEvict(value = "api-keys", key = "#accountId")
    public void onApiKeyChanged(String accountId) {
        // Cache entry evicted; the next read repopulates it
    }
}
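Spring's @Cacheable has no TTL attribute, so the per-cache expirations mentioned above (60 s for summaries, 5 min for API keys) are configured on the Redis cache manager, roughly like this:

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        return RedisCacheManager.builder(connectionFactory)
            // Dashboard summary: short TTL, refreshed frequently
            .withCacheConfiguration("dashboard-summary",
                RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(60)))
            // API key list: changes rarely, also evicted explicitly on change
            .withCacheConfiguration("api-keys",
                RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofMinutes(5)))
            .build();
    }
}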

Pagination & Lazy Loading

// Frontend: Virtual scrolling for large lists.
// api.get is assumed to return the parsed Stripe-style list body: { data: [...], has_more: boolean }
const TransactionList = () => {
    const [transactions, setTransactions] = useState([]);
    const [hasMore, setHasMore] = useState(true);
    const [cursor, setCursor] = useState(null);

    const loadMore = async () => {
        if (!hasMore) return;

        const response = await api.get('/v1/dashboard/transactions', {
            params: { limit: 50, starting_after: cursor }
        });

        setTransactions(prev => [...prev, ...response.data]);
        setCursor(response.data[response.data.length - 1]?.id);
        setHasMore(response.has_more);
    };

    return (
        <VirtualList
            items={transactions}
            onEndReached={loadMore}
            hasMore={hasMore}
        />
    );
};
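On the server side this pairs with Stripe-style cursor pagination: starting_after is the id of the last item the client has seen, and fetching limit + 1 rows answers has_more without a separate COUNT. A hedged sketch in Java (the repository method, currentAccountId(), and the ListResponse wrapper serializing to { data, has_more } are assumptions):

// Cursor pagination: fetch one extra row beyond the requested page size to
// determine has_more cheaply.
@GetMapping("/v1/dashboard/transactions")
public ListResponse<Transaction> listTransactions(
        @RequestParam(defaultValue = "50") int limit,
        @RequestParam(value = "starting_after", required = false) String startingAfter) {

    List<Transaction> rows = transactionRepository
        .findPageForAccount(currentAccountId(), ModeContext.get(), startingAfter, limit + 1);

    boolean hasMore = rows.size() > limit;
    List<Transaction> page = hasMore ? rows.subList(0, limit) : rows;
    return new ListResponse<>(page, hasMore);
}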

Technology Choices

  Component       Technology Options
  Frontend        React, Vue.js, Next.js
  API Layer       Node.js, Go, Java
  Real-time       WebSocket, Server-Sent Events
  Log Storage     Elasticsearch, Loki
  Analytics       ClickHouse, TimescaleDB
  Cache           Redis
  Search          Elasticsearch

Scalability Considerations

[Figure: Scaling Strategy]


Interview Discussion Points

  1. How do you handle large log volumes?
     Time-based partitioning, retention policies, tiered storage.

  2. How do you ensure real-time updates are reliable?
     WebSocket with reconnection, polling fallback, event deduplication.

  3. How do you handle search across millions of logs?
     Elasticsearch indexing, pre-aggregated metrics, pagination.

  4. How do you ensure data consistency between test and live modes?
     Separate flag in all tables, strict filtering, separate API keys.

  5. How do you optimize dashboard load time?
     Caching, lazy loading, CDN, code splitting, pre-computed aggregates.