Request Logs
Request Logs give you visibility into every call made through your Data Provider. They are essential for:
  • Debugging integration issues
  • Auditing consented data access
  • Monitoring performance and latency
  • Forwarding events into your SIEM or observability stack

Log Structure

Fiskil exports logs with the following structure. Note that this structure may evolve in future versions.
  • request_id — Request identifier (string)
  • consent_id — Consent ID / CDR arrangement ID (string)
  • fapi_interaction_id — FAPI interaction ID (string)
  • account_id — Canonical identifier for the account, if applicable (string)
  • customer_id — Identifier for the customer, if applicable (string)
  • app_name — ADR client name requesting data (string)
  • level — Severity level: “ERROR”, “WARN”, or “INFO” (string)
  • error_reason — Human-readable description of the error reason (string)
  • status_code — HTTP status code of the request (number)
  • route_pattern — Request route pattern for API endpoint accessed (string)
  • path — Request path for API endpoint accessed (string)
  • query — HTTP request query parameters (string)
  • method — HTTP request method (string)
  • latency_ms — Time taken for the request in milliseconds (number)
  • ip — IPv4 address of the request origin (string)
  • response_body_size — HTTP response body size in bytes (number)
  • timestamp — ISO 8601 timestamp of the request (string)
  • trace_id — Fiskil internal trace identifier (string)
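
As a concrete illustration, a single log entry following the structure above might look like the following. All values here are hypothetical, including the route, which is only an example of a CDR-style endpoint:

```json
{
    "request_id": "req-8f2d1c7a",
    "consent_id": "cdr-arrangement-4b21",
    "fapi_interaction_id": "93bb1a6e-0f44-4c2e-9d3a-7c5e2b1f8a90",
    "account_id": "acc-001",
    "customer_id": "cust-042",
    "app_name": "Example ADR App",
    "level": "INFO",
    "error_reason": "",
    "status_code": 200,
    "route_pattern": "/banking/accounts/{accountId}/balance",
    "path": "/banking/accounts/acc-001/balance",
    "query": "",
    "method": "GET",
    "latency_ms": 142,
    "ip": "203.0.113.10",
    "response_body_size": 512,
    "timestamp": "2024-05-01T09:30:00Z",
    "trace_id": "trace-5e7f9c"
}
```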

Viewing Logs in the Console

Access real-time request logs directly in the Fiskil Console for immediate debugging and monitoring.
The Console provides powerful filtering capabilities to help you find specific requests quickly:
  • Customer ID — Filter by specific customer identifiers
  • Consent ID — View logs for specific consent arrangements
  • Route — Filter by API endpoint patterns
  • Time range — Narrow down to specific time periods
  • HTTP status — Focus on errors, successes, or specific response codes
  • HTTP method — Filter by GET, POST, PUT, DELETE operations

Detailed Log View

Click on any individual log entry to expand and view comprehensive details including:
  • ADR application name and metadata
  • Internal trace ID for debugging
  • Associated account IDs
  • Request and response timing
  • Complete error context when applicable
Use the expanded log view to quickly identify integration issues and trace request flows through your system.

Log Drains

Log Drains automatically forward your Data Provider request logs to external observability and SIEM tools. This enables you to:
  • Integrate logs into your existing monitoring infrastructure
  • Set up automated alerting and analysis
  • Maintain centralized log storage and retention policies
  • Perform advanced analytics across your entire tech stack

How Log Drains Work

Configure Log Drains in the Console to create a real-time stream of log events from your Data Provider to your chosen destination.

Supported Destinations

Azure Monitor

Application Insights — Stream logs directly to Azure’s monitoring platform with full KQL query support.

Elasticsearch

Data Streams — Forward logs to your Elasticsearch cluster as structured data streams.
Need integration with a different observability tool? Contact support to discuss custom integrations.

Setting Up Log Drains

Azure Monitor Configuration

Logs are delivered to Azure via Application Insights. Steps:
1. Create a new application in Azure Monitor Application Insights
2. Copy the “Connection String”
3. Go to the Log Drain settings in the Fiskil Console
4. Click the Azure card
5. Enter your connection string
6. Save your log drain
It may take a few minutes for logs to appear in Azure. The log attributes are stored in the customDimensions field of each record and can be queried with KQL.
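
As a sketch of what querying these logs looks like, the following KQL finds recent server errors. It assumes the default Application Insights traces table and that the customDimensions keys mirror the log structure above; adjust names to match what you actually see in your resource:

```kusto
traces
| where timestamp > ago(1h)
| extend status_code = toint(customDimensions.status_code),
         latency_ms  = todouble(customDimensions.latency_ms)
| where status_code >= 500
| project timestamp, customDimensions.request_id, customDimensions.route_pattern, latency_ms
| order by timestamp desc
```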

Elasticsearch Configuration

Logs are delivered to Elasticsearch as a data stream. Authentication requires an API key.
1. Create an API key with the following privileges:
PUT /_security/api_key
{
    "name": "fiskil-ingest",
    "role_descriptors": {
        "logs_fiskil_holder_ingest_role": {
            "index": [
                {
                    "names": ["logs-fiskil-holder"],
                    "privileges": ["write"]
                }
            ]
        }
    }
}
2. Go to the Log Drain settings in the Fiskil Console
3. Click the Elasticsearch card
4. Enter your cluster endpoint and encoded API key
5. Save your log drain
It may take a few minutes for logs to begin streaming into your Elasticsearch cluster.
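
Once logs are streaming, you can verify delivery with a simple search against the data stream, for example (a sketch; data streams always index a @timestamp field, and the stream name should match the one used when creating the API key):

```
GET /logs-fiskil-holder/_search
{
    "size": 5,
    "sort": [{ "@timestamp": "desc" }],
    "query": {
        "range": { "@timestamp": { "gte": "now-15m" } }
    }
}
```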