get__aistats_minute
GET /aistats/minute
Get Minute-Level AI Statistics
Get minute-level aggregated AI statistics for LLM usage, including token counts, API calls, policy violations, and performance metrics. This endpoint provides more granular data than the daily endpoint.
Parameters
|Name|In|Type|Required|Description|
|---|---|---|---|---|
|startDate|query|string(date)|false|Start date (ISO format)|
|endDate|query|string(date)|false|End date (ISO format)|
|repoId|query|string|false|Repository ID filter (comma-separated for multiple values)|
|aiLlmId|query|string|false|AI LLM ID filter (comma-separated for multiple values)|
|repoAiApiKeyId|query|string|false|Repository AI API Key ID filter (comma-separated for multiple values)|
|hosts|query|string|false|Host filter (comma-separated for multiple values)|
|llmType|query|string|false|LLM type filter (comma-separated for multiple values, e.g., "OPENAI", "ANTHROPIC")|
|llmModel|query|string|false|LLM model filter (comma-separated for multiple values, e.g., "gpt-4", "claude-3")|
Note: When multiple filter parameters (repoId, aiLlmId, repoAiApiKeyId) are provided, they are combined using OR logic. The hosts filter uses AND logic with other filters.
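If you are not using the client library, the same request can be made over raw HTTP. The sketch below is illustrative only: the base URL is a placeholder and the bearer-token Authorization header is an assumption; note how multiple values for one filter collapse into a single comma-separated query parameter.

```typescript
// Illustrative raw HTTP call. The base URL is a placeholder and the
// bearer-token header is an assumption; adjust both for your deployment.
const params = new URLSearchParams({
  startDate: '2024-01-01',
  endDate: '2024-01-01',
  // Multiple values for one filter are joined with commas.
  repoId: ['repo-id-1', 'repo-id-2'].join(','),
  llmType: ['OPENAI', 'ANTHROPIC'].join(','),
});

const response = await fetch(`https://api.example.com/aistats/minute?${params}`, {
  headers: { Authorization: 'Bearer your-access-token' },
});
const body = await response.json();
```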
TypeScript Client Library
```typescript
// Using the Flashback TypeScript client
import { FlashbackClient } from '@flashbacktech/flashbackclient';

const client = new FlashbackClient({
  accessToken: 'your-access-token'
});

// Get minute-level AI statistics with optional filters
try {
  const result = await client.getAiStatsMinute({
    startDate: new Date('2024-01-01'),
    endDate: new Date('2024-01-01T23:59:59'),
    repoId: ['repo-id-1', 'repo-id-2'],
    aiLlmId: ['llm-id-1'],
    repoAiApiKeyId: ['api-key-id-1'],
    hosts: ['host1.example.com', 'host2.example.com'],
    llmType: ['OPENAI', 'ANTHROPIC'],
    llmModel: ['gpt-4', 'claude-3']
  });
  console.log('Minute-level AI statistics:', result);
} catch (error) {
  console.error('Failed to retrieve minute-level AI statistics:', error);
}
```
Example responses
200 Response
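An illustrative body with invented values, shaped to match the response schema below:

```json
{
  "success": true,
  "data": [
    {
      "timestamp": 1704067200,
      "repoId": "repo-id-1",
      "aiLlmId": "llm-id-1",
      "repoAiApiKeyId": "api-key-id-1",
      "tokensIn": "1024",
      "tokensOut": "2048",
      "llmTokensIn": "1024",
      "llmTokensOut": "2048",
      "activeConversations": 3,
      "apiCalls": 12,
      "policyViolations": 1,
      "numAlerts": 1,
      "numBlocks": 0,
      "numSeverityLow": 1,
      "numSeverityMedium": 0,
      "numSeverityHigh": 0,
      "latency_ms": 245.5,
      "llmType": "openai",
      "llmModel": "gpt-4",
      "host": "api.openai.com"
    }
  ]
}
```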
Responses
Response Schema
Status Code 200
|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|» success|boolean|true|none|Indicates if the request was successful|
|» data|[object]|true|none|Array of minute-level AI statistics records|
|»» timestamp|integer|true|none|Unix timestamp (seconds) for the statistics record|
|»» repoId|string|true|none|Repository ID|
|»» aiLlmId|string|true|none|AI LLM ID|
|»» repoAiApiKeyId|string|true|none|Repository AI API Key ID|
|»» tokensIn|string|true|none|Total input tokens (as string to handle large numbers)|
|»» tokensOut|string|true|none|Total output tokens (as string to handle large numbers)|
|»» llmTokensIn|string|true|none|LLM input tokens (as string to handle large numbers)|
|»» llmTokensOut|string|true|none|LLM output tokens (as string to handle large numbers)|
|»» activeConversations|integer|true|none|Number of active conversations|
|»» apiCalls|integer|true|none|Number of API calls|
|»» policyViolations|integer|true|none|Number of policy violations|
|»» numAlerts|integer|true|none|Number of alerts triggered|
|»» numBlocks|integer|true|none|Number of blocked requests|
|»» numSeverityLow|integer|true|none|Number of policy violations with LOW severity|
|»» numSeverityMedium|integer|true|none|Number of policy violations with MEDIUM severity|
|»» numSeverityHigh|integer|true|none|Number of policy violations with HIGH severity|
|»» latency_ms|number|true|none|Average latency in milliseconds|
|»» llmType|string|true|none|Type of LLM provider (e.g., "openai", "anthropic")|
|»» llmModel|string|true|none|LLM model name (e.g., "gpt-4", "claude-3")|
|»» host|string|true|none|Host/domain of the LLM API endpoint|
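Because tokensIn, tokensOut, llmTokensIn, and llmTokensOut are serialized as strings to avoid precision loss, and timestamp is in Unix seconds, consumers typically convert these fields on read. A minimal sketch; the MinuteStatsRecord interface here is hypothetical, written to mirror the fields above:

```typescript
// Hypothetical shape mirroring part of the 200 response schema above.
interface MinuteStatsRecord {
  timestamp: number;  // Unix seconds
  tokensIn: string;   // stringified to handle large numbers
  tokensOut: string;
  latency_ms: number;
}

function toUsable(record: MinuteStatsRecord) {
  return {
    // Unix seconds -> JavaScript Date (which expects milliseconds)
    at: new Date(record.timestamp * 1000),
    // BigInt preserves token counts beyond Number.MAX_SAFE_INTEGER
    tokensIn: BigInt(record.tokensIn),
    tokensOut: BigInt(record.tokensOut),
    latencyMs: record.latency_ms,
  };
}
```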
Status Code 400

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|» success|boolean|true|none|Always false for error responses|
|» message|string|true|none|Error message|

Status Code 404

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|» success|boolean|true|none|Always false for error responses|
|» message|string|true|none|Error message|

Status Code 500

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|» success|boolean|true|none|Always false for error responses|
|» message|string|true|none|Error message|
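All three error statuses share the same success/message shape, so a single type guard can handle them. A sketch, assuming only the fields documented above:

```typescript
// Error responses for 400, 404, and 500 all carry the same shape,
// so one guard covers every error status.
interface ApiError {
  success: false;
  message: string;
}

function isApiError(body: unknown): body is ApiError {
  return (
    typeof body === 'object' &&
    body !== null &&
    (body as { success?: unknown }).success === false &&
    typeof (body as { message?: unknown }).message === 'string'
  );
}
```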