`src/docs/sdk/performance/span-data-conventions.mdx`

Below describes the conventions for the Span interface for the `data` field on the span.
| `cache.hit` | boolean | Whether the cache was hit during this span. | `true` |
| `cache.item_size` | int | The size of the requested item in the cache, in bytes. | `58` |

## AI

| Attribute                   | Type    | Description                                            | Examples                                  |
|-----------------------------|---------|--------------------------------------------------------|-------------------------------------------|
| `ai.input_messages`         | string  | The input messages sent to the model.                  | `[{"role": "user", "message": "hello"}]`  |
| `ai.completion_tokens.used` | int     | The number of tokens used to respond to the message.   | `10`                                      |
| `ai.prompt_tokens.used`     | int     | The number of tokens used to process just the prompt.  | `20`                                      |
| `ai.total_tokens.used`      | int     | The total number of tokens used (prompt plus completion). | `30`                                   |
| `ai.model_id`               | string  | The vendor-specific ID of the model used.              | `"gpt-4"`                                 |
| `ai.streaming`              | boolean | Whether the response was streamed back.                | `true`                                    |
| `ai.responses`              | list    | The response messages sent back by the AI model.       | `["hello", "world"]`                      |
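
As an illustration, a span's `data` payload following the AI conventions above might look like the sketch below. This is a plain dictionary mirroring the attribute names in the table; the surrounding span/SDK machinery is deliberately omitted, and the values are hypothetical:

```python
# Hypothetical `data` payload for an AI span, keyed by the convention
# names from the table above. Values are illustrative only.
ai_span_data = {
    "ai.input_messages": '[{"role": "user", "message": "hello"}]',
    "ai.completion_tokens.used": 10,
    "ai.prompt_tokens.used": 20,
    "ai.total_tokens.used": 30,
    "ai.model_id": "gpt-4",
    "ai.streaming": True,
    "ai.responses": ["hello", "world"],
}

# Sanity check: prompt tokens plus completion tokens should equal the total.
assert (
    ai_span_data["ai.prompt_tokens.used"]
    + ai_span_data["ai.completion_tokens.used"]
    == ai_span_data["ai.total_tokens.used"]
)
```

In an actual SDK these keys would typically be set one at a time on the active span (for example via a `set_data(key, value)`-style call), rather than attached as a single dictionary.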

## Thread

| Attribute | Type | Description | Examples |