diff --git a/src/docs/sdk/performance/span-data-conventions.mdx b/src/docs/sdk/performance/span-data-conventions.mdx
index 4f2df7fdf8..2e56814908 100644
--- a/src/docs/sdk/performance/span-data-conventions.mdx
+++ b/src/docs/sdk/performance/span-data-conventions.mdx
@@ -71,6 +71,19 @@ Below describes the conventions for the Span interface for the `data` field on t
 | `cache.hit` | boolean | If the cache was hit during this span. | `true` |
 | `cache.item_size` | int | The size of the requested item in the cache. In bytes. | 58 |
 
+## AI
+
+| Attribute                   | Type    | Description                                             | Examples                                 |
+|-----------------------------|---------|---------------------------------------------------------|------------------------------------------|
+| `ai.input_messages`         | string  | The input messages sent to the model.                   | `[{"role": "user", "message": "hello"}]` |
+| `ai.completion_tokens.used` | int     | The number of tokens used to respond to the message.    | `10`                                     |
+| `ai.prompt_tokens.used`     | int     | The number of tokens used to process just the prompt.   | `20`                                     |
+| `ai.total_tokens.used`      | int     | The total number of tokens used (prompt and completion).| `30`                                     |
+| `ai.model_id`               | string  | The vendor-specific ID of the model used.               | `"gpt-4"`                                |
+| `ai.streaming`              | boolean | Whether the request was streamed back.                  | `true`                                   |
+| `ai.responses`              | list    | The response messages sent back by the AI model.        | `["hello", "world"]`                     |
+
+
 ## Thread
 
 | Attribute | Type | Description | Examples |
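For illustration only (not part of this PR): assuming an SDK writes these attributes into a span's `data` dictionary keyed exactly as in the table, a payload for one AI call might look like the sketch below. The dictionary shape and the token arithmetic check are assumptions for the example, not something the conventions mandate.

```python
import json

# Hypothetical span "data" payload following the AI attribute conventions.
# Keys and types mirror the table; the values are made-up sample data.
span_data = {
    # Serialized as a string, per the table's type for this attribute.
    "ai.input_messages": json.dumps([{"role": "user", "message": "hello"}]),
    "ai.completion_tokens.used": 10,
    "ai.prompt_tokens.used": 20,
    "ai.total_tokens.used": 30,
    "ai.model_id": "gpt-4",
    "ai.streaming": False,
    "ai.responses": ["hello", "world"],
}

# Sanity check (an assumption, not a stated convention): the total token
# count equals prompt tokens plus completion tokens.
assert span_data["ai.total_tokens.used"] == (
    span_data["ai.prompt_tokens.used"] + span_data["ai.completion_tokens.used"]
)
```

A real SDK would attach each key with its span-data setter rather than building the dictionary by hand; the point here is only the key names and value types.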