From 0ec4d3962e6d81a359f3c2a370f241b97d2aaecd Mon Sep 17 00:00:00 2001
From: colin-sentry <161344340+colin-sentry@users.noreply.github.com>
Date: Thu, 7 Mar 2024 11:45:25 -0500
Subject: [PATCH 1/2] Update span-data-conventions.mdx

---
 src/docs/sdk/performance/span-data-conventions.mdx | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/src/docs/sdk/performance/span-data-conventions.mdx b/src/docs/sdk/performance/span-data-conventions.mdx
index 4f2df7fdf8..72ad044b7b 100644
--- a/src/docs/sdk/performance/span-data-conventions.mdx
+++ b/src/docs/sdk/performance/span-data-conventions.mdx
@@ -71,6 +71,19 @@ Below describes the conventions for the Span interface for the `data` field on t
 | `cache.hit` | boolean | If the cache was hit during this span. | `true` |
 | `cache.item_size` | int | The size of the requested item in the cache. In bytes. | 58 |
+## AI
+
+| Attribute                   | Type    | Description                                           | Examples                                 |
+|-----------------------------|---------|-------------------------------------------------------|------------------------------------------|
+| `ai.input_messages`         | string  | The input messages sent to the model                  | `[{"role": "user", "message": "hello"}]` |
+| `ai.completion_tokens.used` | int     | The number of tokens used to respond to the message   | `10`                                     |
+| `ai.prompt_tokens.used`     | int     | The number of tokens used to process just the prompt  | `20`                                     |
+| `ai.total_tokens.used`      | int     | The total number of tokens used to process the prompt | `30`                                     |
+| `ai.model_id`               | string  | The vendor-specific ID of the model used              | `"gpt-4"`                                |
+| `ai.streaming`              | boolean | Whether the request was streamed back                 | `true`                                   |
+| `ai.responses`              | string  | The response messages sent back by the AI model       | `["hello", "world"]`                     |
+
+
 
 ## Thread
 
 | Attribute | Type | Description | Examples |

From 043504ab67e85113b796f16ba3cc84bda18609dc Mon Sep 17 00:00:00 2001
From: colin-sentry <161344340+colin-sentry@users.noreply.github.com>
Date: Thu, 7 Mar 2024 11:46:24 -0500
Subject: [PATCH 2/2] Update span-data-conventions.mdx

---
 src/docs/sdk/performance/span-data-conventions.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/docs/sdk/performance/span-data-conventions.mdx b/src/docs/sdk/performance/span-data-conventions.mdx
index 72ad044b7b..2e56814908 100644
--- a/src/docs/sdk/performance/span-data-conventions.mdx
+++ b/src/docs/sdk/performance/span-data-conventions.mdx
@@ -79,9 +79,9 @@ Below describes the conventions for the Span interface for the `data` field on t
 | `ai.completion_tokens.used` | int     | The number of tokens used to respond to the message   | `10`                                     |
 | `ai.prompt_tokens.used`     | int     | The number of tokens used to process just the prompt  | `20`                                     |
 | `ai.total_tokens.used`      | int     | The total number of tokens used to process the prompt | `30`                                     |
 | `ai.model_id`               | string  | The vendor-specific ID of the model used              | `"gpt-4"`                                |
 | `ai.streaming`              | boolean | Whether the request was streamed back                 | `true`                                   |
-| `ai.responses`              | string  | The response messages sent back by the AI model       | `["hello", "world"]`                     |
+| `ai.responses`              | list    | The response messages sent back by the AI model       | `["hello", "world"]`                     |
 
 
 ## Thread
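The attribute table added by these patches can be exercised with a short sketch. `ai_span_data` below is a hypothetical helper, not part of any SDK; it simply assembles a span `data` payload following the AI conventions, assuming that `ai.total_tokens.used` is the sum of prompt and completion tokens and that `ai.model_id` is a plain string, matching the `"gpt-4"` example.

```python
import json


def ai_span_data(input_messages, responses, model_id,
                 prompt_tokens, completion_tokens, streaming=False):
    """Assemble a span `data` payload per the AI attribute conventions."""
    return {
        # Input messages are serialized to a string, as the table specifies.
        "ai.input_messages": json.dumps(input_messages),
        # Responses are a list of strings.
        "ai.responses": responses,
        "ai.model_id": model_id,
        "ai.prompt_tokens.used": prompt_tokens,
        "ai.completion_tokens.used": completion_tokens,
        # Assumption: total tokens = prompt tokens + completion tokens.
        "ai.total_tokens.used": prompt_tokens + completion_tokens,
        "ai.streaming": streaming,
    }


data = ai_span_data(
    input_messages=[{"role": "user", "message": "hello"}],
    responses=["hello", "world"],
    model_id="gpt-4",
    prompt_tokens=20,
    completion_tokens=10,
)
print(data["ai.total_tokens.used"])  # 30
```

In a real SDK this dictionary would be attached to the span's `data` field rather than built standalone; the sketch only illustrates the key names and types from the table.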