
feat!: Split track_metrics_of into sync and async variants (PR-4.5)#112

Open
jsonbailey wants to merge 2 commits intojb/aic-1664/runner-abcsfrom
jb/aic-1664/graph-tracking-improvements

Conversation

Contributor

@jsonbailey jsonbailey commented Mar 25, 2026

feat: Add optional graph_key to all LDAIConfigTracker track_* methods for graph correlation
feat: Add track_tool_call/track_tool_calls to LDAIConfigTracker
fix: make AIGraphTracker.track_total_tokens accept Optional[TokenUsage], skip when None or total <= 0
feat: Add get_tool_calls_from_response and sum_token_usage_from_messages to LangChainHelper
feat: Add get_ai_usage_from_response to OpenAIHelper
fix!: Remove node-scoped methods from AIGraphTracker (track_node_invocation, track_tool_call, track_node_judge_response), use related AIConfigTracker methods instead
fix: use time.perf_counter_ns() for sub-millisecond precision in duration calculations


Note

Medium Risk
Medium risk because it introduces a breaking API change (track_metrics_of is now synchronous, and async callers must migrate to track_metrics_of_async) and alters emitted analytics payloads by adding optional graphKey/tool-call events, which could affect downstream event consumers.

Overview
Splits LDAIConfigTracker.track_metrics_of into sync (track_metrics_of) and async (track_metrics_of_async) variants, updates SDK call sites/tests/docs accordingly, and switches duration timing to time.perf_counter_ns().

Extends config-level tracking to optionally include graphKey on all track_* events, adds $ld:ai:tool_call tracking via track_tool_call(s), and simplifies graph tracking by making AIGraphTracker.track_total_tokens accept Optional[TokenUsage] while removing node-scoped graph tracker methods in favor of config-level tracking.

Adds provider helpers: LangChain now exposes get_tool_calls_from_response and sum_token_usage_from_messages, and OpenAI adds get_ai_usage_from_response used by get_ai_metrics_from_response.
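The sync/async split described above can be sketched with a minimal stand-in tracker. The method names track_metrics_of and track_metrics_of_async come from this PR; the FakeTracker class, its signatures, and the completion functions are illustrative assumptions, not the SDK's actual implementation.

```python
# Hypothetical sketch of the split; only the method names are from the PR.
import asyncio


class FakeTracker:
    """Stand-in for LDAIConfigTracker; records which variant handled each call."""

    def __init__(self):
        self.calls = []

    def track_metrics_of(self, func):
        # Sync variant: invoke the callable directly.
        result = func()
        self.calls.append(("sync", result))
        return result

    async def track_metrics_of_async(self, func):
        # Async variant: await the coroutine instead of calling it synchronously.
        result = await func()
        self.calls.append(("async", result))
        return result


def sync_completion():
    return "sync-response"


async def async_completion():
    return "async-response"


tracker = FakeTracker()
print(tracker.track_metrics_of(sync_completion))                      # sync-response
print(asyncio.run(tracker.track_metrics_of_async(async_completion)))  # async-response
```

Under this shape, existing async callers of the old combined method would migrate by switching to the _async variant and awaiting it.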

Written by Cursor Bugbot for commit c184579.

…sync) variants

feat: add optional graph_key to all LDAIConfigTracker track_* methods for graph correlation
feat: add track_tool_call/track_tool_calls to LDAIConfigTracker
feat: add graph_key property to AIGraphTracker
feat: make AIGraphTracker.track_total_tokens accept Optional[TokenUsage], skip when None or total <= 0
feat: add LangChainHelper.get_tool_calls_from_response and sum_token_usage_from_messages
feat: extract OpenAIHelper.get_ai_usage_from_response; delegate get_ai_metrics_from_response to it
refactor: remove node-scoped methods from AIGraphTracker (track_node_invocation, track_tool_call, track_node_judge_response)
refactor: use time.time_ns() for sub-millisecond precision in duration calculations
…ration measurement

perf_counter_ns is monotonic and designed for elapsed-time measurement; time.time_ns
reflects wall-clock time and can go backward due to NTP or clock adjustments.
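The distinction can be shown with a short, self-contained timing sketch. The pattern below is a generic illustration of monotonic duration measurement, not code from this PR:

```python
# perf_counter_ns is monotonic, so elapsed values can never be negative;
# time.time_ns() reflects wall-clock time and can jump backward under NTP
# or manual clock adjustments, producing negative or distorted durations.
import time

start = time.perf_counter_ns()
time.sleep(0.01)  # simulated work (~10 ms)
elapsed_ms = (time.perf_counter_ns() - start) / 1_000_000

assert elapsed_ms > 0  # guaranteed by monotonicity
print(f"duration: {elapsed_ms:.3f} ms")  # nanosecond resolution, reported in ms
```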
@jsonbailey jsonbailey marked this pull request as ready for review March 25, 2026 22:05
@jsonbailey jsonbailey requested a review from a team as a code owner March 25, 2026 22:05

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


total=getattr(u, 'total_tokens', 0),
input=getattr(u, 'prompt_tokens', 0),
output=getattr(u, 'completion_tokens', 0),
)

getattr doesn't coerce None attributes to zero

High Severity

getattr(u, 'total_tokens', 0) only returns the default 0 when the attribute is missing. When the attribute exists but is None, getattr returns None, not 0. The old code used response.usage.total_tokens or 0, which correctly coerced None to 0. This means TokenUsage can now be constructed with None fields, which breaks the all-zeros guard below and causes a TypeError downstream when track_tokens compares tokens.total > 0 against None. The existing test_handles_partial_usage_data test (which sets completion_tokens = None and total_tokens = None) would also fail.
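The failure mode and the classic fix can be demonstrated in isolation. The `u` object below is a stand-in for an OpenAI usage payload with a present-but-None attribute:

```python
# getattr's default only applies when the attribute is *missing*; an
# attribute that exists with the value None is returned as None. Appending
# `or 0` restores the old code's coercion of None to 0.
from types import SimpleNamespace

u = SimpleNamespace(total_tokens=None, prompt_tokens=10, completion_tokens=None)

broken = getattr(u, 'total_tokens', 0)      # attribute exists, so default is ignored
fixed = getattr(u, 'total_tokens', 0) or 0  # coerces None (and any falsy value) to 0

print(broken)  # None
print(fixed)   # 0
```

Applying `or 0` to each field keeps the downstream `tokens.total > 0` comparison safe, since every field is guaranteed to be an int.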


