feat!: Split track_metrics_of into sync and async variants (PR-4.5) #112
jsonbailey wants to merge 2 commits into jb/aic-1664/runner-abcs from
Conversation
…sync) variants
feat: add optional graph_key to all LDAIConfigTracker track_* methods for graph correlation
feat: add track_tool_call/track_tool_calls to LDAIConfigTracker
feat: add graph_key property to AIGraphTracker
feat: make AIGraphTracker.track_total_tokens accept Optional[TokenUsage], skip when None or total <= 0
feat: add LangChainHelper.get_tool_calls_from_response and sum_token_usage_from_messages
feat: extract OpenAIHelper.get_ai_usage_from_response; delegate get_ai_metrics_from_response to it
refactor: remove node-scoped methods from AIGraphTracker (track_node_invocation, track_tool_call, track_node_judge_response)
refactor: use time.time_ns() for sub-millisecond precision in duration calculations
…ration measurement
perf_counter_ns is monotonic and designed for elapsed-time measurement; time.time_ns reflects wall-clock time and can go backward due to NTP or clock adjustments.
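The reviewer's point above can be demonstrated with a small standalone snippet: a monotonic clock guarantees non-negative elapsed readings regardless of wall-clock adjustments.

```python
import time

# perf_counter_ns() is monotonic: successive readings never decrease, so
# elapsed-time differences are always >= 0 and unaffected by NTP or manual
# clock changes. time.time_ns() carries no such guarantee.
start = time.perf_counter_ns()
time.sleep(0.01)
elapsed_ms = (time.perf_counter_ns() - start) / 1_000_000
assert elapsed_ms > 0  # always holds for a monotonic clock
```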
Cursor Bugbot has reviewed your changes and found 1 potential issue.
total=getattr(u, 'total_tokens', 0),
input=getattr(u, 'prompt_tokens', 0),
output=getattr(u, 'completion_tokens', 0),
)
getattr doesn't coerce None attributes to zero
High Severity
getattr(u, 'total_tokens', 0) only returns the default 0 when the attribute is missing. When the attribute exists but is None, getattr returns None, not 0. The old code used response.usage.total_tokens or 0, which correctly coerced None to 0. This means TokenUsage can now be constructed with None fields, which breaks the all-zeros guard below and causes a TypeError downstream when track_tokens compares tokens.total > 0 against None. The existing test_handles_partial_usage_data test (which sets completion_tokens = None and total_tokens = None) would also fail.


feat: Add optional graph_key to all LDAIConfigTracker track_* methods for graph correlation
feat: Add track_tool_call/track_tool_calls to LDAIConfigTracker
fix: make AIGraphTracker.track_total_tokens accept Optional[TokenUsage], skip when None or total <= 0
feat: Add get_tool_calls_from_response and sum_token_usage_from_messages to LangChainHelper
feat: Add get_ai_usage_from_response to OpenAIHelper
fix!: Remove node-scoped methods from AIGraphTracker (track_node_invocation, track_tool_call, track_node_judge_response); use the related LDAIConfigTracker methods instead
fix: use time.perf_counter_ns() for sub-millisecond precision in duration calculations
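The Optional[TokenUsage] change above can be sketched as follows; the TokenUsage fields and the tracker internals here are assumptions for illustration, not the SDK's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TokenUsage:
    total: int = 0
    input: int = 0
    output: int = 0

class AIGraphTrackerSketch:
    """Illustrative stand-in for AIGraphTracker's track_total_tokens guard."""

    def __init__(self) -> None:
        self.events: List[TokenUsage] = []

    def track_total_tokens(self, tokens: Optional[TokenUsage]) -> None:
        # Skip silently when usage is absent or carries no meaningful total,
        # per the commit description "skip when None or total <= 0".
        if tokens is None or tokens.total <= 0:
            return
        self.events.append(tokens)

t = AIGraphTrackerSketch()
t.track_total_tokens(None)                              # skipped
t.track_total_tokens(TokenUsage(total=0))               # skipped
t.track_total_tokens(TokenUsage(total=12, input=7, output=5))
assert len(t.events) == 1
```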
Note
Medium Risk
Medium risk because it introduces a breaking API change (track_metrics_of is now sync; async callers must migrate to track_metrics_of_async) and alters emitted analytics payloads by adding optional graphKey/tool-call events, which could affect downstream event consumers.
Overview
Splits LDAIConfigTracker.track_metrics_of into sync (track_metrics_of) and async (track_metrics_of_async) variants, updates SDK call sites/tests/docs accordingly, and switches duration timing to time.perf_counter_ns().
Extends config-level tracking to optionally include graphKey on all track_* events, adds $ld:ai:tool_call tracking via track_tool_call(s), and simplifies graph tracking by making AIGraphTracker.track_total_tokens accept Optional[TokenUsage] while removing node-scoped graph tracker methods in favor of config-level tracking.
Adds provider helpers: LangChain now exposes get_tool_calls_from_response and sum_token_usage_from_messages, and OpenAI adds get_ai_usage_from_response, which is used by get_ai_metrics_from_response.
Written by Cursor Bugbot for commit c184579. This will update automatically on new commits.
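The sync/async split described in the overview can be sketched like this; the function names come from the PR, but the signatures, return shape, and timing internals are assumptions for illustration only:

```python
import asyncio
import time
from typing import Awaitable, Callable, Tuple, TypeVar

T = TypeVar("T")

def track_metrics_of(func: Callable[[], T]) -> Tuple[T, float]:
    # Sync variant: invoke the callable directly, timing it with the
    # monotonic perf_counter_ns clock the PR switches to.
    start = time.perf_counter_ns()
    result = func()
    duration_ms = (time.perf_counter_ns() - start) / 1_000_000
    return result, duration_ms

async def track_metrics_of_async(func: Callable[[], Awaitable[T]]) -> Tuple[T, float]:
    # Async variant: await the awaitable instead of calling it synchronously,
    # so async callers migrate here rather than blocking the event loop.
    start = time.perf_counter_ns()
    result = await func()
    duration_ms = (time.perf_counter_ns() - start) / 1_000_000
    return result, duration_ms

value, ms = track_metrics_of(lambda: 21 * 2)
assert value == 42 and ms >= 0

async def _job() -> str:
    await asyncio.sleep(0)
    return "ok"

value2, ms2 = asyncio.run(track_metrics_of_async(_job))
assert value2 == "ok" and ms2 >= 0
```

Splitting the variants (rather than returning an awaitable from one method) keeps the sync path free of coroutine overhead and makes the breaking change explicit at the call site.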