[codex] Fix marker resolution and duplicate mapping validation#63
Conversation
Claude finished @satvik007's task in 3m 10s — Code Review: Fix marker resolution and duplicate mapping validation
Review submitted. The PR is a well-motivated correctness fix with solid test coverage. I left 5 inline comments:
Claude finished @satvik007's task in 4m 0s
Summary
Submitted a review with 2 inline comments after deduplicating against 6 existing bot comments:
Cleanup performed:
Review Summary (re-review after sync)
The investigation report files have been removed — good. The core changes remain well-structured and the test coverage for the key regression scenarios is solid.
Previous findings still applicable:
- `resolveMarker` cross-project inconsistency (line 602) — reinforced with suggestion
- Redundant `resolveMarkers` call at line 135 (flagged by gemini)
- DRY violation in duplicate-mapping error formatting (handler vs uploader)
- Double `getReusableTCasesInDefaultFolder` API call in create-tcases flow
New finding:
- The shared `mapping.ts` module — which contains the core correctness fix — has no dedicated unit tests. The `allowDuplicateTarget` suppression logic in `findDuplicateTCaseMappings` is particularly worth testing directly.
Overall this is a solid correctness fix. The architecture of explicit marker metadata flowing through the pipeline is a clear improvement over name-scanning.
Missing unit tests for the core mapping logic
mapResolvedResultsToTCases and findDuplicateTCaseMappings are the centerpiece of this PR's correctness fix, but they're only exercised indirectly through end-to-end integration tests in result-upload.spec.ts. A dedicated mapping.spec.ts would make the logic easier to verify and would catch regressions more precisely.
Suggested cases:
- Results with matching project code + seq go to `results`; non-matching or marker-less go to `missing`
- Case-insensitive project code comparison (`prj` vs `PRJ`)
- Two results mapping to the same `tcase.id` are reported as duplicates
- The `allowDuplicateTarget` suppression: a group where all results have `allowDuplicateTarget: true` is not reported (this is the key behavior from `assignResolvedTarget` when multiple results share the same create-tcases title)
- Mixed case: one result with `allowDuplicateTarget: true` and one without → still flagged
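The suppression rule the review wants tested can be sketched as follows. This is an illustrative reimplementation with minimal local types, not the actual code in `mapping.ts`; the real `findDuplicateTCaseMappings` signature may differ.

```typescript
// Minimal stand-in for the mapped-result shape; illustration only.
interface MappedResult {
	name: string
	tcaseId: string
	allowDuplicateTarget?: boolean
}

// Group mapped results by target tcase id and report any group of size > 1,
// unless every member of the group opted in via allowDuplicateTarget.
function findDuplicateTCaseMappings(results: MappedResult[]): Map<string, MappedResult[]> {
	const byTarget = new Map<string, MappedResult[]>()
	for (const r of results) {
		const group = byTarget.get(r.tcaseId) ?? []
		group.push(r)
		byTarget.set(r.tcaseId, group)
	}
	const duplicates = new Map<string, MappedResult[]>()
	for (const [tcaseId, group] of byTarget) {
		if (group.length > 1 && !group.every((r) => r.allowDuplicateTarget)) {
			duplicates.set(tcaseId, group)
		}
	}
	return duplicates
}

// All members opted in: the group is suppressed.
const suppressed = findDuplicateTCaseMappings([
	{ name: 'a', tcaseId: 't1', allowDuplicateTarget: true },
	{ name: 'b', tcaseId: 't1', allowDuplicateTarget: true },
])

// Mixed group (one opted in, one not): still flagged as a duplicate.
const flagged = findDuplicateTCaseMappings([
	{ name: 'a', tcaseId: 't1', allowDuplicateTarget: true },
	{ name: 'b', tcaseId: 't1' },
])
```

A `mapping.spec.ts` asserting on exactly these two groups would pin down the suppression behavior directly rather than through the end-to-end upload flow.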
```ts
return result.marker.projectCode.toLowerCase() === projectCode.toLowerCase()
	? result.marker
	: null
}
```
+1 to the previous review's suggestion here — when the marker targets a different project, markerResolution should be updated to 'resolved-none' so the state is self-consistent. The current code works because mapResolvedResultsToTCases independently checks result.marker.projectCode, but leaving markerResolution === 'resolved' with a cross-project marker is a latent inconsistency.
```ts
}

private resolveMarker(result: TestCaseResult, projectCode: string): TestCaseMarker | null {
	if (result.markerResolution === 'resolved' && result.marker) {
		if (result.marker.projectCode.toLowerCase() === projectCode.toLowerCase()) {
			return result.marker
		}
		// Marker exists but targets a different project — not resolvable here
		result.markerResolution = 'resolved-none'
		return null
	}
```
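The self-consistency this suggestion buys can be demonstrated with a small standalone sketch. The types below are local stand-ins for illustration; the real `TestCaseResult` and `TestCaseMarker` live in `src/utils/result-upload/types.ts` and may carry more fields.

```typescript
// Illustrative stand-ins, not the real qas-cli types.
type MarkerResolution = 'resolved' | 'resolved-none' | 'needs-project-resolution'
interface Marker { projectCode: string; seq: number }
interface Result { marker?: Marker; markerResolution?: MarkerResolution }

// Mirrors the suggested behavior: a cross-project marker both returns null
// and downgrades markerResolution, so no later stage can observe a
// 'resolved' result whose marker points at another project.
function resolveMarker(result: Result, projectCode: string): Marker | null {
	if (result.markerResolution === 'resolved' && result.marker) {
		if (result.marker.projectCode.toLowerCase() === projectCode.toLowerCase()) {
			return result.marker
		}
		result.markerResolution = 'resolved-none'
		return null
	}
	return null
}

// Case-insensitive match returns the marker unchanged.
const sameProject: Result = { marker: { projectCode: 'prj', seq: 1 }, markerResolution: 'resolved' }
const resolved = resolveMarker(sameProject, 'PRJ')

// Cross-project marker: null result plus a state downgrade.
const crossProject: Result = { marker: { projectCode: 'OTHER', seq: 7 }, markerResolution: 'resolved' }
resolveMarker(crossProject, 'PRJ')
// crossProject.markerResolution is now 'resolved-none'
```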
Summary
This branch fixes the remaining result-mapping issues in `qas-cli` by making parsed marker metadata the source of truth for upload mapping and by validating duplicate target mappings earlier in the flow. The branch also bumps the package version to `0.6.1`.
Root Cause
The previous uploader logic inferred identity from `result.name`, which is a display field and may include Playwright `describe(...)` titles. When those describe titles contained multiple QA Sphere markers, the uploader could map a result to the wrong run test case simply because that marker appeared earlier in the string.

There was also a second safety issue after duplicate-target detection was introduced: duplicate validation could still happen after side effects had already occurred. The initial fix moved that check ahead of run creation, but `--create-tcases` could still create new test cases before the duplicate rejection fired.
What Changed
- Markers on `TestCaseResult` come from `spec.title` only, not `describe` prefixes
- Results resolve to `resolved-none` when only the `describe` title contains markers
- `resolved` stays resolved
- `resolved-none` is not re-parsed from display names
- `needs-project-resolution` still allows the project-aware JUnit fallback path
- `--force` as the override for duplicate-target failures
- Duplicate validation runs before `--create-tcases` creation calls
- Split `--create-tcases` handling into a read-only planning step and a post-validation creation/finalization step
- Docs for `--force` and mapping behavior describe duplicate-target handling explicitly
- Bumped `package.json` and root `package-lock.json` metadata to `0.6.1`
Why Reviewers Should Care
This is primarily a correctness fix.
Without these changes, a Playwright suite could upload results to the wrong QA Sphere test cases even when the result count looked plausible. In failure cases, the CLI could also leave behind unwanted artifacts:
- A newly created run, when the duplicate check fired after run creation
- Test cases created via `--create-tcases` when duplicate validation failed too late

The branch makes parser intent explicit and ensures duplicate rejection happens before either of those side effects.
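The mis-mapping described above is easy to reproduce in miniature. The sketch below uses a simplified `PROJECTCODE-SEQ` marker syntax purely for illustration; the real marker grammar and parser live in `src/utils/result-upload/parsers/playwrightJsonParser.ts` and are not assumed to match this regex.

```typescript
// Simplified illustration of the old name-scanning failure mode.
// Marker syntax here is assumed to be PROJECTCODE-SEQ, e.g. "PRJ-12".
const MARKER_RE = /\b([A-Z]+)-(\d+)\b/

function firstMarker(text: string): { projectCode: string; seq: number } | null {
	const m = MARKER_RE.exec(text)
	return m ? { projectCode: m[1], seq: Number(m[2]) } : null
}

// A display name concatenates describe titles with the spec title, so an
// unrelated marker in a describe title appears earlier in the string.
const displayName = 'Checkout OTHER-5 suite > PRJ-12 pays with card'
const specTitle = 'PRJ-12 pays with card'

// Old behavior: scanning the display name picks up the describe title's
// marker and maps the result to the wrong test case.
const fromDisplayName = firstMarker(displayName) // { projectCode: 'OTHER', seq: 5 }

// New behavior: parsing spec.title only yields the intended marker.
const fromSpecTitle = firstMarker(specTitle) // { projectCode: 'PRJ', seq: 12 }
```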
Suggested Review Order
1. `src/utils/result-upload/types.ts`
2. `src/utils/result-upload/mapping.ts`
3. `src/utils/result-upload/parsers/playwrightJsonParser.ts`
4. `src/utils/result-upload/ResultUploadCommandHandler.ts`
5. `src/utils/result-upload/ResultUploader.ts`
6. `src/tests/playwright-json-parsing.spec.ts`
7. `src/tests/result-upload.spec.ts`
Tests Added
- `describe` titles contain unrelated markers but `spec.title` contains the real one
- `describe` title contains markers and the result must remain unmatched
- Duplicate-target handling with `--create-tcases`
Validation
- `npm run check`
- `npm test`
Notes
- Package version bumped to `0.6.1`