feat(cli): Expand and improve the MCP server and dev CLI command #3224
Conversation
🦋 Changeset detected

Latest commit: e140f62

The changes in this PR will be included in the next version bump. This PR includes changesets to release 28 packages.
Walkthrough

Adds query and dashboard discovery schemas and API endpoints, and exposes them via new Remix routes. Extends the MCP CLI with multiple new read-only tools (query, get_query_schema, dashboards, run_dashboard_query, profiles, task schema, dev server management), a --readonly flag, project-scoped MCP config/profile persistence, formatting utilities, smoke and tooling tests, and breadcrumb tracing. Introduces cursor-based pagination and trace caching for run details, prunes deprecated workers with a retention cap, fixes nullable deployment runtime fields in core schemas, and surfaces new endpoints in ApiClient.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 5
🧹 Nitpick comments (16)
packages/cli-v3/src/dev/devSupervisor.ts (1)
766-769: Extract the repeated try-catch pattern into a helper method for readability and DRY.

The `workerFriendlyId` getter in `DevRunController` intentionally throws when `serverWorker` is undefined (see dev-run-controller.ts:244-247), which is why the defensive try-catch is necessary. However, this pattern is repeated three times (lines 624, 766, 794-800) and should be extracted into a private helper method. Line 766 is particularly dense, combining filter, try-catch, and map in a single expression; the multi-line formatting in lines 794-800 shows a clearer approach.

♻️ Extract to a helper method

```diff
+  #getActiveRunIdsForWorker(friendlyId: string): string[] {
+    return Array.from(this.runControllers.entries())
+      .filter(([, controller]) => {
+        try {
+          return controller.workerFriendlyId === friendlyId;
+        } catch {
+          return false;
+        }
+      })
+      .map(([id]) => id);
+  }
+
   #cleanupWorker(friendlyId: string) {
     const worker = this.workers.get(friendlyId);
     if (!worker) {
       crumb("cleanupWorker: worker not found in map", { friendlyId }); // @crumbs
       return;
     }
-    const activeRunIds = Array.from(this.runControllers.entries()).filter(([, c]) => { try { return c.workerFriendlyId === friendlyId; } catch { return false; } }).map(([id]) => id); // @crumbs
-    const hasActiveRuns = activeRunIds.length > 0; // @crumbs
+    const activeRunIds = this.#getActiveRunIdsForWorker(friendlyId);
+    const hasActiveRuns = activeRunIds.length > 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/dev/devSupervisor.ts` around lines 766-769: the repeated try-catch used to safely access DevRunController.workerFriendlyId (which throws when serverWorker is undefined) should be extracted into a private helper like safeWorkerFriendlyId(controller: DevRunController): string | undefined; replace the inline try/catch occurrences (the filter in the activeRunIds computation, and the other two places around lines referencing workerFriendlyId) to call this helper, and update the filter/map logic to use the helper's return value for comparisons (e.g., compare safeWorkerFriendlyId(c) === friendlyId) so the code is clearer and DRY.

packages/cli-v3/package.json (1)
82-83: Consider renaming CLI utilities to avoid a test file naming collision.

The files `tools.test.ts` and `smoke.test.ts` are CLI utilities rather than vitest test suites. The `*.test.ts` naming pattern may cause confusion with actual test files and could be picked up by vitest's test discovery. Consider renaming to something like `tools-cli.ts` / `smoke-cli.ts` or moving them to a dedicated `scripts/` directory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/package.json` around lines 82-83: the CLI utility filenames tools.test.ts and smoke.test.ts conflict with test discovery; rename them (e.g., tools-cli.ts and smoke-cli.ts) or move them into a scripts/ or cli/ directory, and update the npm scripts mcp:test and mcp:smoke in package.json to point to the new filenames/paths (references: mcp:test, mcp:smoke, tools.test.ts, smoke.test.ts).

packages/cli-v3/src/mcp/context.ts (1)
24-32: Missing schema validation for project config.

The config file is parsed as JSON but cast directly to `McpProjectConfig` without validation. Malformed or unexpected config shapes could cause subtle issues downstream.

🛡️ Suggested fix using zod validation

```diff
+import { z } from "zod";
+
+const McpProjectConfigSchema = z.object({
+  profile: z.string().optional(),
+});
+
+type McpProjectConfig = z.infer<typeof McpProjectConfigSchema>;
+
 function readMcpProjectConfig(projectDir: string): McpProjectConfig | undefined {
   try {
     const filePath = path.join(projectDir, MCP_CONFIG_DIR, MCP_CONFIG_FILE);
     const content = fs.readFileSync(filePath, "utf-8");
-    return JSON.parse(content) as McpProjectConfig;
+    const parsed = JSON.parse(content);
+    const result = McpProjectConfigSchema.safeParse(parsed);
+    return result.success ? result.data : undefined;
   } catch {
     return undefined;
   }
 }
```

As per coding guidelines: "Use zod for validation in packages/core and apps/webapp". While this is in cli-v3, using zod would improve robustness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/context.ts` around lines 24-32: the readMcpProjectConfig function parses JSON and casts to McpProjectConfig without validation; add a zod schema (e.g., McpProjectConfigSchema) that models the expected shape and use McpProjectConfigSchema.safeParse(parsed) inside readMcpProjectConfig to validate the content after JSON.parse; if safeParse fails, handle gracefully (return undefined or log the error) instead of casting, and export/use McpProjectConfigSchema where needed so downstream code can rely on validated McpProjectConfig instances rather than unchecked casts.

packages/cli-v3/src/mcp/smoke.test.ts (3)
214-214: Hardcoded widgetId is environment-specific.

The `widgetId: "9lDDdebQ"` is specific to the test environment's dashboard configuration. Consider either making this configurable or deriving it dynamically from the `list_dashboards` output in the prior test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/smoke.test.ts` at line 214, The test currently hardcodes widgetId ("9lDDdebQ") in the test named run_dashboard_query; change it to not be environment-specific by either reading a configurable env var (e.g., process.env.DASHBOARD_WIDGET_ID) or deriving the widgetId from the prior list_dashboards test output: have list_dashboards return or store the dashboard/widget list in a shared variable, find the desired dashboard (by key "overview" or widget title) and extract its widget id, then pass that value into the run_dashboard_query invocation (replace the literal widgetId in the call that uses commonArgs). Ensure test ordering or shared state guarantees list_dashboards runs first so the derived widgetId is available.
190-198: Fragile test result manipulation for expected errors.

Flipping the result status after the fact when an expected error occurs is brittle. If the error message format changes, this will silently pass incorrect results. Consider adding a dedicated `testExpectedError` helper that explicitly expects and validates error responses.

♻️ Suggested approach

```typescript
async function testExpectedError(
  name: string,
  toolArgs: Record<string, unknown>,
  expectedErrorSubstring: string
) {
  const start = Date.now();
  try {
    const { isError, text } = await callTool(client, name, toolArgs);
    const duration = Date.now() - start;
    if (isError && text.includes(expectedErrorSubstring)) {
      results.push({
        tool: name,
        status: "pass",
        duration,
        preview: `Correctly rejected: ${expectedErrorSubstring}`,
      });
    } else {
      results.push({
        tool: name,
        status: "fail",
        duration,
        error: `Expected error containing "${expectedErrorSubstring}"`,
      });
    }
  } catch (err) {
    // ... error handling
  }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/smoke.test.ts` around lines 190 - 198, Replace the brittle post-hoc flip of the test result in the test("get_query_schema", ...) block with a dedicated helper (e.g., testExpectedError) that invokes the tool via callTool and asserts the error shape/text up front; specifically, implement testExpectedError(name, toolArgs, expectedErrorSubstring) which calls callTool(client, name, toolArgs), checks isError and text.includes(expectedErrorSubstring), and pushes a pass/fail entry into results (using the same result fields: tool, status, duration, preview or error) instead of mutating the last results element; then replace the current manual flipping code around test("get_query_schema", { ...commonArgs, table: "nonexistent" }) to call testExpectedError("get_query_schema", { ...commonArgs, table: "nonexistent" }, "not found").
60-61: Hardcoded project reference limits portability.

The default `projectRef = "proj_rrkpdguyagvsoktglnod"` is environment-specific and won't work for other developers running this smoke test. Consider either documenting this as a required parameter or detecting it from the current CLI profile's projects.
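The resolution order suggested here (explicit flag, then environment, then fail loudly) can be sketched as follows. `resolveProjectRef` and the `PROJECT_REF` variable name are illustrative, not the smoke test's actual API:

```typescript
// Hypothetical helper: prefer an explicit --project-ref argument, fall back
// to a PROJECT_REF environment variable, otherwise fail with a clear message
// instead of silently defaulting to someone else's project.
function resolveProjectRef(
  argv: string[],
  env: Record<string, string | undefined>
): string {
  const idx = argv.indexOf("--project-ref");
  const fromArg = idx !== -1 ? argv[idx + 1] : undefined;
  if (fromArg) return fromArg;
  if (env.PROJECT_REF) return env.PROJECT_REF;
  throw new Error("No project ref: pass --project-ref <ref> or set PROJECT_REF");
}
```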
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/smoke.test.ts` around lines 60-61: the test currently uses a hardcoded projectRef ("proj_rrkpdguyagvsoktglnod") which breaks portability; update the logic around the projectRef and projIdx variables so projectRef is taken from args when "--project-ref" is present, otherwise read a project reference from the developer's CLI profile or a PROJECT_REF environment variable (and fail with a clear error if none is found). Locate the variables projectRef and projIdx in the smoke.test.ts setup, replace the hardcoded default with code that prefers args[projIdx+1], then falls back to process.env.PROJECT_REF or a helper that queries the current CLI profile/projects, and ensure the test logs a helpful message and exits if no valid projectRef can be resolved.

packages/cli-v3/src/mcp/tools/devServer.ts (3)
28-35: Use `z.object()` for inputSchema consistency.

`StartDevServerInput` is defined as a plain object, but other tools use `z.object()`. The handler at line 42 uses `toolHandler(StartDevServerInput, ...)`, which expects a zod shape, and it works because zod shapes are also plain objects. However, for consistency with other tools and better type inference, use `z.object()`.

♻️ Proposed fix

```diff
-const StartDevServerInput = {
-  configPath: z
-    .string()
-    .describe(
-      "The path to the trigger.config.ts file or project directory. Only needed when the trigger.config.ts file is not at the root dir."
-    )
-    .optional(),
-};
+const StartDevServerInput = z.object({
+  configPath: z
+    .string()
+    .describe(
+      "The path to the trigger.config.ts file or project directory. Only needed when the trigger.config.ts file is not at the root dir."
+    )
+    .optional(),
+});

 export const startDevServerTool = {
   name: toolsMetadata.start_dev_server.name,
   title: toolsMetadata.start_dev_server.title,
   description: toolsMetadata.start_dev_server.description,
-  inputSchema: StartDevServerInput,
-  handler: toolHandler(StartDevServerInput, async (input, { ctx }) => {
+  inputSchema: StartDevServerInput.shape,
+  handler: toolHandler(StartDevServerInput.shape, async (input, { ctx }) => {
```

Also applies to: 41-41
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools/devServer.ts` around lines 28 - 35, StartDevServerInput is currently a plain object but should be a Zod schema for consistency and better typing; replace the plain object with a z.object(...) schema that contains the same configPath property (z.string().describe(...).optional()) and then pass that schema into toolHandler (the existing call to toolHandler(StartDevServerInput, ...) can remain unchanged after swapping in the z.object). Ensure the exported/used symbol name remains StartDevServerInput.
199-209: Schema duplication between inputSchema and handler.

The `devServerStatusTool` defines the schema twice: once in `inputSchema` (plain object) and again in `handler` (`z.object`). This creates maintenance burden and potential inconsistency.

♻️ Proposed fix

```diff
+const DevServerStatusInput = z.object({
+  lines: z
+    .number()
+    .int()
+    .default(50)
+    .describe("Number of recent log lines to return. Defaults to 50."),
+});

 export const devServerStatusTool = {
   name: toolsMetadata.dev_server_status.name,
   title: toolsMetadata.dev_server_status.title,
   description: toolsMetadata.dev_server_status.description,
-  inputSchema: {
-    lines: z
-      .number()
-      .int()
-      .default(50)
-      .describe("Number of recent log lines to return. Defaults to 50."),
-  },
-  handler: toolHandler(
-    {
-      lines: z.number().int().default(50),
-    },
-    async (input, { ctx }) => {
+  inputSchema: DevServerStatusInput.shape,
+  handler: toolHandler(DevServerStatusInput.shape, async (input, { ctx }) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools/devServer.ts` around lines 199 - 209, The devServerStatusTool currently duplicates the schema by defining a plain object under inputSchema and a zod schema inside the handler; consolidate by extracting a single zod schema (e.g., devServerStatusInputSchema) that defines lines: z.number().int().default(50).describe(...), then use that same schema reference for both inputSchema and the handler's toolHandler call so input validation is defined once; update any imports/exports as needed and ensure the handler still receives the parsed input shape from the shared schema.
185-188: Consider waiting for process termination.

`devProcess.kill()` sends SIGTERM but doesn't wait for the process to exit. Setting `devProcess = null` immediately could lead to inconsistent state if checked before the process fully terminates. For the MCP CLI context this is likely acceptable, but be aware the process may still be running briefly after this returns.
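A minimal sketch of the wait-with-timeout shutdown this comment suggests; `killAndWait` is a hypothetical helper, not existing devServer.ts code:

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Hypothetical helper: send SIGTERM, resolve once the child actually exits,
// and escalate to SIGKILL after a timeout so shutdown cannot hang.
function killAndWait(child: ChildProcess, timeoutMs = 5_000): Promise<void> {
  return new Promise((resolve) => {
    if (child.exitCode !== null || child.signalCode !== null) {
      return resolve(); // already exited
    }
    const timer = setTimeout(() => child.kill("SIGKILL"), timeoutMs);
    child.once("exit", () => {
      clearTimeout(timer);
      resolve();
    });
    child.kill("SIGTERM");
  });
}
```

Only after `await killAndWait(devProcess)` resolves would the tool set `devProcess = null` and report the server as stopped.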
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools/devServer.ts` around lines 185-188: the code currently calls devProcess.kill() then immediately sets devState = "stopped" and devProcess = null which can race with the child process still running; update the shutdown flow in the stop logic to call devProcess.kill() and then wait for the child to emit 'exit' or 'close' (or await a Promise that resolves on that event) before setting devState = "stopped" and devProcess = null; also add a short timeout fallback to force the null/state update and handle errors so the stop routine cannot hang indefinitely. Ensure you reference and modify the logic around devProcess.kill(), devProcess.once('exit'/'close') (or equivalent Promise), and the assignments to devState and devProcess.

packages/cli-v3/src/mcp/tools/runs.ts (1)
10-41: Consider memory efficiency for large trace files.

The `readLinesPage` function reads the entire file into memory on each pagination call. For very large traces, this could be inefficient. For now this is acceptable given the CLI context, but consider using streaming file reads if traces grow significantly larger.

💡 Optional: Stream-based reading for large files

```diff
+import readline from "node:readline";
+
+/** Read a page of lines from a file using streaming. */
+async function readLinesPage(
+  filePath: string,
+  offset: number,
+  limit: number,
+  totalLines: number
+): Promise<{ lines: string[]; hasMore: boolean; nextCursor: string | null }> {
+  const lines: string[] = [];
+  let lineNum = 0;
+
+  const fileStream = fs.createReadStream(filePath);
+  const rl = readline.createInterface({ input: fileStream, crlfDelay: Infinity });
+
+  for await (const line of rl) {
+    if (lineNum >= offset && lineNum < offset + limit) {
+      lines.push(line);
+    }
+    lineNum++;
+    if (lineNum >= offset + limit) break;
+  }
+
+  const end = offset + limit;
+  const hasMore = end < totalLines;
+  return { lines, hasMore, nextCursor: hasMore ? String(end) : null };
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools/runs.ts` around lines 10-41: the current readLinesPage reads the whole file into memory which is inefficient for large traces; modify readLinesPage to stream the file instead (use fs.createReadStream and readline.createInterface) and iterate lines skipping until the offset, collect up to limit lines, and stop the stream once collected to avoid loading the entire file; keep the existing return shape ({ lines, hasMore, nextCursor }) and rely on traceCache.totalLines (or compute line count once when caching the file) to determine hasMore/nextCursor; update any callers that expect nextCursor as a string to continue working with the same semantics.

apps/webapp/app/routes/api.v1.query.ts (1)
19-24: Validate regex input safety for ReDoS mitigation.

The static analysis flagged potential ReDoS risk. While `s.name` comes from server-controlled `querySchemas` (not user input), consider adding a safeguard to ensure table names don't contain regex metacharacters, or use a simpler string matching approach.

♻️ Alternative using case-insensitive string matching

```diff
 /** Extract table names from a TRQL query for authorization */
 function detectTables(query: string): string[] {
+  const upperQuery = query.toUpperCase();
   return querySchemas
-    .filter((s) => new RegExp(`\\bFROM\\s+${s.name}\\b`, "i").test(query))
+    .filter((s) => {
+      const pattern = `FROM ${s.name.toUpperCase()}`;
+      const idx = upperQuery.indexOf(pattern);
+      if (idx === -1) return false;
+      // Check word boundaries
+      const before = idx === 0 ? " " : upperQuery[idx - 1];
+      const after = upperQuery[idx + pattern.length] ?? " ";
+      return /\s/.test(before) && /[\s;)]/.test(after);
+    })
     .map((s) => s.name);
 }
```
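If the regex approach is kept instead, escaping metacharacters in the table name (the first option mentioned above) is a small, testable guard. This sketch is illustrative, not the route's actual code:

```typescript
// Escape regex metacharacters so a table name can be embedded in a
// dynamically built RegExp without changing its meaning (or enabling ReDoS).
function escapeRegExp(value: string): string {
  return value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Illustrative detectTables variant using the escaped name.
function detectTablesSafe(query: string, tableNames: string[]): string[] {
  return tableNames.filter((name) =>
    new RegExp(`\\bFROM\\s+${escapeRegExp(name)}\\b`, "i").test(query)
  );
}
```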
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/webapp/app/routes/api.v1.query.ts` around lines 19-24: the detectTables function uses a dynamically built RegExp with s.name from querySchemas which flagged ReDoS risk; replace the dynamic regex with a safe approach: either escape regex metacharacters in s.name before constructing the RegExp, or avoid regex entirely by doing case-insensitive string checks (e.g., normalize query and check for `from ${s.name}` boundaries using indexOf or startsWith on tokens). Update detectTables to use the chosen safe method and keep the same return of matching s.name values.

apps/webapp/app/routes/api.v1.query.schema.ts (1)
6-30: Consider defining an explicit return type for `serializeColumn`.

The function returns `Record<string, unknown>`, which loses type safety. Since this is a public API response, consider defining a type that matches the expected output shape.

♻️ Proposed type definition

```diff
+type SerializedColumn = {
+  name: string;
+  type: string;
+  description?: string;
+  example?: string;
+  allowedValues?: unknown[];
+  coreColumn?: boolean;
+};
+
-function serializeColumn(col: ColumnSchema) {
-  const result: Record<string, unknown> = {
+function serializeColumn(col: ColumnSchema): SerializedColumn {
+  const result: SerializedColumn = {
     name: col.name,
     type: col.type,
   };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/webapp/app/routes/api.v1.query.schema.ts` around lines 6-30: the serializeColumn function currently returns a loose Record<string, unknown>, losing type safety for a public API; define a concrete return interface (e.g., SerializedColumn or ColumnResponse) that explicitly lists name: string, type: string (or the actual type enum), optional description?: string, example?: unknown, allowedValues?: unknown[], and coreColumn?: boolean, then change the serializeColumn signature to return that interface and adjust the result object construction to satisfy the new type (update any callers expecting the previous Record type to use the new SerializedColumn type if necessary); reference ColumnSchema and serializeColumn when making these edits.

packages/cli-v3/src/mcp/tools.ts (1)
24-29: Consider expanding `WRITE_TOOLS` to include other side-effect operations.

The current set only includes `deployTool`, `triggerTaskTool`, and `cancelRunTool`. However, other tools also perform write operations or have side effects:

- `startDevServerTool` / `stopDevServerTool`: manage dev server lifecycle
- `switchProfileTool`: modifies CLI profile state
- `initializeProjectTool`: creates configuration files
- `createProjectInOrgTool`: creates projects in the backend

If `--readonly` is intended to prevent all mutations, these should likely be included.

♻️ Proposed expansion

```diff
 /** Tool names that perform write operations (deploy, trigger, cancel). */
 const WRITE_TOOLS = new Set([
   deployTool.name,
   triggerTaskTool.name,
   cancelRunTool.name,
+  startDevServerTool.name,
+  stopDevServerTool.name,
+  switchProfileTool.name,
+  initializeProjectTool.name,
+  createProjectInOrgTool.name,
 ]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools.ts` around lines 24-29: WRITE_TOOLS currently only lists deployTool, triggerTaskTool, and cancelRunTool but omits other commands that perform mutations; update the WRITE_TOOLS Set to also include startDevServerTool, stopDevServerTool, switchProfileTool, initializeProjectTool, and createProjectInOrgTool (and ensure those symbols are imported where WRITE_TOOLS is declared) so the --readonly behavior covers dev-server lifecycle, profile switching, project initialization, and project creation as well.

apps/webapp/app/routes/api.v1.query.dashboards._index.ts (1)
7-29: Consider handling or logging unsupported layout versions.

The `serializeDashboard` function only processes `layout.version === "1"`. Dashboards with other versions will silently return an empty widgets array, which could cause confusion when new layout versions are added.

♻️ Proposed improvement

```diff
 function serializeDashboard(dashboard: BuiltInDashboard): DashboardSummary {
   const widgets: DashboardWidgetSummary[] = [];
   if (dashboard.layout.version === "1") {
     for (const [id, widget] of Object.entries(dashboard.layout.widgets)) {
       // Skip title widgets — they're just section headers
       if (widget.display.type === "title") continue;
       widgets.push({
         id,
         title: widget.title,
         query: widget.query,
         type: widget.display.type,
       });
     }
+  } else {
+    // Log when encountering unsupported layout versions to aid debugging
+    console.warn(
+      `Unsupported dashboard layout version: ${dashboard.layout.version} for dashboard: ${dashboard.key}`
+    );
   }

   return {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts` around lines 7-29: serializeDashboard currently only handles layout.version === "1" and silently returns an empty widgets array for other versions; add explicit handling for unsupported versions by adding an else branch after the existing if (dashboard.layout.version === "1") that logs a warning (e.g., console.warn or the app logger) including the dashboard.key and dashboard.layout.version so consumers and maintainers can see when an unknown layout version is encountered; keep existing behavior for version "1" (the loop over dashboard.layout.widgets) and do not change the returned shape, just surface a clear warning when dashboard.layout.version !== "1".

packages/cli-v3/src/mcp/tools/query.ts (1)
16-156: Add `@crumbs` tracing annotations for the new query tool flows.

New MCP handlers are added without `@crumbs` trace markers. As per coding guidelines, "Add crumbs as you write code using // @crumbs comments or //#region @crumbs blocks for agentcrumbs debug tracing".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools/query.ts` around lines 16-156: both new MCP handlers (the Query tool handler inside handler: toolHandler(QueryInput.shape, ...) and the getQuerySchemaTool.handler) are missing the required // @crumbs tracing annotations; add a short // @crumbs comment immediately before each handler's entry log (before ctx.logger?.log("calling query", ...) and before ctx.logger?.log("calling get_query_schema", ...)) to mark the start of the trace, and add a matching // @crumbs comment just before each handler returns (before the final return in the JSON branch and before the final return in the schema branch) so agentcrumbs can capture start/end for both executeQuery and getQuerySchema flows.

packages/cli-v3/src/mcp/tools/profiles.ts (1)
15-135: Add `@crumbs` tracing annotations for these new MCP tool paths.

The file adds new MCP handlers but no `@crumbs` comments/regions for tracing. As per coding guidelines, "Add crumbs as you write code using // @crumbs comments or //#region @crumbs blocks for agentcrumbs debug tracing".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli-v3/src/mcp/tools/profiles.ts` around lines 15 - 135, Add agentcrumbs tracing comments for the new MCP tool handlers: insert // `@crumbs` (or a // `#region` `@crumbs` ... // `#endregion` `@crumbs` block) at the start of each handler for the whoami handler (the top handler using toolHandler that logs "calling whoami"), listProfilesTool.handler (the handler that logs "calling list_profiles"), and switchProfileTool.handler (the handler that logs "calling switch_profile"); place the crumbs comment immediately inside each async handler function before any significant operations (e.g., before ctx.getAuth(), before listAuthConfigProfiles(), before ctx.switchProfile()) so agentcrumbs can pick up the entry and important steps for tracing.
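As a rough illustration of the convention the guideline describes (the handler body below is a stand-in, not the real profiles or query tool):

```typescript
// Sketch: annotate the entry and exit of a new MCP handler with @crumbs
// comments so agentcrumbs can trace the flow end to end.
async function queryHandler(input: { query: string }): Promise<string> {
  // @crumbs query handler entry
  const rows = [`result for: ${input.query}`]; // stand-in for the real query call
  // @crumbs query handler returning JSON
  return JSON.stringify(rows);
}
```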
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli-v3/src/mcp/context.ts`:
- Around line 216-225: The project profile may not be available when tools are
first invoked because loadProjectProfile() is called after tools are registered;
update McpContext so the profile is loaded synchronously during construction
(call readMcpProjectConfig(cwd) inside the McpContext constructor and set
this.options.profile and log) or, alternatively, make the deployment/dev-server
tool handlers explicitly await a profile-ready promise exposed by McpContext
(add a this.profileLoaded Promise resolved by loadProjectProfile and have the
tool entrypoints await McpContext.profileLoaded before proceeding); modify
either the constructor and loadProjectProfile or the tool handler code paths so
tools cannot run with the default profile.
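The second option above, a profile-ready promise, can be sketched roughly as follows; the class and member names mirror the prompt, but the bodies are illustrative:

```typescript
// Sketch: McpContext exposes profileLoaded, resolved once loadProjectProfile()
// has run, so tool entrypoints can await it and never see the default profile
// when a project-scoped one exists.
class McpContext {
  profile = "default";
  readonly profileLoaded: Promise<void>;
  private resolveProfileLoaded!: () => void;

  constructor() {
    this.profileLoaded = new Promise((resolve) => {
      this.resolveProfileLoaded = resolve;
    });
  }

  async loadProjectProfile(readConfig: () => Promise<string | undefined>) {
    const configured = await readConfig();
    if (configured) this.profile = configured;
    this.resolveProfileLoaded(); // resolve even when no config was found
  }
}

// A tool handler waits for the profile before doing anything profile-scoped.
async function toolEntrypoint(ctx: McpContext): Promise<string> {
  await ctx.profileLoaded;
  return ctx.profile;
}
```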
In `@packages/cli-v3/src/mcp/tools/dashboards.ts`:
- Around line 7-10: The module-level dashboardCache currently holds a single {
data: ListDashboardsResponseBody; expiresAt } and is not keyed by
project/environment, causing cross-project stale results; change dashboardCache
into a keyed cache (e.g., Record<string, { data: ListDashboardsResponseBody;
expiresAt: number }>) and add a small helper to derive a cache key from
projectRef and environment (used by run_dashboard_query and any callers that
read/write the cache), update all reads to check the specific entry's expiresAt
against Date.now() and writes to set the keyed entry with TTL using
CACHE_TTL_MS, and ensure invalidation/clearing logic targets the specific key
rather than the whole module cache.
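A generic keyed TTL cache along the lines this prompt describes might look like the following; names and TTL are illustrative, not the actual dashboards.ts code:

```typescript
// Sketch: cache entries keyed by projectRef + environment so one project's
// dashboards are never served to another, with per-entry TTL eviction.
const CACHE_TTL_MS = 60_000;

type CacheEntry<T> = { data: T; expiresAt: number };

const dashboardCache = new Map<string, CacheEntry<unknown>>();

function cacheKey(projectRef: string, environment: string): string {
  return `${projectRef}:${environment}`;
}

function getCached<T>(key: string): T | undefined {
  const entry = dashboardCache.get(key);
  if (!entry) return undefined;
  if (entry.expiresAt <= Date.now()) {
    dashboardCache.delete(key); // evict only this stale entry
    return undefined;
  }
  return entry.data as T;
}

function setCached<T>(key: string, data: T, ttlMs = CACHE_TTL_MS): void {
  dashboardCache.set(key, { data, expiresAt: Date.now() + ttlMs });
}
```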
In `@packages/cli-v3/src/mcp/tools/profiles.ts`:
- Around line 107-132: The rollback message can be incorrect because
ctx.switchProfile(projectDir) swallows disk-write failures; update the
switch-and-verify flow in switchProfile usage so you detect persistence failures
and only claim "Reverted to ..." when the revert actually persisted. Concretely,
change the call to ctx.switchProfile(input.profile, projectDir) in the profile
switch path to either (a) use a variant/return value that surfaces disk-write
errors or (b) call a persist/save method after switching and await/check its
result; if that persist fails, throw or return an error immediately instead of
proceeding to authenticate. In the catch handler, when reverting with
ctx.switchProfile(previousProfile, projectDir) ensure you likewise verify that
the revert persisted (and if it didn't, include that failure in the error
response). Reference symbols: switchProfile, getCwd,
readAuthConfigCurrentProfileName, getAuth, respondWithError.
In `@packages/cli-v3/src/mcp/tools/query.ts`:
- Around line 8-9: The schema cache is a single unkeyed global (schemaCache) so
schemas for one projectRef/environment/branch can be returned to another; change
schemaCache from a single value to a keyed cache (e.g., Map<string, { data:
QuerySchemaResponseBody; expiresAt: number }>) and derive a deterministic key
from the request's projectRef, environment, and branch before storing or
reading; keep SCHEMA_CACHE_TTL_MS but check expiresAt per-key and evict/refresh
only that entry. Update every place that reads/writes schemaCache (including the
other occurrence that mirrors lines ~134-139) to use the keyed cache and the
same key derivation logic.
- Line 96: The current construction of desc (const desc = parts.join(".
").replace(/\|/g, "\\|");) only escapes pipe characters, so backslashes remain
unescaped and can produce ambiguous Markdown; change the transformation to first
escape backslashes (e.g., run .replace(/\\/g, "\\\\") on the result of
parts.join(". ")) before escaping pipes, ensuring desc has backslashes doubled
and then pipes escaped.
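The required ordering (backslashes first, pipes second) in a small form:

```typescript
// Escape a Markdown table cell: double backslashes BEFORE escaping pipes,
// otherwise the backslash added by the pipe escape would itself get doubled.
function escapeMarkdownCell(text: string): string {
  return text.replace(/\\/g, "\\\\").replace(/\|/g, "\\|");
}
```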
---
Nitpick comments:
In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts`:
- Around line 7-29: serializeDashboard currently only handles layout.version ===
"1" and silently returns an empty widgets array for other versions; add explicit
handling for unsupported versions by adding an else branch after the existing if
(dashboard.layout.version === "1') that logs a warning (e.g., console.warn or
the app logger) including the dashboard.key and dashboard.layout.version so
consumers and maintainers can see when an unknown layout version is encountered;
keep existing behavior for version "1" (the loop over dashboard.layout.widgets)
and do not change the returned shape—just surface a clear warning when
dashboard.layout.version !== "1".
In `@apps/webapp/app/routes/api.v1.query.schema.ts`:
- Around line 6-30: The serializeColumn function currently returns a loose
Record<string, unknown>, losing type safety for a public API; define a concrete
return interface (e.g., SerializedColumn or ColumnResponse) that explicitly
lists name: string, type: string (or the actual type enum), optional
description?: string, example?: unknown, allowedValues?: unknown[], and
coreColumn?: boolean, then change the serializeColumn signature to return that
interface and adjust the result object construction to satisfy the new type
(update any callers expecting the previous Record type to use the new
SerializedColumn type if necessary); reference ColumnSchema and serializeColumn
when making these edits.
In `@apps/webapp/app/routes/api.v1.query.ts`:
- Around line 19-24: The detectTables function uses a dynamically built RegExp
with s.name from querySchemas which flagged ReDoS risk; replace the dynamic
regex with a safe approach: either escape regex metacharacters in s.name before
constructing the RegExp, or avoid regex entirely by doing case-insensitive
string checks (e.g., normalize query and check for `from ${s.name}` boundaries
using indexOf or startsWith on tokens). Update detectTables to use the chosen
safe method and keep the same return of matching s.name values.
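A sketch of the regex-free variant (the token handling is an assumption, and the `join` case is added here beyond the prompt's `from` example):

```typescript
// Tokenize the query and look for a table name preceded by "from" (or "join"),
// avoiding any RegExp built from externally supplied strings.
function detectTables(query: string, tableNames: string[]): string[] {
  const tokens = query.toLowerCase().split(/[\s,();]+/).filter(Boolean);
  return tableNames.filter((name) => {
    const needle = name.toLowerCase();
    return tokens.some(
      (tok, i) => tok === needle && (tokens[i - 1] === "from" || tokens[i - 1] === "join")
    );
  });
}
```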
In `@packages/cli-v3/package.json`:
- Around line 82-83: The CLI utility filenames tools.test.ts and smoke.test.ts
conflict with test discovery; rename them (e.g., tools-cli.ts and smoke-cli.ts)
or move them into a scripts/ or cli/ directory, and update the npm scripts
mcp:test and mcp:smoke in package.json to point to the new filenames/paths
(references: mcp:test, mcp:smoke, tools.test.ts, smoke.test.ts).
In `@packages/cli-v3/src/dev/devSupervisor.ts`:
- Around line 766-769: The repeated try-catch used to safely access
DevRunController.workerFriendlyId (which throws when serverWorker is undefined)
should be extracted into a private helper like safeWorkerFriendlyId(controller:
DevRunController): string | undefined; replace the inline try/catch occurrences
(the filter in the activeRunIds computation, and the other two places around
lines referencing workerFriendlyId) to call this helper, and update the
filter/map logic to use the helper's return value for comparisons (e.g., compare
safeWorkerFriendlyId(c) === friendlyId) so the code is clearer and DRY.
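The suggested helper is essentially a one-liner; a sketch with the controller type simplified to just the getter:

```typescript
// Safely read a getter that throws while serverWorker is still undefined.
function safeWorkerFriendlyId(controller: { workerFriendlyId: string }): string | undefined {
  try {
    return controller.workerFriendlyId;
  } catch {
    return undefined;
  }
}
```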
In `@packages/cli-v3/src/mcp/context.ts`:
- Around line 24-32: The readMcpProjectConfig function parses JSON and casts to
McpProjectConfig without validation; add a zod schema (e.g.,
McpProjectConfigSchema) that models the expected shape and use
McpProjectConfigSchema.safeParse(parsed) inside readMcpProjectConfig to validate
the content after JSON.parse; if safeParse fails, handle gracefully (return
undefined or log the error) instead of casting, and export/use
McpProjectConfigSchema where needed so downstream code can rely on validated
McpProjectConfig instances rather than unchecked casts.
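The real fix would define a zod `McpProjectConfigSchema`; as a dependency-free illustration of the same validate-don't-cast idea (field names are assumptions):

```typescript
type McpProjectConfig = { profile?: string; projectRef?: string };

// Parse and validate instead of casting; anything malformed degrades to undefined.
function parseMcpProjectConfig(raw: string): McpProjectConfig | undefined {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return undefined; // unreadable file: behave as if no config exists
  }
  if (typeof parsed !== "object" || parsed === null) return undefined;
  const candidate = parsed as Record<string, unknown>;
  if (candidate.profile !== undefined && typeof candidate.profile !== "string") return undefined;
  if (candidate.projectRef !== undefined && typeof candidate.projectRef !== "string") return undefined;
  return {
    profile: candidate.profile as string | undefined,
    projectRef: candidate.projectRef as string | undefined,
  };
}
```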
In `@packages/cli-v3/src/mcp/smoke.test.ts`:
- Line 214: The test currently hardcodes widgetId ("9lDDdebQ") in the test named
run_dashboard_query; change it to not be environment-specific by either reading
a configurable env var (e.g., process.env.DASHBOARD_WIDGET_ID) or deriving the
widgetId from the prior list_dashboards test output: have list_dashboards return
or store the dashboard/widget list in a shared variable, find the desired
dashboard (by key "overview" or widget title) and extract its widget id, then
pass that value into the run_dashboard_query invocation (replace the literal
widgetId in the call that uses commonArgs). Ensure test ordering or shared state
guarantees list_dashboards runs first so the derived widgetId is available.
- Around line 190-198: Replace the brittle post-hoc flip of the test result in
the test("get_query_schema", ...) block with a dedicated helper (e.g.,
testExpectedError) that invokes the tool via callTool and asserts the error
shape/text up front; specifically, implement testExpectedError(name, toolArgs,
expectedErrorSubstring) which calls callTool(client, name, toolArgs), checks
isError and text.includes(expectedErrorSubstring), and pushes a pass/fail entry
into results (using the same result fields: tool, status, duration, preview or
error) instead of mutating the last results element; then replace the current
manual flipping code around test("get_query_schema", { ...commonArgs, table:
"nonexistent" }) to call testExpectedError("get_query_schema", { ...commonArgs,
table: "nonexistent" }, "not found").
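A sketch of such a helper, with the harness shapes (`callTool` result, `results` rows) assumed rather than taken from the actual test file:

```typescript
type ToolResult = { isError: boolean; text: string };
type ResultRow = { tool: string; status: "pass" | "fail"; error?: string };

// Invoke a tool that is EXPECTED to error, asserting the error shape up front
// instead of flipping the last result entry after the fact.
async function testExpectedError(
  callTool: (name: string, args: Record<string, unknown>) => Promise<ToolResult>,
  results: ResultRow[],
  name: string,
  args: Record<string, unknown>,
  expectedErrorSubstring: string
): Promise<void> {
  const res = await callTool(name, args);
  const ok = res.isError && res.text.includes(expectedErrorSubstring);
  results.push(
    ok
      ? { tool: name, status: "pass" }
      : {
          tool: name,
          status: "fail",
          error: `expected error containing "${expectedErrorSubstring}", got: ${res.text}`,
        }
  );
}
```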
- Around line 60-61: The test currently uses a hardcoded projectRef
("proj_rrkpdguyagvsoktglnod") which breaks portability; update the logic around
the projectRef and projIdx variables so projectRef is taken from args when
"--project-ref" is present, otherwise read a project reference from the
developer's CLI profile or a PROJECT_REF environment variable (and fail with a
clear error if none is found). Locate the variables projectRef and projIdx in
the smoke.test.ts setup, replace the hardcoded default with code that prefers
args[projIdx+1], then falls back to process.env.PROJECT_REF or a helper that
queries the current CLI profile/projects, and ensure the test logs a helpful
message and exits if no valid projectRef can be resolved.
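The fallback chain can be sketched as a small resolver (names and error text are illustrative; the CLI-profile lookup is omitted):

```typescript
// Prefer an explicit --project-ref argument, then the PROJECT_REF env var,
// and fail with a clear message when neither is available.
function resolveProjectRef(args: string[], env: Record<string, string | undefined>): string {
  const idx = args.indexOf("--project-ref");
  const fromArgs = idx !== -1 ? args[idx + 1] : undefined;
  if (fromArgs) return fromArgs;
  const fromEnv = env.PROJECT_REF;
  if (fromEnv) return fromEnv;
  throw new Error("No project ref: pass --project-ref <ref> or set PROJECT_REF");
}
```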
In `@packages/cli-v3/src/mcp/tools.ts`:
- Around line 24-29: WRITE_TOOLS currently only lists deployTool,
triggerTaskTool, and cancelRunTool but omits other commands that perform
mutations; update the WRITE_TOOLS Set to also include startDevServerTool,
stopDevServerTool, switchProfileTool, initializeProjectTool, and
createProjectInOrgTool (and ensure those symbols are imported where WRITE_TOOLS
is declared) so the --readonly behavior covers dev-server lifecycle, profile
switching, project initialization, and project creation as well.
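With the set expanded, the `--readonly` behavior reduces to a Set lookup at registration time. A sketch (the registered tool-name strings are assumptions based on the tool symbols above):

```typescript
type Tool = { name: string };

// Expanded write-tool set per the suggestion above.
const WRITE_TOOLS = new Set([
  "deploy",
  "trigger_task",
  "cancel_run",
  "start_dev_server",
  "stop_dev_server",
  "switch_profile",
  "initialize_project",
  "create_project_in_org",
]);

// In --readonly mode, hide every tool whose name is in WRITE_TOOLS.
function visibleTools(tools: Tool[], readOnly: boolean): Tool[] {
  return readOnly ? tools.filter((t) => !WRITE_TOOLS.has(t.name)) : tools;
}
```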
In `@packages/cli-v3/src/mcp/tools/devServer.ts`:
- Around line 28-35: StartDevServerInput is currently a plain object but should
be a Zod schema for consistency and better typing; replace the plain object with
a z.object(...) schema that contains the same configPath property
(z.string().describe(...).optional()) and then pass that schema into toolHandler
(the existing call to toolHandler(StartDevServerInput, ...) can remain unchanged
after swapping in the z.object). Ensure the exported/used symbol name remains
StartDevServerInput.
- Around line 199-209: The devServerStatusTool currently duplicates the schema
by defining a plain object under inputSchema and a zod schema inside the
handler; consolidate by extracting a single zod schema (e.g.,
devServerStatusInputSchema) that defines lines:
z.number().int().default(50).describe(...), then use that same schema reference
for both inputSchema and the handler's toolHandler call so input validation is
defined once; update any imports/exports as needed and ensure the handler still
receives the parsed input shape from the shared schema.
- Around line 185-188: The code currently calls devProcess.kill() then
immediately sets devState = "stopped" and devProcess = null which can race with
the child process still running; update the shutdown flow in the stop logic to
call devProcess.kill() and then wait for the child to emit 'exit' or 'close' (or
await a Promise that resolves on that event) before setting devState = "stopped"
and devProcess = null; also add a short timeout fallback to force the null/state
update and handle errors so the stop routine cannot hang indefinitely. Ensure
you reference and modify the logic around devProcess.kill(),
devProcess.once('exit'/'close') (or equivalent Promise), and the assignments to
devState and devProcess.
In `@packages/cli-v3/src/mcp/tools/profiles.ts`:
- Around line 15-135: Add agentcrumbs tracing comments for the new MCP tool
handlers: insert // `@crumbs` (or a // `#region` `@crumbs` ... // `#endregion` `@crumbs`
block) at the start of each handler for the whoami handler (the top handler
using toolHandler that logs "calling whoami"), listProfilesTool.handler (the
handler that logs "calling list_profiles"), and switchProfileTool.handler (the
handler that logs "calling switch_profile"); place the crumbs comment
immediately inside each async handler function before any significant operations
(e.g., before ctx.getAuth(), before listAuthConfigProfiles(), before
ctx.switchProfile()) so agentcrumbs can pick up the entry and important steps
for tracing.
In `@packages/cli-v3/src/mcp/tools/query.ts`:
- Around line 16-156: Both new MCP handlers (the Query tool handler inside
handler: toolHandler(QueryInput.shape, ...) and the getQuerySchemaTool.handler)
are missing the required // `@crumbs` tracing annotations; add a short // `@crumbs`
comment immediately before each handler’s entry log (before
ctx.logger?.log("calling query", ...) and before ctx.logger?.log("calling
get_query_schema", ...)) to mark the start of the trace, and add a matching //
`@crumbs` comment just before each handler returns (before the final return in the
JSON branch and before the final return in the schema branch) so agentcrumbs can
capture start/end for both executeQuery and getQuerySchema flows.
In `@packages/cli-v3/src/mcp/tools/runs.ts`:
- Around line 10-41: The current readLinesPage reads the whole file into memory
which is inefficient for large traces; modify readLinesPage to stream the file
instead (use fs.createReadStream and readline.createInterface) and iterate lines
skipping until the offset, collect up to limit lines, and stop the stream once
collected to avoid loading the entire file; keep the existing return shape ({
lines, hasMore, nextCursor }) and rely on traceCache.totalLines (or compute line
count once when caching the file) to determine hasMore/nextCursor; update any
callers that expect nextCursor as a string to continue working with the same
semantics.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 431de69d-3fe3-449e-b855-8826375a6247
📒 Files selected for processing (33)
.changeset/fix-list-deploys-nullable.md
.changeset/mcp-query-tools.md
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
apps/webapp/app/routes/api.v1.query.schema.ts
apps/webapp/app/routes/api.v1.query.ts
apps/webapp/app/services/authorization.server.ts
apps/webapp/test/authorization.test.ts
packages/cli-v3/package.json
packages/cli-v3/src/commands/mcp.ts
packages/cli-v3/src/dev/backgroundWorker.ts
packages/cli-v3/src/dev/devSession.ts
packages/cli-v3/src/dev/devSupervisor.ts
packages/cli-v3/src/dev/devWatchdog.ts
packages/cli-v3/src/mcp/config.ts
packages/cli-v3/src/mcp/context.ts
packages/cli-v3/src/mcp/formatters.ts
packages/cli-v3/src/mcp/mintlifyClient.ts
packages/cli-v3/src/mcp/schemas.ts
packages/cli-v3/src/mcp/smoke.test.ts
packages/cli-v3/src/mcp/tools.test.ts
packages/cli-v3/src/mcp/tools.ts
packages/cli-v3/src/mcp/tools/dashboards.ts
packages/cli-v3/src/mcp/tools/deploys.ts
packages/cli-v3/src/mcp/tools/devServer.ts
packages/cli-v3/src/mcp/tools/previewBranches.ts
packages/cli-v3/src/mcp/tools/profiles.ts
packages/cli-v3/src/mcp/tools/query.ts
packages/cli-v3/src/mcp/tools/runs.ts
packages/cli-v3/src/mcp/tools/tasks.ts
packages/cli-v3/src/utilities/configFiles.ts
packages/core/src/v3/apiClient/index.ts
packages/core/src/v3/schemas/api.ts
packages/core/src/v3/schemas/query.ts
… persistence race, escape backslashes in markdown
Actionable comments posted: 1
🧹 Nitpick comments (2)
apps/webapp/app/routes/api.v1.query.dashboards._index.ts (2)
45-51: Avoid silently dropping dashboards on any exception.

Catching all errors and filtering `null` can hide real regressions (bad key, presenter changes) and return partial data without a signal. Prefer fail-fast, or at minimum log and tag partial responses.

Suggested change (fail-fast)

- const dashboards = BUILT_IN_DASHBOARD_KEYS.map((key) => {
-   try {
-     return serializeDashboard(builtInDashboard(key));
-   } catch {
-     return null;
-   }
- }).filter((d): d is DashboardSummary => d !== null);
+ const dashboards = BUILT_IN_DASHBOARD_KEYS.map((key) =>
+   serializeDashboard(builtInDashboard(key))
+ );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts` around lines 45 - 51, The current mapping over BUILT_IN_DASHBOARD_KEYS swallows all exceptions by returning null from within the try/catch around serializeDashboard(builtInDashboard(key)), which hides real errors; change this to fail-fast by removing the broad try/catch so that serializeDashboard(builtInDashboard(key)) throws up to the caller (or, if you must support partial responses, catch exceptions but log them via your logger with the key and rethrow or mark the response as partial), ensuring references to BUILT_IN_DASHBOARD_KEYS, builtInDashboard, serializeDashboard, and DashboardSummary are preserved so the mapping either returns only valid DashboardSummary items or surfaces errors immediately.
1-54: Add `@crumbs` tracing annotations in this new route.

The new route currently has no `@crumbs` markers for agentcrumbs/debug tracing.

As per coding guidelines:
**/*.{ts,tsx,js}: Add crumbs as you write code using // `@crumbs` comments or // `#region` `@crumbs` blocks for agentcrumbs debug tracing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts` around lines 1 - 54, Add agentcrumbs annotations by inserting // `@crumbs` comments around the key execution points: add a top-level "// `@crumbs`: route=api.v1.query.dashboards._index loader=start" just above the exported loader, a "// `@crumbs`: action=serializeDashboard start" at the start of the serializeDashboard function and a matching end marker after its return, and add inline "// `@crumbs`" markers inside the map over BUILT_IN_DASHBOARD_KEYS (before calling builtInDashboard(key)) and inside the catch block (to annotate failures); reference the loader export, serializeDashboard function, BUILT_IN_DASHBOARD_KEYS constant, and builtInDashboard(key) call when placing these comments so agentcrumbs can trace route start, per-dashboard processing, serialization, and errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts`:
- Around line 52-53: The route currently returns json({ dashboards }) without
runtime validation; add a Zod schema (e.g., DashboardSchema for each item and
DashboardsResponseSchema = z.object({ dashboards: z.array(DashboardSchema) }))
and validate the dashboards variable with DashboardsResponseSchema.parse or
safeParse before calling json. Import z from 'zod', validate the data produced
by whatever function populates dashboards (referencing the dashboards variable
and the json(...) return), and on validation failure throw or return an
appropriate error Response (500) so only validated data is returned.
---
Nitpick comments:
In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts`:
- Around line 45-51: The current mapping over BUILT_IN_DASHBOARD_KEYS swallows
all exceptions by returning null from within the try/catch around
serializeDashboard(builtInDashboard(key)), which hides real errors; change this
to fail-fast by removing the broad try/catch so that
serializeDashboard(builtInDashboard(key)) throws up to the caller (or, if you
must support partial responses, catch exceptions but log them via your logger
with the key and rethrow or mark the response as partial), ensuring references
to BUILT_IN_DASHBOARD_KEYS, builtInDashboard, serializeDashboard, and
DashboardSummary are preserved so the mapping either returns only valid
DashboardSummary items or surfaces errors immediately.
- Around line 1-54: Add agentcrumbs annotations by inserting // `@crumbs` comments
around the key execution points: add a top-level "// `@crumbs`:
route=api.v1.query.dashboards._index loader=start" just above the exported
loader, a "// `@crumbs`: action=serializeDashboard start" at the start of the
serializeDashboard function and a matching end marker after its return, and add
inline "// `@crumbs`" markers inside the map over BUILT_IN_DASHBOARD_KEYS (before
calling builtInDashboard(key)) and inside the catch block (to annotate
failures); reference the loader export, serializeDashboard function,
BUILT_IN_DASHBOARD_KEYS constant, and builtInDashboard(key) call when placing
these comments so agentcrumbs can trace route start, per-dashboard processing,
serialization, and errors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: e09e7b96-f6ef-406c-86d4-a6945ba98181
📒 Files selected for processing (1)
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
📜 Review details
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
**/*.{ts,tsx}: Use task export syntax: export const myTask = task({ id: 'my-task', run: async (payload) => { ... } })
Use Run Engine 2.0 (@internal/run-engine) and redis-worker for all new work - avoid DEPRECATED zodworker (Graphile-worker wrapper)
Prisma 6.14.0 client and schema use PostgreSQL in internal-packages/database - import only from Prisma client
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the `env` export of `env.server.ts` instead of directly accessing `process.env` in the Trigger.dev webapp
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from `@trigger.dev/core` in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/otel-metrics.mdc)
**/*.ts: When creating or editing OTEL metrics (counters, histograms, gauges), ensure metric attributes have low cardinality by using only enums, booleans, bounded error codes, or bounded shard IDs
Do not use high-cardinality attributes in OTEL metrics such as UUIDs/IDs (envId, userId, runId, projectId, organizationId), unbounded integers (itemCount, batchSize, retryCount), timestamps (createdAt, startTime), or free-form strings (errorMessage, taskName, queueName)
When exporting OTEL metrics via OTLP to Prometheus, be aware that the exporter automatically adds unit suffixes to metric names (e.g., 'my_duration_ms' becomes 'my_duration_ms_milliseconds', 'my_counter' becomes 'my_counter_total'). Account for these transformations when writing Grafana dashboards or Prometheus queries
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
**/*.{js,ts,jsx,tsx,json,md,yaml,yml}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier before committing
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
apps/webapp/app/routes/**/*.ts
📄 CodeRabbit inference engine (apps/webapp/CLAUDE.md)
Use Remix flat-file route convention with dot-separated segments (e.g., `api.v1.tasks.$taskId.trigger.ts` for `/api/v1/tasks/:taskId/trigger`)
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
apps/{webapp,supervisor}/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
When modifying only server components (apps/webapp/, apps/supervisor/) with no package changes, add a .server-changes/ file instead of a changeset
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
**/*.{ts,tsx,js}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js}: Always import from `@trigger.dev/sdk` for Trigger.dev tasks - never use `@trigger.dev/sdk/v3` or deprecated client.defineJob
Import subpaths only from `@trigger.dev/core`, never import from root
Add crumbs as you write code using // `@crumbs` comments or // `#region` `@crumbs` blocks for agentcrumbs debug tracing
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
apps/webapp/**/*.{ts,tsx,jsx,js}
📄 CodeRabbit inference engine (CLAUDE.md)
Remix 2.1.0 is used in apps/webapp for the main API, dashboard, and orchestration with Express server
Files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: packages/cli-v3/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:34.140Z
Learning: Applies to packages/cli-v3/src/mcp/**/* : Provide an MCP server implementation in `src/mcp/` for AI-assisted task development
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-13T13:37:49.544Z
Learning: Applies to apps/webapp/**/*.{ts,tsx,jsx,js} : Remix 2.1.0 is used in apps/webapp for the main API, dashboard, and orchestration with Express server
📚 Learning: 2026-03-13T13:37:49.544Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-13T13:37:49.544Z
Learning: Applies to apps/webapp/**/*.{ts,tsx,jsx,js} : Remix 2.1.0 is used in apps/webapp for the main API, dashboard, and orchestration with Express server
Applied to files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
📚 Learning: 2026-03-02T12:42:56.114Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: apps/webapp/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:42:56.114Z
Learning: Applies to apps/webapp/app/routes/**/*.ts : Use Remix flat-file route convention with dot-separated segments (e.g., `api.v1.tasks.$taskId.trigger.ts` for `/api/v1/tasks/:taskId/trigger`)
Applied to files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Applied to files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
📚 Learning: 2026-02-11T16:50:14.167Z
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3019
File: apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.dashboards.$dashboardId.widgets.tsx:126-131
Timestamp: 2026-02-11T16:50:14.167Z
Learning: In apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.dashboards.$dashboardId.widgets.tsx, MetricsDashboard entities are intentionally scoped to the organization level, not the project level. The dashboard lookup should filter by organizationId only (not projectId), allowing dashboards to be accessed across projects within the same organization. The optional projectId field on MetricsDashboard serves other purposes and should not be used as an authorization constraint.
Applied to files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
📚 Learning: 2026-02-04T16:34:48.876Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: apps/webapp/app/routes/vercel.connect.tsx:13-27
Timestamp: 2026-02-04T16:34:48.876Z
Learning: In apps/webapp/app/routes/vercel.connect.tsx, configurationId may be absent for "dashboard" flows but must be present for "marketplace" flows. Enforce this with a Zod superRefine and pass installationId to repository methods only when configurationId is defined (omit the field otherwise).
Applied to files:
apps/webapp/app/routes/api.v1.query.dashboards._index.ts
🔇 Additional comments (1)
apps/webapp/app/routes/api.v1.query.dashboards._index.ts (1)
12-23: Serializer mapping looks correct for v1 dashboard widgets.

The field mapping and `title` widget exclusion are clean and consistent with the API response model.
return json({ dashboards });
}
🧩 Analysis chain
🏁 Script executed:
# First, find and examine the file
find . -name "api.v1.query.dashboards._index.ts" -type f
Repository: triggerdotdev/trigger.dev
Length of output: 127
🏁 Script executed:
# List core schema files to understand structure
fd -e ts -e tsx "schemas" packages/core --type f | head -20
Length of output: 262
🏁 Script executed:
# Search for ListDashboardsResponseBody in the codebase
rg "ListDashboardsResponseBody" --type ts --type tsx
Length of output: 96
🏁 Script executed:
# Read the file to see the actual code
cat -n apps/webapp/app/routes/api.v1.query.dashboards._index.ts
Length of output: 2057
🏁 Script executed:
# Search for ListDashboardsResponseBody with correct syntax
rg "ListDashboardsResponseBody" --type ts
Length of output: 868
🏁 Script executed:
# Check the ListDashboardsResponseBody schema definition
cat -n packages/core/src/v3/schemas/query.ts | grep -A 5 "ListDashboardsResponseBody"
Length of output: 507
🏁 Script executed:
# Check other routes in apps/webapp for validation patterns
rg "\.parse\(" apps/webapp/app/routes --type ts -B 2 -A 2 | head -40
Length of output: 3271
🏁 Script executed:
# Check if response body validation is done in other routes
rg "json\(" apps/webapp/app/routes --type ts -B 3 -A 1 | grep -E "(parse|parse\()" -B 3 | head -50
Length of output: 3534
🏁 Script executed:
# Look for response schema validation patterns in apps/webapp routes
rg "ResponseBody.*parse" apps/webapp/app/routes --type ts -B 2 -A 2
Length of output: 51
🏁 Script executed:
# Check if there are patterns of validating response bodies before json() in webapp routes
rg "json\(" apps/webapp/app/routes --type ts -B 5 | grep -E "(ResponseBody|response.*parse|\.parse.*json)" -B 3 -A 1
Length of output: 4513
🏁 Script executed:
# Check the schemas file to see what's exported and available
head -150 packages/core/src/v3/schemas/query.ts | tail -50
Length of output: 1214
🏁 Script executed:
# Look at similar dashboard routes to see if they validate responses
find apps/webapp/app/routes -name "*dashboard*" -type f
Length of output: 601
🏁 Script executed:
# Check the similar dashboard route to see if it validates responses
cat -n apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.dashboards.$dashboardId.widgets.tsx | head -150
Length of output: 178
🏁 Script executed:
# Check for response validation patterns in the webapp more broadly
rg "\.parse\(" apps/webapp/app/routes --type ts -A 1 | grep -E "json\(" -B 2 | head -30
Length of output: 530
🏁 Script executed:
# Check how ListDashboardsResponseBody is exported from @trigger.dev/core/v3/schemas
rg "export.*ListDashboardsResponseBody" packages/core --type ts
Length of output: 282
🏁 Script executed:
# Check if the subpath export exists in core package.json
cat packages/core/package.json | grep -A 20 "exports"
Length of output: 2527
🏁 Script executed:
# Check if ListDashboardsResponseBody is exported from v3/schemas/index.ts
grep "ListDashboardsResponseBody" packages/core/src/v3/schemas/index.ts
Length of output: 51
🏁 Script executed:
# Check what query.ts exports are re-exported from index.ts
cat packages/core/src/v3/schemas/index.ts | grep -E "(export.*from|ListDashboards)"
Length of output: 614
🏁 Script executed:
# Verify that query.js exports ListDashboardsResponseBody
head -5 packages/core/src/v3/schemas/query.js 2>/dev/null || echo "Checking TypeScript source..."
rg "^export" packages/core/src/v3/schemas/query.ts | head -20
Length of output: 1386
🏁 Script executed:
# Confirm the import path will work by checking the v3/schemas export
cat packages/core/src/v3/index.ts | head -40
Length of output: 1572
Add Zod validation to the response body.
This public API route returns a typed response but skips runtime validation. Per the coding guidelines for apps/webapp, validate the response using Zod before returning it to enforce contract safety.
Suggested change
-import type { DashboardSummary, DashboardWidgetSummary } from "@trigger.dev/core/v3/schemas";
+import {
+ ListDashboardsResponseBody,
+ type DashboardSummary,
+ type DashboardWidgetSummary,
+} from "@trigger.dev/core/v3/schemas";
@@
- return json({ dashboards });
+ return json(ListDashboardsResponseBody.parse({ dashboards }));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/api.v1.query.dashboards._index.ts` around lines 52 -
53, The route currently returns json({ dashboards }) without runtime validation;
add a Zod schema (e.g., DashboardSchema for each item and
DashboardsResponseSchema = z.object({ dashboards: z.array(DashboardSchema) }))
and validate the dashboards variable with DashboardsResponseSchema.parse or
safeParse before calling json. Import z from 'zod', validate the data produced
by whatever function populates dashboards (referencing the dashboards variable
and the json(...) return), and on validation failure throw or return an
appropriate error Response (500) so only validated data is returned.
Summary
Major expansion of the MCP server (14 → 25 tools), context efficiency optimizations, new API endpoints, and a fix for the dev CLI leaking build directories on disk.
New MCP tools
- `get_query_schema`, `query`, `list_dashboards`, `run_dashboard_query` — query your data using TRQL directly from AI assistants
- `whoami`, `list_profiles`, `switch_profile` — see and switch CLI profiles per-project (persisted to `.trigger/mcp.json`)
- `start_dev_server`, `stop_dev_server`, `dev_server_status` — start/stop `trigger dev` and stream build output
- `get_task_schema` — get payload schema for a specific task (split out from `get_current_worker` to reduce context)

New API endpoints
- `GET /api/v1/query/schema` — discover TRQL tables and columns (server-driven, multi-table)
- `GET /api/v1/query/dashboards` — list built-in dashboard widgets and their queries

New features
- `--readonly` flag — hides write tools (`deploy`, `trigger_task`, `cancel_run`) so agents can't make changes
- `read:query` JWT scope — new authorization scope for query endpoints, with per-table granularity (`read:query:runs`, `read:query:llm_metrics`, etc.)
- `get_run_details` now paginates trace events via cursor, caching the full trace in a temp file so subsequent pages don't re-fetch
- `readOnlyHint`/`destructiveHint` annotations for clients that support them
- `switch_profile` saves to `.trigger/mcp.json` (gitignored), automatically loaded on next MCP server start

Context optimizations
- `get_query_schema` requires a table name — returns one table's schema instead of all tables (60-80% fewer tokens)
- `get_current_worker` no longer inlines payload schemas — use `get_task_schema` for specific tasks
- `cancel_run`, `list_deploys`, `list_preview_branches` formatted as text instead of raw `JSON.stringify()`

Bug fixes
- `search_docs` failing due to renamed upstream Mintlify tool (`SearchTriggerDev` → `search_trigger_dev`)
- `list_deploys` failing when deployments have null `runtime`/`runtimeVersion` fields (fixes "MCP server: list_deploys fails with validation error on null runtime/runtimeVersion" #3139)
- `list_preview_branches` crashing due to incorrect response shape access
- `metrics` table column documented as `value` instead of `metric_value` in query docs
- `/api/v1/query` not accepting JWT auth (added `allowJWT: true`)

Dev CLI build directory fix
The dev CLI was leaking `build-*` directories in `.trigger/tmp/` on every rebuild, accumulating hundreds of MB over time (842MB observed). Three layers of protection added:
- `.trigger/tmp/` — when it detects the parent CLI was killed
- `clearTmpDirs()` wipes any remaining orphans

Test plan
- `pnpm run mcp:smoke` — 17 automated smoke tests for all read-only MCP tools
- `pnpm run mcp:test list` — verify 25 tools registered (21 in `--readonly` mode)
- `pnpm run mcp:test --readonly list` — verify write tools hidden
- `.trigger/tmp/`
- `GET /api/v1/query/schema`, `GET /api/v1/query/dashboards`

🤖 Generated with Claude Code