Conversation
- Remove packageManager and engines from root package.json
- Add workspaces to root package.json (replaces pnpm-workspace.yaml)
- Remove engines from packages/chronicle/package.json
- Delete pnpm-workspace.yaml and pnpm-lock.yaml
- Generate bun.lock via bun install

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Create build-cli.ts using Bun.build() API (target: node, format: esm)
- Update build:cli script to use bun build-cli.ts
- Delete tsup.config.ts
- Remove tsup from devDependencies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Convert ~54 relative imports across 33 files to use @/* aliases (tsconfig paths already configured). The only remaining relative import is ../../.source/server (outside src/).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Rewrite Dockerfile with oven/bun:1.3 base image
- Replace pnpm with bun install/build
- Add serve command (build + start) for production containers
- Add /api/health endpoint for K8s readiness/liveness probes
- Default CMD changed from dev to serve

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough — Summary by CodeRabbit

Migrates tooling from pnpm/tsup to Bun (Dockerfile, package.json, build scripts) and standardizes imports to `@/*` aliases.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant CLI as Chronicle CLI
  participant OS as OS / Signals
  participant Builder as Next.js Build (child)
  participant Server as Next.js Start (prod server)
  User->>CLI: run `chronicle serve --port <n> --content <dir>`
  CLI->>CLI: resolve content dir, load CLI config, set env CHRONICLE_CONTENT_DIR
  CLI->>Builder: spawn `next build`
  OS->>CLI: (SIGINT/SIGTERM) if received
  CLI->>Builder: forward signal -> terminate build
  Builder-->>CLI: exit (success)
  CLI->>Server: spawn `next start --port <n>`
  Server-->>User: serve production app on port <n>
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Dockerfile (1)
**18-35:** ⚠️ Potential issue | 🟠 Major

**Runner container runs as root — add a non-root `USER` directive.**

The previous Dockerfile had `chown -R node:node /app` and `USER node`, both of which were dropped. The Trivy scanner (DS-0002) flags this: no `USER` command with a non-root user is present in the image. Running production containers as root is a security risk and often blocked by Kubernetes `PodSecurityAdmission` policies.

The symlink into `/usr/local/bin/` must happen before the user is switched since it requires root, but that is already the case here.

🔒 Proposed fix — add non-root user to runner

```diff
 # --- runner ---
 FROM base AS runner
 WORKDIR /app/packages/chronicle

 COPY --from=builder /app /app

-RUN chmod +x bin/chronicle.js
-RUN ln -s /app/packages/chronicle/bin/chronicle.js /usr/local/bin/chronicle
-
-RUN mkdir -p /app/content && ln -s /app/content /app/packages/chronicle/content
+RUN chmod +x bin/chronicle.js \
+    && ln -s /app/packages/chronicle/bin/chronicle.js /usr/local/bin/chronicle \
+    && mkdir -p /app/content \
+    && ln -s /app/content /app/packages/chronicle/content \
+    && chown -R bun:bun /app

 VOLUME /app/content
 ENV CHRONICLE_CONTENT_DIR=./content

-WORKDIR /app/packages/chronicle
 EXPOSE 3000

+USER bun
+
 ENTRYPOINT ["chronicle"]
 CMD ["serve", "--port", "3000"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 18 - 35, The runner stage in the Dockerfile currently leaves the container running as root (ENTRYPOINT ["chronicle"]) which triggers security scanners; add a non-root user switch: after creating the /usr/local/bin/chronicle symlink and after ensuring ownership of /app is set (e.g., chown -R node:node /app), add a USER directive to run as that non-root user (USER node or similar) so the container no longer runs as root; ensure the symlink creation (ln -s /usr/local/bin/chronicle) remains before the USER change and that CHRONICLE_CONTENT_DIR and VOLUME usage still work under the non-root user.
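For readability, here is a consolidated sketch of the runner stage with the proposed fix already applied. It assumes the `oven/bun` base image ships an unprivileged `bun` user (swap in a user you create yourself if it does not):

```dockerfile
# --- runner --- (sketch; assumes the oven/bun image provides a `bun` user)
FROM base AS runner
WORKDIR /app/packages/chronicle

COPY --from=builder /app /app

# Do all root-only filesystem work in one layer, then hand /app to the
# unprivileged user: symlinking into /usr/local/bin and chown require root.
RUN chmod +x bin/chronicle.js \
    && ln -s /app/packages/chronicle/bin/chronicle.js /usr/local/bin/chronicle \
    && mkdir -p /app/content \
    && ln -s /app/content /app/packages/chronicle/content \
    && chown -R bun:bun /app

VOLUME /app/content
ENV CHRONICLE_CONTENT_DIR=./content
EXPOSE 3000

# Drop privileges last so everything above still runs as root.
USER bun

ENTRYPOINT ["chronicle"]
CMD ["serve", "--port", "3000"]
```

Note that mounted volumes over `/app/content` keep the host-side ownership, so the non-root user may additionally need matching UID/GID on the volume.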
🧹 Nitpick comments (2)
packages/chronicle/src/app/api/health/route.ts (1)
**1-3: Health probe is liveness-only; ensure K8s readiness semantics are met.**

The endpoint always returns `{ status: 'ok' }` unconditionally, making it suitable as a liveness probe (HTTP server is running). As a readiness probe it offers no guarantee that the app can actually serve content — it doesn't verify config loading, content source availability, or any other initialization.

For this documentation site the risk is low if all dependencies are resolved at build time, but if Chronicle reads config or OpenAPI specs from disk at runtime, K8s could route traffic before those are available. Consider either:

- Keeping this as a liveness probe and introducing a separate `/api/ready` endpoint that performs a lightweight sanity check (e.g., `loadConfig()` succeeds), or
- Documenting explicitly that this probe only signals HTTP availability, not content readiness.
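The liveness/readiness split above can be sketched as two route handlers. `checkReady` is a hypothetical stand-in for whatever real initialization check the app has (e.g. `loadConfig()`); only the liveness handler mirrors the actual route:

```typescript
// Liveness: only proves the HTTP server is up (matches the existing route).
export function GET(): Response {
  return Response.json({ status: 'ok' })
}

// Readiness sketch for a separate app/api/ready/route.ts. `checkReady` is a
// hypothetical hook standing in for real init checks such as loadConfig().
export function readyHandler(checkReady: () => boolean): Response {
  try {
    return checkReady()
      ? Response.json({ status: 'ready' })
      : Response.json({ status: 'not-ready' }, { status: 503 })
  } catch {
    // A throwing check also means "not ready", never a 500 to the probe.
    return Response.json({ status: 'not-ready' }, { status: 503 })
  }
}
```

Returning 503 (rather than throwing) keeps kubelet probe semantics simple: any non-2xx/3xx response marks the pod unready.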
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/app/api/health/route.ts` around lines 1 - 3, The GET handler in route.ts currently always returns a static Response.json({ status: 'ok' }) making it only a liveness probe; add a proper readiness check instead of or alongside this by implementing a new readiness endpoint (e.g., export function READY() or export function GET for /api/ready) which calls your initialization sanity checks (e.g., loadConfig(), validateOpenApiSpecs(), or another isReady() helper) and returns 200 only if those succeed and 503 otherwise; update the existing GET in route.ts to remain as the liveness probe or document its liveness-only semantics so Kubernetes can use /api/ready for readiness checks.

packages/chronicle/src/cli/index.ts (1)
**6-6: Stale `process.on` signal handlers in `serve` build phase.**

From the `serve.ts` implementation (provided as context), `process.on('SIGINT', ...)` and `process.on('SIGTERM', ...)` are registered against `buildChild` but are never removed after the build completes and `startChild` is spawned. During `next start`, SIGINT/SIGTERM will trigger both the stale no-op handler and whatever `attachLifecycleHandlers` registers — functionally correct only because killing an already-closed child is silently ignored, but it leaks listeners.

♻️ Suggested fix in `serve.ts`

```diff
-  process.on('SIGINT', () => buildChild.kill('SIGINT'))
-  process.on('SIGTERM', () => buildChild.kill('SIGTERM'))
+  const onSigInt = () => buildChild.kill('SIGINT')
+  const onSigTerm = () => buildChild.kill('SIGTERM')
+  process.on('SIGINT', onSigInt)
+  process.on('SIGTERM', onSigTerm)

   buildChild.on('close', (code) => {
+    process.off('SIGINT', onSigInt)
+    process.off('SIGTERM', onSigTerm)
     if (code !== 0) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/chronicle/src/cli/index.ts` at line 6, The serve build phase registers SIGINT/SIGTERM handlers on process for buildChild which remain after the build completes and startChild is spawned; update the build flow in serve (where buildChild is created) to register the handlers as named functions and remove them (using process.off/removeListener) immediately after the build finishes and before spawning startChild so the stale handlers do not leak; ensure the same named handlers are not reused by attachLifecycleHandlers for startChild to avoid double-registration and that any cleanup runs on both normal completion and error paths.
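The leak-free pattern boils down to keeping named handler references so they can be detached once the build phase ends. A self-contained sketch — `fakeChild` is a stand-in `EventEmitter`, not a real spawned `next build` process:

```typescript
import { EventEmitter } from 'node:events'

// Returns how many extra SIGINT listeners remain on `process` after the
// simulated build child closes. With proper cleanup this is 0.
export function runBuildPhase(): number {
  const fakeChild = new EventEmitter() // stand-in for the `next build` child
  const before = process.listenerCount('SIGINT')

  // Named handlers can be passed to process.off later; inline arrows cannot.
  const onSigInt = () => fakeChild.emit('kill', 'SIGINT')
  const onSigTerm = () => fakeChild.emit('kill', 'SIGTERM')
  process.on('SIGINT', onSigInt)
  process.on('SIGTERM', onSigTerm)

  // Detach the build-phase handlers when the build finishes, before the
  // start-phase handlers would be attached — the step serve.ts skips.
  fakeChild.on('close', () => {
    process.off('SIGINT', onSigInt)
    process.off('SIGTERM', onSigTerm)
  })

  fakeChild.emit('close') // build finished
  return process.listenerCount('SIGINT') - before
}
```

`process.once(...)`, as the inline comment suggests, is the lighter alternative when the handler should only ever fire a single time.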
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Around line 1-5: Restore the Node engine constraint in package.json by adding
an "engines" field with "node": ">=22" to ensure end users get a warning on
older Node versions; optionally add "packageManager": "bun@1.3.x" to pin Bun for
contributors and enable corepack enforcement. Update the top-level package.json
(the file that currently contains name/private/workspaces) to include these keys
so tooling and install-time checks can surface the required runtime and
contributor environment. Ensure the exact string is "node": ">=22" and the
optional packageManager matches "bun@1.3.x".
In `@packages/chronicle/build-cli.ts`:
- Around line 3-11: Bun.build(...) returns a BuildOutput with a success flag
that must be checked; update the build invocation in build-cli.ts to capture the
result (the value returned by Bun.build) and inspect result.success, logging
result.errors or result.logs via console.error or processLogger and calling
process.exit(1) (or throwing) when success is false so the script fails non‑zero
when the bundle is broken; reference the Bun.build call and the
BuildOutput.result.success/result.errors fields when adding the check.
In `@packages/chronicle/src/cli/commands/serve.ts`:
- Around line 33-34: Replace the persistent signal listeners for the build phase
with one-time handlers: change the process.on('SIGINT', ...) and
process.on('SIGTERM', ...) registrations that call buildChild.kill(...) to
process.once(...) so the build-phase SIGINT/SIGTERM handlers remove themselves
after first invocation; this prevents stale buildChild.kill(...) calls from
running later (and potentially throwing ESRCH) after buildChild has exited and
ensures attachLifecycleHandlers(startChild)'s listeners run without being
interrupted by stale handlers targeting startChild.
---
Outside diff comments:
In `@Dockerfile`:
- Around line 18-35: The runner stage in the Dockerfile currently leaves the
container running as root (ENTRYPOINT ["chronicle"]) which triggers security
scanners; add a non-root user switch: after creating the
/usr/local/bin/chronicle symlink and after ensuring ownership of /app is set
(e.g., chown -R node:node /app), add a USER directive to run as that non-root
user (USER node or similar) so the container no longer runs as root; ensure the
symlink creation (ln -s /usr/local/bin/chronicle) remains before the USER change
and that CHRONICLE_CONTENT_DIR and VOLUME usage still work under the non-root
user.
---
Duplicate comments:
In `@packages/chronicle/src/cli/commands/start.ts`:
- Around line 5-7: This file repeats the same alias import and ambient
PACKAGE_ROOT declaration problem as build.ts; update start.ts to match the fix
used in build.ts by avoiding duplicate '@/cli/utils' imports and removing or
replacing the ambiguous declare const PACKAGE_ROOT with the concrete
runtime/config source you used in build.ts (e.g., import or read from the shared
config/env helper). Specifically adjust where resolveContentDir, loadCLIConfig,
and attachLifecycleHandlers are imported so they come from the single canonical
module, and replace the declare const PACKAGE_ROOT usage with the same explicit
value-access approach you implemented in build.ts.
---
Nitpick comments:
In `@packages/chronicle/src/app/api/health/route.ts`:
- Around line 1-3: The GET handler in route.ts currently always returns a static
Response.json({ status: 'ok' }) making it only a liveness probe; add a proper
readiness check instead of or alongside this by implementing a new readiness
endpoint (e.g., export function READY() or export function GET for /api/ready)
which calls your initialization sanity checks (e.g., loadConfig(),
validateOpenApiSpecs(), or another isReady() helper) and returns 200 only if
those succeed and 503 otherwise; update the existing GET in route.ts to remain
as the liveness probe or document its liveness-only semantics so Kubernetes can
use /api/ready for readiness checks.
In `@packages/chronicle/src/cli/index.ts`:
- Line 6: The serve build phase registers SIGINT/SIGTERM handlers on process for
buildChild which remain after the build completes and startChild is spawned;
update the build flow in serve (where buildChild is created) to register the
handlers as named functions and remove them (using process.off/removeListener)
immediately after the build finishes and before spawning startChild so the stale
handlers do not leak; ensure the same named handlers are not reused by
attachLifecycleHandlers for startChild to avoid double-registration and that any
cleanup runs on both normal completion and error paths.
…cleanup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
package.json (1)
**1-8: LGTM — `workspaces` correctly replaces `pnpm-workspace.yaml`, and `engines.node` is preserved.**

One optional follow-up from the previous review cycle: consider pinning the Bun version with `"packageManager": "bun@1.3.x"` so corepack (or contributors) get a hard enforcement signal instead of silently using whatever Bun version is installed locally.

✨ Optional addition

```diff
 {
   "name": "chronicle",
   "private": true,
   "workspaces": ["packages/*", "examples/*"],
+  "packageManager": "bun@1.3.x",
   "engines": {
     "node": ">=22"
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 1 - 8, Add a top-level "packageManager" field to package.json to pin Bun (e.g. "bun@1.3.x") so contributors get enforced tooling via corepack; modify the existing package.json object that currently contains "name", "private", "workspaces", and "engines" by adding the "packageManager" key alongside them (ensure the value matches the desired Bun semver like "bun@1.3.x" and is consistent with your CI/tooling).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/chronicle/build-cli.ts`:
- Around line 3-11: The build currently bakes PACKAGE_ROOT using
path.resolve(import.meta.dir) at build time in Bun.build, which embeds an
absolute developer path; change this so PACKAGE_ROOT is resolved at runtime
instead: remove or stop defining PACKAGE_ROOT in the Bun.build.define block and
instead compute the package root inside the CLI code (e.g., in src/cli/index.ts
where you join PACKAGE_ROOT with 'node_modules/.bin/next') using a runtime base
computed from import.meta.url (or an equivalent __dirname fallback) so the
installed package can locate its own node_modules relative to where it is
installed rather than the build machine. Ensure you also remove the redundant
path.resolve(import.meta.dir) usage.
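The runtime resolution described above can be sketched as follows; the `'..'` hop count is an assumption (it depends on where the bundled CLI entry actually sits inside the published package), as is the `nextBin` helper name:

```typescript
import path from 'node:path'
import { fileURLToPath } from 'node:url'

// Resolve the installed package root at runtime instead of baking an
// absolute build-machine path in via Bun.build's `define` block.
// Assumption: the bundled entry lives one directory below the package root
// (e.g. <pkg>/bin/chronicle.js), hence the single '..' hop.
export const PACKAGE_ROOT = path.resolve(
  path.dirname(fileURLToPath(import.meta.url)),
  '..',
)

// Example use: locate the package-local Next.js binary relative to wherever
// the package ended up installed, not where it was built.
export const nextBin = path.join(PACKAGE_ROOT, 'node_modules', '.bin', 'next')
```

Because `import.meta.url` is evaluated when the installed bundle runs, the path stays correct regardless of which machine built the artifact.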
---
Duplicate comments:
In `@packages/chronicle/build-cli.ts`:
- Around line 13-15: The build-failure handling using result.success, iterating
result.logs with console.error, and calling process.exit(1) is correct; no
change required—keep the existing logic as-is (leave the result.success check,
the for-loop over result.logs, and the process.exit(1) call intact).
---
Nitpick comments:
In `@package.json`:
- Around line 1-8: Add a top-level "packageManager" field to package.json to pin
Bun (e.g. "bun@1.3.x") so contributors get enforced tooling via corepack; modify
the existing package.json object that currently contains "name", "private",
"workspaces", and "engines" by adding the "packageManager" key alongside them
(ensure the value matches the desired Bun semver like "bun@1.3.x" and is
consistent with your CI/tooling).
Summary
- `Bun.build()` API (target: node, output runs on node)
- `@/` path aliases across 33 files
- `oven/bun:1.3` base image
- `serve` CLI command (build + start) for production containers
- `/api/health` endpoint for K8s readiness/liveness probes

Notes

- (`#!/usr/bin/env node`)

Test plan

- `bun install` — clean install
- `bun build-cli.ts` — CLI builds
- `node bin/chronicle.js --help` — CLI runs on node
- `node bin/chronicle.js build` — Next.js production build passes (271 pages)
- `docker build .` — Docker image builds
- `docker run` with content volume — serve command works
- `/api/health` probe returns 200

🤖 Generated with Claude Code