(SP: 3) [SHOP] audit-driven e2e purchase readiness hardening (events, notifications, consent, returns) #378
Conversation
…pe helper with flagged atomic dual-write
…ment-init gate, and quote expiry/timeout sweeps
…anonical events (phase 4)
…oken usage (phase 5)
…ebhook/worker flows (phase 6)
Vercel for GitHub: 1 Skipped Deployment
Note: Reviews paused — this branch is under active development, so CodeRabbit has automatically paused this review to avoid an influx of comments from new commits. This behavior is configurable in the CodeRabbit settings.
📝 Walkthrough

Adds legal-consent capture, international quote and returns workflows, a notification outbox (projector, worker, Gmail transport), canonical dual-write plus dedupe/event writers, admin audit logging, many DB schema changes and migrations, status-token scopes, new admin/customer API routes, env helpers, and extensive tests/E2E config.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant OrdersAPI as Orders API
  participant QuotesSvc as Quotes Service
  participant DB
  participant Projector as Notification Projector
  participant Outbox as Notification Outbox
  Client->>OrdersAPI: POST /shop/orders/:id/quote/request
  OrdersAPI->>QuotesSvc: requestIntlQuote(orderId, requestId, actor)
  QuotesSvc->>DB: update shipping_quotes & shipping_events
  QuotesSvc->>Projector: enqueue projection (limit/async)
  Projector->>DB: insert notification_outbox rows (deduped)
  Note right of DB: Outbox rows available for outbox worker
```
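The "deduped" insert in the projection step above typically relies on a deterministic dedupe key plus a unique constraint, so re-running the projector is idempotent. The following sketch is illustrative only — the field names and key scheme are assumptions, not the PR's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of a projected notification; the real projector's
// fields may differ. This only illustrates deterministic deduplication.
interface ProjectedNotification {
  orderId: string;
  eventName: string;
  template: string;
  recipient: string;
}

// A stable dedupe key: identical inputs always map to the same key, so a
// UNIQUE constraint on dedupe_key lets "insert ... on conflict do nothing"
// make re-projection idempotent.
function dedupeKey(n: ProjectedNotification): string {
  return createHash("sha256")
    .update([n.orderId, n.eventName, n.template, n.recipient].join("\u0000"))
    .digest("hex");
}
```

With this scheme, scanning the same source events twice produces the same outbox keys, and the database constraint silently drops the duplicates.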
```mermaid
sequenceDiagram
  autonumber
  participant Runner as Notifications Run API
  participant Projector as Projector
  participant Worker as Outbox Worker
  participant Transport as Gmail Transport
  participant DB
  Runner->>Projector: runNotificationOutboxProjector(limit)
  Projector->>DB: scan events, insert outbox rows
  Runner->>Worker: runNotificationOutboxWorker(runId, limit, lease)
  Worker->>DB: claim batch (update lease_owner/expires)
  Worker->>Transport: sendShopNotificationEmail(to, subject, body)
  alt send ok
    Worker->>DB: markSent(outboxId)
  else transient error
    Worker->>DB: markFailedOrDeadLetter(outboxId, nextAttemptAt)
  end
```
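The `markFailedOrDeadLetter(outboxId, nextAttemptAt)` branch above implies a retry policy: each transient failure schedules a later attempt, and after enough attempts the row is dead-lettered. The constants and function names below are assumptions for illustration, not the PR's actual `computeBackoffSeconds` or thresholds:

```typescript
// Illustrative retry policy for the outbox worker flow sketched above.
const BASE_DELAY_S = 30;   // delay after the first failure (assumed)
const MAX_DELAY_S = 3600;  // cap at one hour (assumed)
const MAX_ATTEMPTS = 8;    // dead-letter threshold (assumed)

// attempt 1 -> 30s, 2 -> 60s, 3 -> 120s, ... capped at MAX_DELAY_S.
function nextAttemptDelaySeconds(attempt: number): number {
  return Math.min(BASE_DELAY_S * 2 ** (attempt - 1), MAX_DELAY_S);
}

// Decide whether a failed row is retried or parked as a dead letter.
function classifyFailure(attempt: number): "retry" | "dead_letter" {
  return attempt >= MAX_ATTEMPTS ? "dead_letter" : "retry";
}
```

The worker would compute `nextAttemptAt = now + nextAttemptDelaySeconds(attempt)` for retries, and stop rescheduling once `classifyFailure` returns `dead_letter`.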
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f9322a0f95
```ts
  eq(orders.fulfillmentMode, 'intl'),
  eq(orders.quoteStatus, 'accepted'),
  lte(orders.quotePaymentDeadlineAt, now)
)
```
Exclude paid orders from intl quote timeout sweep
sweepAcceptedIntlQuotePaymentTimeouts currently selects every intl order with quote_status='accepted' and an expired deadline, then calls restockOrder(..., { reason: 'stale' }). Paid intl orders stay in quote_status='accepted' after webhook success, so they are included here; restockOrder rejects reason: 'stale' for paid orders, which will throw and abort the janitor run instead of processing remaining records. This makes the timeout sweep fragile in production once accepted quotes are paid.
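The fix the review implies is an extra guard in the sweep's selection predicate so paid orders never reach `restockOrder(..., { reason: 'stale' })`. A minimal sketch of that predicate, with assumed field names (`paymentStatus` in particular is hypothetical, not taken from the PR):

```typescript
// Sketch of the sweep candidate check with the missing guard added.
// Field names are assumptions for illustration only.
interface IntlOrder {
  fulfillmentMode: string;
  quoteStatus: string;
  paymentStatus: string;
  quotePaymentDeadlineAt: Date | null;
}

function isSweepCandidate(order: IntlOrder, now: Date): boolean {
  return (
    order.fulfillmentMode === "intl" &&
    order.quoteStatus === "accepted" &&
    order.paymentStatus !== "paid" && // the missing guard: skip paid orders
    order.quotePaymentDeadlineAt !== null &&
    order.quotePaymentDeadlineAt.getTime() <= now.getTime()
  );
}
```

In the actual query this would be one additional `ne(...)`/`not(...)` condition in the `and(...)` shown above, so a paid order with a stale deadline is simply never selected and the janitor run cannot abort on it.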
```ts
if (
  !applyResult.shipmentQueued &&
  order.shippingRequired === true &&
  order.shippingProvider === 'nova_poshta' &&
  Boolean(order.shippingMethodCode) &&
```
Remove duplicated shipment enqueue fallback block
This if (!applyResult.shipmentQueued && ...) block is duplicated immediately after an identical block above it, so the same expensive db.execute CTE runs twice on the same webhook request whenever the fallback path is hit. Even though the SQL is mostly idempotent, it doubles locking/work on hot payment webhook traffic and increases latency under load.
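Beyond deleting the duplicated block, the at-most-once behavior can be made structural: run the enqueue through one helper that checks and sets the `shipmentQueued` flag, so a second fallback site naturally short-circuits. This is a generic sketch, not the PR's code — `enqueueShipment` stands in for the real `db.execute` CTE:

```typescript
// Run the shipment enqueue at most once per webhook request.
// Returns true if the enqueue actually ran, false if it was skipped.
async function ensureShipmentQueued(
  state: { shipmentQueued: boolean },
  enqueueShipment: () => Promise<void>
): Promise<boolean> {
  if (state.shipmentQueued) return false; // already queued — skip the expensive CTE
  await enqueueShipment();
  state.shipmentQueued = true; // later fallback checks now short-circuit
  return true;
}
```

With this shape, even if two call sites remain on the hot path, only the first executes the SQL; the second is a cheap flag check.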
Actionable comments posted: 17
Note
Due to the large number of review comments, only Critical and Major severity comments were posted inline.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
frontend/lib/services/shop/shipping/shipments-worker.ts (2)
528-536: ⚠️ Potential issue | 🟠 Major — Do not treat shipment-only update as full success when order transition is guarded.

On Line 535 and Line 583, success is derived from `updated_shipment` only. If `updated_order` is blocked by `shippingStatusTransitionWhereSql`, the caller still proceeds as if order state changed and emits transition events accordingly.

Proposed fix:

```diff
-select order_id from updated_shipment
+select us.order_id
+from updated_shipment us
+join updated_order uo on uo.id = us.order_id

-select 1 as touched from updated_shipment
+select 1 as touched
+from updated_shipment us
+join updated_order uo on uo.id = us.order_id
```

Also applies to: 575-587
667-728: ⚠️ Potential issue | 🟠 Major — Separate post-success event-write failures from shipment processing failures.

`markSucceeded` has already committed state before Line 667. If `emitWorkerShippingEvent` throws, the catch block (starting Line 693) runs failure/lease-lost handling and returns `retried`, misclassifying an already-succeeded shipment.

Proposed fix:

```diff
-await emitWorkerShippingEvent({
-  orderId: args.claim.order_id,
-  shipmentId: args.claim.id,
-  provider: args.claim.provider,
-  eventName: 'label_created',
-  statusFrom: 'creating_label',
-  statusTo: 'label_created',
-  attemptNumber: nextAttemptNumber(args.claim.attempt_count),
-  runId: args.runId,
-  eventRef: created.providerRef,
-  trackingNumber: created.trackingNumber,
-  payload: {
-    providerRef: created.providerRef,
-    shipmentStatusTo: 'succeeded',
-  },
-});
+try {
+  await emitWorkerShippingEvent({
+    orderId: args.claim.order_id,
+    shipmentId: args.claim.id,
+    provider: args.claim.provider,
+    eventName: 'label_created',
+    statusFrom: 'creating_label',
+    statusTo: 'label_created',
+    attemptNumber: nextAttemptNumber(args.claim.attempt_count),
+    runId: args.runId,
+    eventRef: created.providerRef,
+    trackingNumber: created.trackingNumber,
+    payload: {
+      providerRef: created.providerRef,
+      shipmentStatusTo: 'succeeded',
+    },
+  });
+} catch (eventError) {
+  logWarn('shipping_shipments_worker_event_emit_failed', {
+    runId: args.runId,
+    shipmentId: args.claim.id,
+    orderId: args.claim.order_id,
+    code: 'SHIPPING_EVENT_WRITE_FAILED',
+    message:
+      eventError instanceof Error ? eventError.message : 'Unknown error',
+  });
+}
```
frontend/app/api/shop/admin/products/route.ts (1)
283-312: ⚠️ Potential issue | 🟠 Major — Make product creation and admin-audit write atomic.

Line 284 commits product creation before Line 289 writes the audit record. If audit insertion fails, this endpoint can return 500 even though the product already exists (partial success + misleading retry behavior). This should be a single atomic unit of work (or compensated rollback) to preserve determinism.
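The compensation variant of this finding can be sketched as a small generic helper: create, then audit, and undo the create if the audit write fails. All function parameters here are hypothetical stand-ins for the route's real `createProduct`/`writeAdminAudit` calls, not the PR's code (a DB transaction, where available, is the simpler option):

```typescript
// Create a record and write its audit entry as one logical unit.
// If the audit write fails, compensate by deleting the created record
// so no half-committed state survives, then rethrow the original error.
async function createProductWithAudit<T extends { id: string }>(
  createProduct: () => Promise<T>,
  writeAdminAudit: (productId: string) => Promise<void>,
  deleteProduct: (id: string) => Promise<void>
): Promise<T> {
  const created = await createProduct();
  try {
    await writeAdminAudit(created.id);
    return created;
  } catch (err) {
    await deleteProduct(created.id); // compensating rollback
    throw err;
  }
}
```

Either way the client observes a consistent outcome: success means both the product and its audit row exist; failure means neither does, so a retry is safe.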
🟡 Minor comments (11)
frontend/tests/e2e/shop-minimal-phase9.spec.ts (1)

6-11: ⚠️ Potential issue | 🟡 Minor — Hardcoded database credentials in test file.

Even for local development, hardcoding credentials in source code is a security concern and complicates environment-specific configurations. Consider requiring these environment variables without fallbacks, or using a dedicated test configuration file that's gitignored.

🛡️ Suggested approach:

```diff
 const LOCAL_DB_URL =
-  process.env.DATABASE_URL_LOCAL ??
-  'postgresql://devlovers_local:Gfdtkk43@localhost:5432/devlovers_shop_local_clean?sslmode=disable';
+  process.env.DATABASE_URL_LOCAL;
 const STATUS_TOKEN_SECRET =
-  process.env.SHOP_STATUS_TOKEN_SECRET ??
-  'test_status_token_secret_test_status_token_secret';
+  process.env.SHOP_STATUS_TOKEN_SECRET;
+
+if (!LOCAL_DB_URL || !STATUS_TOKEN_SECRET) {
+  throw new Error(
+    'E2E tests require DATABASE_URL_LOCAL and SHOP_STATUS_TOKEN_SECRET environment variables'
+  );
+}
```
frontend/app/api/shop/orders/[id]/status/route.ts (1)
162-183: ⚠️ Potential issue | 🟡 Minor — Audit write failure will block the response; consider making it non-blocking.

If `writeAdminAudit` throws (e.g., DB connection issue, constraint violation), the entire request fails with a 500 error even though the authorization already succeeded. For a side effect like auditing, consider:

- Wrapping in try-catch and logging failures without blocking the response, or
- Using the `unstable_after` API (mentioned in PR objectives) to defer auditing

🛡️ Proposed non-blocking approach:

```diff
 if (accessByStatusToken && tokenAuditSeed) {
-  await writeAdminAudit({
-    orderId,
-    actorUserId: null,
-    action: 'guest_status_token.used',
-    targetType: 'order_status',
-    targetId: orderId,
-    requestId,
-    payload: {
-      scope: 'status_lite',
-      tokenNonce: tokenAuditSeed.nonce,
-      tokenIat: tokenAuditSeed.iat,
-      tokenExp: tokenAuditSeed.exp,
-    },
-    dedupeSeed: {
-      domain: 'guest_status_token_use',
-      orderId,
-      tokenNonce: tokenAuditSeed.nonce,
-      scope: 'status_lite',
-    },
-  });
+  try {
+    await writeAdminAudit({
+      orderId,
+      actorUserId: null,
+      action: 'guest_status_token.used',
+      targetType: 'order_status',
+      targetId: orderId,
+      requestId,
+      payload: {
+        scope: 'status_lite',
+        tokenNonce: tokenAuditSeed.nonce,
+        tokenIat: tokenAuditSeed.iat,
+        tokenExp: tokenAuditSeed.exp,
+      },
+      dedupeSeed: {
+        domain: 'guest_status_token_use',
+        orderId,
+        tokenNonce: tokenAuditSeed.nonce,
+        scope: 'status_lite',
+      },
+    });
+  } catch (auditErr) {
+    logWarn('order_status_audit_failed', {
+      requestId,
+      orderId,
+      error: auditErr instanceof Error ? auditErr.message : String(auditErr),
+    });
+  }
 }
```
frontend/.env.example (1)
130-131: ⚠️ Potential issue | 🟡 Minor — Resolve dotenv key-order lint warning.

Lines 130-131 currently trigger the `dotenv-linter` `UnorderedKey` warning. Swap these two keys to avoid persistent lint noise in this file.

Suggested change:

```diff
-SHOP_TERMS_VERSION=terms-v1
-SHOP_PRIVACY_VERSION=privacy-v1
+SHOP_PRIVACY_VERSION=privacy-v1
+SHOP_TERMS_VERSION=terms-v1
```
frontend/lib/tests/shop/monobank-webhook-apply.test.ts (1)
94-98: ⚠️ Potential issue | 🟡 Minor — Don't swallow all cleanup failures here.

Lines 94-98 catch every error, which can mask real test/DB problems and make failures harder to diagnose. Ignore only "table does not exist" and rethrow everything else.

Suggested change:

```diff
-} catch {
-  // migration not applied in the local DB yet
-}
+} catch (error) {
+  const code = (error as { code?: string } | null)?.code;
+  if (code !== '42P01') throw error;
+  // payment_events table may be absent when migration is not applied locally
+}
```
frontend/app/api/shop/admin/products/[id]/route.ts (1)
463-489: ⚠️ Potential issue | 🟡 Minor — Same audit failure concern as the status route.

If `writeAdminAudit` throws after a successful `updateProduct`, the client receives a 500 error despite the product being updated. Consider the same defensive pattern to prevent audit failures from masking successful mutations.
frontend/app/api/shop/admin/products/[id]/route.ts (1)
740-755: ⚠️ Potential issue | 🟡 Minor — Audit failure after successful delete could cause confusion.

Same pattern issue: if the audit fails after `deleteProduct`, the product is gone but the client sees an error. This is arguably more problematic, since retrying the delete will result in a 404.
frontend/app/api/shop/admin/products/[id]/status/route.ts (1)
98-116: ⚠️ Potential issue | 🟡 Minor — Audit failure will cause a 500 response despite a successful status toggle.

If `writeAdminAudit` throws, the product status has already been toggled but the client receives a 500 error. This could lead to confusion and potential retry issues. Consider wrapping the audit call in a try-catch to log failures without affecting the success response, or using a fire-and-forget pattern if the audit is non-critical.

🛡️ Suggested defensive pattern:

```diff
+try {
   await writeAdminAudit({
     actorUserId,
     action: 'product_admin_action.toggle_status',
     targetType: 'product',
     targetId: updated.id,
     requestId,
     payload: {
       productId: updated.id,
       slug: updated.slug,
       isActive: updated.isActive,
     },
     dedupeSeed: {
       domain: 'product_admin_action',
       action: 'toggle_status',
       requestId,
       productId: updated.id,
       toIsActive: updated.isActive,
     },
   });
+} catch (auditError) {
+  logError('admin_product_status_audit_failed', auditError, {
+    ...baseMeta,
+    productId: updated.id,
+  });
+  // Continue - primary operation succeeded
+}

 return noStoreJson({ success: true, product: updated }, { status: 200 });
```
frontend/lib/tests/shop/returns-route-phase4.test.ts (1)
101-102: ⚠️ Potential issue | 🟡 Minor — Use `vi.stubEnv` with cleanup to avoid cross-test env leakage.

Direct `process.env` mutation here can bleed into neighboring tests and make runs order-dependent.

💡 Suggested fix:

```diff
-import { describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
@@
 describe.sequential('returns customer route phase 4', () => {
+  beforeEach(() => {
+    vi.stubEnv('APP_ORIGIN', 'http://localhost:3000');
+  });
+
+  afterEach(() => {
+    vi.unstubAllEnvs();
+  });
+
   it('create return contract: owner can create and refund amount is server-derived', async () => {
-    process.env.APP_ORIGIN = 'http://localhost:3000';
@@
   it('rejects exchange intent with stable contract code', async () => {
-    process.env.APP_ORIGIN = 'http://localhost:3000';
```

Also applies to: 147-148
frontend/lib/tests/shop/order-payment-init-intl-gate-phase2.test.ts (1)
19-22: ⚠️ Potential issue | 🟡 Minor — Add shippingQuotes cleanup to prevent orphaned rows.

The `cleanupOrder` function deletes `paymentAttempts` and `orders`, but `insertIntlOrderForInit` can also insert into `shippingQuotes` (lines 64-79). Without cleanup, test reruns or failures could leave orphaned quote rows.

🔧 Proposed fix:

```diff
+import { orders, paymentAttempts, shippingQuotes } from '@/db/schema';

 async function cleanupOrder(orderId: string) {
+  await db.delete(shippingQuotes).where(eq(shippingQuotes.orderId, orderId));
   await db.delete(paymentAttempts).where(eq(paymentAttempts.orderId, orderId));
   await db.delete(orders).where(eq(orders.id, orderId));
 }
```
frontend/app/api/shop/orders/[id]/quote/decline/route.ts (1)
49-51: ⚠️ Potential issue | 🟡 Minor — Add `Cache-Control: no-store` to the blocked response for consistency.

The admin routes set `Cache-Control: no-store` on the blocked response from `guardBrowserSameOrigin`, but this route returns it directly without the header. This inconsistency could allow caching of error responses.

🔧 Proposed fix:

```diff
 const blocked = guardBrowserSameOrigin(request);
-if (blocked) return blocked;
+if (blocked) {
+  blocked.headers.set('Cache-Control', 'no-store');
+  return blocked;
+}
```
frontend/app/api/shop/internal/notifications/run/route.ts (1)
45-50: ⚠️ Potential issue | 🟡 Minor — Set `Cache-Control: no-store` on blocked non-browser responses.

On Line 49 you add `X-Request-Id`, but this branch does not set `Cache-Control: no-store` (unlike other early exits), so intermediary caching behavior becomes inconsistent.

Suggested patch:

```diff
 if (blocked) {
+  blocked.headers.set('Cache-Control', 'no-store');
   blocked.headers.set('X-Request-Id', requestId);
   return blocked;
 }
```
🧹 Nitpick comments (25)
frontend/lib/env/shop-intl.ts (1)
3-7: Consider logging when falling back due to invalid configuration.

Silent fallbacks can make misconfigurations hard to detect in production. A debug/warn log when an env var is present but invalid could aid troubleshooting without changing behavior.

🔧 Optional: add observability for invalid values:

```diff
+const LOG_PREFIX = '[shop-intl]';
+
 function parsePositiveInt(raw: string | undefined, fallback: number): number {
+  if (raw !== undefined) {
+    const parsed = Number.parseInt(raw, 10);
+    if (Number.isFinite(parsed) && parsed > 0) return parsed;
+    console.warn(`${LOG_PREFIX} Invalid value "${raw}", using fallback ${fallback}`);
+    return fallback;
+  }
-  const parsed = raw ? Number.parseInt(raw, 10) : Number.NaN;
-  if (!Number.isFinite(parsed) || parsed <= 0) return fallback;
-  return parsed;
+  return fallback;
 }
```
frontend/lib/services/shop/shipping/shipments-worker.ts (1)
347-351: Prevent caller payload from overriding canonical fields.

On Line 350, spreading `args.payload` last allows overriding `runId` and `attemptNumber`. Make the canonical fields final.

Proposed fix:

```diff
 payload: {
-  runId: args.runId,
-  attemptNumber: args.attemptNumber,
   ...(args.payload ?? {}),
+  runId: args.runId,
+  attemptNumber: args.attemptNumber,
 },
```
frontend/lib/tests/shop/setup.ts (1)
5-6: Consider externalizing the local DSN to an environment variable.

Hardcoding the full DSN (including password) in source code works for local-only use but creates a maintenance burden if credentials change. Consider loading from an environment variable with this value as the default fallback.

🔧 Optional: load from env with fallback:

```diff
-const REQUIRED_LOCAL_DB_URL =
-  'postgresql://devlovers_local:Gfdtkk43@localhost:5432/devlovers_shop_local_clean?sslmode=disable';
+const REQUIRED_LOCAL_DB_URL =
+  process.env.SHOP_TEST_REQUIRED_DB_URL ??
+  'postgresql://devlovers_local:Gfdtkk43@localhost:5432/devlovers_shop_local_clean?sslmode=disable';
```
frontend/lib/services/shop/transitions/return-state.ts (1)
11-17: Freeze the transition matrix before exporting it.

`__returnTransitionMatrix` currently exposes a mutable object by reference. Freezing the matrix (and its arrays) prevents accidental runtime policy mutation.

Also applies to: 38
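The freeze suggested above (and for the shipping matrix below) needs to cover both levels — the map and each transition array — since `Object.freeze` is shallow. A minimal sketch; the example matrix contents are illustrative, not the PR's actual return transitions:

```typescript
// Deep-freeze a transition matrix: freeze each array, then the map itself,
// so accidental writes throw in strict mode instead of silently mutating policy.
type Matrix = Record<string, readonly string[]>;

function freezeMatrix(matrix: Record<string, string[]>): Matrix {
  for (const key of Object.keys(matrix)) Object.freeze(matrix[key]);
  return Object.freeze(matrix);
}

// Hypothetical example matrix (not the PR's actual states):
const RETURN_ALLOWED_FROM = freezeMatrix({
  requested: ["approved", "rejected"],
  approved: ["received"],
});
```

After freezing, both `RETURN_ALLOWED_FROM.requested = []` and `RETURN_ALLOWED_FROM.requested.push(...)` fail rather than rewriting the policy at runtime.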
frontend/lib/services/shop/transitions/shipping-state.ts (1)
16-25: Freeze the exported shipping transition matrix.

The exported matrix is mutable by reference; freezing it avoids accidental runtime changes to transition rules.

Also applies to: 66
frontend/lib/services/shop/order-access.ts (1)
34-72: Consider fetching `userId` in the initial query to avoid a second DB roundtrip.
The order existence check (lines 34-38) and ownership check (lines 60-64) perform two separate database queries for the same order. You could optimize by selecting `userId` in the first query and comparing locally when checking ownership.

♻️ Proposed optimization
```diff
-  const [exists] = await db
-    .select({ id: orders.id })
+  const [orderRow] = await db
+    .select({ id: orders.id, userId: orders.userId })
     .from(orders)
     .where(eq(orders.id, args.orderId))
     .limit(1);
-  if (!exists) {
+  if (!orderRow) {
     return {
       authorized: false,
       actorUserId: null,
       code: 'ORDER_NOT_FOUND',
       status: 404,
     };
   }
   const user = await getCurrentUser();
   if (user) {
     if (user.role === 'admin') {
       return {
         authorized: true,
         actorUserId: user.id,
         code: 'OK',
         status: 200,
       };
     }
-    const [owned] = await db
-      .select({ id: orders.id })
-      .from(orders)
-      .where(and(eq(orders.id, args.orderId), eq(orders.userId, user.id)))
-      .limit(1);
-    if (owned) {
+    if (orderRow.userId === user.id) {
       return {
         authorized: true,
         actorUserId: user.id,
         code: 'OK',
         status: 200,
       };
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/order-access.ts` around lines 34 - 72, The initial existence check does a second DB lookup for ownership; update the first query (select from orders) to also select userId (e.g., select { id: orders.id, userId: orders.userId }) using args.orderId, then after getCurrentUser() compare the fetched order.userId to user.id locally instead of issuing the second query; keep the existing admin short-circuit (user.role === 'admin') and return the same response shapes when authorized or not.

frontend/lib/services/shop/transitions/order-state.ts (1)
87-97: Consider adding a SQL helper for non-payment status transitions.
`orderQuoteTransitionWhereSql` exists for quote statuses, but there's no equivalent `orderNonPaymentTransitionWhereSql`. If non-payment transitions need SQL guards similar to the shipping and quote flows, this would be useful for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/transitions/order-state.ts` around lines 87 - 97, Add a new SQL helper function orderNonPaymentTransitionWhereSql that mirrors orderQuoteTransitionWhereSql: accept args { column: SQL; to: OrderNonPaymentStatus; includeSame?: boolean }, call allowedFromOrderNonPaymentStatus(args.to, { includeSame: args.includeSame }) to compute permitted FROM values, return sql`false` if the list is empty, otherwise return sql`${args.column} in (${sql.join(from.map(v => sql`${v}`), sql`, `)})`; place it alongside orderQuoteTransitionWhereSql so non-payment flows can be guarded the same way.

frontend/lib/services/orders/restock.ts (1)
127-152: Duplicated transition validation logic.
The target status calculation (lines 127-131 and 321-325) and validation check (lines 133-152 and 327-344) are duplicated. Consider extracting a helper function to reduce duplication and ensure consistency.
♻️ Proposed helper extraction
```typescript
function validateRestockTransition(
  orderId: string,
  currentStatus: string,
  reason: RestockReason | undefined
): void {
  const shouldCancel = reason === 'canceled';
  const shouldFail = reason === 'failed' || reason === 'stale';
  const targetOrderStatus = shouldFail
    ? 'INVENTORY_FAILED'
    : shouldCancel
      ? 'CANCELED'
      : null;
  if (
    targetOrderStatus &&
    !isOrderNonPaymentStatusTransitionAllowed(currentStatus, targetOrderStatus, {
      includeSame: true,
    })
  ) {
    throw new OrderStateInvalidError(
      `Invalid order status transition: ${currentStatus} -> ${targetOrderStatus}`,
      {
        orderId,
        details: {
          reason,
          fromStatus: currentStatus,
          toStatus: targetOrderStatus,
        },
      }
    );
  }
}
```

Also applies to: 321-344
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/restock.ts` around lines 127 - 152, Extract the duplicated target status calculation and transition validation into a helper (e.g., validateRestockTransition) that accepts orderId, currentStatus (order.status) and reason (RestockReason | undefined) and performs the shouldCancel/shouldFail -> targetOrderStatus logic plus the isOrderNonPaymentStatusTransitionAllowed check and throws OrderStateInvalidError with the same details when invalid; replace both duplicated blocks that compute targetOrderStatus and call isOrderNonPaymentStatusTransitionAllowed with a single call to this helper to ensure consistency.

frontend/lib/services/shop/shipping/admin-actions.ts (2)
210-210: Redundant SQL condition `where ${args.canonical.enabled} = true`.
Since the code already branches on `args.canonical.enabled` at the function level (e.g., lines 156, 224, 339), this WHERE clause always evaluates as `where true = true` in the canonical branch. It adds no protection and could be confusing.

♻️ Simplify by removing redundant condition

```diff
   from updated_order uo
-  where ${args.canonical.enabled} = true
   on conflict (dedupe_key) do nothing
```

Also applies to: 321-321, 406-406
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/admin-actions.ts` at line 210, Remove the redundant SQL condition that uses args.canonical.enabled in the WHERE clause (e.g., the literal fragment "where ${args.canonical.enabled} = true") because the surrounding function already branches on args.canonical.enabled; locate occurrences of the string/interpolation in frontend/lib/services/shop/shipping/admin-actions.ts (instances around the blocks that check args.canonical.enabled) and delete that conditional fragment so the WHERE clause only contains meaningful predicates; ensure you remove all similar occurrences (the ones flagged around lines 210, 321, 406) and run tests/lint to verify no leftover template interpolation breaks the query string.
150-329: Significant code duplication between canonical and non-canonical paths.
`appendAuditEntry`, `requeueShipment`, and `updateOrderShippingStatus` each contain nearly duplicated SQL blocks for canonical vs non-canonical paths. Consider extracting the common CTE structure and conditionally adding the `inserted_admin_audit` CTE only when canonical is enabled; this would reduce maintenance burden and the risk of the two paths drifting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/admin-actions.ts` around lines 150 - 329, The SQL for the canonical vs non-canonical branches is duplicated in appendAuditEntry and requeueShipment (and updateOrderShippingStatus); refactor by extracting the shared CTEs (the updated_order CTE that updates orders and appends shippingAdminAudit using entryArrayJson/entryObjectJson) and then conditionally appending the inserted_admin_audit CTE (the insert into admin_audit_log using args.canonical.dedupeKey, args.canonical.occurredAt and args.entry/args.auditEntry fields) only when args.canonical.enabled is true; implement this by composing the common SQL once (reusing entryArrayJson/entryObjectJson and the updated_shipment CTE where present) and adding the inserted_admin_audit CTE dynamically for the canonical path so both branches run the same update logic and only differ by the optional admin_audit_log insert.

frontend/tests/e2e/shop-minimal-phase9.spec.ts (2)
145-162: New database connection per cleanup call may be inefficient.
Each `cleanupOrder` call creates and closes a new `pg.Client` connection. For test suites with many tests, consider reusing a connection pool or a single connection across the test file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/tests/e2e/shop-minimal-phase9.spec.ts` around lines 145 - 162, cleanupOrder currently creates and tears down a new pg.Client on each call (Client, LOCAL_DB_URL), which is inefficient for large test suites; change to reuse a single connection by creating a module-level pg.Pool (or a shared pg.Client) once for the test file, update cleanupOrder to accept or use that shared pool/client and call pool.query(...) instead of new Client, and ensure you properly call pool.end() (or client.end()) once in an afterAll/teardown hook to close the shared connection.
21-41: Duplicated `createStatusToken` implementation.
This function duplicates logic from `frontend/lib/shop/status-token.ts`, which is already exported. If the signature algorithm, token structure, or scope validation changes in production, this test implementation will not reflect those changes, risking tests passing while production behavior diverges.

Import `createStatusToken` from the production module instead of duplicating it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/tests/e2e/shop-minimal-phase9.spec.ts` around lines 21 - 41, Replace the duplicated createStatusToken implementation in the test with the production-exported createStatusToken: remove the local function and import the createStatusToken used by the app (and its dependent secret constant, e.g., STATUS_TOKEN_SECRET if required) so the test reuses the canonical signing algorithm, payload structure, scopes and TTL logic; ensure the test calls the imported createStatusToken where the local function was previously used.

frontend/lib/tests/shop/checkout-legal-consent-phase4.test.ts (2)
26-39: Consider typing seed data instead of `as any`.
Similar to the shipping audit test, the `as any` casts on lines 39 and 64 bypass type checking. A typed partial or factory would catch schema drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/checkout-legal-consent-phase4.test.ts` around lines 26 - 39, The seed insert currently uses an untyped cast ("as any") for the product object in the call to db.insert(products).values; replace this with a properly typed object (e.g., use Partial<Product> or a ProductFactory/test helper) so TypeScript can validate fields before insertion: locate the db.insert(products).values call and change the literal to a typed partial/factory return value (typed against your Product model/interface used by the products table) and remove the "as any" casts used around the product object and any other seed inserts in this test.
79-85: Unused `vi.unstubAllEnvs()` calls.
The `beforeEach` and `afterEach` hooks call `vi.unstubAllEnvs()` but no environment variables are stubbed in these tests. This appears to be vestigial code that can be removed for clarity.

♻️ Proposed cleanup

```diff
-  beforeEach(() => {
-    vi.unstubAllEnvs();
-  });
-
-  afterEach(() => {
-    vi.unstubAllEnvs();
-  });
-
   it('persists legal consent artifact for new order', async () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/checkout-legal-consent-phase4.test.ts` around lines 79 - 85, Remove the unnecessary calls to vi.unstubAllEnvs() in the test setup/teardown: delete the beforeEach and afterEach blocks that only call vi.unstubAllEnvs() (they are vestigial since no envs are stubbed), leaving the rest of the test file intact; specifically remove the beforeEach(...) and afterEach(...) functions that call vi.unstubAllEnvs() in the checkout-legal-consent-phase4.test.ts tests.

frontend/lib/env/monobank.ts (1)
89-105: Minor inconsistency: `getRuntimeEnv()` vs direct `process.env` access.
Line 90 uses `getRuntimeEnv()` for `NODE_ENV` while all other env access in this file now uses direct `process.env`. Consider using `process.env.NODE_ENV` directly for consistency:

♻️ Suggested change

```diff
 export function getMonobankEnv(): MonobankEnv {
-  const runtimeEnv = getRuntimeEnv();
-
   const token = resolveMonobankToken();
   const publicKey = nonEmpty(process.env.MONO_PUBLIC_KEY);
   const apiBaseUrl = nonEmpty(process.env.MONO_API_BASE) ?? 'https://api.monobank.ua';
   const paymentsFlag = process.env.PAYMENTS_ENABLED ?? 'false';
   const configured = !!token;
   const paymentsEnabled = String(paymentsFlag).trim() === 'true' && configured;
   const invoiceTimeoutMs = parseTimeoutMs(
     process.env.MONO_INVOICE_TIMEOUT_MS,
-    runtimeEnv.NODE_ENV === 'production' ? 8000 : 12000
+    process.env.NODE_ENV === 'production' ? 8000 : 12000
   );
```

This would also allow removing the `getRuntimeEnv` import.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/env/monobank.ts` around lines 89 - 105, getMonobankEnv mixes runtime env access patterns by calling getRuntimeEnv() to check NODE_ENV while the rest of the function reads directly from process.env; change the NODE_ENV read in getMonobankEnv to use process.env.NODE_ENV (consistent with other env reads), update the conditional that passes NODE_ENV to parseTimeoutMs accordingly, and remove the now-unused getRuntimeEnv import if it becomes unused elsewhere.

frontend/lib/tests/shop/admin-shipping-canonical-audit.test.ts (1)
21-37: Consider typing the test seed data instead of using `as any`.
The `as any` cast bypasses type checking. While acceptable for tests, consider creating a typed factory function or partial type to maintain type safety and catch schema drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/admin-shipping-canonical-audit.test.ts` around lines 21 - 37, The test seed is cast with "as any" which hides type errors; replace it with a properly typed seed by creating a factory/typed helper (e.g., buildOrderSeed or createOrderSeed) that returns Partial<OrderRow> or the concrete Orders table row type, populate required fields (id, totalAmountMinor, totalAmount via toDbMoney, currency, paymentProvider, etc.), and pass that object to db.insert(orders).values(...) without "as any" (references: the orders table used in db.insert, the toDbMoney call, and orderId variable).

frontend/app/api/shop/orders/[id]/quote/accept/route.ts (1)
1-17: Missing `runtime` export.
Other similar routes (e.g., `frontend/app/api/shop/admin/returns/[id]/reject/route.ts`) export `runtime = 'nodejs'`. Consider adding it for consistency and to explicitly declare the runtime requirement.

💡 Suggested addition after imports

```diff
 import {
   intlQuoteAcceptPayloadSchema,
   orderIdParamSchema,
 } from '@/lib/validation/shop';
+
+export const runtime = 'nodejs';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/orders/[id]/quote/accept/route.ts` around lines 1 - 17, Add an explicit runtime export by defining export const runtime = 'nodejs' near the top of this route file (after the imports) so the route declares it requires the Node.js runtime; this mirrors other handlers (e.g., admin returns reject route) and ensures functions like acceptIntlQuote, authorizeOrderMutationAccess and guardBrowserSameOrigin run under the expected runtime.

frontend/lib/tests/shop/order-payment-init-intl-gate-phase2.test.ts (1)
178-195: Strengthen the database constraint assertion.
Using `.rejects.toBeTruthy()` is a weak assertion that would pass for any thrown error, not just the constraint violation. Consider verifying the error message or code to confirm it's the expected constraint.

💡 Suggested improvement

```diff
   it('database constraint rejects monobank provider on intl orders', async () => {
     await expect(
       db.insert(orders).values({
         id: crypto.randomUUID(),
         totalAmountMinor: 1000,
         totalAmount: toDbMoney(1000),
         currency: 'USD',
         paymentProvider: 'monobank',
         paymentStatus: 'pending',
         status: 'CREATED',
         inventoryStatus: 'none',
         idempotencyKey: `idem_${crypto.randomUUID()}`,
         fulfillmentMode: 'intl',
         quoteStatus: 'none',
         itemsSubtotalMinor: 1000,
       } as any)
-    ).rejects.toBeTruthy();
+    ).rejects.toThrow(/constraint|intl.*monobank|monobank.*intl/i);
   });
```

🤖 Prompt for AI Agents
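As a framework-agnostic illustration of the same idea, a helper (the name is made up for this sketch) can require both that a promise rejects and that the rejection matches an expected pattern, so an unrelated setup error cannot satisfy the test:

```typescript
// Hypothetical helper illustrating the review suggestion: require both a
// rejection AND a message matching the expected constraint pattern.
async function expectRejectionMatching(
  p: Promise<unknown>,
  pattern: RegExp
): Promise<void> {
  try {
    await p;
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    if (!pattern.test(message)) {
      throw new Error(`rejected, but "${message}" did not match ${pattern}`);
    }
    return; // rejected with a matching message: assertion passes
  }
  throw new Error('expected promise to reject, but it resolved');
}
```

With vitest, the same intent is expressed more concisely via `.rejects.toThrow(/constraint/i)`.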
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/order-payment-init-intl-gate-phase2.test.ts` around lines 178 - 195, The test currently uses a weak assertion .rejects.toBeTruthy(); update the assertion to verify the specific DB constraint failure when inserting an order with paymentProvider 'monobank' and fulfillmentMode 'intl': call the same db.insert(orders).values(...) and assert the rejection either with .rejects.toThrow(...) matching the database constraint message (e.g., contains 'constraint' or the constraint name that rejects monobank on intl orders) or with .rejects.toMatchObject(...) checking the error.code/property that your DB client surfaces for constraint violations; reference the db.insert(orders).values invocation, paymentProvider 'monobank' and fulfillmentMode 'intl' to locate the test.

frontend/app/api/shop/internal/orders/restock-stale/route.ts (1)
480-496: Consider passing `timeBudgetMs` to the new sweep functions for consistency.
The existing restock functions receive `timeBudgetMs` to respect the deadline internally, while the new sweep functions only receive `batchSize` and `now`. The `remaining > 0` guard prevents calling them with zero budget, but they could still overrun the deadline if the batch takes longer than expected.
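A minimal sketch of the deadline-aware pattern the finding describes; the function and parameter names are illustrative, not the route's actual API:

```typescript
// Illustrative sketch: every step recomputes what is left of the overall
// deadline and skips the sweep entirely once the budget is exhausted.
// Each sweep receives the remaining budget so it can also stop early.
type Sweep = (budgetMs: number) => Promise<void>;

async function runSweepsWithDeadline(
  deadlineMs: number,
  sweeps: Sweep[]
): Promise<number> {
  let executed = 0;
  for (const sweep of sweeps) {
    const remaining = deadlineMs - Date.now();
    if (remaining <= 0) break; // deadline reached: stop scheduling sweeps
    await sweep(remaining);
    executed += 1;
  }
  return executed;
}
```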
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/internal/orders/restock-stale/route.ts` around lines 480 - 496, The new sweep calls (sweepExpiredOfferedIntlQuotes and sweepAcceptedIntlQuotePaymentTimeouts) are not given the remaining time budget and therefore could overrun the overall deadline; change their invocations to pass a timeBudgetMs parameter computed from the current remaining time (e.g., use the existing remaining3 and remaining4 values derived from Math.max(0, deadlineMs - Date.now())) alongside batchSize and now so the swept functions can respect the deadline internally and stop early if needed.

frontend/lib/tests/shop/returns-phase4.test.ts (1)
393-399: Incomplete user cleanup in final test.
The fourth test creates multiple admin users (`admin_1` through `admin_4`) and test users via `seedPaidReservedOrder`. The cleanup deletes `seed.userId` and the admin users, but earlier tests (1-3) also create admin users (`admin_1`, `admin_2`, `admin_3`) and test users that aren't cleaned up. Consider using `afterAll` or `afterEach` hooks for consistent cleanup across all tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/returns-phase4.test.ts` around lines 393 - 399, The final block only cleans up seed.userId and admins created in this test, leaving earlier test users and admin_1..admin_3 behind; update the test file to perform consistent teardown by adding an afterEach or afterAll hook that calls cleanupSeed (for seeds created by seedPaidReservedOrder) and removes any test/admin users (use cleanupSeed, seedPaidReservedOrder, and the db.delete(users).where(...) pattern shown) so all admin_1..admin_4 and any seed.userId from every test are deleted after tests run.

frontend/lib/services/shop/events/write-payment-event.ts (1)
29-42: Consider including `amountMinor` and `currency` in the dedupe seed.
The default dedupe seed includes identifiers like `orderId`, `eventRef`, and `attemptId`, but excludes `amountMinor` and `currency`. This means two events with identical identifiers but different amounts would be considered duplicates and the second would be silently ignored.

If this is intentional (e.g., amount is mutable metadata), the current implementation is correct. Otherwise, consider including these fields in the seed:

💡 Potential enhancement

```diff
   buildPaymentEventDedupeKey(
     args.dedupeSeed ?? {
       orderId: args.orderId,
       provider: args.provider,
       eventName: args.eventName,
       eventSource: args.eventSource,
       eventRef: args.eventRef ?? null,
       attemptId: args.attemptId ?? null,
       providerPaymentIntentId: args.providerPaymentIntentId ?? null,
       providerChargeId: args.providerChargeId ?? null,
+      amountMinor: args.amountMinor,
+      currency: args.currency,
     }
   );
```

🤖 Prompt for AI Agents
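As a standalone illustration (field names mirror the discussion above, not the real module's types), a dedupe-key builder that folds amount and currency into the hashed seed would make two events that share identifiers but disagree on amount produce distinct keys:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical dedupe-key builder: a stable, explicit field order keeps the
// key deterministic, and including amountMinor/currency prevents events with
// different amounts from collapsing into one outbox row.
interface DedupeSeed {
  orderId: string;
  provider: string;
  eventName: string;
  eventRef: string | null;
  amountMinor: number;
  currency: string;
}

function buildDedupeKey(seed: DedupeSeed): string {
  const canonical = [
    seed.orderId,
    seed.provider,
    seed.eventName,
    seed.eventRef ?? '',
    String(seed.amountMinor),
    seed.currency,
  ].join('|');
  return createHash('sha256').update(canonical).digest('hex');
}
```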
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/events/write-payment-event.ts` around lines 29 - 42, The dedupe seed used to compute dedupeKey (in the dedupeKey assignment that calls buildPaymentEventDedupeKey when args.dedupeSeed is missing) omits payment amount and currency, allowing events with different amounts/currencies to be treated as duplicates; update the fallback dedupe seed object to include args.amountMinor and args.currency (or args.amountMinor ?? null, args.currency ?? null) so buildPaymentEventDedupeKey receives these values, or explicitly document/guard this behavior if omitting them was intentional.

frontend/lib/services/orders/monobank-webhook.ts (1)
491-741: Consider extracting shared CTE logic to reduce duplication.
The dual-write enabled and disabled branches share ~70% of the CTE structure (updated_order, updated_attempt, eligible_for_enqueue, inserted_shipment, queued_order_ids, shipping_status_update, final select). Only the `inserted_payment_event` CTE differs.

While the current approach works correctly and the feature flag enables safe rollout, consolidating the shared SQL would improve maintainability. One approach:

- Build CTE fragments as template literals
- Conditionally include the `inserted_payment_event` CTE
- Compose the final query from fragments
This is optional given the rollout context—keeping branches separate can aid debugging during the transition.
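The composition pattern can be sketched as follows; the SQL bodies are placeholders and the function name is hypothetical, so only the fragment-assembly shape is meant to carry over:

```typescript
// Sketch of fragment composition: the shared CTEs are declared once and the
// optional inserted_payment_event CTE is appended only when dual-write is on.
function buildWebhookQuery(dualWriteEnabled: boolean): string {
  const sharedCtes = [
    `updated_order as (update orders set ... returning id)`,
    `updated_attempt as (update payment_attempts set ... returning order_id)`,
    `inserted_shipment as (insert into shipping_shipments ... returning order_id)`,
  ];
  const paymentEventCte =
    `inserted_payment_event as (insert into payment_events ... on conflict (dedupe_key) do nothing)`;

  const ctes = dualWriteEnabled ? [...sharedCtes, paymentEventCte] : sharedCtes;

  // Both branches now run identical update logic and differ only by the
  // optional CTE.
  return `with ${ctes.join(',\n     ')}\nselect * from updated_order`;
}
```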
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/monobank-webhook.ts` around lines 491 - 741, The two branches under the canonicalDualWriteEnabled check duplicate most CTEs (updated_order, updated_attempt, eligible_for_enqueue, inserted_shipment, queued_order_ids, shipping_status_update and the final select); extract these shared CTE fragments into reusable template literals/variables (e.g., sharedCTEs and finalSelect) and build the full query by conditionally injecting the differing inserted_payment_event CTE only when needed (keeping the same identifiers: updated_order, updated_attempt, inserted_payment_event, eligible_for_enqueue, inserted_shipment, queued_order_ids, shipping_status_update). Compose the final sql by concatenating the shared fragments with the optional inserted_payment_event fragment and use that single assembled query in the db.execute call instead of duplicating the entire with/.../select blocks.

frontend/lib/services/shop/notifications/projector.ts (2)
84-147: Consider adding a projection watermark to avoid re-scanning processed events.
The projector queries events without filtering out already-projected ones. While `onConflictDoNothing` handles duplicates correctly, this approach will re-scan the same events on every run, becoming inefficient as the event tables grow.

Consider tracking a high-water mark (e.g., the last projected `occurredAt` + `id`) to filter the query: `WHERE (occurred_at, id) > (last_projected_at, last_projected_id)`. Alternatively, add a `projected_at` column to the event tables or maintain a separate cursor table.
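The cursor mechanics can be sketched in memory as follows; the real projector would persist the cursor and express the comparison in SQL as the tuple predicate above, so all names here are illustrative:

```typescript
// In-memory sketch of a (occurredAt, id) high-water mark.
interface Evt { occurredAt: number; id: string; }
type Cursor = { occurredAt: number; id: string };

// Tuple comparison: (e.occurredAt, e.id) > (c.occurredAt, c.id)
function afterCursor(e: Evt, c: Cursor): boolean {
  return (
    e.occurredAt > c.occurredAt ||
    (e.occurredAt === c.occurredAt && e.id > c.id)
  );
}

function nextBatch(
  events: Evt[],
  cursor: Cursor,
  limit: number
): { batch: Evt[]; cursor: Cursor } {
  const batch = events
    .filter((e) => afterCursor(e, cursor))
    .sort(
      (a, b) =>
        a.occurredAt - b.occurredAt || (a.id < b.id ? -1 : a.id > b.id ? 1 : 0)
    )
    .slice(0, limit);
  const last = batch[batch.length - 1];
  // Advance the cursor past everything processed in this batch.
  return {
    batch,
    cursor: last ? { occurredAt: last.occurredAt, id: last.id } : cursor,
  };
}
```

The `id` tiebreaker keeps the ordering total, so events sharing a timestamp are never skipped or re-delivered across batches.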
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/notifications/projector.ts` around lines 84 - 147, projectShippingEvents currently re-scans all shippingEvents each run; add a projection watermark/cursor to avoid reprocessing by (1) persisting a last-projected tuple (e.g., lastProjectedOccurredAt and lastProjectedId) in a cursor table or by adding projected_at on shippingEvents, (2) change the candidates query in projectShippingEvents to include a WHERE (occurredAt, id) > (lastProjectedOccurredAt, lastProjectedId) tuple-comparison filter using the persisted cursor, and (3) after processing the batch update the cursor to the highest (occurredAt, id) processed (or set projected_at on rows). Reference projectShippingEvents, candidates, and the notificationOutbox insert flow when implementing the cursor update and the tuple WHERE clause.
214-229: Limit splitting may under-utilize the budget when one source has few events.
If shipping events return fewer rows than `shippingLimit`, the remaining budget isn't reallocated to payment events. For example, with `limit=100` and only 10 shipping events, 40 slots go unused.

Consider a two-pass approach or a single query with `UNION ALL` to maximize throughput per invocation.
🤖 Prompt for AI Agents
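A two-pass reallocation along these lines can be sketched as follows; the projector functions are injected stubs and the names are hypothetical, not the module's real exports:

```typescript
// Sketch of the two-pass budget split: shipping runs first with half the
// budget, and whatever it does not use rolls over to the payment pass.
async function runProjectorTwoPass(
  limit: number,
  projectShipping: (n: number) => Promise<number>, // returns rows projected
  projectPayments: (n: number) => Promise<number>
): Promise<{ shipping: number; payments: number }> {
  const shippingLimit = Math.ceil(limit / 2);
  const shipping = await projectShipping(shippingLimit);
  const remaining = Math.max(0, limit - shipping); // unused slots roll over
  const payments = remaining > 0 ? await projectPayments(remaining) : 0;
  return { shipping, payments };
}
```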
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/notifications/projector.ts` around lines 214 - 229, runNotificationOutboxProjector currently splits budget up-front between shippingLimit and paymentLimit which can leave unused capacity if shipping returns fewer rows; change to a two-pass allocation: call projectShippingEvents(shippingLimit) first, compute remaining = limit - shipping.inserted (or limit - shipping.scanned if you treat scanned as consumed budget), then call projectPaymentEvents(Math.max(1, remaining)) so any unused shipping slots are reallocated to payment. Update the returned totals to sum the actual shipping and payment results and remove the fixed precomputed paymentLimit.

frontend/app/api/shop/webhooks/stripe/route.ts (1)
1342-1395: Remove the duplicated fallback shipment-queue block.
Lines 1342-1395 duplicate the same SQL fallback already executed in lines 1287-1340. This adds an extra heavy round-trip on a critical webhook path with no behavioral gain.
♻️ Suggested cleanup
```diff
-    if (
-      !applyResult.shipmentQueued &&
-      order.shippingRequired === true &&
-      order.shippingProvider === 'nova_poshta' &&
-      Boolean(order.shippingMethodCode) &&
-      order.inventoryStatus === 'reserved'
-    ) {
-      await db.execute(sql`
-        with ensured_shipment as (
-          insert into shipping_shipments (
-            order_id,
-            provider,
-            status,
-            attempt_count,
-            created_at,
-            updated_at
-          ) values (
-            ${order.id}::uuid,
-            'nova_poshta',
-            'queued',
-            0,
-            ${now},
-            ${now}
-          )
-          on conflict (order_id) do update
-            set status = 'queued',
-                updated_at = ${now}
-          where shipping_shipments.provider = 'nova_poshta'
-            and shipping_shipments.status is distinct from 'queued'
-          returning order_id
-        ),
-        existing_shipment as (
-          select order_id
-          from shipping_shipments
-          where order_id = ${order.id}::uuid
-            and status = 'queued'
-        ),
-        shipment_order_ids as (
-          select order_id from ensured_shipment
-          union
-          select order_id from existing_shipment
-        )
-        update orders
-        set shipping_status = 'queued'::shipping_status,
-            updated_at = ${now}
-        where id in (select order_id from shipment_order_ids)
-          and shipping_status is distinct from 'queued'::shipping_status
-          and ${shippingStatusTransitionWhereSql({
-            column: sql`shipping_status`,
-            to: 'queued',
-            allowNullFrom: true,
-          })}
-      `);
-    }
```
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/webhooks/stripe/route.ts` around lines 1342 - 1395, Duplicate fallback shipment-queue SQL block: remove the entire conditional that checks "!applyResult.shipmentQueued && order.shippingRequired === true && order.shippingProvider === 'nova_poshta' && Boolean(order.shippingMethodCode) && order.inventoryStatus === 'reserved'" which inserts/updates shipping_shipments (ensured_shipment, existing_shipment, shipment_order_ids) and updates orders using shippingStatusTransitionWhereSql; keep the original/first occurrence of this SQL logic (the earlier block that performs the same insert/update) and delete this second redundant block so the webhook does not perform an extra duplicate round-trip.
```typescript
switch (eventName) {
  case 'quote_requested':
    return 'intl_quote_requested';
  case 'quote_offered':
    return 'intl_quote_offered';
  case 'quote_accepted':
    return 'intl_quote_accepted';
  case 'quote_declined':
    return 'intl_quote_declined';
  case 'quote_expired':
  case 'quote_timeout_requires_requote':
    return 'intl_quote_expired';
  case 'shipment_created':
  case 'label_created':
    return 'shipment_created';
  default:
    return null;
}
```
quote_requires_requote events are currently unmapped and can skip customer notifications.
mapShippingEventToTemplate handles quote_timeout_requires_requote but not quote_requires_requote (emitted in frontend/lib/services/shop/quotes.ts on stock-unavailable acceptance path). This can drop an important notification transition.
Suggested patch
```diff
   case 'quote_expired':
+  case 'quote_requires_requote':
   case 'quote_timeout_requires_requote':
     return 'intl_quote_expired';
```
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/services/shop/notifications/templates.ts` around lines 120 -
137, mapShippingEventToTemplate currently maps 'quote_timeout_requires_requote'
but omits 'quote_requires_requote', causing those events to return null and skip
customer notifications; update the switch in mapShippingEventToTemplate to
handle the 'quote_requires_requote' event (same mapping as
'quote_timeout_requires_requote' to return 'intl_quote_expired') so that the
stock-unavailable acceptance path triggers the same notification; ensure you add
the case for 'quote_requires_requote' alongside the existing
'quote_timeout_requires_requote' case in the switch.
```typescript
const rawScopes = normalizeScopes(rawPayload.scp);
const scopes =
  rawScopes.length > 0 ? rawScopes : [...DEFAULT_STATUS_TOKEN_SCOPES];
```
Reject invalid scp when the claim is present.
Verifier currently falls back to default scopes even if scp exists but is malformed/empty. Defaults should apply only when scp is absent (legacy tokens); otherwise reject as invalid payload.
Suggested change
```diff
-  const rawScopes = normalizeScopes(rawPayload.scp);
-  const scopes =
-    rawScopes.length > 0 ? rawScopes : [...DEFAULT_STATUS_TOKEN_SCOPES];
+  const hasScpClaim = Object.prototype.hasOwnProperty.call(rawPayload, 'scp');
+  const rawScopes = normalizeScopes(rawPayload.scp);
+  if (hasScpClaim && rawScopes.length === 0) {
+    return { ok: false, reason: 'invalid_payload' };
+  }
+  const scopes = hasScpClaim
+    ? rawScopes
+    : [...DEFAULT_STATUS_TOKEN_SCOPES];
```
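A self-contained sketch of the suggested branching; the default scope set and the normalizer below are simplified stand-ins for the module's real constants and helpers:

```typescript
// Simplified stand-ins for the module's real constants/helpers.
const DEFAULT_STATUS_TOKEN_SCOPES = ['status:read'] as const;

function normalizeScopes(value: unknown): string[] {
  if (!Array.isArray(value)) return [];
  return value.filter(
    (s): s is string => typeof s === 'string' && s.trim().length > 0
  );
}

type ScopeResult =
  | { ok: true; scopes: string[] }
  | { ok: false; reason: 'invalid_payload' };

// Defaults apply only when `scp` is absent (legacy tokens); a present but
// malformed/empty claim is rejected instead of silently widened to defaults.
function resolveScopes(rawPayload: Record<string, unknown>): ScopeResult {
  const hasScpClaim = Object.prototype.hasOwnProperty.call(rawPayload, 'scp');
  const rawScopes = normalizeScopes(rawPayload.scp);
  if (hasScpClaim && rawScopes.length === 0) {
    return { ok: false, reason: 'invalid_payload' };
  }
  return {
    ok: true,
    scopes: hasScpClaim ? rawScopes : [...DEFAULT_STATUS_TOKEN_SCOPES],
  };
}
```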
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/shop/status-token.ts` around lines 149 - 152, The current logic
falls back to DEFAULT_STATUS_TOKEN_SCOPES even when rawPayload.scp exists but
normalizes to an empty/malformed set; change the scopes resolution so that you
first detect presence of the claim (rawPayload.scp), then call
normalizeScopes(rawPayload.scp) and: if scp is undefined use
[...DEFAULT_STATUS_TOKEN_SCOPES], but if scp is present and normalizeScopes
returns an empty/invalid array reject the token (throw or return an
invalid-payload error) instead of falling back to defaults; update the code
around normalizeScopes, rawPayload.scp and the scopes assignment to implement
this branching and return an error when scp is present but malformed.
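The branching described above can be sketched in isolation. This is a minimal TypeScript sketch, with `normalizeScopes` and `DEFAULT_STATUS_TOKEN_SCOPES` as simplified stand-ins for the real implementations in `status-token.ts`, not the actual code:

```typescript
// Simplified stand-ins for illustration only.
const DEFAULT_STATUS_TOKEN_SCOPES = ['status:read'] as const;

function normalizeScopes(value: unknown): string[] {
  if (!Array.isArray(value)) return [];
  return value.filter((s): s is string => typeof s === 'string' && s.length > 0);
}

type ScopeResult =
  | { ok: true; scopes: string[] }
  | { ok: false; reason: 'invalid_payload' };

function resolveScopes(rawPayload: Record<string, unknown>): ScopeResult {
  const hasScpClaim = Object.prototype.hasOwnProperty.call(rawPayload, 'scp');
  const rawScopes = normalizeScopes(rawPayload.scp);
  // Reject a present-but-malformed/empty scp; defaults apply only when absent.
  if (hasScpClaim && rawScopes.length === 0) {
    return { ok: false, reason: 'invalid_payload' };
  }
  return {
    ok: true,
    scopes: hasScpClaim ? rawScopes : [...DEFAULT_STATUS_TOKEN_SCOPES],
  };
}
```

The key design point is that "claim absent" and "claim present but empty" become distinct branches, so legacy tokens keep working while tampered or malformed tokens are rejected.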
```ts
const first = await runNotificationOutboxProjector({ limit: 20 });
const second = await runNotificationOutboxProjector({ limit: 20 });

expect(first.inserted).toBeGreaterThanOrEqual(1);
expect(second.inserted).toBe(0);
```
Scope dedupe assertions to this event instead of global projector counters.
`result.inserted` is process-wide. The current check can fail when unrelated events exist, making this test non-deterministic.
💡 Suggested fix
```diff
-  const first = await runNotificationOutboxProjector({ limit: 20 });
-  const second = await runNotificationOutboxProjector({ limit: 20 });
-
-  expect(first.inserted).toBeGreaterThanOrEqual(1);
-  expect(second.inserted).toBe(0);
+  await runNotificationOutboxProjector({ limit: 20 });
+  const afterFirst = await db
+    .select({ id: notificationOutbox.id })
+    .from(notificationOutbox)
+    .where(
+      and(
+        eq(notificationOutbox.orderId, orderId),
+        eq(notificationOutbox.sourceEventId, canonicalEventId)
+      )
+    );
+
+  await runNotificationOutboxProjector({ limit: 20 });
+  const afterSecond = await db
+    .select({ id: notificationOutbox.id })
+    .from(notificationOutbox)
+    .where(
+      and(
+        eq(notificationOutbox.orderId, orderId),
+        eq(notificationOutbox.sourceEventId, canonicalEventId)
+      )
+    );
+
+  expect(afterFirst).toHaveLength(1);
+  expect(afterSecond).toHaveLength(1);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/tests/shop/notifications-projector-phase3.test.ts` around lines
60 - 64, The test currently asserts on the process-wide counters returned by
runNotificationOutboxProjector (variables first.inserted and second.inserted),
which can be influenced by unrelated events; change the test to assert dedupe
only for this specific event by capturing the identifiers created by the
projector (or querying the notifications table for the unique event id) and
asserting that only one notification for this event was inserted after the first
run and none after the second. Concretely, update the call sites around
runNotificationOutboxProjector to either return the inserted row ids or query by
the event's unique key (use the event id/unique payload used in this test) and
assert counts against that filtered result rather than
first.inserted/second.inserted. Ensure references to
runNotificationOutboxProjector and result.inserted are replaced with the
event-scoped check.
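The reason an event-scoped check is deterministic while run counters are not is the outbox's dedupe key. This is a minimal in-memory sketch of that idempotent-insert behavior, assuming a unique key on (orderId, sourceEventId); the class and method names here are illustrative, not the real schema or service API:

```typescript
type OutboxRow = { orderId: string; sourceEventId: string; kind: string };

class OutboxStore {
  private rows = new Map<string, OutboxRow>();

  // Insert-if-absent keyed on (orderId, sourceEventId): a second projector
  // pass over the same event is a no-op, regardless of unrelated traffic.
  insert(row: OutboxRow): boolean {
    const key = `${row.orderId}:${row.sourceEventId}`;
    if (this.rows.has(key)) return false;
    this.rows.set(key, row);
    return true;
  }

  countFor(orderId: string, sourceEventId: string): number {
    return this.rows.has(`${orderId}:${sourceEventId}`) ? 1 : 0;
  }
}
```

A test asserting `countFor(orderId, eventId) === 1` after each projector pass stays green even when other events are projected in the same run, which is exactly what the suggested fix queries for.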
```ts
await updateProduct(product.id, {
  prices: [{ currency: 'USD', priceMinor: 2500, originalPriceMinor: null }],
} as any);
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Scripts executed during verification:

```shell
# Read the test file around the specified lines
head -n 160 frontend/lib/tests/shop/product-prices-write-authority-phase8.test.ts | tail -n 85

# Check the updateProduct function definition
rg -n "export.*updateProduct|function updateProduct|const updateProduct" --type ts --type tsx

# Get the imports and setup at the top of the test file
head -n 30 frontend/lib/tests/shop/product-prices-write-authority-phase8.test.ts

# Find the updateProduct service module
find . -name "products.ts" -o -name "products.js" | grep services
rg -n "export.*function updateProduct|export const updateProduct" --type ts

# Read updateProduct and its imports
sed -n '20,100p' frontend/lib/services/products/mutations/update.ts
head -n 20 frontend/lib/services/products/mutations/update.ts

# Trace the relevant types and schemas
rg -n "type ProductUpdateInput|interface ProductUpdateInput" --type ts
sed -n '15,50p' frontend/lib/types/shop.ts
rg -n "productAdminUpdateSchema" --type ts -A 20
sed -n '30,60p' frontend/lib/types/shop.ts
rg -n "adminPriceRowSchema" --type ts -A 10
rg "ProductPriceInput" --type ts
```
Remove the `as any` casts by aligning the `ProductUpdateInput.prices` type with the actual `adminPriceRowSchema`.
The type definition of `ProductUpdateInput.prices` declares `ProductPriceInput[]` (which uses `price` and `originalPrice` fields), but `productAdminUpdateSchema.prices` validates against `adminPriceRowSchema` (which expects `priceMinor` and `originalPriceMinor`). The `as any` casts suppress this type mismatch.
Update the type definition so it accurately reflects what the schema accepts. Consider removing the conflicting `ProductPriceInput` type override in `ProductUpdateInput` and deriving it directly from the schema.
This applies to lines 79–81 and 147–149.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/tests/shop/product-prices-write-authority-phase8.test.ts` around
lines 79 - 81, The ProductUpdateInput.prices type conflicts with the runtime
schema (productAdminUpdateSchema) which expects adminPriceRowSchema fields
(priceMinor/originalPriceMinor), causing the use of `as any` in calls like
updateProduct(...) at the prices payload; fix by removing or replacing the
custom ProductPriceInput override on ProductUpdateInput and instead derive the
prices type from productAdminUpdateSchema/adminPriceRowSchema (or update
ProductPriceInput to use priceMinor/originalPriceMinor) so TypeScript matches
the validated shape; update the type used by ProductUpdateInput.prices and any
related test inputs (the calls near updateProduct and other occurrences) to the
corrected type so the `as any` casts can be removed.
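The "derive it from the schema" direction can be sketched without the real zod schemas. In the codebase this would be `z.infer<typeof adminPriceRowSchema>`; here a plain object stands in for the schema, so everything below is an assumption mirroring `adminPriceRowSchema`'s fields rather than the actual code:

```typescript
// Single source of truth for the row shape (stand-in for the zod schema).
const adminPriceRowExample = {
  currency: 'USD',
  priceMinor: 2500,
  originalPriceMinor: null as number | null,
};

// Derived type: edits to the shape above propagate automatically, which is
// what z.infer<typeof adminPriceRowSchema> gives you for real schemas.
type AdminPriceRowInput = typeof adminPriceRowExample;

// With prices typed as AdminPriceRowInput[], the `as any` cast is unnecessary.
const prices: AdminPriceRowInput[] = [
  { currency: 'USD', priceMinor: 2500, originalPriceMinor: null },
];
```

Deriving the type from the validator keeps the compile-time shape and the runtime validation from drifting apart, which is what allowed the mismatch in the first place.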
```ts
const LOCAL_DB_URL =
  'postgresql://devlovers_local:Gfdtkk43@localhost:5432/devlovers_shop_local_clean?sslmode=disable';
```
Do not commit DB credentials in Playwright config.
Hardcoding a full DSN (including password) in tracked code is a security risk and complicates rotation. Read it from env and fail fast when missing.
Suggested change

```diff
-const LOCAL_DB_URL =
-  'postgresql://devlovers_local:Gfdtkk43@localhost:5432/devlovers_shop_local_clean?sslmode=disable';
+const LOCAL_DB_URL = process.env.PLAYWRIGHT_DATABASE_URL_LOCAL?.trim();
+if (!LOCAL_DB_URL) {
+  throw new Error('PLAYWRIGHT_DATABASE_URL_LOCAL must be configured');
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/playwright.config.ts` around lines 3 - 4, Replace the hardcoded
LOCAL_DB_URL constant with a value read from an environment variable (e.g.,
process.env.PLAYWRIGHT_LOCAL_DB_URL or process.env.LOCAL_DB_URL) and make the
startup fail fast if the env var is missing/empty; locate the LOCAL_DB_URL
declaration in the file and change it to read from process.env, and add a guard
that throws or exits with a clear error when the env var is not set so no
credentials remain in source control.
```ts
reuseExistingServer: true,
env: {
  APP_ENV: 'local',
  DATABASE_URL_LOCAL: LOCAL_DB_URL,
  DATABASE_URL: '',
  SHOP_STRICT_LOCAL_DB: '1',
  SHOP_REQUIRED_DATABASE_URL_LOCAL: LOCAL_DB_URL,
  SHOP_STATUS_TOKEN_SECRET:
    'test_status_token_secret_test_status_token_secret',
  NODE_ENV: 'test',
  NEXT_TELEMETRY_DISABLED: '1',
},
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Locate the Playwright config to examine the webServer configuration
find . -name "playwright.config.ts" -o -name "playwright.config.js" | head -5
```
🌐 Web query:
Playwright webServer reuseExistingServer env configuration behavior
💡 Result:
- `webServer.env` is only applied to the process Playwright spawns via `webServer.command`. By default the spawned process inherits `process.env` and adds `PLAYWRIGHT_TEST=1`, and you can override/add variables via `env: { ... }`. [1]
- If `reuseExistingServer: true` and Playwright detects something already listening on the configured `url`/`port`, it reuses that existing server and does not run `command`, so `webServer.env` won't affect the already-running server's environment (those env vars had to be set when that server was started). [1]
- If `reuseExistingServer: true` but no server is running, Playwright will start one using `command` (with `env` applied) and will shut it down after the tests (current behavior). [1] [2]

Example pattern:

```ts
webServer: {
  command: 'npm run start',
  url: 'http://localhost:3000',
  reuseExistingServer: true,
  env: { MY_FLAG: '1' }, // only affects the server if Playwright starts it
}
```

Sources: [1] Playwright docs (webServer options, incl. env + reuseExistingServer) (playwright.dev), [2] Playwright issue discussing shutdown behavior even with reuseExistingServer (github.com)
Citations:
- 1: https://playwright.dev/docs/test-webserver?utm_source=openai
- 2: [Feature] Don't quit a web server started by Playwright if reuseExistingServer is true microsoft/playwright#26459
🏁 Script executed:

```shell
cat -n ./frontend/playwright.config.ts
```
Reconsider `reuseExistingServer: true` with strict local-only environment controls.
When `reuseExistingServer: true`, Playwright only applies `webServer.env` to servers it spawns itself. If a server is already running on port 3100, Playwright reuses it with its original environment variables, bypassing the strict local-only controls (`SHOP_STRICT_LOCAL_DB`, `SHOP_REQUIRED_DATABASE_URL_LOCAL`, empty `DATABASE_URL`) configured here. This creates non-deterministic test behavior dependent on what's already running, undermining E2E isolation.
Consider setting `reuseExistingServer: false` to ensure tests always run against a server with these strict environment settings applied.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/playwright.config.ts` around lines 23 - 34, The Playwright config
currently sets reuseExistingServer: true which causes webServer.env to be
ignored when an existing server is reused, breaking the strict local-only
environment controls; change reuseExistingServer to false in the Playwright
config so the test runner always spawns the server with the provided
webServer.env (the env keys like SHOP_STRICT_LOCAL_DB,
SHOP_REQUIRED_DATABASE_URL_LOCAL, DATABASE_URL, NEXT_TELEMETRY_DISABLED,
NODE_ENV are applied), ensuring deterministic E2E isolation and preventing tests
from inheriting an external process's environment.
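If always spawning turns out to be too slow for local iteration, a common compromise from the Playwright docs is to gate reuse on CI, so `webServer.env` is always applied where determinism matters. A sketch of that pattern; the command, port, and env values here are illustrative, not this repo's exact config:

```typescript
// Gating on CI means webServer.env is always applied in CI runs, while local
// runs may still opt into reusing an already-running dev server.
const reuseExistingServer = !process.env.CI;

const webServer = {
  command: 'npm run start -- --port 3100', // assumption: how the app is started
  url: 'http://localhost:3100',
  reuseExistingServer,
  env: {
    APP_ENV: 'local',
    SHOP_STRICT_LOCAL_DB: '1',
    NODE_ENV: 'test',
  },
};
```

The trade-off: local developers accept that a reused server may carry stale env, while CI (where `process.env.CI` is set) always spawns fresh with the strict settings.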
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/lib/services/orders/restock.ts (1)
171-190: ⚠️ Potential issue | 🔴 Critical

Transition validation is not atomic with writes, allowing partial/inconsistent terminalization.
At line 311, `stockRestored`/`restockedAt` is committed before the line 330 transition check. If line 330 throws, the order can be stranded in a mixed state and later skipped by the early-return guard. Also, both update paths validate against a snapshot status but don't guard `orders.status` in `WHERE`, so concurrent state changes can still slip through.

Suggested fix (validate before mutation + compare-and-set on status)

```diff
@@
-    const finalizedAt = new Date();
-    const [finalized] = await db
-      .update(orders)
-      .set({
-        stockRestored: true,
-        restockedAt: finalizedAt,
-        updatedAt: finalizedAt,
-      })
-      .where(and(eq(orders.id, orderId), eq(orders.stockRestored, false)))
-      .returning({ id: orders.id });
-
-    if (!finalized) return;
+    const finalizedAt = new Date();
@@
     const shouldCancel = reason === 'canceled';
     const shouldFail = reason === 'failed' || reason === 'stale';
-    validateRestockTransition(orderId, order.status, reason);
-
-    await db
+    validateRestockTransition(orderId, order.status, reason);
+
+    const [finalized] = await db
       .update(orders)
       .set({
+        stockRestored: true,
+        restockedAt: finalizedAt,
         inventoryStatus: 'released',
         updatedAt: finalizedAt,
         ...(shouldFail ? { status: 'INVENTORY_FAILED' } : {}),
         ...(shouldCancel ? { status: 'CANCELED' } : {}),
       })
-      .where(eq(orders.id, orderId));
+      .where(
+        and(
+          eq(orders.id, orderId),
+          eq(orders.stockRestored, false),
+          eq(orders.status, order.status)
+        )
+      )
+      .returning({ id: orders.id });
+
+    if (!finalized) {
+      throw new OrderStateInvalidError('Order state changed during restock', {
+        orderId,
+        details: { fromStatus: order.status, reason },
+      });
+    }
@@
-      .where(and(eq(orders.id, orderId), eq(orders.stockRestored, false)))
+      .where(
+        and(
+          eq(orders.id, orderId),
+          eq(orders.stockRestored, false),
+          eq(orders.status, order.status)
+        )
+      )
```

Also applies to: 310-340
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/restock.ts` around lines 171 - 190, The update isn't atomic with the transition validation: move the transition validation out of the post-update path and perform a compare-and-set update on orders so the mutation only succeeds if the current status equals the expected snapshot; specifically, in the code that calls db.update(orders).set(...).where(and(eq(orders.id, orderId), eq(orders.stockRestored, false))) (the block that returns touched), include an additional predicate comparing orders.status to the snapshot status (e.g., eq(orders.status, <currentStatus>)) and only set status to 'INVENTORY_FAILED' or 'CANCELED' as part of that same update, so the status transition and stockRestored/restockedAt are committed in one conditional DB write and concurrent changes cannot slip through.
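The compare-and-set shape this fix relies on can be shown detached from Drizzle. An in-memory sketch where the `expectedStatus` predicate plays the role of `eq(orders.status, order.status)`; all names below are illustrative, not the real service code:

```typescript
type Order = { id: string; status: string; stockRestored: boolean };

// Update succeeds only if the row still matches the snapshot we validated
// against; a concurrent transition makes this a no-op instead of a bad write.
function casFinalize(
  store: Map<string, Order>,
  id: string,
  expectedStatus: string,
  nextStatus: string
): boolean {
  const row = store.get(id);
  if (!row || row.stockRestored || row.status !== expectedStatus) return false;
  store.set(id, { id, status: nextStatus, stockRestored: true });
  return true;
}
```

The caller treats `false` as "state changed during restock" and surfaces an error instead of committing a partial terminalization, which is exactly what the `returning` + `if (!finalized) throw` branch does in the suggested diff.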
♻️ Duplicate comments (1)
frontend/lib/services/shop/shipping/shipments-worker.ts (1)
453-472: ⚠️ Potential issue | 🟠 Major

Emit `creating_label` only for orders that actually transitioned.
`mark_orders` can skip rows via the transition guard, but lines 458-472 still emit for every `claimed` row. This can write false canonical events.

💡 Proposed fix

```diff
- select * from claimed
+ select c.*
+ from claimed c
+ join mark_orders mo on mo.id = c.order_id
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/shipments-worker.ts` around lines 453 - 472, The loop over claimed rows currently calls emitWorkerShippingEvent for every row; change it to only emit when the row actually transitioned in mark_orders by checking the transition result (e.g., the response/flags returned from mark_orders or the transition guard result) before calling emitWorkerShippingEvent; locate the claimed iteration in shipments-worker.ts (the for loop over claimed, the emitWorkerShippingEvent call and nextAttemptNumber usage) and add a conditional that verifies the row transitioned (or filter claimed using the transition result) so only transitioned rows produce a 'creating_label' event.
🧹 Nitpick comments (2)
frontend/lib/services/products/mutations/create.ts (1)
109-112: Consider guarding cleanup for injected executors.

Line 111 executes cleanup with `executor.delete()`, which could be a transaction context if `options.db` is injected. While cleanup currently only occurs in the default non-injected path (admin route calls without a db parameter), this defensive pattern prevents potential issues if the function is later called with an injected transaction executor. Use `db.delete()` directly and guard with `!options?.db` to ensure cleanup only runs in the safe default path.

Suggested patch

```diff
-    if (createdProductId) {
+    if (createdProductId && !options?.db) {
       try {
-        await executor.delete(products).where(eq(products.id, createdProductId));
+        await db.delete(products).where(eq(products.id, createdProductId));
       } catch (cleanupDbError) {
         logError(
           'Failed to cleanup product after create failure',
           cleanupDbError
         );
       }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/products/mutations/create.ts` around lines 109 - 112, The cleanup currently calls executor.delete(products).where(...) which may run inside an injected transaction executor; change the guard so cleanup uses the default DB directly and only when no injected DB is present: check !options?.db before performing cleanup and call db.delete(products).where(...) (referencing createdProductId, executor.delete, products, options.db and db.delete) so the delete runs only in the safe non-injected path.
frontend/lib/services/orders/monobank-retry.ts (1)
51-53: Redundant size check can be simplified.

The condition `if (TRANSIENT_APPLY_CODES.size > 0)` is always true since the set is statically initialized with values. This could be simplified, but it doesn't affect correctness and may be intentional for future flexibility.

♻️ Optional simplification

```diff
-  if (TRANSIENT_APPLY_CODES.size > 0) {
-    return TRANSIENT_APPLY_CODES.has(code);
-  }
-
-  return false;
+  return TRANSIENT_APPLY_CODES.has(code);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/monobank-retry.ts` around lines 51 - 53, The guard "if (TRANSIENT_APPLY_CODES.size > 0)" is redundant because TRANSIENT_APPLY_CODES is statically populated; simplify the function by removing that size check and directly return TRANSIENT_APPLY_CODES.has(code) (referencing the TRANSIENT_APPLY_CODES set and the function that currently wraps this conditional).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/api/shop/admin/products/`[id]/route.ts:
- Around line 480-485: The dedupe seed in the admin product update audit uses a
capitalized key "Slug" which makes the event shape inconsistent; update the seed
object where domain: 'product_admin_action' and action: 'update' (the properties
including requestId, productId: updated.id, Slug: updated.slug, toBadge:
updated.badge) to use a lowercase slug key (slug: updated.slug) so the dedupe
seed shape matches other admin audit seeds.
In `@frontend/app/api/shop/admin/products/route.ts`:
- Around line 327-360: The image cleanup currently runs regardless of whether
the product rollback (deleteProduct) succeeded, which risks deleting an image
while the product still exists; change the flow so
destroyProductImage(inserted.imagePublicId) is only called when
deleteProduct(inserted.id) completes without throwing: attempt deleteProduct
inside a try and, if it succeeds, then (and only then) run destroyProductImage
for inserted.imagePublicId; keep the existing error logging (logError) for both
the rollback delete and the image deletion but ensure the image deletion path is
guarded by the successful deleteProduct outcome (use the same
inserted.id/inserted.slug/inserted.imagePublicId symbols to identify the record
in logs).
In `@frontend/app/api/shop/orders/`[id]/quote/accept/route.ts:
- Around line 52-53: When handling origin guard results from
guardBrowserSameOrigin(request) in the route, ensure the returned blocked
Response is mutated or recreated to include the no-store cache directive so it
matches the route's no-store posture; specifically, if blocked is truthy (the
variable from the call to guardBrowserSameOrigin), return a Response based on
blocked with its headers updated to include "cache-control: no-store" (or set
the equivalent header) before returning from this route handler.
In `@frontend/lib/services/products/mutations/create.ts`:
- Line 28: The call currently passing "executor as any" to normalizeSlug hides a
type mismatch between ProductMutationExecutor and the DbClient that
normalizeSlug expects; remove the cast and either (A) update normalizeSlug's
parameter type to accept the narrower ProductMutationExecutor (or a union like
ProductMutationExecutor | DbClient) or (B) make ProductMutationExecutor
implement/extend the DbClient interface so it satisfies normalizeSlug's
signature, and then call normalizeSlug(executor) directly; reference the
executor variable, normalizeSlug function, and the ProductMutationExecutor and
DbClient types when making the change to restore proper compile-time safety.
In `@frontend/lib/services/shop/shipping/shipments-worker.ts`:
- Around line 752-773: The unguarded call to emitWorkerShippingEvent in
processClaimedShipment can throw after markFailed has committed and abort the
worker; wrap the emitWorkerShippingEvent call in a try/catch (or otherwise guard
it) so any exceptions are caught and handled (log the error via the existing
logger and/or record a non-fatal telemetry event) without rethrowing, ensuring
processClaimedShipment completes even if the event write fails; keep the same
payload and status mapping but prevent the event-write failure from propagating.
In `@frontend/lib/services/shop/transitions/return-state.ts`:
- Line 18: The lint error comes from returning Object.freeze(arr) inside a
forEach callback; change the code so the callback does not return a value—either
call Object.freeze(arr) in a block-bodied forEach callback (i.e., remove the
implicit return) or replace the forEach with a for...of loop over
Object.values(RETURN_ALLOWED_FROM) and call Object.freeze(arr) for each element;
reference RETURN_ALLOWED_FROM and the current Object.freeze(arr) usage to locate
and update the line.
In `@frontend/lib/services/shop/transitions/shipping-state.ts`:
- Line 26: The forEach callback currently uses an implicit-return arrow which
returns the result of Object.freeze(arr) and triggers the
iterable-callback-return lint rule; change the callback for
Object.values(SHIPPING_ALLOWED_FROM).forEach to a block-bodied arrow (e.g.,
forEach(arr => { Object.freeze(arr); })) so the freeze call is executed but
nothing is returned, keeping SHIPPING_ALLOWED_FROM arrays frozen without
returning a value.
In `@frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts`:
- Around line 380-387: The mock for writeShippingEvent swallows the return value
of the real implementation, breaking the function contract; update the mock in
the vi.mocked(writeShippingEvent) implementation to return the awaited result
when delegating to originalWriteShippingEventImpl (i.e., use return await
originalWriteShippingEventImpl(args)) while still throwing for args.eventName
=== 'label_created', so callers receive the same return value as the real
writeShippingEvent.
In `@frontend/tests/e2e/shop-minimal-phase9.spec.ts`:
- Around line 8-15: The current safety check only ensures DATABASE_URL_LOCAL and
SHOP_STATUS_TOKEN_SECRET are set; enhance it by validating that LOCAL_DB_URL
actually targets a local/test DB: parse LOCAL_DB_URL (using new
URL(LOCAL_DB_URL)) and assert the hostname is "localhost" or "127.0.0.1" (or,
for file-based DBs, that the protocol/ pathname indicates a local file), and if
not throw an Error. Update the top-level validation in the test file where
LOCAL_DB_URL and STATUS_TOKEN_SECRET are checked (symbols: LOCAL_DB_URL,
STATUS_TOKEN_SECRET) to include this host/protocol check and a clear error
message refusing to run against non-local hosts.
---
Outside diff comments:
In `@frontend/lib/services/orders/restock.ts`:
- Around line 171-190: The update isn't atomic with the transition validation:
move the transition validation out of the post-update path and perform a
compare-and-set update on orders so the mutation only succeeds if the current
status equals the expected snapshot; specifically, in the code that calls
db.update(orders).set(...).where(and(eq(orders.id, orderId),
eq(orders.stockRestored, false))) (the block that returns touched), include an
additional predicate comparing orders.status to the snapshot status (e.g.,
eq(orders.status, <currentStatus>)) and only set status to 'INVENTORY_FAILED' or
'CANCELED' as part of that same update, so the status transition and
stockRestored/restockedAt are committed in one conditional DB write and
concurrent changes cannot slip through.
---
Duplicate comments:
In `@frontend/lib/services/shop/shipping/shipments-worker.ts`:
- Around line 453-472: The loop over claimed rows currently calls
emitWorkerShippingEvent for every row; change it to only emit when the row
actually transitioned in mark_orders by checking the transition result (e.g.,
the response/flags returned from mark_orders or the transition guard result)
before calling emitWorkerShippingEvent; locate the claimed iteration in
shipments-worker.ts (the for loop over claimed, the emitWorkerShippingEvent call
and nextAttemptNumber usage) and add a conditional that verifies the row
transitioned (or filter claimed using the transition result) so only
transitioned rows produce a 'creating_label' event.
---
Nitpick comments:
In `@frontend/lib/services/orders/monobank-retry.ts`:
- Around line 51-53: The guard "if (TRANSIENT_APPLY_CODES.size > 0)" is
redundant because TRANSIENT_APPLY_CODES is statically populated; simplify the
function by removing that size check and directly return
TRANSIENT_APPLY_CODES.has(code) (referencing the TRANSIENT_APPLY_CODES set and
the function that currently wraps this conditional).
In `@frontend/lib/services/products/mutations/create.ts`:
- Around line 109-112: The cleanup currently calls
executor.delete(products).where(...) which may run inside an injected
transaction executor; change the guard so cleanup uses the default DB directly
and only when no injected DB is present: check !options?.db before performing
cleanup and call db.delete(products).where(...) (referencing createdProductId,
executor.delete, products, options.db and db.delete) so the delete runs only in
the safe non-injected path.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (36)
- frontend/.env.example
- frontend/app/api/shop/admin/products/[id]/route.ts
- frontend/app/api/shop/admin/products/[id]/status/route.ts
- frontend/app/api/shop/admin/products/route.ts
- frontend/app/api/shop/internal/notifications/run/route.ts
- frontend/app/api/shop/orders/[id]/quote/accept/route.ts
- frontend/app/api/shop/orders/[id]/quote/decline/route.ts
- frontend/app/api/shop/orders/[id]/status/route.ts
- frontend/app/api/shop/webhooks/monobank/route.ts
- frontend/app/api/shop/webhooks/stripe/route.ts
- frontend/db/schema/shop.ts
- frontend/drizzle/0026_gray_stone_men.sql
- frontend/drizzle/meta/0026_snapshot.json
- frontend/drizzle/meta/_journal.json
- frontend/lib/services/orders/checkout.ts
- frontend/lib/services/orders/monobank-retry.ts
- frontend/lib/services/orders/restock.ts
- frontend/lib/services/products/mutations/create.ts
- frontend/lib/services/shop/events/write-admin-audit.ts
- frontend/lib/services/shop/events/write-payment-event.ts
- frontend/lib/services/shop/notifications/outbox-worker.ts
- frontend/lib/services/shop/order-access.ts
- frontend/lib/services/shop/shipping/shipments-worker.ts
- frontend/lib/services/shop/transitions/order-state.ts
- frontend/lib/services/shop/transitions/return-state.ts
- frontend/lib/services/shop/transitions/shipping-state.ts
- frontend/lib/tests/shop/admin-product-create-atomic-phasec.test.ts
- frontend/lib/tests/shop/checkout-legal-consent-phase4.test.ts
- frontend/lib/tests/shop/monobank-webhook-apply.test.ts
- frontend/lib/tests/shop/monobank-webhook-retry-classifier.test.ts
- frontend/lib/tests/shop/notifications-worker-phase3.test.ts
- frontend/lib/tests/shop/order-payment-init-intl-gate-phase2.test.ts
- frontend/lib/tests/shop/returns-composite-fk-phaseE.test.ts
- frontend/lib/tests/shop/returns-route-phase4.test.ts
- frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts
- frontend/tests/e2e/shop-minimal-phase9.spec.ts
✅ Files skipped from review due to trivial changes (1)
- frontend/lib/tests/shop/admin-product-create-atomic-phasec.test.ts
🚧 Files skipped from review as they are similar to previous changes (7)
- frontend/lib/tests/shop/order-payment-init-intl-gate-phase2.test.ts
- frontend/lib/tests/shop/returns-route-phase4.test.ts
- frontend/app/api/shop/webhooks/monobank/route.ts
- frontend/lib/services/shop/events/write-admin-audit.ts
- frontend/lib/services/shop/transitions/order-state.ts
- frontend/.env.example
- frontend/lib/tests/shop/monobank-webhook-apply.test.ts
Actionable comments posted: 4
🧹 Nitpick comments (3)
frontend/app/api/shop/orders/[id]/quote/accept/route.ts (1)
20-38: Consider extracting shared helpers to a common module.
`noStoreJson` and `mapQuoteErrorStatus` appear to be duplicated across multiple quote-related routes (accept, decline, offer). Extracting them to a shared utility (e.g., `@/lib/api/shop/quote-utils`) would reduce duplication and ensure consistent behavior across endpoints.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/orders/`[id]/quote/accept/route.ts around lines 20 - 38, The helpers noStoreJson and mapQuoteErrorStatus are duplicated across quote routes; extract them into a shared module (suggested name: quote-utils) and replace the inline definitions with imports. Create a new module that exports noStoreJson and mapQuoteErrorStatus (keeping the same function names/signatures), update the route handlers (e.g., accept/decline/offer route files that currently declare these functions) to import those helpers, and run a quick search/replace to ensure all usages reference the shared exports so behavior stays consistent across endpoints.
frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts (1)
519-531: Consider adding event verification after mock throws.
This test verifies shipment and order state when the `label_created` event write throws, but doesn't assert on the events themselves. Adding verification that `creating_label` was written while `label_created` was not would strengthen the test and document expected behavior.
💡 Suggested addition before cleanup
```diff
   expect(order?.shippingStatus).toBe('label_created');
   expect(order?.trackingNumber).toBe('20450000999999');
   expect(order?.shippingProviderRef).toBe('np-provider-ref-event-fail');
+
+  const events = await readOrderShippingEvents(seed.orderId);
+  expect(events.some(event => event.eventName === 'creating_label')).toBe(true);
+  expect(events.some(event => event.eventName === 'label_created')).toBe(false);
 } finally {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts` around lines 519 - 531, add explicit assertions that the "creating_label" event was written and the "label_created" event was not after the mocked writeShippingEvent throws: locate where writeShippingEvent is mocked in the test and, before restoring the mock and calling cleanupSeed, assert (via vi.mocked(writeShippingEvent).toHaveBeenCalledWith(...) or by inspecting vi.mocked(writeShippingEvent).mock.calls) that a call with event type "creating_label" occurred and that no call with event type "label_created" happened; keep these checks adjacent to the existing order/shipment assertions so the test documents expected event behavior when writeShippingEvent fails.
frontend/lib/services/shop/shipping/shipments-worker.ts (1)
803-829: Guard failure-path metric writes the same way as success-path writes.
If `recordShippingMetric` throws here, the function can still fail after DB state updates. Wrapping this block keeps post-commit telemetry non-fatal and consistent with line 714 handling.
Proposed fix
```diff
-    if (terminalNeedsAttention) {
-      recordShippingMetric({
-        name: 'needs_attention',
-        source: 'shipments_worker',
-        runId: args.runId,
-        orderId: args.claim.order_id,
-        shipmentId: args.claim.id,
-        code: classified.code,
-      });
-    } else {
-      recordShippingMetric({
-        name: 'failed',
-        source: 'shipments_worker',
-        runId: args.runId,
-        orderId: args.claim.order_id,
-        shipmentId: args.claim.id,
-        code: classified.code,
-      });
-      recordShippingMetric({
-        name: 'retries',
-        source: 'shipments_worker',
-        runId: args.runId,
-        orderId: args.claim.order_id,
-        shipmentId: args.claim.id,
-        code: classified.code,
-      });
-    }
+    try {
+      if (terminalNeedsAttention) {
+        recordShippingMetric({
+          name: 'needs_attention',
+          source: 'shipments_worker',
+          runId: args.runId,
+          orderId: args.claim.order_id,
+          shipmentId: args.claim.id,
+          code: classified.code,
+        });
+      } else {
+        recordShippingMetric({
+          name: 'failed',
+          source: 'shipments_worker',
+          runId: args.runId,
+          orderId: args.claim.order_id,
+          shipmentId: args.claim.id,
+          code: classified.code,
+        });
+        recordShippingMetric({
+          name: 'retries',
+          source: 'shipments_worker',
+          runId: args.runId,
+          orderId: args.claim.order_id,
+          shipmentId: args.claim.id,
+          code: classified.code,
+        });
+      }
+    } catch {
+      logWarn('shipping_shipments_worker_post_failure_metric_write_failed', {
+        runId: args.runId,
+        shipmentId: args.claim.id,
+        orderId: args.claim.order_id,
+        code: 'SHIPPING_METRIC_WRITE_FAILED',
+      });
+    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/shipments-worker.ts` around lines 803 - 829, the metric writes for both the terminalNeedsAttention and else branches should be made non-fatal by wrapping the recordShippingMetric calls in a try/catch (matching the non-fatal pattern used elsewhere) so failures don’t bubble after DB updates; locate the block that checks terminalNeedsAttention and replace the direct recordShippingMetric invocations with a single try { ... } catch (err) { /* log non-fatal metric error with context: args.runId, args.claim.order_id, args.claim.id, classified.code */ } so metrics are attempted but any thrown errors are swallowed or logged without affecting the function outcome.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/api/shop/admin/products/`[id]/route.ts:
- Around line 491-505: The catch block that currently logs AUDIT_WRITE_FAILED
(using logWarn, baseMeta, requestId, actorUserId, productId: updated.id, action:
'product_admin_action.update') must fail the overall request instead of silently
succeeding: after logging the audit failure rethrow the caught auditError (or
throw a new descriptive error) so the update handler returns an error response
and does not report success; apply the same change to the delete-audit catch
(the corresponding AUDIT_WRITE_FAILED block around the delete flow) so both
update and delete mutations fail closed on audit write errors.
In `@frontend/lib/services/shop/shipping/shipments-worker.ts`:
- Around line 552-555: The current SQL join (selecting from updated_shipment us
join updated_order uo) conflates missing order updates with missing shipment
updates and causes callers (the return paths near the branches used at the call
sites around lines 678 and 759) to treat any missing row as SHIPMENT_LEASE_LOST;
instead, change the logic in shipments-worker.ts to distinguish a missing
updated_shipment row from a missing updated_order row before returning: either
modify the query to return separate presence flags (e.g., select us.order_id is
not null as shipment_updated, uo.id is not null as order_updated) or run two
targeted queries, then branch so that if shipment update is missing you return
the existing SHIPMENT_LEASE_LOST but if shipment exists and order update is
missing you return the transition-guard failure variant (the
transition-guard/missed-order result) used by the callers around 678/759; update
the early-return paths that now assume SHIPMENT_LEASE_LOST to use the correct
result based on those flags.
In `@frontend/lib/tests/shop/admin-product-create-atomic-phasec.test.ts`:
- Around line 131-136: The test currently only checks for a 500 and
INTERNAL_ERROR but may pass on unrelated failures; update the failing-path
assertions to explicitly verify the audit and rollback behavior by asserting
that the mocked writeAdminAudit was called (e.g.,
expect(writeAdminAudit).toHaveBeenCalledWith(...)) after invoking POST(req), and
in the second test also assert that the delete-rollback function (the mock used
to undo the created product—e.g., deleteProduct / rollbackDelete) was called to
confirm the rollback path was exercised; keep the existing res and json
assertions but add these explicit mock call checks so the audit-failure and
rollback branches are actually validated.
In `@frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts`:
- Around line 550-573: The test name and assertions are contradictory: the test
"does not mark success" asserts shipment.status === 'succeeded' while the worker
result shows succeeded: 0 and retried: 1; update the test to make intent clear
by either renaming the test (e.g., "shipment succeeds but order transition is
blocked / counted as retried") or adjust assertions to match intended behavior,
and add a concise inline comment near the result and shipment checks explaining
why a succeeded shipment is counted under retried (reference the variables
result, succeeded, retried and the fetched shipment via db.select where
eq(shippingShipments.id, seed.shipmentId)); if the current assertions are wrong,
change the expected status/metrics to the correct values instead of renaming.
---
Nitpick comments:
In `@frontend/app/api/shop/orders/`[id]/quote/accept/route.ts:
- Around line 20-38: The helpers noStoreJson and mapQuoteErrorStatus are
duplicated across quote routes; extract them into a shared module (suggested
name: quote-utils) and replace the inline definitions with imports. Create a new
module that exports noStoreJson and mapQuoteErrorStatus (keeping the same
function names/signatures), update the route handlers (e.g.,
accept/decline/offer route files that currently declare these functions) to
import those helpers, and run a quick search/replace to ensure all usages
reference the shared exports so behavior stays consistent across endpoints.
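The extraction described above can be sketched as one small shared module. This is a minimal sketch, not the PR's actual implementation: it assumes the routes use the standard Web `Response` API, and the error-code-to-status mapping (`ORDER_NOT_FOUND`, `QUOTE_EXPIRED`, etc.) is an illustrative assumption, since the real code set lives in the PR's services.

```typescript
// Hypothetical quote-utils.ts shared by the accept/decline/offer routes.

// JSON response that is never cached by the browser or intermediaries.
export function noStoreJson(body: unknown, init: { status?: number } = {}): Response {
  return new Response(JSON.stringify(body), {
    status: init.status ?? 200,
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': 'no-store',
    },
  });
}

// Map a service-level error code to an HTTP status; unknown codes fall back to 500.
export function mapQuoteErrorStatus(code: string): number {
  switch (code) {
    case 'INVALID_PAYLOAD':
      return 400;
    case 'FORBIDDEN':
      return 403;
    case 'ORDER_NOT_FOUND':
      return 404;
    case 'QUOTE_EXPIRED':
    case 'QUOTE_STATE_CONFLICT':
      return 409;
    default:
      return 500;
  }
}
```

Each route then imports both helpers instead of redeclaring them, so a future change (say, adding a header) lands everywhere at once.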
In `@frontend/lib/services/shop/shipping/shipments-worker.ts`:
- Around line 803-829: The metric writes for both the terminalNeedsAttention and
else branches should be made non-fatal by wrapping the recordShippingMetric
calls in a try/catch (matching the non-fatal pattern used elsewhere) so failures
don’t bubble after DB updates; locate the block that checks
terminalNeedsAttention and replace the direct recordShippingMetric invocations
with a single try { ... } catch (err) { /* log non-fatal metric error with
context: args.runId, args.claim.order_id, args.claim.id, classified.code */ } so
metrics are attempted but any thrown errors are swallowed or logged without
affecting the function outcome.
In `@frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts`:
- Around line 519-531: Add explicit assertions that the "creating_label" event
was written and the "label_created" event was not after the mocked
writeShippingEvent throws: locate where writeShippingEvent is mocked in the test
and, before restoring the mock and calling cleanupSeed, assert (via
vi.mocked(writeShippingEvent).toHaveBeenCalledWith(...) or by inspecting
vi.mocked(writeShippingEvent).mock.calls) that a call with event type
"creating_label" occurred and that no call with event type "label_created"
happened; keep these checks adjacent to the existing order/shipment assertions
so the test documents expected event behavior when writeShippingEvent fails.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- frontend/app/api/shop/admin/products/[id]/route.ts
- frontend/app/api/shop/admin/products/route.ts
- frontend/app/api/shop/orders/[id]/quote/accept/route.ts
- frontend/lib/services/orders/monobank-retry.ts
- frontend/lib/services/orders/restock.ts
- frontend/lib/services/products/mutations/create.ts
- frontend/lib/services/products/slug.ts
- frontend/lib/services/products/types.ts
- frontend/lib/services/shop/shipping/shipments-worker.ts
- frontend/lib/services/shop/transitions/return-state.ts
- frontend/lib/services/shop/transitions/shipping-state.ts
- frontend/lib/tests/shop/admin-product-create-atomic-phasec.test.ts
- frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts
- frontend/tests/e2e/shop-minimal-phase9.spec.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/tests/e2e/shop-minimal-phase9.spec.ts
- frontend/lib/services/shop/transitions/return-state.ts
…s/metrics, tighten quote+tests
Actionable comments posted: 3
🧹 Nitpick comments (2)
frontend/app/api/shop/admin/products/[id]/route.ts (1)
755-787: Audit fails after irreversible delete — client sees error despite successful deletion.
The product is deleted at line 755 before the audit write. If `writeAdminAudit` fails, the product is gone but the client receives a 500 error. On retry, they'll get 404. This is consistent with the fail-closed intent but creates a confusing operator experience.
If transactional atomicity is feasible (e.g., both tables in the same DB), wrapping delete + audit in a transaction would ensure consistency. Otherwise, consider whether swallowing the audit error for DELETE specifically (logging at error level) better serves the use case, since the mutation is irreversible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/admin/products/`[id]/route.ts around lines 755 - 787, the code currently calls deleteProduct(productIdForLog) and then throws if writeAdminAudit(...) fails, causing clients to see a 500 despite the irreversible delete; update the behavior so DELETE does not surface audit-writes as fatal: either wrap deleteProduct and writeAdminAudit in a DB transaction if both stores support the same transactional boundary, or (if not feasible) catch auditError and only log it (using the existing logWarn/logError path like 'admin_product_delete_audit_failed' with baseMeta, code 'AUDIT_WRITE_FAILED', requestId, actorUserId, productIdForLog, action, message and durationMs) but do NOT rethrow—return success to the client. Ensure references to deleteProduct, writeAdminAudit, auditError, baseMeta, requestId and startedAtMs are used to locate and implement the change.
frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts (1)
133-145: Make event-history reads deterministic.
On line 144, `readOrderShippingEvents` has no explicit sort. Add a stable `orderBy` (timestamp + PK, or equivalent) so event-history assertions remain deterministic across query plans.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts` around lines 133 - 145, readOrderShippingEvents currently returns shippingEvents without a deterministic order; add a stable orderBy to the query (e.g., orderBy(shippingEvents.timestamp, shippingEvents.id) or the table's createdAt/primary key columns) so reads are stable across query plans; update the query built in readOrderShippingEvents to append a deterministic orderBy using the timestamp column plus the primary key (shipmentId or id) from the shippingEvents table.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/api/shop/orders/`[id]/quote/request/route.ts:
- Around line 29-30: The blocked response from guardBrowserSameOrigin in the
route handler (const blocked = guardBrowserSameOrigin(request); if (blocked)
return blocked;) should explicitly set the Cache-Control header to "no-store"
for consistency with sibling quote routes; update the handler to, when blocked
is truthy, set blocked.headers.set("Cache-Control", "no-store") (or equivalent
on the Response object) before returning blocked so the blocked response always
includes that header.
In `@frontend/lib/services/shop/shipping/shipments-worker.ts`:
- Around line 468-483: The loop that calls emitWorkerShippingEvent for each
claimed row can throw and abort the worker after DB claims have committed; wrap
the call to emitWorkerShippingEvent inside a try/catch within the for (const row
of claimed) loop (the block referencing nextAttemptNumber(row.attempt_count) and
payload.shipmentStatusTo) so any error is caught, logged (including row.order_id
and row.id and the error), and swallowed (allowing the loop to continue) rather
than letting the exception propagate and abort the run; ensure you still await
the call inside the try and do not rethrow the error.
- Around line 485-491: The mapping over claimed currently includes rows whose
order_transitioned flag is false and then later may call createInternetDocument;
filter out any rows with row.order_transitioned === false before mapping (or
within the map) so only rows where order_transitioned is true are
returned/processed. Update the logic around the claimed variable handling (the
returned object mapping that includes id/order_id/provider/status/attempt_count)
to skip or remove rows with order_transitioned false to prevent
createInternetDocument from being invoked for blocked transitions.
---
Nitpick comments:
In `@frontend/app/api/shop/admin/products/`[id]/route.ts:
- Around line 755-787: The code currently calls deleteProduct(productIdForLog)
and then throws if writeAdminAudit(...) fails, causing clients to see a 500
despite the irreversible delete; update the behavior so DELETE does not surface
audit-writes as fatal: either wrap deleteProduct and writeAdminAudit in a DB
transaction if both stores support the same transactional boundary, or (if not
feasible) catch auditError and only log it (using the existing logWarn/logError
path like 'admin_product_delete_audit_failed' with baseMeta, code
'AUDIT_WRITE_FAILED', requestId, actorUserId, productIdForLog, action, message
and durationMs) but do NOT rethrow—return success to the client. Ensure
references to deleteProduct, writeAdminAudit, auditError, baseMeta, requestId
and startedAtMs are used to locate and implement the change.
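The transactional option above — delete and audit commit together, or not at all — can be sketched independently of the PR's real helpers. Everything below is a stand-in: `transaction`, `deleteProduct`, and `writeAdminAudit` are simulated in memory purely to show the all-or-nothing shape, not the actual Drizzle/DB API.

```typescript
// In-memory stand-ins for the PR's db/transaction helpers (illustrative only).
type Tx = { ops: Array<() => void> };

const products = new Map<string, { name: string }>();
const auditLog: Array<{ action: string; productId: string }> = [];

// Buffer writes and apply them only if the whole callback succeeds,
// mimicking a DB transaction's all-or-nothing commit.
function transaction<T>(fn: (tx: Tx) => T): T {
  const tx: Tx = { ops: [] };
  const result = fn(tx); // any throw here aborts before commit
  for (const op of tx.ops) op(); // commit
  return result;
}

function deleteProduct(tx: Tx, id: string): void {
  if (!products.has(id)) throw new Error('NOT_FOUND');
  tx.ops.push(() => products.delete(id));
}

function writeAdminAudit(tx: Tx, entry: { action: string; productId: string }): void {
  // Simulate an audit-write failure path with a sentinel id.
  if (entry.productId === 'audit-fails') throw new Error('AUDIT_WRITE_FAILED');
  tx.ops.push(() => auditLog.push(entry));
}

// Delete + audit either both commit or neither does.
export function deleteWithAudit(id: string): void {
  transaction(tx => {
    deleteProduct(tx, id);
    writeAdminAudit(tx, { action: 'product_admin_action.delete', productId: id });
  });
}
```

With this shape, an audit failure rolls the delete back too, so the operator never lands in the confusing "500 now, 404 on retry" state the review describes.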
In `@frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts`:
- Around line 133-145: readOrderShippingEvents currently returns shippingEvents
without a deterministic order; add a stable orderBy to the query (e.g.,
orderBy(shippingEvents.timestamp, shippingEvents.id) or the table's
createdAt/primary key columns) so reads are stable across query plans; update
the query built in readOrderShippingEvents to append a deterministic orderBy
using the timestamp column plus the primary key (shipmentId or id) from the
shippingEvents table.
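The deterministic-read suggestion boils down to imposing a total order: sort by timestamp first and break ties with the primary key. Outside Drizzle the same idea looks like the comparator below; the field names (`id`, `timestamp`, `eventName`) are assumptions mirroring the test's event shape, not the PR's actual schema.

```typescript
interface ShippingEvent {
  id: number; // primary key, assumed monotonically increasing
  eventName: string;
  timestamp: number; // epoch millis
}

// Stable total order: timestamp first, then id as the tiebreaker.
function byTimestampThenId(a: ShippingEvent, b: ShippingEvent): number {
  return a.timestamp - b.timestamp || a.id - b.id;
}

const events: ShippingEvent[] = [
  { id: 2, eventName: 'label_created', timestamp: 1000 },
  { id: 1, eventName: 'creating_label', timestamp: 1000 },
  { id: 3, eventName: 'in_transit', timestamp: 2000 },
];

// Ties on timestamp now resolve the same way on every run.
const ordered = [...events].sort(byTimestampThenId);
```

In the query itself the equivalent is an `orderBy` on the timestamp column plus the id column, as the prompt above suggests.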
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
frontend/app/api/shop/admin/products/[id]/route.tsfrontend/app/api/shop/orders/[id]/quote/accept/route.tsfrontend/app/api/shop/orders/[id]/quote/decline/route.tsfrontend/app/api/shop/orders/[id]/quote/quote-utils.tsfrontend/app/api/shop/orders/[id]/quote/request/route.tsfrontend/lib/services/shop/shipping/shipments-worker.tsfrontend/lib/tests/shop/admin-product-create-atomic-phasec.test.tsfrontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/app/api/shop/orders/[id]/quote/accept/route.ts
- frontend/lib/tests/shop/admin-product-create-atomic-phasec.test.ts
…uote/test paths resilient
Actionable comments posted: 1
♻️ Duplicate comments (1)
frontend/app/api/shop/admin/products/[id]/route.ts (1)
774-788: ⚠️ Potential issue | 🟠 Major
Fail closed on DELETE audit write failure.
At line 774, audit failures are logged but swallowed, and line 788 still returns 200. This permits successful deletes without audit records. Match PATCH behavior by rethrowing after logging.
💡 Minimal patch
```diff
 } catch (auditError) {
   logWarn('admin_product_delete_audit_failed', {
     ...baseMeta,
     code: 'AUDIT_WRITE_FAILED',
     requestId,
     actorUserId,
     productId: productIdForLog,
     action: 'product_admin_action.delete',
     message: auditError instanceof Error ? auditError.message : String(auditError),
     durationMs: Date.now() - startedAtMs,
   });
+  throw auditError;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/admin/products/`[id]/route.ts around lines 774 - 788, The DELETE audit catch currently logs auditError via logWarn (with meta including product_admin_action.delete) but swallows it and then returns noStoreJson({ success: true }, { status: 200 }); — change the catch in the DELETE flow to rethrow the auditError after logging (e.g., throw auditError or wrap and throw a new Error) so the request fails like PATCH does; keep the existing logWarn call and metadata (requestId, actorUserId, productIdForLog, durationMs) and ensure noStoreJson is not returned when an audit write fails.
🧹 Nitpick comments (3)
frontend/app/api/shop/orders/[id]/quote/request/route.ts (1)
21-23: Normalize inbound `x-request-id` before using it as canonical `requestId`.
Right now any header value is accepted and forwarded to logs/service calls. Prefer UUID-only (or bounded-length) normalization and fall back to `crypto.randomUUID()` for invalid values.
Suggested patch
```diff
-  const requestId =
-    request.headers.get('x-request-id')?.trim() || crypto.randomUUID();
+  const inboundRequestId = request.headers.get('x-request-id')?.trim();
+  const requestId =
+    inboundRequestId &&
+    /^[0-9a-f]{8}-[0-9a-f]{4}-[1-8][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(
+      inboundRequestId
+    )
+      ? inboundRequestId
+      : crypto.randomUUID();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/orders/`[id]/quote/request/route.ts around lines 21 - 23, normalize the incoming x-request-id header before using it as the canonical requestId: in the route handler where requestId is assigned (the current requestId variable initialization that reads request.headers.get('x-request-id')), validate that the header is a well-formed UUID (or enforce a safe max length) and only use it if it passes validation; otherwise fall back to crypto.randomUUID(). Update any subsequent use of requestId (e.g., baseMeta and logging/service calls) to rely on this sanitized value so untrusted header values are never forwarded.
frontend/lib/services/shop/shipping/shipments-worker.ts (2)
291-293: Minor: Simplify redundant `Math.max` nesting.
The inner `Math.max(0, ...)` is unnecessary since the outer `Math.max(1, ...)` already ensures the result is at least 1.
♻️ Suggested simplification
```diff
 function nextAttemptNumber(attemptCount: number): number {
-  return Math.max(1, Math.max(0, Math.trunc(attemptCount)) + 1);
+  return Math.max(1, Math.trunc(attemptCount) + 1);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/shipments-worker.ts` around lines 291 - 293, The function nextAttemptNumber contains redundant Math.max nesting; simplify by removing the inner Math.max(0, ...) and compute Math.trunc(attemptCount) + 1, then clamp that result with Math.max(1, ...). Update the implementation of nextAttemptNumber to use Math.trunc(attemptCount) + 1 wrapped by a single Math.max(1, ...) to produce the same behavior more simply.
864-921: Consider consolidating repetitive metric try/catch blocks.
The metric recording follows a consistent pattern across three locations (needs_attention, failed, retries) with identical error handling. While functional, this could be simplified with a helper.
♻️ Optional: Extract helper for guarded metric recording
```ts
function safeRecordShippingMetric(
  name: ShippingMetricName,
  args: { runId: string; orderId: string; shipmentId: string; code?: string }
) {
  try {
    recordShippingMetric({
      name,
      source: 'shipments_worker',
      ...args,
    });
  } catch {
    logWarn('shipping_shipments_worker_terminal_metric_write_failed', {
      runId: args.runId,
      orderId: args.orderId,
      shipmentId: args.shipmentId,
      errorCode: args.code,
      code: 'SHIPPING_METRIC_WRITE_FAILED',
    });
  }
}
```
Then replace the three try/catch blocks with:
```ts
if (terminalNeedsAttention) {
  safeRecordShippingMetric('needs_attention', {
    runId: args.runId,
    orderId: args.claim.order_id,
    shipmentId: args.claim.id,
    code: classified.code,
  });
} else {
  safeRecordShippingMetric('failed', {
    runId: args.runId,
    orderId: args.claim.order_id,
    shipmentId: args.claim.id,
    code: classified.code,
  });
  safeRecordShippingMetric('retries', {
    runId: args.runId,
    orderId: args.claim.order_id,
    shipmentId: args.claim.id,
    code: classified.code,
  });
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/shipments-worker.ts` around lines 864 - 921, Repetitive try/catch blocks around recordShippingMetric should be consolidated into a small helper to reduce duplication; add a function (e.g., safeRecordShippingMetric) that accepts the metric name and the args (runId, orderId, shipmentId, code), calls recordShippingMetric with source: 'shipments_worker' inside a try and on catch calls logWarn('shipping_shipments_worker_terminal_metric_write_failed', { runId, orderId, shipmentId, errorCode: code, code: 'SHIPPING_METRIC_WRITE_FAILED' }); then replace the three try/catch sites in the existing logic with calls to safeRecordShippingMetric('needs_attention', ...), safeRecordShippingMetric('failed', ...), and safeRecordShippingMetric('retries', ...) using args.runId, args.claim.order_id, args.claim.id, and classified.code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/api/shop/orders/`[id]/quote/request/route.ts:
- Around line 83-88: The response is leaking arbitrary
InvalidPayloadError.details back to clients; stop including error.details in the
noStoreJson response and only return stable public fields (error.code and
error.message). Instead, capture and record the full error details to your
server logs inside the catch block (e.g., logger.error or processLogger.error)
referencing the same caught error object, then build the response using
noStoreJson({ code: error.code, message: error.message }) without spreading
error.details. Locate the return that uses noStoreJson and remove the
...(error.details ? { details: error.details } : {}) piece, adding a log
statement that records error.details for debugging while keeping the public
response minimal.
---
Duplicate comments:
In `@frontend/app/api/shop/admin/products/`[id]/route.ts:
- Around line 774-788: The DELETE audit catch currently logs auditError via
logWarn (with meta including product_admin_action.delete) but swallows it and
then returns noStoreJson({ success: true }, { status: 200 }); — change the catch
in the DELETE flow to rethrow the auditError after logging (e.g., throw
auditError or wrap and throw a new Error) so the request fails like PATCH does;
keep the existing logWarn call and metadata (requestId, actorUserId,
productIdForLog, durationMs) and ensure noStoreJson is not returned when an
audit write fails.
---
Nitpick comments:
In `@frontend/app/api/shop/orders/`[id]/quote/request/route.ts:
- Around line 21-23: Normalize the incoming x-request-id header before using it
as the canonical requestId: in the route handler where requestId is assigned
(the current requestId variable initialization that reads
request.headers.get('x-request-id')), validate that the header is a well-formed
UUID (or enforce a safe max length) and only use it if it passes validation;
otherwise fall back to crypto.randomUUID(). Update any subsequent use of
requestId (e.g., baseMeta and logging/service calls) to rely on this sanitized
value so untrusted header values are never forwarded.
In `@frontend/lib/services/shop/shipping/shipments-worker.ts`:
- Around line 291-293: The function nextAttemptNumber contains redundant
Math.max nesting; simplify by removing the inner Math.max(0, ...) and compute
Math.trunc(attemptCount) + 1, then clamp that result with Math.max(1, ...).
Update the implementation of nextAttemptNumber to use Math.trunc(attemptCount) +
1 wrapped by a single Math.max(1, ...) to produce the same behavior more simply.
- Around line 864-921: Repetitive try/catch blocks around recordShippingMetric
should be consolidated into a small helper to reduce duplication; add a function
(e.g., safeRecordShippingMetric) that accepts the metric name and the args
(runId, orderId, shipmentId, code), calls recordShippingMetric with source:
'shipments_worker' inside a try and on catch calls
logWarn('shipping_shipments_worker_terminal_metric_write_failed', { runId,
orderId, shipmentId, errorCode: code, code: 'SHIPPING_METRIC_WRITE_FAILED' });
then replace the three try/catch sites in the existing logic with calls to
safeRecordShippingMetric('needs_attention', ...),
safeRecordShippingMetric('failed', ...), and safeRecordShippingMetric('retries',
...) using args.runId, args.claim.order_id, args.claim.id, and classified.code.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- frontend/app/api/shop/admin/products/[id]/route.ts
- frontend/app/api/shop/orders/[id]/quote/request/route.ts
- frontend/lib/services/shop/shipping/shipments-worker.ts
- frontend/lib/tests/shop/shipping-shipments-worker-phase5.test.ts
```ts
return noStoreJson(
  {
    code: error.code,
    message: error.message,
    ...(error.details ? { details: error.details } : {}),
  },
```
Do not return raw error.details from service exceptions.
Line 87 currently reflects arbitrary InvalidPayloadError.details back to the client. That payload is untyped and can leak internal/sensitive fields. Keep detailed context in logs, and return only stable public fields (e.g., code, message).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/app/api/shop/orders/`[id]/quote/request/route.ts around lines 83 -
88, The response is leaking arbitrary InvalidPayloadError.details back to
clients; stop including error.details in the noStoreJson response and only
return stable public fields (error.code and error.message). Instead, capture and
record the full error details to your server logs inside the catch block (e.g.,
logger.error or processLogger.error) referencing the same caught error object,
then build the response using noStoreJson({ code: error.code, message:
error.message }) without spreading error.details. Locate the return that uses
noStoreJson and remove the ...(error.details ? { details: error.details } : {})
piece, adding a log statement that records error.details for debugging while
keeping the public response minimal.
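The fix pattern above — log the full details server-side, return only stable public fields — can be sketched in isolation. This is a minimal sketch: `InvalidPayloadError` and the `console.error` logging call are stand-ins for the PR's real exception type and logger.

```typescript
// Stand-in for the service exception described in the review.
class InvalidPayloadError extends Error {
  constructor(
    public code: string,
    message: string,
    public details?: unknown // untyped internals — must never reach the client
  ) {
    super(message);
  }
}

// Build the public response body: stable fields only.
function toPublicErrorBody(error: InvalidPayloadError): { code: string; message: string } {
  // Full details go to server logs, not over the wire.
  console.error('quote_request_invalid_payload', {
    code: error.code,
    details: error.details,
  });
  return { code: error.code, message: error.message };
}

const err = new InvalidPayloadError('INVALID_PAYLOAD', 'quote payload invalid', {
  internalField: 'secret-ref-123',
});
const body = toPublicErrorBody(err);
// body carries only code and message; details stays server-side.
```

The route would then pass `body` to `noStoreJson(body, { status: mapQuoteErrorStatus(err.code) })` instead of spreading `error.details` into the response.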
…Id; document best-effort delete audit
- (SP: 3) [Backend] add Nova Poshta shipping foundation + checkout persistence + async label workflow (#364)
- (SP: 2) [Frontend] Reduce Vercel variable costs via caching and analytics cleanup (#367)
  - perf(vercel): cut runtime costs via notification, blog cache, and analytics changes
  - perf(blog): remove server searchParams usage to preserve ISR
  - fix(build): align Netlify Node version and remove SpeedInsights import
  - chore(release): bump version to 1.0.4
- (SP: 2) [Frontend] Remove [locale] layout force-dynamic and move auth to client-side (#370)
  - refactor(frontend): remove locale layout dynamic auth and move header auth client-side
  - fix(frontend): prevent stale auth responses in useAuth and remove redundant dashboard dynamic layout
- (SP: 2) [Frontend] Reduce auth overhead and sync auth state across tabs (#372)
  - refactor(frontend): remove locale layout dynamic auth and move header auth client-side
  - fix(frontend): prevent stale auth responses in useAuth and remove redundant dashboard dynamic layout
  - feat(frontend): sync auth state across tabs via BroadcastChannel
- (SP: 2) [Frontend] Quizzes page ISR + client-side progress + GitHub stars cache (#371)
  - perf(quiz-flow): move quiz progress to client-side fetch, enable ISR for quizzes page
    - Move user progress fetch from SSR to client-side API (/api/quiz/progress)
    - Remove force-dynamic and getCurrentUser() from quizzes page
    - Add revalidate=300 for ISR caching
    - Use window.history.replaceState for tab URL sync (avoid Next.js navigation)
    - Add forceMount to TabsContent to prevent layout shift on tab switch
    - Fix nested <main> — use <section> inside DynamicGridBackground
    - Cache GitHub stars count in sessionStorage to avoid refetch + re-animation
  - perf: replace useRef with useState lazy initializer in GitHubStarButton. Fixes React 19 react-hooks/refs ESLint error — useRef.current cannot be read during render. Uses useState(getStoredStars) to capture the sessionStorage value once on mount instead.
  - fix: stop star icon trembling on hover in GitHubStarButton
- (SP: 1) [Frontend] Fix quiz timer flash and card layout shift on quizzes page (#373)
  - perf(quiz-flow): move quiz progress to client-side fetch, enable ISR for quizzes page
    - Move user progress fetch from SSR to client-side API (/api/quiz/progress)
    - Remove force-dynamic and getCurrentUser() from quizzes page
    - Add revalidate=300 for ISR caching
    - Use window.history.replaceState for tab URL sync (avoid Next.js navigation)
    - Add forceMount to TabsContent to prevent layout shift on tab switch
    - Fix nested <main> — use <section> inside DynamicGridBackground
    - Cache GitHub stars count in sessionStorage to avoid refetch + re-animation
  - perf: replace useRef with useState lazy initializer in GitHubStarButton. Fixes React 19 react-hooks/refs ESLint error — useRef.current cannot be read during render. Uses useState(getStoredStars) to capture the sessionStorage value once on mount instead.
  - fix: stop star icon trembling on hover in GitHubStarButton
  - fix: eliminate quiz timer flash on language switch. Remove Suspense boundary (loading.tsx) that unmounted QuizContainer during locale navigation. Synchronous session restore via useReducer lazy initializer and correct timer initialization via useState lazy initializer prevent any visible state reset on language switch
  - fix: replace quiz card layout shift with skeleton grid during progress load
  - chore(release): v1.0.5
- (SP: 3) [Shop][DB] Reduce Neon compute: throttle janitor + relax checkout polling + add sweep indexes (#375)
- (SP: 3) [Backend] add internal janitor (jobs 1-4), claim/lease + runbook (G0-G6)
- (SP: 3) [Backend] add provider selector, fix payments gating, i18n checkout errors
- Add shop category images to public
- (SP: 3) [Shop][Monobank] I1 structured logging: codes + logging safety checks
- (SP: 3) [Shop][Monobank] Fail-closed non-browser origin posture for webhook + janitor (ORIGIN_BLOCKED)
- (SP: 3) [Shop][Monobank] J gate: add orders status ownership test and pass all pre-prod invariants
- (SP: 3) [Shop][Monobank] review fixes (tests, logging, success UI)
- (SP: 1) [Shop][Monobank] Tighten webhook log-code typing; harden DB tests; minor security/log/UI cleanups
- (SP: 1) [Shop][Monobank] harden Monobank webhook (origin/PII-safe logs) and remove duplicate sha256 hashing
- (SP: 1) [Cart] adding route for user orders to cart page
- (SP: 1) [Cart] fix after review cart mpage and adding index for orders
- (SP: 1) [Cart] Fix cart orders summary auth rendering and return totalCount for orders badge
- (SP: 1) [Cart] remove console.warn from CartPageClient to satisfy monobank logging safety invariant, namespace localStorage cart by user and reset on auth change
- (SP: 1) [Cart] rehydrate per cartOwnerId (remove didHydrate coupling)
- (SP: 2) [Backend] shop/shipping schema migrations foundation
- (SP: 2) [Backend] shop/shipping public routes + np cache + sync
- (SP: 2) [Backend] shop/shipping: shipping persistence + currency policy
- (SP: 2) [Backend] shop/shipping: webhook apply + psp fields + enqueue shipping
- (SP: 2) [Backend] shop/shipping: shipments worker + internal run + np mock
- (SP: 2) [Backend] shop/shipping: admin+ui shipping actions
- (SP: 2) [Backend] shop/shipping: retention + log sanitizer + metrics
- (SP: 1) [Backend] stabilize Monobank janitor (job1/job3) and fix failing apply-outcomes tests
- (SP: 1) [db]: add shop shipping core migration
- (SP: 1) [FIX] resolve merge artifacts in order details page
- (SP: 1) [FIX] apply post-review fixes for shipping and admin flows
- (SP: 1) [FIX] align cart shipping imports (localeToCountry + availability reason code)
- (SP: 1) [FIX] hard-block checkout when shipping disabled + i18n reason mapping
- (SP: 1) [FIX] harden webhook enqueue + shipping worker + NP catalog + cart fail-closed
- (SP: 1) [FIX] Initialize shippingMethodsLoading to true to avoid premature checkout.
- (SP: 1) [FIX] migration 17
- (SP: 1) [DB] migrarion to testind DB and adjusting tests
- (SP: 1) [DB] slow down restock janitor + enforce prod interval floor
- (SP: 1) [DB] add order status lite view (opt-in) + instrumentation
- (SP: 1) [DB] replace checkout success router.refresh polling with backoff API polling
- (SP: 1) [DB] throttle sessions activity heartbeat + use count(*) (PK invariant)
- (SP: 1) [DB] enforce production min intervals for internal shipping jobs
- (SP: 1) [DB] add minimal partial indexes for orders sweeps + rollout notes
- (SP: 1) [DB] refactor sweep claim step to FOR UPDATE SKIP LOCKED batching
- (SP: 1) [DB]: slow janitor schedule to every 30 minutes
- (SP: 1) [DB] increase polling delays for MonobankRedirectStatus
- (SP: 1) [FIX] harden webhooks + fix SSR hydration + janitor/np gates + sweeps refactor
- (SP: 1) [FIX] harden shipping enqueue gating + apply NP interval floor
- (SP: 3) [SHOP] audit-driven e2e purchase readiness hardening (events, notifications, consent, returns) (#378)
- (SP: 3) [SHOP] add canonical payment/shipping/admin audit tables + dedupe helper with flagged atomic dual-write
- (SP: 2) [SHOP] add INTL quote flow (request/offer/accept/decline), payment-init gate, and quote expiry/timeout
sweeps * (SP: 3)[SHOP] introduce outbox-driven notifications with projector + worker (phase 3) * (SP: 3)[SHOP] add minimal returns/RMA lifecycle with atomic audit + canonical events (phase 4) * (SP: 3)[SHOP] enforce guest status-token lite-only access and audit token usage (phase 5) * (SP: 3)[SHOP] centralize transition guards and enforce across admin/webhook/worker flows (phase 6) :wq n * (SP:3)[SHOP]: enforce DATABASE_URL_LOCAL preflight + deterministic vitest config * (SP:3)[SHOP]: require canonical events in prod (fail-fast) * (SP:3)[SHOP]: implement notifications transport with retries + DLQ * (SP:3)[SHOP]: persist checkout legal consent artifact * (SP:1)[SHOP] add migration 0025 for consent + events + audit + prices * (SP:1)[SHOP] emit shipping_events for shipment worker transitions * (SP:2)[SHOP] audit admin product mutations (deduped) * (SP:2)[SHOP] make Monobank webhook retryable on transient apply failures * (SP:3) [SHOP] enforce guest status-token scopes across order actions * (SP:1) [SHOP] make product_prices the only write authority * (SP:1) [SHOP] add Playwright e2e smoke suite (local DB only) * (SP:3) [SHOP] explicitly reject exchanges (EXCHANGES_NOT_SUPPORTED) * (SP: 3)[FIX] harden audit + workers/tests; fix transitions/restock + webhook perf * (SP: 3)[FIX] harden local-db test safety and tighten shop reliability guards * (SP: 3)[FIX] fail-closed admin audit, refine shipments-worker outcomes/metrics, tighten quote+tests * (SP: 1)[FIX] harden shipments-worker claiming/leases and make audit+quote/test paths resilient * (SP: 1)[FIX] harden quote request errors/logging and sanitize requestId; document best-effort delete audit * feat(ui): add devops/cloud category icons and styles (#379) * chore(release): prepare v1.0.6 changelog * chore: bump version to 1.0.6 --------- Co-authored-by: Liudmyla Sovetovs <milkaegik@gmail.com> Co-authored-by: Lesia Soloviova <106915140+LesiaUKR@users.noreply.github.com>
* fix(orders): close missing brace in checkout shipping snapshot try block
* fix(checkout): correct nested try/catch structure for shipping snapshot
* fix(order-status): remove stale responseMode lite branch
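The phase 6 commits above centralize transition guards so admin, webhook, and worker flows all consult one rule table instead of scattering their own checks. A minimal sketch of that pattern (the status names and allowed-transition map below are illustrative, not this repo's actual enum):

```typescript
// Illustrative order-status transition guard. Every mutation path (admin
// action, webhook apply, worker sweep) calls the same guard rather than
// encoding its own transition rules.
type OrderStatus = "pending" | "paid" | "shipped" | "delivered" | "cancelled";

const ALLOWED: Record<OrderStatus, readonly OrderStatus[]> = {
  pending: ["paid", "cancelled"],
  paid: ["shipped", "cancelled"],
  shipped: ["delivered"],
  delivered: [], // terminal
  cancelled: [], // terminal
};

// Throws on any transition outside the table, so an out-of-order webhook or
// stale admin action fails closed instead of corrupting order state.
function assertTransition(from: OrderStatus, to: OrderStatus): void {
  if (!ALLOWED[from].includes(to)) {
    throw new Error(`INVALID_TRANSITION:${from}->${to}`);
  }
}
```

Centralizing the table is what makes "harden transitions" commits reviewable: one diff changes behavior for every flow at once.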
Description
Audit-driven hardening of the Shop module to make the end-to-end purchase flow production-safe and deterministic.
Focus areas: canonical append-only events/audit, async notifications via outbox/worker, persisted checkout consent, safer webhook retry posture, strict local-only test preflight, and explicit returns/exchanges contracts.
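The outbox/worker split described above can be sketched with an in-memory stand-in for the `notification_outbox` table (the class and method names below are illustrative, not the PR's actual identifiers): the projector inserts rows keyed by a dedupe key, so re-running it over the same events is idempotent, and the worker drains unsent rows in bounded batches.

```typescript
// Illustrative outbox: projector-side idempotent enqueue + worker-side
// bounded drain. A real implementation would back this with a DB table and
// a unique index on dedupeKey.
type OutboxRow = { dedupeKey: string; payload: string; sentAt?: number };

class NotificationOutbox {
  private rows = new Map<string, OutboxRow>();

  // Projector side: a duplicate dedupe key is a no-op, so the projector can
  // be re-run over the same canonical events without double-enqueueing.
  enqueue(dedupeKey: string, payload: string): boolean {
    if (this.rows.has(dedupeKey)) return false; // deduped
    this.rows.set(dedupeKey, { dedupeKey, payload });
    return true;
  }

  // Worker side: claim up to `limit` unsent rows, mark each sent only after
  // the transport callback returns; returns the number delivered.
  drain(limit: number, send: (row: OutboxRow) => void): number {
    let delivered = 0;
    for (const row of this.rows.values()) {
      if (delivered >= limit) break;
      if (row.sentAt !== undefined) continue;
      send(row);
      row.sentAt = Date.now();
      delivered += 1;
    }
    return delivered;
  }
}
```

Because delivery happens outside the order-mutation transaction, a slow or failing transport (retries, DLQ) never blocks or rolls back the purchase flow itself.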
Related Issue
Issue: #<issue_number>
Changes
* Local-only test posture (preflight rejects any production `DATABASE_URL` usage).
Database Changes (if applicable)
How Has This Been Tested?
Commands (PowerShell):
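The exact commands were not captured in this description; a typical local run consistent with the DATABASE_URL_LOCAL preflight and the local-only Playwright suite might look like the following (the env var value and npm script names are assumptions, check package.json):

```powershell
# Hypothetical commands; adjust script names to the repo's package.json.
$env:DATABASE_URL_LOCAL = "postgresql://localhost:5432/shop_test"  # local DB only

npm run test        # vitest suite; the preflight fails fast without a local DB URL
npm run test:e2e    # Playwright e2e smoke suite (local DB only)
```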
Screenshots (if applicable)
Checklist
Before submitting
Reviewers
Summary by CodeRabbit
New Features
Bug Fixes
Tests