feat: support CSV export for usage logs with request_path filtering #3344
rockyicer wants to merge 5 commits into QuantumNous:main
Conversation
Walkthrough

This PR adds CSV export functionality for usage logs and introduces request path tracking throughout the logging system.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/Admin
    participant Frontend as Frontend UI
    participant Controller as Log Controller
    participant Database as Database
    participant FileStream as HTTP Response
    User->>Frontend: Click Export Logs CSV
    Frontend->>Frontend: Build query params (filters, pagination)
    Frontend->>Controller: GET /api/log/export or /api/log/self/export
    Controller->>Controller: Parse filters (type, timestamps, request_path, etc.)
    Controller->>Database: GetAllLogsForExport(filters) / GetUserLogsForExport(filters)
    Database->>Database: Apply filter WHERE clauses (request_path match, user, type range, etc.)
    Database-->>Controller: Return matched Log records
    Controller->>Controller: Format logs as CSV (with UTF-8 BOM)
    Controller->>FileStream: Set headers (text/csv, Content-Disposition)
    Controller->>FileStream: Write CSV rows (timestamps, quotas, request metadata)
    FileStream-->>Frontend: Stream CSV blob
    Frontend->>Frontend: Detect error via Content-Type (JSON vs CSV)
    alt Error Response
        Frontend->>User: Show error notification
    else Success Response
        Frontend->>Frontend: Extract filename from Content-Disposition
        Frontend->>Frontend: Trigger browser download
        Frontend->>User: Show success notification
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (1)
controller/log.go (1)
39-92: Consider adding error logging for CSV write failures.

The function silently returns on write errors (lines 67 and 88-89), which could make debugging difficult when exports fail mid-stream. Consider logging these errors before returning.

Also, the Content-Disposition header filename should be sanitized or escaped to prevent header injection.

💡 Suggested improvement for error visibility

```diff
 if err := writer.Write(header); err != nil {
+    common.SysLog("failed to write CSV header: " + err.Error())
     return
 }
 for _, log := range logs {
     // ... record building ...
     if err := writer.Write(record); err != nil {
+        common.SysLog("failed to write CSV record: " + err.Error())
         return
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controller/log.go` around lines 39 - 92, The writeLogsCSV function currently returns silently on CSV write errors (calls to writer.Write) and when setting the Content-Disposition header it injects an unsanitized filename; update writeLogsCSV to log any writer.Write errors before returning (e.g., use the project's logger or c.Error/c.String with an error message) for the header write and per-record writes, and sanitize/escape the filename used in the Content-Disposition header (build the filename from time.Now().Format and pass it through a safe escaper like url.QueryEscape or remove/escape quotes and control characters) so header injection can't occur; reference the writeLogsCSV function and its writer.Write calls and the c.Header("Content-Disposition", ...) usage when making these changes.
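As a concrete illustration of the sanitization point, here is a minimal, runnable sketch. `sanitizeFilename` is a hypothetical helper name, not part of this PR; the project might equally use `url.QueryEscape` or an RFC 5987 `filename*` parameter instead.

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeFilename replaces characters that could enable Content-Disposition
// header injection (double quotes, CR, LF, semicolons) with underscores and
// drops other control characters. Hypothetical helper; not from the PR.
func sanitizeFilename(name string) string {
	var b strings.Builder
	for _, r := range name {
		switch {
		case r == '"' || r == '\r' || r == '\n' || r == ';':
			b.WriteRune('_')
		case r < 0x20:
			// silently drop remaining control characters
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	// A benign export name passes through unchanged.
	fmt.Println(sanitizeFilename("usage-logs-2024-01-01.csv"))
	// Quotes and CR/LF are neutralized, so a crafted name cannot
	// smuggle extra headers into the HTTP response.
	fmt.Println(sanitizeFilename("evil\"\r\nx.csv"))
}
```

The cleaned name would then be embedded in the header, e.g. `c.Header("Content-Disposition", "attachment; filename="+sanitizeFilename(name))`.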
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3516f47a-e556-4595-b9e8-8cb62021029b
📒 Files selected for processing (24)

- controller/log.go
- controller/log_export_test.go
- controller/relay.go
- model/log.go
- model/log_filter_test.go
- model/log_request_path_backfill.go
- model/log_request_path_backfill_test.go
- model/option.go
- model/task.go
- model/task_cas_test.go
- router/api-router.go
- service/channel_affinity_usage_cache_test.go
- service/task_billing.go
- web/src/components/table/usage-logs/UsageLogsActions.jsx
- web/src/components/table/usage-logs/UsageLogsFilters.jsx
- web/src/constants/console.constants.js
- web/src/hooks/usage-logs/useUsageLogsData.jsx
- web/src/i18n/locales/en.json
- web/src/i18n/locales/fr.json
- web/src/i18n/locales/ja.json
- web/src/i18n/locales/ru.json
- web/src/i18n/locales/vi.json
- web/src/i18n/locales/zh-CN.json
- web/src/i18n/locales/zh-TW.json
```go
func ExportAllLogsCSV(c *gin.Context) {
	filters := parseAdminLogFilter(c)
	logs, err := model.GetAllLogsForExport(filters)
	if err != nil {
		common.ApiError(c, err)
		return
	}
	writeLogsCSV(c, logs, true)
}

func ExportUserLogsCSV(c *gin.Context) {
	userId := c.GetInt("id")
	filters := parseLogFilter(c)
	filters.UserID = &userId

	logs, err := model.GetUserLogsForExport(filters)
	if err != nil {
		common.ApiError(c, err)
		return
	}
	writeLogsCSV(c, logs, false)
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if there's any limit applied in the model layer for export functions
ast-grep --pattern 'func GetAllLogsForExport($$$) {
$$$
}'
ast-grep --pattern 'func GetUserLogsForExport($$$) {
$$$
}'
# Also check if there are any existing export limits defined
rg -n "export.*limit|ExportLimit|MaxExport" --type go
```

Repository: QuantumNous/new-api
Length of output: 45

🏁 Script executed:

```shell
# Search for the export functions in the model layer
fd -e go -path "*/model/*" | xargs grep -l "GetAllLogsForExport\|GetUserLogsForExport"
```

Repository: QuantumNous/new-api
Length of output: 233

🏁 Script executed:

```shell
# If model directory exists, let's list and inspect it
ls -la model/ 2>/dev/null | head -20
```

Repository: QuantumNous/new-api
Length of output: 1319

🏁 Script executed:

```shell
# Search for these functions across all go files
rg "func.*GetAllLogsForExport|func.*GetUserLogsForExport" --type go
```

Repository: QuantumNous/new-api
Length of output: 231

🏁 Script executed:

```shell
# Also check for any log-related limits or configuration
rg -n "MaxExport|ExportLimit|LOG.*LIMIT|log.*limit" --type go -i
```

Repository: QuantumNous/new-api
Length of output: 2564

🏁 Script executed:

```shell
# Get the line numbers for the export functions
rg -n "func GetUserLogsForExport|func GetAllLogsForExport" model/log.go
```

Repository: QuantumNous/new-api
Length of output: 213

🏁 Script executed:

```shell
# Read the model/log.go file to see the implementations
wc -l model/log.go
```

Repository: QuantumNous/new-api
Length of output: 79

🏁 Script executed:

```shell
# Read the export function implementations
sed -n '520,545p' model/log.go
```

Repository: QuantumNous/new-api
Length of output: 866

🏁 Script executed:

```shell
# Check the applyLogFilters function to see if it applies any limit
rg -n "func applyLogFilters" model/log.go
```

Repository: QuantumNous/new-api
Length of output: 139

🏁 Script executed:

```shell
# Read the applyLogFilters implementation
sed -n '460,520p' model/log.go
```

Repository: QuantumNous/new-api
Length of output: 1833

🏁 Script executed:

```shell
# Read the applyLogFilters implementation starting from line 286
sed -n '286,350p' model/log.go
```

Repository: QuantumNous/new-api
Length of output: 2062

🏁 Script executed:

```shell
# Check the controller code to see if any limit is applied before calling the export functions
sed -n '124,145p' controller/log.go
```

Repository: QuantumNous/new-api
Length of output: 534

🏁 Script executed:

```shell
# Check the writeLogsCSV function to see if it has any safeguards
rg -n "func writeLogsCSV" controller/log.go
```

Repository: QuantumNous/new-api
Length of output: 141

🏁 Script executed:

```shell
# Read the writeLogsCSV implementation
sed -n '85,125p' controller/log.go
```

Repository: QuantumNous/new-api
Length of output: 1059

🏁 Script executed:

```shell
# Read the writeLogsCSV function completely
sed -n '39,85p' controller/log.go
```

Repository: QuantumNous/new-api
Length of output: 1144
Add limit to export queries to prevent memory exhaustion.
GetAllLogsForExport and GetUserLogsForExport fetch all matching logs without any row limit. Unlike the search functions which cap at 10,000 rows via logSearchCountLimit, these export functions can load millions of logs into memory, exhausting server resources. Implement a configurable export limit or consider streaming the CSV output without buffering all logs in memory first.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controller/log.go` around lines 124 - 145, The export handlers
ExportAllLogsCSV and ExportUserLogsCSV call
GetAllLogsForExport/GetUserLogsForExport which currently load all matching rows
into memory; change these handlers and/or model functions to enforce a sane
export limit (e.g., reuse logSearchCountLimit or add a new configurable
exportLimit) or implement chunked streaming: pass the parsed filters and a
limit/offset (or a streaming callback) to the model methods, ensure
GetAllLogsForExport/GetUserLogsForExport accept a limit (and return an error if
limit exceeded) or yield rows in pages, and update writeLogsCSV to stream rows
to the response as they are fetched instead of buffering the entire slice.
Ensure you reference parseAdminLogFilter/parseLogFilter when wiring the limit
and preserve existing behavior for small exports.
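One way to wire the cap described above, sketched in isolation: query one row beyond the limit so the handler can tell a truncated result from one that fits exactly. `maxExportRows` and `capExport` are illustrative names only; the real cap would come from configuration (the review suggests reusing `logSearchCountLimit`), and `fetch` stands in for the database query.

```go
package main

import (
	"errors"
	"fmt"
)

// maxExportRows is an assumed configurable cap; the PR defines no such constant.
const maxExportRows = 100000

// capExport fetches limit+1 rows so that a result of exactly `limit` rows is
// distinguishable from a truncated one, and rejects over-limit exports.
func capExport(fetch func(limit int) []int, limit int) ([]int, error) {
	rows := fetch(limit + 1)
	if len(rows) > limit {
		return nil, errors.New("export exceeds the row limit; narrow the filters")
	}
	return rows, nil
}

func main() {
	dataset := make([]int, 12)
	fetch := func(limit int) []int {
		if limit > len(dataset) {
			limit = len(dataset)
		}
		return dataset[:limit]
	}
	if rows, err := capExport(fetch, 20); err == nil {
		fmt.Println("exported", len(rows), "rows") // dataset fits under the cap
	}
	if _, err := capExport(fetch, 10); err != nil {
		fmt.Println("rejected:", err) // 12 rows exceed a cap of 10
	}
}
```

Returning an explicit error (rather than silently truncating) tells the user their filters matched more rows than the export allows.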
```go
func GetUserLogsForExport(filters LogFilter) (logs []*Log, err error) {
	tx, err := applyLogFilters(LOG_DB.Model(&Log{}), filters)
	if err != nil {
		return nil, err
	}
	err = tx.Order("logs.created_at asc, logs.id asc").Find(&logs).Error
	if err != nil {
		common.SysError("failed to query logs for export: " + err.Error())
		return nil, errors.New("查询日志失败")
	}
	return logs, nil
}

func GetAllLogsForExport(filters LogFilter) (logs []*Log, err error) {
	tx, err := applyLogFilters(LOG_DB.Model(&Log{}), filters)
	if err != nil {
		return nil, err
	}
	err = tx.Order("logs.created_at asc, logs.id asc").Find(&logs).Error
	if err != nil {
		common.SysError("failed to query logs for export: " + err.Error())
		return nil, errors.New("failed to query logs for export")
	}
	return logs, nil
}
```
Export functions lack row limits - potential memory exhaustion.
Both GetUserLogsForExport and GetAllLogsForExport query all matching logs without any limit. For accounts with extensive history, this could load millions of records into memory, causing OOM conditions.
Consider adding a configurable maximum export limit or implementing streaming/chunked export.
💡 Suggested safeguard

```diff
+const maxExportRows = 100000 // Configurable limit
+
 func GetUserLogsForExport(filters LogFilter) (logs []*Log, err error) {
 	tx, err := applyLogFilters(LOG_DB.Model(&Log{}), filters)
 	if err != nil {
 		return nil, err
 	}
-	err = tx.Order("logs.created_at asc, logs.id asc").Find(&logs).Error
+	err = tx.Order("logs.created_at asc, logs.id asc").Limit(maxExportRows).Find(&logs).Error
 	if err != nil {
 		common.SysError("failed to query logs for export: " + err.Error())
 		return nil, errors.New("查询日志失败")
 	}
 	return logs, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@model/log.go` around lines 520 - 544, Both GetUserLogsForExport and
GetAllLogsForExport load all matching rows into memory which can OOM; enforce a
configurable maximum export size or switch to chunked/streaming retrieval: add a
MaxExportLimit (or use an existing field on LogFilter) and apply tx =
tx.Limit(max) before Find to cap results, or replace Find with streaming Rows +
tx.Order(...).Rows() and process/fetch in batches (e.g., scan into slice chunks)
to avoid loading millions at once; update both GetUserLogsForExport and
GetAllLogsForExport to use the chosen cap/streaming approach and return a clear
error if the requested export would exceed the configured limit.
```go
DB.Exec("DELETE FROM tokens")
DB.Exec("DELETE FROM logs")
DB.Exec("DELETE FROM channels")
DB.Exec("DELETE FROM options")
```
Ensure options table exists for this cleanup path.
Line 51 deletes from options, but TestMain migration (Line 36) doesn’t include Option. With unchecked Exec errors, this can silently fail and make cleanup misleading.
🔧 Proposed fix

```diff
-	if err := db.AutoMigrate(&Task{}, &User{}, &Token{}, &Log{}, &Channel{}); err != nil {
+	if err := db.AutoMigrate(&Task{}, &User{}, &Token{}, &Log{}, &Channel{}, &Option{}); err != nil {
 		panic("failed to migrate: " + err.Error())
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@model/task_cas_test.go` at line 51, The cleanup call DB.Exec("DELETE FROM
options") can fail silently because the TestMain migrations do not include the
Option model; update TestMain to migrate the Option model (or ensure the options
table is created before tests) and change the DB.Exec call in task_cas_test.go
to check and handle the returned error (fail the test or log the error) so
missing tables or exec errors do not silently break test cleanup; refer to
TestMain, the Option model/migration, and the DB.Exec("DELETE FROM options")
call to locate the changes.
```jsx
const handleExport = async () => {
  if (exporting) {
    return;
  }

  setExporting(true);
  try {
    const query = buildQueryString(
      buildLogQueryParams({ includeAdminFields: isAdminUser }),
    );
    const exportUrl = isAdminUser ? '/api/log/export' : '/api/log/self/export';
    const response = await API.get(`${exportUrl}?${query}`, {
      responseType: 'blob',
      disableDuplicate: true,
      skipErrorHandler: true,
    });

    const contentType = response.headers['content-type'] || '';
    if (contentType.includes('application/json')) {
      const text = await response.data.text();
      const payload = JSON.parse(text);
      showError(payload.message || t('导出日志失败'));
      return;
    }

    const filename =
      parseExportFilename(response.headers['content-disposition']) ||
      `usage-logs-${new Date().toISOString().slice(0, 10)}.csv`;
    downloadBlob(response.data, filename);
    showSuccess(t('日志导出成功'));
  } catch (error) {
    showError(error);
  } finally {
    setExporting(false);
  }
};
```
Error object passed directly to showError may not display correctly.
At line 843, showError(error) receives the raw error object. If showError expects a string message, this may display [object Object] or similar. Consider extracting the error message.
🐛 Proposed fix

```diff
   } catch (error) {
-    showError(error);
+    showError(error?.message || error?.toString() || t('导出日志失败'));
   } finally {
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```jsx
const handleExport = async () => {
  if (exporting) {
    return;
  }

  setExporting(true);
  try {
    const query = buildQueryString(
      buildLogQueryParams({ includeAdminFields: isAdminUser }),
    );
    const exportUrl = isAdminUser ? '/api/log/export' : '/api/log/self/export';
    const response = await API.get(`${exportUrl}?${query}`, {
      responseType: 'blob',
      disableDuplicate: true,
      skipErrorHandler: true,
    });

    const contentType = response.headers['content-type'] || '';
    if (contentType.includes('application/json')) {
      const text = await response.data.text();
      const payload = JSON.parse(text);
      showError(payload.message || t('导出日志失败'));
      return;
    }

    const filename =
      parseExportFilename(response.headers['content-disposition']) ||
      `usage-logs-${new Date().toISOString().slice(0, 10)}.csv`;
    downloadBlob(response.data, filename);
    showSuccess(t('日志导出成功'));
  } catch (error) {
    showError(error?.message || error?.toString() || t('导出日志失败'));
  } finally {
    setExporting(false);
  }
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web/src/hooks/usage-logs/useUsageLogsData.jsx` around lines 812 - 847, In
handleExport, the catch currently passes the raw error object to showError which
may render as [object Object]; update the catch to extract a user-friendly
message (e.g. error.response?.data?.message || error.message || String(error))
and call showError with that string; ensure you modify the catch block in
handleExport where showError(error) is called so it uses the extracted message
and preserves existing setExporting(false) behavior in finally.

Summary
This PR adds CSV export support to the usage log page and backend export APIs.
It also promotes request_path to a dedicated log field so that logs can be filtered and exported precisely, including historical logs after backfill.

What Changed
Backend
- request_path field for logs
- request_path recorded on new logs

Frontend

- "导出 CSV" (Export CSV) action on the usage log page
- request_path filter support

Why

The existing log UI is useful for online inspection, but operational workflows often require structured export for Excel/reporting, especially for weekly accounting and performance review scenarios.
Validation
- bun run build

Notes
I intentionally kept this PR scoped to usage log export and log filtering only.
It does not include unrelated local task files or the token period quota feature.