
feat: support CSV export for usage logs with request_path filtering#3344

Open
rockyicer wants to merge 5 commits into QuantumNous:main from rockyicer:usage-log-csv-export

Conversation


@rockyicer commented Mar 19, 2026

Summary

This PR adds CSV export support to the usage log page and backend export APIs.

It also promotes request_path to a dedicated log field so that logs can be filtered and exported precisely, including historical logs after backfill.

What Changed

Backend

  • added a dedicated request_path field for logs
  • persisted request_path on new logs
  • added CSV export handlers for usage logs
  • added historical backfill support for existing log records
  • added tests for filtering, export, and backfill behavior

Frontend

  • added an Export CSV (导出 CSV) action to the usage log page
  • added request_path filter support
  • kept export behavior aligned with current page filters and selected time range

Why

The existing log UI works well for online inspection, but operational workflows often need structured exports for Excel and reporting, especially for weekly accounting and performance reviews.

Validation

  • bun run build
  • targeted controller/model tests for export, filter, and backfill behavior

Notes

I intentionally kept this PR scoped to usage log export and log filtering only.
It does not include unrelated local task files or the token period quota feature.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added CSV export functionality for usage logs in both user and admin views
    • Added request path filter to log searches
    • Added "Last week" date range preset for quick filtering
  • Improvements

    • Enhanced log request path tracking and filtering capabilities across the application

@coderabbitai bot (Contributor) commented Mar 19, 2026

Walkthrough

This PR adds CSV export functionality for usage logs and introduces request path tracking throughout the logging system. It includes new /api/log/export and /api/log/self/export endpoints, a backfill utility to populate request_path for historical logs, and updates the frontend UI to support exporting logs and filtering by request_path.

Changes

Cohort / File(s) / Summary
Log Export Endpoints
controller/log.go, router/api-router.go
Added ExportAllLogsCSV and ExportUserLogsCSV handlers that retrieve filtered logs and stream CSV output with UTF-8 BOM. New routes map /api/log/export and /api/log/self/export to these handlers. Refactored existing GetAllLogs and GetUserLogs to use centralized parseLogFilter and new filter-based model methods.
Log Export Testing
controller/log_export_test.go
Introduced comprehensive test suite validating authentication, CSV format (headers, content-type), BOM handling, row counts, and filter behavior for both admin and user export endpoints.
Log Model & Filtering
model/log.go, model/log_filter_test.go
Added RequestPath field to Log struct with database index. Introduced LogFilter struct consolidating filter parameters (RequestPath, RequestID, ChannelID, UserID). Added applyLogFilters centralized query builder and new export-focused retrieval functions (GetAllLogsByFilter, GetUserLogsByFilter, GetAllLogsForExport, GetUserLogsForExport). Tests validate exact-match request path filtering and metadata correlation.
Request Path Backfill Job
model/log_request_path_backfill.go, model/log_request_path_backfill_test.go
Introduced resumable batch backfill utility BackfillLogRequestPath that extracts request_path from existing log Other JSON fields and updates missing values. Persists progress via option store. Tests verify idempotency, batch processing, and accurate counters.
Task/Relay Billing Context
controller/relay.go, model/task.go, service/task_billing.go
Extended TaskBillingContext with RequestPath field. Updated relay task creation to capture request path. Modified RefundTaskQuota and RecalculateTaskQuota to propagate request path through billing log parameters.
Database & Option Handling
model/option.go, model/task_cas_test.go
Added defensive initialization of common.OptionMap in updateOptionMap to prevent nil panics. Updated test cleanup to truncate options table.
Frontend Export UI
web/src/components/table/usage-logs/UsageLogsActions.jsx, web/src/hooks/usage-logs/useUsageLogsData.jsx
Added export button with loading state to actions component. Extended hook with exporting state and handleExport function that performs CSV downloads from /api/log/export or /api/log/self/export, detects JSON errors, derives filenames from content-disposition, and shows notifications.
Frontend Request Path Filter
web/src/components/table/usage-logs/UsageLogsFilters.jsx, web/src/hooks/usage-logs/useUsageLogsData.jsx
Added request_path text filter input to log filters. Extended hook to include request_path in form values and query parameter construction. Updated loadLogs to use new buildLogQueryParams helper for consistent URL generation.
Date Range Constants & i18n
web/src/constants/console.constants.js, web/src/i18n/locales/*
Added "last week" ("上周") date preset to DATE_RANGE_PRESETS. Added Chinese/English/French/Japanese/Russian/Vietnamese translations for relative week labels ("本周", "上周").
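
The backfill cohort above describes a resumable, idempotent batch job that extracts request_path from each log's Other JSON. A stdlib-only sketch of that pattern with in-memory rows standing in for the database (types and field names are illustrative, not the PR's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type logRow struct {
	ID          int
	Other       string // JSON blob that may contain "request_path"
	RequestPath string
}

// backfillRequestPath processes up to batchSize rows with ID > cursor,
// filling RequestPath from the Other JSON where it is missing. It returns
// the new cursor (to persist, e.g. in an option store) and the number of
// rows updated. Re-running a batch is a no-op: already-filled rows are
// skipped, so the job is idempotent and resumable.
func backfillRequestPath(rows []logRow, cursor, batchSize int) (newCursor, updated int) {
	newCursor = cursor
	processed := 0
	for i := range rows {
		r := &rows[i]
		if r.ID <= cursor {
			continue // handled by an earlier batch
		}
		if processed == batchSize {
			break // stop at the batch boundary
		}
		processed++
		newCursor = r.ID
		if r.RequestPath != "" {
			continue // already populated: idempotent skip
		}
		var other map[string]any
		if json.Unmarshal([]byte(r.Other), &other) == nil {
			if p, ok := other["request_path"].(string); ok && p != "" {
				r.RequestPath = p
				updated++
			}
		}
	}
	return newCursor, updated
}

func main() {
	rows := []logRow{
		{ID: 1, Other: `{"request_path":"/v1/chat/completions"}`},
		{ID: 2, Other: `{}`},
		{ID: 3, Other: `{"request_path":"/v1/images"}`},
	}
	cursor := 0
	for {
		next, n := backfillRequestPath(rows, cursor, 2)
		fmt.Printf("cursor=%d updated=%d\n", next, n)
		if next == cursor {
			break // no progress: all rows processed
		}
		cursor = next
	}
}
```

In the real job the batch query and cursor persistence go through GORM and the option store; the loop-until-no-progress shape is the same.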

Sequence Diagram

sequenceDiagram
    participant User as User/Admin
    participant Frontend as Frontend UI
    participant Controller as Log Controller
    participant Database as Database
    participant FileStream as HTTP Response

    User->>Frontend: Click Export Logs CSV
    Frontend->>Frontend: Build query params (filters, pagination)
    Frontend->>Controller: GET /api/log/export or /api/log/self/export
    Controller->>Controller: Parse filters (type, timestamps, request_path, etc.)
    Controller->>Database: GetAllLogsForExport(filters) / GetUserLogsForExport(filters)
    Database->>Database: Apply filter WHERE clauses (request_path match, user, type range, etc.)
    Database-->>Controller: Return matched Log records
    Controller->>Controller: Format logs as CSV (with UTF-8 BOM)
    Controller->>FileStream: Set headers (text/csv, Content-Disposition)
    Controller->>FileStream: Write CSV rows (timestamps, quotas, request metadata)
    FileStream-->>Frontend: Stream CSV blob
    Frontend->>Frontend: Detect error via Content-Type (JSON vs CSV)
    alt Error Response
        Frontend->>User: Show error notification
    else Success Response
        Frontend->>Frontend: Extract filename from Content-Disposition
        Frontend->>Frontend: Trigger browser download
        Frontend->>User: Show success notification
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested reviewers

  • Calcium-Ion
  • seefs001
  • creamlike1024

Poem

🐰 A rabbit hops through logs so vast,
With paths recorded, present and past!
CSV streams now flow so free,
Export and filter with glee, tee-hee!
Backfill jobs keep data neat,
Request paths make logging complete! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning
Explanation: Docstring coverage is 8.77%, below the required threshold of 80.00%.
Resolution: Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Description Check — ✅ Passed (check skipped: CodeRabbit's high-level summary is enabled)
Title Check — ✅ Passed: the title 'feat: support CSV export for usage logs with request_path filtering' directly and clearly describes the main feature added.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@rockyicer (Author) commented:

[image attachment]

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
controller/log.go (1)

39-92: Consider adding error logging for CSV write failures.

The function silently returns on write errors (lines 67 and 88-89), which could make debugging difficult when exports fail mid-stream. Consider logging these errors before returning.

Also, the Content-Disposition header filename should sanitize or escape special characters to prevent header injection.

💡 Suggested improvement for error visibility
 	if err := writer.Write(header); err != nil {
+		common.SysLog("failed to write CSV header: " + err.Error())
 		return
 	}

 	for _, log := range logs {
 		// ... record building ...
 		if err := writer.Write(record); err != nil {
+			common.SysLog("failed to write CSV record: " + err.Error())
 			return
 		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controller/log.go` around lines 39 - 92, The writeLogsCSV function currently
returns silently on CSV write errors (calls to writer.Write) and when setting
the Content-Disposition header it injects an unsanitized filename; update
writeLogsCSV to log any writer.Write errors before returning (e.g., use the
project's logger or c.Error/c.String with an error message) for the header write
and per-record writes, and sanitize/escape the filename used in the
Content-Disposition header (build the filename from time.Now().Format and pass
it through a safe escaper like url.QueryEscape or remove/escape quotes and
control characters) so header injection can't occur; reference the writeLogsCSV
function and its writer.Write calls and the c.Header("Content-Disposition", ...)
usage when making these changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3516f47a-e556-4595-b9e8-8cb62021029b

📥 Commits

Reviewing files that changed from the base of the PR and between ed6ff0f and e2df0e8.

📒 Files selected for processing (24)
  • controller/log.go
  • controller/log_export_test.go
  • controller/relay.go
  • model/log.go
  • model/log_filter_test.go
  • model/log_request_path_backfill.go
  • model/log_request_path_backfill_test.go
  • model/option.go
  • model/task.go
  • model/task_cas_test.go
  • router/api-router.go
  • service/channel_affinity_usage_cache_test.go
  • service/task_billing.go
  • web/src/components/table/usage-logs/UsageLogsActions.jsx
  • web/src/components/table/usage-logs/UsageLogsFilters.jsx
  • web/src/constants/console.constants.js
  • web/src/hooks/usage-logs/useUsageLogsData.jsx
  • web/src/i18n/locales/en.json
  • web/src/i18n/locales/fr.json
  • web/src/i18n/locales/ja.json
  • web/src/i18n/locales/ru.json
  • web/src/i18n/locales/vi.json
  • web/src/i18n/locales/zh-CN.json
  • web/src/i18n/locales/zh-TW.json

Comment thread controller/log.go
Comment on lines +124 to +145
func ExportAllLogsCSV(c *gin.Context) {
	filters := parseAdminLogFilter(c)
	logs, err := model.GetAllLogsForExport(filters)
	if err != nil {
		common.ApiError(c, err)
		return
	}
	writeLogsCSV(c, logs, true)
}

func ExportUserLogsCSV(c *gin.Context) {
	userId := c.GetInt("id")
	filters := parseLogFilter(c)
	filters.UserID = &userId

	logs, err := model.GetUserLogsForExport(filters)
	if err != nil {
		common.ApiError(c, err)
		return
	}
	writeLogsCSV(c, logs, false)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed during verification (repository: QuantumNous/new-api; output metadata omitted):

#!/bin/bash
# Check if there's any limit applied in the model layer for export functions
ast-grep --pattern 'func GetAllLogsForExport($$$) {
  $$$
}'
ast-grep --pattern 'func GetUserLogsForExport($$$) {
  $$$
}'
rg -n "export.*limit|ExportLimit|MaxExport" --type go

# Locate the export functions in the model layer
fd -e go -path "*/model/*" | xargs grep -l "GetAllLogsForExport\|GetUserLogsForExport"
ls -la model/ 2>/dev/null | head -20
rg "func.*GetAllLogsForExport|func.*GetUserLogsForExport" --type go

# Check for any log-related limits or configuration
rg -n "MaxExport|ExportLimit|LOG.*LIMIT|log.*limit" --type go -i
rg -n "func GetUserLogsForExport|func GetAllLogsForExport" model/log.go

# Inspect the export implementations and the filter helper
wc -l model/log.go
sed -n '520,545p' model/log.go
rg -n "func applyLogFilters" model/log.go
sed -n '460,520p' model/log.go
sed -n '286,350p' model/log.go

# Inspect the controller handlers and the CSV writer
sed -n '124,145p' controller/log.go
rg -n "func writeLogsCSV" controller/log.go
sed -n '85,125p' controller/log.go
sed -n '39,85p' controller/log.go

Add limit to export queries to prevent memory exhaustion.

GetAllLogsForExport and GetUserLogsForExport fetch all matching logs without any row limit. Unlike the search functions which cap at 10,000 rows via logSearchCountLimit, these export functions can load millions of logs into memory, exhausting server resources. Implement a configurable export limit or consider streaming the CSV output without buffering all logs in memory first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controller/log.go` around lines 124 - 145, The export handlers
ExportAllLogsCSV and ExportUserLogsCSV call
GetAllLogsForExport/GetUserLogsForExport which currently load all matching rows
into memory; change these handlers and/or model functions to enforce a sane
export limit (e.g., reuse logSearchCountLimit or add a new configurable
exportLimit) or implement chunked streaming: pass the parsed filters and a
limit/offset (or a streaming callback) to the model methods, ensure
GetAllLogsForExport/GetUserLogsForExport accept a limit (and return an error if
limit exceeded) or yield rows in pages, and update writeLogsCSV to stream rows
to the response as they are fetched instead of buffering the entire slice.
Ensure you reference parseAdminLogFilter/parseLogFilter when wiring the limit
and preserve existing behavior for small exports.
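
The chunked-streaming alternative described above can be sketched in stdlib-only Go, with a fetch callback standing in for a LIMIT/OFFSET query and emit standing in for writing one CSV record (names and the paging scheme are illustrative assumptions, not the PR's code):

```go
package main

import "fmt"

// exportChunked fetches rows page by page and emits them one at a time,
// stopping once maxRows have been written. Because every fetched row is
// emitted, the running count doubles as the next query offset, so the
// full result set never sits in memory at once.
func exportChunked(fetch func(offset, limit int) []int, emit func(int), batchSize, maxRows int) int {
	written := 0
	for written < maxRows {
		limit := batchSize
		if maxRows-written < limit {
			limit = maxRows - written // shrink the final page to the cap
		}
		page := fetch(written, limit)
		if len(page) == 0 {
			break // no more rows
		}
		for _, row := range page {
			emit(row)
			written++
		}
		if len(page) < limit {
			break // short page: the source is exhausted
		}
	}
	return written
}

func main() {
	data := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
	fetch := func(off, lim int) []int {
		if off >= len(data) {
			return nil
		}
		end := off + lim
		if end > len(data) {
			end = len(data)
		}
		return data[off:end]
	}
	var out []int
	n := exportChunked(fetch, func(r int) { out = append(out, r) }, 4, 7)
	fmt.Println(n, out) // 7 [0 1 2 3 4 5 6]
}
```

With GORM, fetch would be a tx.Limit(limit).Offset(offset).Find call (or FindInBatches), and emit a writer.Write on the streaming CSV writer; maxRows plays the role of the configurable export cap.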

Comment thread model/log.go
Comment on lines +520 to +544
func GetUserLogsForExport(filters LogFilter) (logs []*Log, err error) {
	tx, err := applyLogFilters(LOG_DB.Model(&Log{}), filters)
	if err != nil {
		return nil, err
	}
	err = tx.Order("logs.created_at asc, logs.id asc").Find(&logs).Error
	if err != nil {
		common.SysError("failed to query logs for export: " + err.Error())
		return nil, errors.New("查询日志失败")
	}
	return logs, nil
}

func GetAllLogsForExport(filters LogFilter) (logs []*Log, err error) {
	tx, err := applyLogFilters(LOG_DB.Model(&Log{}), filters)
	if err != nil {
		return nil, err
	}
	err = tx.Order("logs.created_at asc, logs.id asc").Find(&logs).Error
	if err != nil {
		common.SysError("failed to query logs for export: " + err.Error())
		return nil, errors.New("failed to query logs for export")
	}
	return logs, nil
}

⚠️ Potential issue | 🟠 Major

Export functions lack row limits - potential memory exhaustion.

Both GetUserLogsForExport and GetAllLogsForExport query all matching logs without any limit. For accounts with extensive history, this could load millions of records into memory, causing OOM conditions.

Consider adding a configurable maximum export limit or implementing streaming/chunked export.

💡 Suggested safeguard
+const maxExportRows = 100000 // Configurable limit
+
 func GetUserLogsForExport(filters LogFilter) (logs []*Log, err error) {
 	tx, err := applyLogFilters(LOG_DB.Model(&Log{}), filters)
 	if err != nil {
 		return nil, err
 	}
-	err = tx.Order("logs.created_at asc, logs.id asc").Find(&logs).Error
+	err = tx.Order("logs.created_at asc, logs.id asc").Limit(maxExportRows).Find(&logs).Error
 	if err != nil {
 		common.SysError("failed to query logs for export: " + err.Error())
 		return nil, errors.New("查询日志失败")
 	}
 	return logs, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@model/log.go` around lines 520 - 544, Both GetUserLogsForExport and
GetAllLogsForExport load all matching rows into memory which can OOM; enforce a
configurable maximum export size or switch to chunked/streaming retrieval: add a
MaxExportLimit (or use an existing field on LogFilter) and apply tx =
tx.Limit(max) before Find to cap results, or replace Find with streaming Rows +
tx.Order(...).Rows() and process/fetch in batches (e.g., scan into slice chunks)
to avoid loading millions at once; update both GetUserLogsForExport and
GetAllLogsForExport to use the chosen cap/streaming approach and return a clear
error if the requested export would exceed the configured limit.

Comment thread model/task_cas_test.go
DB.Exec("DELETE FROM tokens")
DB.Exec("DELETE FROM logs")
DB.Exec("DELETE FROM channels")
DB.Exec("DELETE FROM options")

⚠️ Potential issue | 🟡 Minor

Ensure options table exists for this cleanup path.

Line 51 deletes from options, but TestMain migration (Line 36) doesn’t include Option. With unchecked Exec errors, this can silently fail and make cleanup misleading.

🔧 Proposed fix
-	if err := db.AutoMigrate(&Task{}, &User{}, &Token{}, &Log{}, &Channel{}); err != nil {
+	if err := db.AutoMigrate(&Task{}, &User{}, &Token{}, &Log{}, &Channel{}, &Option{}); err != nil {
 		panic("failed to migrate: " + err.Error())
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@model/task_cas_test.go` at line 51, The cleanup call DB.Exec("DELETE FROM
options") can fail silently because the TestMain migrations do not include the
Option model; update TestMain to migrate the Option model (or ensure the options
table is created before tests) and change the DB.Exec call in task_cas_test.go
to check and handle the returned error (fail the test or log the error) so
missing tables or exec errors do not silently break test cleanup; refer to
TestMain, the Option model/migration, and the DB.Exec("DELETE FROM options")
call to locate the changes.

Comment thread web/src/hooks/usage-logs/useUsageLogsData.jsx
Comment on lines +812 to +847
const handleExport = async () => {
  if (exporting) {
    return;
  }

  setExporting(true);
  try {
    const query = buildQueryString(
      buildLogQueryParams({ includeAdminFields: isAdminUser }),
    );
    const exportUrl = isAdminUser ? '/api/log/export' : '/api/log/self/export';
    const response = await API.get(`${exportUrl}?${query}`, {
      responseType: 'blob',
      disableDuplicate: true,
      skipErrorHandler: true,
    });

    const contentType = response.headers['content-type'] || '';
    if (contentType.includes('application/json')) {
      const text = await response.data.text();
      const payload = JSON.parse(text);
      showError(payload.message || t('导出日志失败'));
      return;
    }

    const filename =
      parseExportFilename(response.headers['content-disposition']) ||
      `usage-logs-${new Date().toISOString().slice(0, 10)}.csv`;
    downloadBlob(response.data, filename);
    showSuccess(t('日志导出成功'));
  } catch (error) {
    showError(error);
  } finally {
    setExporting(false);
  }
};

⚠️ Potential issue | 🟡 Minor

Error object passed directly to showError may not display correctly.

At line 843, showError(error) receives the raw error object. If showError expects a string message, this may display [object Object] or similar. Consider extracting the error message.

🐛 Proposed fix
     } catch (error) {
-      showError(error);
+      showError(error?.message || error?.toString() || t('导出日志失败'));
     } finally {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (identical to the current code except for the catch block):

const handleExport = async () => {
  if (exporting) {
    return;
  }

  setExporting(true);
  try {
    const query = buildQueryString(
      buildLogQueryParams({ includeAdminFields: isAdminUser }),
    );
    const exportUrl = isAdminUser ? '/api/log/export' : '/api/log/self/export';
    const response = await API.get(`${exportUrl}?${query}`, {
      responseType: 'blob',
      disableDuplicate: true,
      skipErrorHandler: true,
    });

    const contentType = response.headers['content-type'] || '';
    if (contentType.includes('application/json')) {
      const text = await response.data.text();
      const payload = JSON.parse(text);
      showError(payload.message || t('导出日志失败'));
      return;
    }

    const filename =
      parseExportFilename(response.headers['content-disposition']) ||
      `usage-logs-${new Date().toISOString().slice(0, 10)}.csv`;
    downloadBlob(response.data, filename);
    showSuccess(t('日志导出成功'));
  } catch (error) {
    showError(error?.message || error?.toString() || t('导出日志失败'));
  } finally {
    setExporting(false);
  }
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/src/hooks/usage-logs/useUsageLogsData.jsx` around lines 812 - 847, In
handleExport, the catch currently passes the raw error object to showError which
may render as [object Object]; update the catch to extract a user-friendly
message (e.g. error.response?.data?.message || error.message || String(error))
and call showError with that string; ensure you modify the catch block in
handleExport where showError(error) is called so it uses the extracted message
and preserves existing setExporting(false) behavior in finally.
