Reopen lnotspotl/device has crashed #1723

Open
aljazdu wants to merge 3 commits into develop from lnotspotl/device_has_crashed

Conversation

@aljazdu
Contributor

@aljazdu aljazdu commented Mar 16, 2026

Purpose

Reopening the "device has crashed" PR after fixing several issues.

Specification

This PR now fixes:

  • Broken RVC2 branch pipelines
  • RVC4 devices hanging indefinitely after a crash
  • Crash logs not being collected when reconnection was turned off

Dependencies & Potential Impact

None / not applicable

Deployment Plan

None / not applicable

Testing & Validation

Tested on both RVC2 and RVC4 devices with varying settings.

Summary by CodeRabbit

  • New Features

    • Crash dump collection with callbacks and device crash detection
    • Platform identification APIs
    • Python bindings for crash dump access and handling
    • New environment variables for crash dump control, API keys, and model management
  • Documentation

    • Updated environment variable reference with crash dump and model configuration options
  • Examples

    • Added C++ and Python examples demonstrating crash dump collection and callback registration

@aljazdu aljazdu requested a review from moratom March 16, 2026 14:56
@aljazdu aljazdu self-assigned this Mar 16, 2026
@aljazdu aljazdu added the testable PR is ready to be tested label Mar 16, 2026
@coderabbitai

coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

This PR introduces crash dump collection and management capabilities with platform-specific implementations (RVC2/RVC4), relocates the Platform enum to a dedicated header, refactors compression utilities from tar-specific to generic archive APIs, and expands Python bindings to expose crash dump and platform functionality. Changes span core library, device management, bindings, utilities, examples, and tests.

Changes

Cohort / File(s) Summary
Core Crash Dump Implementation
include/depthai/device/CrashDump.hpp, src/device/CrashDump.cpp, src/device/CrashDumpManager.hpp, src/device/CrashDumpManager.cpp
Introduces base CrashDump class with virtual interface and platform-specific subclasses (CrashDumpRVC2, CrashDumpRVC4). Adds serialization (toTar/fromTar), JSON extra-data storage via operator[], and metadata tracking. CrashDumpManager handles collection from devices with version/build enrichment.
Platform Enum Migration
include/depthai/device/Platform.hpp, src/device/Platform.cpp, include/depthai/device/Device.hpp, src/device/Device.cpp
Relocates Platform enum and conversion functions from Device to dedicated Platform header/source. Removes platform2string/string2platform and getPlatform/getPlatformAsString from Device.
Device API Refactoring
include/depthai/device/DeviceBase.hpp, src/device/DeviceBase.cpp, include/depthai/device/DeviceGate.hpp, src/device/DeviceGate.cpp
Adds crash-dump callback registration, hasCrashed() state query, and platform getters to DeviceBase. Changes getCrashDump/getState return types to std::unique_ptr. Refactors waitForSessionEnd from returning optional to void. Integrates CrashDumpManager for collection.
Python Bindings
bindings/python/CMakeLists.txt, bindings/python/src/PlatformBindings.hpp, bindings/python/src/PlatformBindings.cpp, bindings/python/src/CrashDumpBindings.hpp, bindings/python/src/CrashDumpBindings.cpp, bindings/python/src/DeviceBindings.cpp, bindings/python/src/py_bindings.cpp
Adds pybind11 bindings for Platform enum and CrashDump classes/nested types with dict-like extra-attribute access. Updates DeviceBindings to expose hasCrashed, platform getters, and callback registration with GIL management. Registers new bindings in py_bindings.cpp.
Archive Utility Refactoring
include/depthai/utility/Compression.hpp, src/utility/Compression.cpp
Renames tar-specific APIs to generic archive terminology (tarFiles→archiveFiles/archiveFilesCompressed, untarFiles→extractFiles, filenamesInTar→filenamesInArchive). Removes deflate/inflate functions. Supports both compressed and uncompressed archives.
Crash Dump Examples
examples/cpp/Misc/CrashDump/crash_dump.cpp, examples/python/Misc/CrashDump/crash_dump.py
Replaces video-loop demo with callback-driven crash-dump collection. Registers callback, triggers manual device crash, waits for dump, verifies hasCrashed(), and prints extra user data. Includes environment setup and timeout handling.
Logging & Utilities
src/utility/LogCollection.hpp, src/utility/LogCollection.cpp, src/utility/Platform.hpp, src/utility/Platform.cpp
Updates crash-dump logging to accept CrashDump directly (removes GenericCrashDump variant). Adds getOSPlatform() function. Enhances getTempPath() with error throwing and per-process naming on Windows.
Test Updates
tests/CMakeLists.txt, tests/src/ondevice_tests/crashdump_test.cpp, tests/src/ondevice_tests/dynamic_calibration_test.cpp, tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp, tests/src/onhost_tests/replay_test.cpp
Adds new crashdump_test with callback and hasCrashed checks. Updates archive utility calls in dynamic_calibration, record_replay, and replay tests from tar-specific to archive-generic APIs.
Configuration & Documentation
CMakeLists.txt, bindings/python/CMakeLists.txt, cmake/Depthai/DepthaiDeviceSideConfig.cmake, README.md, examples/cpp/Misc/CrashDump/CMakeLists.txt, include/depthai/xlink/XLinkConnection.hpp
Adds CrashDump.cpp, CrashDumpManager.cpp, Platform.cpp to core CMakeLists. Adds PlatformBindings.cpp, CrashDumpBindings.cpp to Python bindings. Updates device-side commit hash. Documents new environment variables (DEPTHAI_CRASHDUMP_*, DEPTHAI_HUB_API_KEY, etc.). Removes default initializers from DeviceInfo fields.

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant Dev as Device
    participant CB as Callback Handler
    participant CDM as CrashDumpManager
    participant Dump as CrashDump

    App->>Dev: registerCrashdumpCallback(callback)
    App->>Dev: crashDevice()
    Note over Dev: Device crashes and reboots
    Dev->>CB: Trigger crash detected
    CB->>CDM: collectCrashDump()
    CDM->>Dump: Create platform-specific<br/>CrashDumpRVC2/RVC4
    CDM->>Dump: Populate metadata<br/>(version, timestamp, deviceId)
    Dump->>Dump: Serialize to tar/bytes
    CB->>App: callback(std::shared_ptr<CrashDump>)
    App->>Dev: hasCrashed()
    Dev-->>App: true
    App->>Dump: operator[](key) access<br/>extra JSON data
    App->>App: Process and log crash info

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • whoactuallycares
  • MaticTonin

Poem

🐰 Hops excitedly
A crash-dump garden blooms so bright,
Platform seeds planted left and right,
Callbacks catch the falling sparks,
Archive paths mark sturdy marks—
Bindings bound, the SDK takes flight! 🚀

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning — Docstring coverage is 8.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Title check — ❓ Inconclusive — The PR title is vague: "Reopen" is generic, and "device has crashed" does not indicate what was actually fixed or reopened. Resolution: revise the title to summarize the key improvements, e.g. "Fix crash dump collection and RVC4 hang on device crash".
✅ Passed checks (1 passed)
  • Description Check — ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/utility/Compression.cpp (1)

131-134: ⚠️ Potential issue | 🟡 Minor

Potential issue with large files or invalid entry sizes.

The entire file is read into memory at once. For large files, this could cause memory issues. Additionally, archive_entry_size may return negative values for unknown sizes (e.g., streaming archives).

Consider adding validation and potentially chunked reading for large files.

Suggested validation
-                size_t size = archive_entry_size(entry);
+                la_int64_t size = archive_entry_size(entry);
+                if(size <= 0) {
+                    outFileStream.close();
+                    break;
+                }
                 std::vector<uint8_t> buff(size);
-                archive_read_data(a, buff.data(), size);
+                la_ssize_t bytesRead = archive_read_data(a, buff.data(), size);
+                if(bytesRead < 0) {
+                    outFileStream.close();
+                    throw std::runtime_error(fmt::format("Failed to read data from archive: {}", archive_error_string(a)));
+                }
-                outFileStream.write(reinterpret_cast<char*>(buff.data()), size);
+                outFileStream.write(reinterpret_cast<char*>(buff.data()), bytesRead);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utility/Compression.cpp` around lines 131 - 134,
archive_entry_size(entry) can be negative/unknown and allocating a vector of
that size reads the whole entry into memory, which fails for large or streaming
entries; instead, validate the size and perform chunked reads: if
archive_entry_size(entry) > 0 you may still prefer to read in fixed-size chunks,
and if size is negative/unknown, loop calling archive_read_data(a, buffer,
bufferSize) until it returns 0 (EOF) or error, writing each returned byte-count
to outFileStream; reference archive_entry_size, archive_read_data, buff (replace
with a fixed-size stack/heap buffer), outFileStream, entry and a to implement
this safe, incremental read-and-write loop and handle/archive_read_* error
returns appropriately.
src/device/DeviceBase.cpp (1)

561-568: ⚠️ Potential issue | 🔴 Critical

Gate crash handling needs an RVC4 + gate guard.

This block now runs for every !isRvc2 device. But gate is only initialized in the X_LINK_GATE* boot path, and CrashDumpManager still throws on Platform::RVC3. That gives you either a null dereference at Line 565 or an exception from Line 609 during close().

Proposed fix
     bool waitForGate = true;
-    if(!isRvc2) {
+    const bool isRvc4 = deviceInfo.platform == X_LINK_RVC4;
+    if(isRvc4 && gate) {
         // Check if the device is still alive and well, if yes, don't wait for gate, crash dump not relevant
         try {
             auto gateState = gate->getState();
             crashed = (gateState == DeviceGate::SessionState::CRASHED || gateState == DeviceGate::SessionState::DESTROYED);
             waitForGate = !pimpl->rpcCall("isRunning").as<bool>();
             pimpl->logger.debug("Will wait for gate: {}", waitForGate);
         } catch(const std::exception& ex) {
             pimpl->logger.debug("isRunning call error: {}", ex.what());
         }
     }
...
-    if(gate && waitForGate) {
+    if(isRvc4 && gate && waitForGate) {
         if(crashed && !crashDumpHandled.load()) {
             collectAndLogCrashDump();
         }
         gate->waitForSessionEnd();
     }

Also applies to: 607-611

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/device/DeviceBase.cpp` around lines 561 - 568, The gate/crash handling
currently runs for all !isRvc2 devices but gate and CrashDumpManager are only
valid for RVC4 paths; fix by adding an explicit RVC4 + gate guard around this
whole block (use isRvc4 && gate != nullptr) so you only call gate->getState(),
set waitForGate, log via pimpl->logger, and create/use CrashDumpManager when
isRvc4 && gate is non-null; also apply the same isRvc4 && gate != nullptr guard
to the later close()/cleanup section referenced (the code that may throw around
CrashDumpManager and gate), and keep existing checks like
DeviceGate::SessionState::CRASHED/DESTROYED and
pimpl->rpcCall("isRunning").as<bool>() intact inside the guarded region.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@include/depthai/device/CrashDump.hpp`:
- Around line 204-206: Extract the repeated literal "1.0.0" into a single
constant on the base class CrashDump (e.g., a protected static constexpr const
char* CRASH_DUMP_FORMAT_VERSION) and then update the derived classes
CrashDumpRVC2 and CrashDumpRVC4 to return that constant from
getCrashDumpVersion() instead of the hard-coded string.

In `@include/depthai/device/DeviceBase.hpp`:
- Around line 970-977: The code uses inconsistent camelCase for crash dump
identifiers; rename crashdumpCallback and crashdumpCallbackMtx to match
crashDumpHandled (e.g., crashDumpCallback and crashDumpCallbackMtx) and update
all references/usages accordingly (including any captures, assignments, mutex
locks, and function parameter references) so the identifier casing is consistent
across DeviceBase.hpp and any compilation units that reference these symbols.

In `@include/depthai/device/DeviceGate.hpp`:
- Around line 45-46: The public API change to waitForSessionEnd() is breaking;
add a deprecated compatibility shim in the DeviceGate class that preserves the
original waitForSessionEnd() behavior and forwards to the new flow (e.g., call
the new wait method(s) and then getCrashDump() as the old sequence did), mark
the shim as [[deprecated]] and document the migration to the new explicit
getCrashDump() call so consumers have time to adopt the new API; reference the
existing symbols waitForSessionEnd() and getCrashDump() when implementing the
wrapper so it simply invokes the new sequence and returns the original expected
result/side-effects.

In `@src/device/CrashDump.cpp`:
- Around line 293-297: The metadata written by CrashDumpRVC4::toTar includes
originalFilename but the deserializers never restore it; update
CrashDumpRVC4::fromTar (and the other deserialization path around the 336-339
area) to read metadata["originalFilename"] (or use
metadata.value("originalFilename", "")) and assign it to the
CrashDumpRVC4::filename member so filename is preserved across archive/bytes
round-trips.

In `@src/device/CrashDumpManager.cpp`:
- Around line 59-67: The CrashDumpCollector is never setting the CrashDump field
depthaiBuildDatetime before serialization; update the same initialization block
that sets dump->depthaiVersionBuildInfo to also assign
dump->depthaiBuildDatetime (e.g., from build::BUILD_DATETIME or the appropriate
build timestamp constant) so that writeMetadata() persists a populated
depthaiBuildDatetime; locate the initialization using the dump pointer and the
depthaiVersion* assignments to add this single assignment.

In `@src/device/CrashDumpManager.hpp`:
- Around line 25-27: CrashDumpManager is declared movable but contains a raw
DeviceBase* (devicePtr), so default moves leave both objects pointing to the
same device; replace the defaulted move constructor and move assignment by
implementing custom move operations that transfer ownership and nullify the
source's devicePtr (update CrashDumpManager(CrashDumpManager&&) to set
this->devicePtr = other.devicePtr and then other.devicePtr = nullptr, and
implement CrashDumpManager& operator=(CrashDumpManager&&) to guard
self-assignment, release or nullify current devicePtr if needed, move the
pointer from other, set other.devicePtr = nullptr, and return *this) so the
moved-from object is left in a safe empty state.

In `@src/device/DeviceBase.cpp`:
- Around line 1157-1164: The monitor currently only treats
DeviceGate::SessionState::CRASHED as a crash and may skip dump collection for
gates that go straight to DeviceGate::SessionState::DESTROYED; update the check
around gate->getState() (and the local crashed flag) to consider either CRASHED
or DESTROYED as a crash condition so that the existing logic that calls
collectAndLogCrashDump() when crashed && !crashDumpHandled.load() will run for
DESTROYED states as well.
- Around line 465-483: crashDumpHandled is being set before running the user
callback and logCollection::logCrashDump, so move the
crashDumpHandled.store(true) to after those operations and only set it when they
complete successfully; specifically, keep the callback copy logic using
crashdumpCallback and crashdumpCallbackMtx, invoke callbackCopy(crashDump) and
then call logCollection::logCrashDump(pipelineSchema, *crashDump, deviceInfo),
and only after both succeed call crashDumpHandled.store(true). Also wrap the
callback and logCrashDump calls in a try/catch so exceptions are caught and
logged via pimpl->logger.warn/error (and crashDumpHandled remains false on
failure) to allow retry paths to still process the dump.
- Around line 492-515: The loop currently exits as soon as getDeviceById()
returns any match because it uses while(!found && ...), causing a timeout if the
device is still in its pre-reboot state; change the loop condition and logic so
polling continues until either the device is observed in a reboot state (check
rebootingDeviceInfo.state == X_LINK_UNBOOTED || X_LINK_BOOTLOADER) or the
timeout elapses. Concretely, keep calling XLinkConnection::getDeviceById(...)
each iteration and only break/handle the crash when found AND the state matches
the reboot states (the same check used to construct DeviceBase rebootingDevice),
otherwise continue polling until timeout; update references to found,
rebootingDeviceInfo, getDeviceById, and the do/while condition accordingly so
you don't stop on the first sighting of an old-state device.

In `@src/utility/Compression.cpp`:
- Line 121: Replace the debug-only assert with a proper runtime check: in the
scope where filesInArchive and filesOnDisk are compared (the code currently
contains assert(filesInArchive.size() == filesOnDisk.size());), validate sizes
at runtime and handle mismatch deterministically (throw a suitable exception or
return an error code/value and log a clear message) instead of using assert;
ensure the check references filesInArchive.size() and filesOnDisk.size() and
that subsequent code path assumes sizes are equal only after the runtime
validation succeeds.
- Line 21: The assert(filesOnDisk.size() == filesInArchive.size()) must be
replaced with a runtime parameter validation that handles mismatches instead of
being removed in release builds; check sizes of filesOnDisk and filesInArchive,
and if they differ, either throw a descriptive exception (e.g.,
std::invalid_argument or std::runtime_error) or return an error status, logging
the sizes and context, so the subsequent loop that iterates both containers
cannot go out-of-bounds; update the surrounding function (the code block using
filesOnDisk/filesInArchive) to propagate the error appropriately.

In `@src/utility/Platform.cpp`:
- Around line 172-177: The current implementation builds basePath/baseStr and
uses _wmktemp_s plus std::filesystem::create_directories, which has a TOCTOU
race; replace this with a retry loop that atomically creates a uniquely-suffixed
directory using the Windows API CreateDirectory (or CreateDirectoryW) rather
than generating a name then creating it, e.g. generate a random/GUID suffix
appended to basePath (where basePath and baseStr are used) and call
CreateDirectoryW in a loop until it succeeds or a retry limit is reached; on
success return the created std::filesystem::path, and on repeated failure throw
the same std::runtime_error with an explanatory message.

In `@tests/src/ondevice_tests/crashdump_test.cpp`:
- Around line 45-58: Replace the fixed 12-second sleep in the TEST_CASE with a
polling loop that repeatedly checks device.hasCrashed() after short sleeps
(e.g., 100–200ms) up to a maximum timeout (e.g., 12s); call device.crashDevice()
as before, then loop: if device.hasCrashed() break and REQUIRE true, otherwise
sleep a short interval and retry until timeout and then
REQUIRE(device.hasCrashed()) fails—update references in the test to use
dai::Device, device.crashDevice(), device.hasCrashed(),
std::this_thread::sleep_for and std::chrono for timing.

In `@tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp`:
- Around line 55-61: filenamesInArchive(RECORDING_PATH) can include directory
entries (paths ending with '/'), and extractFiles(...) will fail if parent
directories don't exist; update the loop that builds recordingExtFiles to skip
entries that are directories (e.g., filename.back() == '/') and for each file
entry call std::filesystem::create_directories(recordingPath.parent_path()) (use
testFolder + "extracted" base) before calling
dai::utility::extractFiles(RECORDING_PATH, recordingFilenames,
recordingExtFiles); ensure you only pass file entries in recordingFilenames and
matching created parent directories for recordingExtFiles when calling
extractFiles (see filenamesInArchive, recordingFilenames, recordingExtFiles,
extractFiles, RECORDING_PATH, testFolder).

In `@tests/src/onhost_tests/replay_test.cpp`:
- Around line 29-35: The code passes raw recordingFilenames from
dai::utility::filenamesInArchive(RECORDING_PATH) into extractFiles which fails
if the archive contains directory entries or nested paths because extractFiles
expects destination parent directories to exist; update the loop that builds
recordingExtFiles to skip directory entries (e.g., filter out entries that are
directory headers or that end with a slash) and ensure you call
std::filesystem::create_directories(...) for each destination's parent_path
before calling dai::utility::extractFiles; reference filenamesInArchive,
recordingFilenames, recordingExtFiles, extractFiles, RECORDING_PATH and
testFolder when making the change.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 0f8121a3-e8a5-4c94-b056-42eda5817494

📥 Commits

Reviewing files that changed from the base of the PR and between c8a7a2e and 7eea19a.

📒 Files selected for processing (39)
  • CMakeLists.txt
  • README.md
  • bindings/python/CMakeLists.txt
  • bindings/python/src/CrashDumpBindings.cpp
  • bindings/python/src/CrashDumpBindings.hpp
  • bindings/python/src/DeviceBindings.cpp
  • bindings/python/src/PlatformBindings.cpp
  • bindings/python/src/PlatformBindings.hpp
  • bindings/python/src/py_bindings.cpp
  • cmake/Depthai/DepthaiDeviceSideConfig.cmake
  • examples/cpp/Misc/CrashDump/CMakeLists.txt
  • examples/cpp/Misc/CrashDump/crash_dump.cpp
  • examples/python/Misc/CrashDump/crash_dump.py
  • include/depthai/device/CrashDump.hpp
  • include/depthai/device/Device.hpp
  • include/depthai/device/DeviceBase.hpp
  • include/depthai/device/DeviceGate.hpp
  • include/depthai/device/Platform.hpp
  • include/depthai/utility/Compression.hpp
  • include/depthai/xlink/XLinkConnection.hpp
  • src/device/CrashDump.cpp
  • src/device/CrashDumpManager.cpp
  • src/device/CrashDumpManager.hpp
  • src/device/Device.cpp
  • src/device/DeviceBase.cpp
  • src/device/DeviceGate.cpp
  • src/device/Platform.cpp
  • src/opencv/HolisticRecordReplay.cpp
  • src/utility/Compression.cpp
  • src/utility/LogCollection.cpp
  • src/utility/LogCollection.hpp
  • src/utility/PipelineImplHelper.cpp
  • src/utility/Platform.cpp
  • src/utility/Platform.hpp
  • tests/CMakeLists.txt
  • tests/src/ondevice_tests/crashdump_test.cpp
  • tests/src/ondevice_tests/dynamic_calibration_test.cpp
  • tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp
  • tests/src/onhost_tests/replay_test.cpp
💤 Files with no reviewable changes (2)
  • examples/cpp/Misc/CrashDump/CMakeLists.txt
  • src/device/Device.cpp

Comment on lines +204 to +206
std::string getCrashDumpVersion() const override {
    return "1.0.0";
}

🧹 Nitpick | 🔵 Trivial

Consider extracting version string to a constant.

The version string "1.0.0" is duplicated in both CrashDumpRVC2 and CrashDumpRVC4. If both platforms share the same crash dump format version, consider defining a constant in the base class.

Optional refactor

Add to base class:

class CrashDump {
   protected:
    static constexpr const char* CRASH_DUMP_FORMAT_VERSION = "1.0.0";
    // ...
};

Then in derived classes:

std::string getCrashDumpVersion() const override {
    return CRASH_DUMP_FORMAT_VERSION;
}

Comment on lines +970 to +977
// Crash dump callback
std::mutex crashdumpCallbackMtx;
std::function<void(std::shared_ptr<CrashDump>)> crashdumpCallback;

// Watchdog thread
std::thread watchdogThread;
std::atomic<bool> watchdogRunning{true};
std::atomic<bool> crashed{false};

🧹 Nitpick | 🔵 Trivial

Minor naming inconsistency.

The naming convention is inconsistent between crashdumpCallback/crashdumpCallbackMtx (lowercase 'd') and crashDumpHandled (uppercase 'D'). Consider unifying the naming for consistency.

Suggested fix for consistency
     // Crash dump callback
-    std::mutex crashdumpCallbackMtx;
-    std::function<void(std::shared_ptr<CrashDump>)> crashdumpCallback;
+    std::mutex crashDumpCallbackMtx;
+    std::function<void(std::shared_ptr<CrashDump>)> crashDumpCallback;

Comment on lines +45 to +46
// Waits for the gate session to end
void waitForSessionEnd();

⚠️ Potential issue | 🟠 Major

Preserve a compatibility path for waitForSessionEnd().

This public signature change is source-breaking for existing consumers. If the new flow is waitForSessionEnd(); getCrashDump();, please keep the old behavior behind a deprecated shim or document the migration in the public API before merging.


Comment on lines +293 to +297
void CrashDumpRVC4::toTar(const fs::path& tarPath) const {
// Create metadata
nlohmann::json metadata;
writeMetadata(metadata);
metadata["originalFilename"] = filename;

⚠️ Potential issue | 🟡 Minor

Reload originalFilename on RVC4 deserialization.

Line 297 writes the filename into metadata, but fromTar() never restores it. CrashDumpRVC4::filename comes back empty after any archive/bytes round-trip.

Proposed fix
     std::string metadataContent;
     readFromFile(metadataPath, metadataContent);
     nlohmann::json metadata = nlohmann::json::parse(metadataContent);
     readMetadata(metadata);
+    filename = fromJson(metadata, "originalFilename", "");

Also applies to: 336-339


Comment on lines +59 to +67
dump->depthaiVersion = build::VERSION;
dump->depthaiVersionMajor = std::to_string(build::VERSION_MAJOR);
dump->depthaiVersionMinor = std::to_string(build::VERSION_MINOR);
dump->depthaiVersionPatch = std::to_string(build::VERSION_PATCH);
dump->depthaiVersionPreReleaseType = build::PRE_RELEASE_TYPE;
dump->depthaiVersionPreReleaseVersion = std::to_string(build::PRE_RELEASE_VERSION);
dump->depthaiVersionBuildInfo = build::BUILD_DATETIME;
dump->depthaiCommitHash = build::COMMIT;
dump->depthaiCommitDatetime = build::COMMIT_DATETIME;

⚠️ Potential issue | 🟡 Minor

Populate depthaiBuildDatetime before serializing the dump.

CrashDump::writeMetadata() persists a separate depthaiBuildDatetime field, but this collector never assigns it, so dumps produced here always leave that public field empty.

Proposed fix
     dump->depthaiVersionBuildInfo = build::BUILD_DATETIME;
+    dump->depthaiBuildDatetime = build::BUILD_DATETIME;
     dump->depthaiCommitHash = build::COMMIT;
     dump->depthaiCommitDatetime = build::COMMIT_DATETIME;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/device/CrashDumpManager.cpp` around lines 59 - 67, The CrashDumpCollector
is never setting the CrashDump field depthaiBuildDatetime before serialization;
update the same initialization block that sets dump->depthaiVersionBuildInfo to
also assign dump->depthaiBuildDatetime (e.g., from build::BUILD_DATETIME or the
appropriate build timestamp constant) so that writeMetadata() persists a
populated depthaiBuildDatetime; locate the initialization using the dump pointer
and the depthaiVersion* assignments to add this single assignment.

throw std::runtime_error("Could not open archive.");
}
assert(filesInTar.size() == filesOnDisk.size());
assert(filesInArchive.size() == filesOnDisk.size());

⚠️ Potential issue | 🟠 Major

Same assert issue - replace with runtime check.

Suggested fix
-    assert(filesInArchive.size() == filesOnDisk.size());
+    if(filesInArchive.size() != filesOnDisk.size()) {
+        throw std::invalid_argument("filesInArchive and filesOnDisk must have the same size");
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utility/Compression.cpp` at line 121, Replace the debug-only assert with
a proper runtime check: in the scope where filesInArchive and filesOnDisk are
compared (the code currently contains assert(filesInArchive.size() ==
filesOnDisk.size());), validate sizes at runtime and handle mismatch
deterministically (throw a suitable exception or return an error code/value and
log a clear message) instead of using assert; ensure the check references
filesInArchive.size() and filesOnDisk.size() and that subsequent code path
assumes sizes are equal only after the runtime validation succeeds.

Comment on lines +172 to +177
auto basePath = std::filesystem::temp_directory_path() / L"depthai_XXXXXX";
auto baseStr = basePath.wstring();
if(_wmktemp_s(baseStr.data(), baseStr.size() + 1) != 0 || !std::filesystem::create_directories(baseStr)) {
throw std::runtime_error("Failed to create a unique temporary directory");
}
return std::filesystem::path(baseStr);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does Microsoft's _wmktemp_s create a file or directory atomically, or does it only generate a unique pathname? What Windows API pattern is recommended for securely creating a unique temporary directory?

💡 Result:

_wmktemp_s (like _mktemp_s) only generates a unique filename string by modifying the template buffer; it does not create a file or directory, so it is not an atomic create and is inherently subject to a TOCTOU race if you later create something at that path. [1]

Recommended secure Windows API pattern for a unique temp directory: get the temp parent directory (for example via GetTempPath2), generate a hard-to-guess name (GUID/random), then attempt to create it with CreateDirectory; if it fails with ERROR_ALREADY_EXISTS, retry with a new name. CreateDirectory itself is the “atomic existence check + create” step (success or ERROR_ALREADY_EXISTS). [2][3]

Sources:
[1] Microsoft Learn: _mktemp_s, _wmktemp_s [1] (learn.microsoft.com)
[2] Microsoft Learn: CreateDirectory (notes ERROR_ALREADY_EXISTS) [2] (learn.microsoft.com)
[3] Microsoft Learn: “Creating and Using a Temporary File” (uses GetTempPath2 as the temp-location API) [3] (learn.microsoft.com)

Create the Windows temp directory atomically.

_wmktemp_s() only generates a unique pathname; it does not create a file or directory. The directory is then created separately by create_directories(), leaving a TOCTOU race window where concurrent processes can collide and fail intermittently. The POSIX mkdtemp() path avoids this with an atomic operation. Use a retry loop calling CreateDirectory with a random or GUID-suffixed path instead, which atomically verifies non-existence and creates the directory in one operation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utility/Platform.cpp` around lines 172 - 177, The current implementation
builds basePath/baseStr and uses _wmktemp_s plus
std::filesystem::create_directories, which has a TOCTOU race; replace this with
a retry loop that atomically creates a uniquely-suffixed directory using the
Windows API CreateDirectory (or CreateDirectoryW) rather than generating a name
then creating it, e.g. generate a random/GUID suffix appended to basePath (where
basePath and baseStr are used) and call CreateDirectoryW in a loop until it
succeeds or a retry limit is reached; on success return the created
std::filesystem::path, and on repeated failure throw the same std::runtime_error
with an explanatory message.

Comment on lines +45 to +58
TEST_CASE("hasCrashed returns true after device crash") {
dai::utility::setEnv("DEPTHAI_CRASHDUMP", "0"); // don't save nor upload crash dump
dai::utility::setEnv("DEPTHAI_CRASH_DEVICE", "1");

dai::Device device;
REQUIRE_FALSE(device.hasCrashed());

device.crashDevice();

// Give the device time to crash and for the watchdog to detect it
std::this_thread::sleep_for(std::chrono::seconds(12));

REQUIRE(device.hasCrashed());
}

🧹 Nitpick | 🔵 Trivial

Consider using a polling loop with shorter sleep intervals instead of a fixed 12-second sleep.

The fixed 12-second sleep may cause unnecessarily long test runs when the crash is detected quickly, or flaky failures if the watchdog timing varies. A polling approach (similar to the Python example) would be more robust.

♻️ Proposed refactor to use polling
     device.crashDevice();
 
-    // Give the device time to crash and for the watchdog to detect it
-    std::this_thread::sleep_for(std::chrono::seconds(12));
-
-    REQUIRE(device.hasCrashed());
+    // Poll for crash detection with timeout
+    bool crashDetected = false;
+    auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(20);
+    while(std::chrono::steady_clock::now() < deadline) {
+        if(device.hasCrashed()) {
+            crashDetected = true;
+            break;
+        }
+        std::this_thread::sleep_for(std::chrono::milliseconds(200));
+    }
+    REQUIRE(crashDetected);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/src/ondevice_tests/crashdump_test.cpp` around lines 45 - 58, Replace
the fixed 12-second sleep in the TEST_CASE with a polling loop that repeatedly
checks device.hasCrashed() after short sleeps (e.g., 100–200ms) up to a maximum
timeout (e.g., 12s); call device.crashDevice() as before, then loop: if
device.hasCrashed() break and REQUIRE true, otherwise sleep a short interval and
retry until timeout and then REQUIRE(device.hasCrashed()) fails—update
references in the test to use dai::Device, device.crashDevice(),
device.hasCrashed(), std::this_thread::sleep_for and std::chrono for timing.

Comment on lines +55 to +61
auto recordingFilenames = dai::utility::filenamesInArchive(RECORDING_PATH);
std::vector<std::filesystem::path> recordingExtFiles;
recordingExtFiles.reserve(recordingFilenames.size());
for(const auto& filename : recordingFilenames) {
recordingExtFiles.push_back(std::filesystem::path(testFolder).append("extracted").append(filename));
}
dai::utility::untarFiles(RECORDING_PATH, recordingFilenames, recordingExtFiles);
dai::utility::extractFiles(RECORDING_PATH, recordingFilenames, recordingExtFiles);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find the Compression implementation files
find . -type f \( -name "Compression.hpp" -o -name "Compression.cpp" \) | head -20

Repository: luxonis/depthai-core

Length of output: 135


🏁 Script executed:

# Find the test file
find . -type f -name "record_replay_test.cpp"

Repository: luxonis/depthai-core

Length of output: 127


🏁 Script executed:

# Search for filenamesInArchive and extractFiles implementations
rg "filenamesInArchive|extractFiles" --type cpp -B 2 -A 10 | head -100

Repository: luxonis/depthai-core

Length of output: 7798


🏁 Script executed:

# Let's examine the implementations more fully
cat -n ./src/utility/Compression.cpp | head -150

Repository: luxonis/depthai-core

Length of output: 6345


🏁 Script executed:

# Let's see the dynamic_calibration_test filtering logic
rg "if.*empty|Treat entries" tests/src/ondevice_tests/dynamic_calibration_test.cpp -B 5 -A 8

Repository: luxonis/depthai-core

Length of output: 1711


🏁 Script executed:

# Let's examine the record_replay_test.cpp file to see the full context
cat -n ./tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp | head -100

Repository: luxonis/depthai-core

Length of output: 4030


🏁 Script executed:

# Check how files are accessed later in record_replay_test
rg "setReplay|extracted|recordingExtFiles" tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp -A 2 -B 2

Repository: luxonis/depthai-core

Length of output: 1483


🏁 Script executed:

# Check RECORDING_PATH definition to understand what archive is being tested
rg "RECORDING_PATH" tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp

Repository: luxonis/depthai-core

Length of output: 238


🏁 Script executed:

# Compare with how dynamic_calibration_test.cpp is structured
head -80 tests/src/ondevice_tests/dynamic_calibration_test.cpp | tail -30

Repository: luxonis/depthai-core

Length of output: 1301


Filter directory entries and create parent paths before extracting into extracted/.

filenamesInArchive() returns all archive entries including directories (as paths ending with /), and extractFiles() does not create parent directories—it opens the exact destination path for writing. Passing the raw list will fail if RECORDING_PATH contains any directory entries or nested file paths. Filter directory entries and call create_directories() on parent paths for files before extraction, as shown in dynamic_calibration_test.cpp.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/src/ondevice_tests/pipeline/node/record_replay_test.cpp` around lines
55 - 61, filenamesInArchive(RECORDING_PATH) can include directory entries (paths
ending with '/'), and extractFiles(...) will fail if parent directories don't
exist; update the loop that builds recordingExtFiles to skip entries that are
directories (e.g., filename.back() == '/') and for each file entry call
std::filesystem::create_directories(recordingPath.parent_path()) (use testFolder
+ "extracted" base) before calling dai::utility::extractFiles(RECORDING_PATH,
recordingFilenames, recordingExtFiles); ensure you only pass file entries in
recordingFilenames and matching created parent directories for recordingExtFiles
when calling extractFiles (see filenamesInArchive, recordingFilenames,
recordingExtFiles, extractFiles, RECORDING_PATH, testFolder).

Comment on lines +29 to +35
auto recordingFilenames = dai::utility::filenamesInArchive(RECORDING_PATH);
std::vector<std::filesystem::path> recordingExtFiles;
recordingExtFiles.reserve(recordingFilenames.size());
for(const auto& filename : recordingFilenames) {
recordingExtFiles.push_back(std::filesystem::path(testFolder).append("extracted").append(filename));
}
dai::utility::untarFiles(RECORDING_PATH, recordingFilenames, recordingExtFiles);
dai::utility::extractFiles(RECORDING_PATH, recordingFilenames, recordingExtFiles);

⚠️ Potential issue | 🟡 Minor

Filter directory entries before calling extractFiles().

Lines 29-35 now forward the raw filenamesInArchive() output to extractFiles(), but src/utility/Compression.cpp opens every requested destination as a file and never creates parent directories. Any directory header or nested path in RECORDING_PATH will make this helper fail. tests/src/ondevice_tests/dynamic_calibration_test.cpp lines 58-84 already contain the required filtering and create_directories() logic.

💡 Suggested hardening
-        auto recordingFilenames = dai::utility::filenamesInArchive(RECORDING_PATH);
-        std::vector<std::filesystem::path> recordingExtFiles;
-        recordingExtFiles.reserve(recordingFilenames.size());
-        for(const auto& filename : recordingFilenames) {
-            recordingExtFiles.push_back(std::filesystem::path(testFolder).append("extracted").append(filename));
-        }
-        dai::utility::extractFiles(RECORDING_PATH, recordingFilenames, recordingExtFiles);
+        auto recordingFilenames = dai::utility::filenamesInArchive(RECORDING_PATH);
+        std::vector<std::string> srcFiles;
+        std::vector<std::filesystem::path> recordingExtFiles;
+        srcFiles.reserve(recordingFilenames.size());
+        recordingExtFiles.reserve(recordingFilenames.size());
+        for(const auto& filename : recordingFilenames) {
+            if(filename.empty()) continue;
+
+            auto outPath = std::filesystem::path(testFolder).append("extracted").append(filename);
+            const bool isDirEntry = filename.back() == '/' || std::filesystem::path(filename).filename().string().empty();
+            if(isDirEntry) {
+                std::filesystem::create_directories(outPath);
+                continue;
+            }
+
+            std::filesystem::create_directories(outPath.parent_path());
+            srcFiles.push_back(filename);
+            recordingExtFiles.push_back(outPath);
+        }
+        dai::utility::extractFiles(RECORDING_PATH, srcFiles, recordingExtFiles);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/src/onhost_tests/replay_test.cpp` around lines 29 - 35, The code passes
raw recordingFilenames from dai::utility::filenamesInArchive(RECORDING_PATH)
into extractFiles which fails if the archive contains directory entries or
nested paths because extractFiles expects destination parent directories to
exist; update the loop that builds recordingExtFiles to skip directory entries
(e.g., filter out entries that are directory headers or that end with a slash)
and ensure you call std::filesystem::create_directories(...) for each
destination's parent_path before calling dai::utility::extractFiles; reference
filenamesInArchive, recordingFilenames, recordingExtFiles, extractFiles,
RECORDING_PATH and testFolder when making the change.
