Conversation
Signed-off-by: stas.bucik <stas.bucik@luxonis.com>
I found an issue when recording on RVC2 and replaying on RVC4. The setup:
I did some digging and compared old recordings with new ones. The difference is that a new recording now stores "camera_info.json", which includes "camera_features", as opposed to just cameraData in the old calibration.json. If I replace the camera_info.json with an old calibration.json, everything works. If you need, I can send the recorded tar file in DMs. Thanks. Edit: Works the other way around (record on RVC4, replay on RVC2)
📝 Walkthrough
Adds DeviceProperties and device-level property APIs, moves device-scoped fields out of GlobalProperties, introduces camera-sync and mock-features support for holistic recording/replay, exposes new Python bindings for device properties, and updates tests and utilities (tar reading, replay timing, MessageDemux host-run behavior).
Changes
Sequence Diagram(s)
sequenceDiagram
participant PythonClient as Client (Python)
participant Pipeline as Pipeline
participant Device as DeviceBase
participant Sync as Sync/MessageDemux
participant Storage as FileSystem/Replay
rect rgba(200,200,255,0.5)
PythonClient->>Pipeline: setDefaultDeviceProperties(DeviceProperties)
Pipeline-->>Device: store or forward properties
end
rect rgba(200,255,200,0.5)
Pipeline->>Device: start pipeline (enableHolisticRecord with syncCameraOutputs)
Device->>Sync: link camera outputs to Sync/MessageDemux
Sync->>Pipeline: emit synchronized MessageGroup
Pipeline->>Storage: write camera_info.json + recordings
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks (✅ 2 | ❌ 1)
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 14
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindings/python/src/pipeline/PipelineBindings.cpp`:
- Around line 201-202: The Pipeline wrapper currently holds the Python GIL when
calling Pipeline::setDefaultDeviceProperties and
Pipeline::getDefaultDeviceProperties, which can block during RPC to the implicit
default device; update the pybind11 bindings for these methods to release the
GIL by adding a py::call_guard<py::gil_scoped_release>() (or equivalent
py::gil_scoped_release call guard) to each .def for setDefaultDeviceProperties
and getDefaultDeviceProperties so the underlying DeviceBase RPC
(defaultDevice->setProperties/getProperties) runs without holding the GIL.
In `@examples/cpp/RecordReplay/holistic_record.cpp`:
- Around line 53-59: The existing comment "Discard frames" is ambiguous; update
the comment above the Sync node creation (pipeline.create<dai::node::Sync>() and
sync->setSyncAttempts(0)) to explicitly state that setting syncAttempts to 0
causes the Sync node to drop any frames that cannot be synchronized across camA,
camB, camC and imu, preventing queue buildup during recording; reference the
linked outputs (camAOut->link(sync->inputs["camA"]), camBOut->link(...),
camCOut->link(...), imu->out.link(...)) so readers understand this configuration
intentionally favors real-time recording over waiting for exact synchronization.
- Around line 65-72: The loop that calls viewFinderQueue->get<dai::ImgFrame>()
can receive a null pointer even if pipeline.isRunning() is true; update the loop
around the call in holistic_record.cpp to check that the returned frame is
non-null before calling frame->getCvFrame() (and handle the null case by
continuing or breaking the loop and/or logging) — specifically modify the block
using viewFinderQueue->get<dai::ImgFrame>() together with isRunning and
pipeline.isRunning() to defensively guard against a null frame pointer.
- Around line 14-18: isRunning is modified in signalHandler and read in the main
loop without synchronization; change its type to a signal-safe flag (either
std::atomic<bool> isRunning or volatile sig_atomic_t isRunning) and update both
signalHandler and the main loop to use that type; include <atomic> if using
std::atomic and ensure reads/writes are simple atomic operations (or include
<csignal> and use volatile sig_atomic_t for C-compatible behavior) so the data
race is eliminated.
In `@include/depthai/properties/GlobalProperties.hpp`:
- Around line 91-105: DeviceProperties::setFrom currently merges maps and only
copies optionals when set, leaving stale state; change it to replace state
wholesale by assigning optionals and maps directly: assign calibData and
eepromId from other (do not guard with if), assign cameraTuningBlobSize and
cameraTuningBlobUri directly from other, and replace cameraSocketTuningBlobSize
and cameraSocketTuningBlobUri by direct assignment (or clear then copy) instead
of iterating/merging so callers can clear values by passing empty
optionals/maps.
In `@src/device/DeviceBase.cpp`:
- Line 924: DeviceBase is currently calling
mockCameraFeatures(utility::getEnvAs<std::string>("DEPTHAI_REPLAY", ""))
unconditionally during initialization, which injects camera_info.json into the
live device even when callers haven't validated compatibility; move this
behavior behind the explicit replay path: only call mockCameraFeatures when the
replay flow is active (e.g., when DEPTHAI_REPLAY is set AND the device is being
initialized for replay) or validate the recording first and refuse to mutate
device state for incompatible recordings. Update DeviceBase initialization to
check the explicit replay condition before invoking mockCameraFeatures, and
ensure startPipeline() still receives a clean device state when recordings are
incompatible.
- Around line 1854-1858: The mockCameraFeatures call is skipped on reconnect
because hasMockedFeatures remains true; either reset that flag when a new
session/connection is established (e.g., clear hasMockedFeatures at the start of
DeviceBase::init2 or in the reconnect path) or remove the "!hasMockedFeatures"
guard inside DeviceBase::mockCameraFeatures so it always calls
utility::mockCameraFeatures(*this, replayPath) when replayPath is provided;
update the code around DeviceBase::mockCameraFeatures and/or DeviceBase::init2
to ensure mocked features are reapplied per connection by clearing or avoiding
the persistent flag.
In `@src/opencv/HolisticRecordReplay.cpp`:
- Around line 110-140: When syncCameraOutputs is true but no camera nodes are
found in sources (maxRequestedFps remains 0.0f), add a debug/warning log to make
this explicit; locate the loop that inspects sources and the subsequent block
that creates sync and demux (symbols: syncCameraOutputs, sources,
maxRequestedFps, sync, demux, pipeline.create<dai::node::Sync>()), and after
computing maxRequestedFps (before creating sync/demux) emit a logger call (use
the project's existing logging facility) indicating "syncCameraOutputs enabled
but no camera sources found, sync threshold not set" so operators can see the
edge case. Ensure the log is at debug/warn level and does not change existing
control flow.
In `@src/pipeline/node/host/Replay.cpp`:
- Around line 419-432: In ReplayMetadataOnly::run(), you must initialize
first-frame state before using firstTs/lastTs in timestamp math; move the
first-frame block (setting prevMsgTs = buffer->getTimestampDevice(), firstTs =
buffer->getTimestamp(), tsOffset = firstTs, and initialize lastTs appropriately)
to execute when first is true before computing deviceTsOffset, setTimestamp,
setTimestampDevice, lastInterval, and lastTs (or set lastTs = firstTs right
after initialization). Ensure the code uses the same ordering/fix applied in
ReplayVideo::run(): perform the first-frame initialization when first is true,
then proceed with sequence/timestamp adjustments and clear or flip the first
flag.
- Around line 323-336: The timestamp update computes lastInterval using lastTs
before the first-frame initialization, so on the first loop lastTs is epoch and
lastInterval becomes huge; move the first-frame initialization block (the code
setting prevMsgTs, firstTs, tsOffset when first is true) to execute before
updating timestamps and computing lastInterval (i.e., perform the first-check
and set prevMsgTs, firstTs, tsOffset from buffer first), then update buffer
timestamps, set lastInterval =
duration_cast<milliseconds>(buffer->getTimestamp() - lastTs) and update lastTs
and lastSeqNum thereafter; adjust references to buffer, lastTs, lastInterval,
first, prevMsgTs, firstTs, and tsOffset accordingly so single-frame looping uses
correct offsets.
In `@src/pipeline/node/MessageDemux.cpp`:
- Around line 26-29: The log for the null-check after
input.get<dai::MessageGroup>() is misleading; update the handling around the
variable `message` (result of input.get<dai::MessageGroup>()) to log a clearer
message indicating a null return typically means a timeout or pipeline shutdown
rather than a type mismatch, e.g., change the call site using
`logger->error(...)` to something like a warning/error that says "received null
MessageGroup (possible timeout or pipeline shutdown) - skipping" and optionally
include contextual info (stream id or timestamp) to aid debugging.
In `@src/pipeline/Pipeline.cpp`:
- Around line 565-567: Acquire calibMtx before reading calibData to avoid the
race: move the std::lock_guard<std::mutex> lock(calibMtx) above the check of
defaultDeviceProperties->calibData.has_value(), then check has_value() and, if
true, construct and return the CalibrationHandler using
defaultDeviceProperties->calibData.value(); ensure all reads of calibData occur
while calibrated under calibMtx to match writers that lock calibMtx.
- Around line 1189-1191: The unconditional call to
getDefaultDevice()->mockCameraFeatures(pathToRecording) can apply incompatible
recorded sensor settings across devices; change the call site to first acquire
the device via getDefaultDevice(), verify compatibility with the recording
(e.g., device model/firmware/sensor IDs or a metadata check in the recording),
and only call mockCameraFeatures(pathToRecording) when compatible; otherwise
skip mocking and log a warning or apply a safe fallback (per-feature mapping or
no-op) so replaying an RVC2 recording on an RVC4 device does not force
incompatible sensor settings.
In `@src/utility/PipelineImplHelper.cpp`:
- Around line 116-121: The replay flow must fall back to the old
calibration.json when camera_info.json is absent: in setupHolisticReplay()
update the logic that loads camera info (which currently only tries
"camera_info.json") to attempt loading "calibration.json" if "camera_info.json"
is not found and convert/accept the old format; also ensure
recordReplayFilenames and the filenames/outFiles construction (the vectors
initialized with "camera_info" / "camera_info.json") tolerate or map the legacy
key "calibration" so the pipeline will pass the legacy path through to
setupHolisticReplay() when present. Reference symbols: setupHolisticReplay(),
recordReplayFilenames, "camera_info.json", "calibration.json".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d9ea8f3f-3b7a-40f2-921b-e561ae4695fb
📒 Files selected for processing (27)
bindings/python/src/DeviceBindings.cpp
bindings/python/src/pipeline/PipelineBindings.cpp
cmake/Depthai/DepthaiDeviceRVC4Config.cmake
cmake/Depthai/DepthaiDeviceSideConfig.cmake
cmake/Depthai/DepthaiDynamicCalibrationConfig.cmake
cmake/depthaiOptions.cmake
examples/cpp/RecordReplay/holistic_record.cpp
examples/python/RecordReplay/holistic_record.py
include/depthai/device/DeviceBase.hpp
include/depthai/pipeline/Pipeline.hpp
include/depthai/pipeline/node/MessageDemux.hpp
include/depthai/properties/GlobalProperties.hpp
include/depthai/utility/Compression.hpp
include/depthai/utility/RecordReplay.hpp
src/device/DeviceBase.cpp
src/opencv/HolisticRecordReplay.cpp
src/opencv/RecordReplay.cpp
src/pipeline/Pipeline.cpp
src/pipeline/node/AutoCalibration.cpp
src/pipeline/node/MessageDemux.cpp
src/pipeline/node/host/Replay.cpp
src/properties/Properties.cpp
src/utility/Compression.cpp
src/utility/HolisticRecordReplay.hpp
src/utility/PipelineImplHelper.cpp
tests/CMakeLists.txt
tests/src/ondevice_tests/device_properties_test.cpp
.def("setDefaultDeviceProperties", &Pipeline::setDefaultDeviceProperties, py::arg("deviceProperties"), DOC(dai, Pipeline, setDefaultDeviceProperties))
.def("getDefaultDeviceProperties", &Pipeline::getDefaultDeviceProperties, DOC(dai, Pipeline, getDefaultDeviceProperties))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify the two new Pipeline bindings are currently direct-bound without GIL release
rg -n -C2 'setDefaultDeviceProperties|getDefaultDeviceProperties' bindings/python/src/pipeline/PipelineBindings.cpp
# Compare with RPC-heavy bindings that explicitly release GIL
rg -n -C2 'setProperties|getProperties|setSippBufferSize|setSippDmaBufferSize' bindings/python/src/DeviceBindings.cpp

Repository: luxonis/depthai-core
Length of output: 2060
🏁 Script executed:
# Find Pipeline class implementation to understand if these methods perform I/O/RPC
fd -t f 'Pipeline\.cpp|Pipeline\.hpp' --exec grep -l "setDefaultDeviceProperties\|getDefaultDeviceProperties" {} \;
# Get implementation details
rg -n "setDefaultDeviceProperties|getDefaultDeviceProperties" -A5 -B2 --type=cpp --type=hpp

Repository: luxonis/depthai-core
Length of output: 176
🏁 Script executed:
# Check implementation of the methods in Pipeline.cpp
rg -n "setDefaultDeviceProperties|getDefaultDeviceProperties" ./src/pipeline/Pipeline.cpp -A10 -B2
# Also check header for method signatures
rg -n "setDefaultDeviceProperties|getDefaultDeviceProperties" ./include/depthai/pipeline/Pipeline.hpp -A5 -B2

Repository: luxonis/depthai-core
Length of output: 2641
Release GIL for default-device property bindings.
When an implicit device exists, setDefaultDeviceProperties and getDefaultDeviceProperties perform RPC via defaultDevice->setProperties() and defaultDevice->getProperties() (see PipelineImpl implementation). These underlying DeviceBase methods are already GIL-released in DeviceBindings.cpp, but the Pipeline wrapper methods currently keep the GIL held, creating an inconsistency that can block Python threads during RPC calls.
🔧 Suggested fix
- .def("setDefaultDeviceProperties", &Pipeline::setDefaultDeviceProperties, py::arg("deviceProperties"), DOC(dai, Pipeline, setDefaultDeviceProperties))
- .def("getDefaultDeviceProperties", &Pipeline::getDefaultDeviceProperties, DOC(dai, Pipeline, getDefaultDeviceProperties))
+ .def(
+ "setDefaultDeviceProperties",
+ [](Pipeline& p, const DeviceProperties& deviceProperties) {
+ py::gil_scoped_release release;
+ p.setDefaultDeviceProperties(deviceProperties);
+ },
+ py::arg("deviceProperties"),
+ DOC(dai, Pipeline, setDefaultDeviceProperties))
+ .def(
+ "getDefaultDeviceProperties",
+ [](Pipeline& p) {
+ py::gil_scoped_release release;
+ return p.getDefaultDeviceProperties();
+ },
+         DOC(dai, Pipeline, getDefaultDeviceProperties))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindings/python/src/pipeline/PipelineBindings.cpp` around lines 201 - 202,
The Pipeline wrapper currently holds the Python GIL when calling
Pipeline::setDefaultDeviceProperties and Pipeline::getDefaultDeviceProperties,
which can block during RPC to the implicit default device; update the pybind11
bindings for these methods to release the GIL by adding a
py::call_guard<py::gil_scoped_release>() (or equivalent py::gil_scoped_release
call guard) to each .def for setDefaultDeviceProperties and
getDefaultDeviceProperties so the underlying DeviceBase RPC
(defaultDevice->setProperties/getProperties) runs without holding the GIL.
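The py::call_guard form mentioned in the prompt is more compact than the lambda diff above. A sketch of that alternative (binding fragment only, shown under the assumption that the methods take and return values by copy so no Python objects are touched without the GIL):

```cpp
.def("setDefaultDeviceProperties",
     &Pipeline::setDefaultDeviceProperties,
     py::arg("deviceProperties"),
     py::call_guard<py::gil_scoped_release>(),  // release GIL around the RPC
     DOC(dai, Pipeline, setDefaultDeviceProperties))
.def("getDefaultDeviceProperties",
     &Pipeline::getDefaultDeviceProperties,
     py::call_guard<py::gil_scoped_release>(),
     DOC(dai, Pipeline, getDefaultDeviceProperties))
```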
// Signal handling for clean shutdown
static bool isRunning = true;
void signalHandler(int signum) {
    isRunning = false;
}
Data race: isRunning should be atomic or volatile sig_atomic_t.
The isRunning flag is written by the signal handler (line 17) and read by the main loop (line 65) without synchronization. This is technically undefined behavior.
🔧 Proposed fix
-// Signal handling for clean shutdown
-static bool isRunning = true;
+#include <atomic>
+
+// Signal handling for clean shutdown
+static std::atomic<bool> isRunning{true};
void signalHandler(int signum) {
+ (void)signum; // Suppress unused parameter warning
isRunning = false;
}Or using the C-compatible approach:
-static bool isRunning = true;
+static volatile sig_atomic_t isRunning = 1;

🧰 Tools
🪛 Cppcheck (2.20.0)
[information] 14-18: Include file (missingIncludeSystem)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/cpp/RecordReplay/holistic_record.cpp` around lines 14 - 18,
isRunning is modified in signalHandler and read in the main loop without
synchronization; change its type to a signal-safe flag (either std::atomic<bool>
isRunning or volatile sig_atomic_t isRunning) and update both signalHandler and
the main loop to use that type; include <atomic> if using std::atomic and ensure
reads/writes are simple atomic operations (or include <csignal> and use volatile
sig_atomic_t for C-compatible behavior) so the data race is eliminated.
// Discard frames
auto sync = pipeline.create<dai::node::Sync>();
sync->setSyncAttempts(0);
camAOut->link(sync->inputs["camA"]);
camBOut->link(sync->inputs["camB"]);
camCOut->link(sync->inputs["camC"]);
imu->out.link(sync->inputs["imu"]);
🧹 Nitpick | 🔵 Trivial
Clarify the purpose of the Sync node with setSyncAttempts(0).
The comment says "Discard frames" but the Sync node with setSyncAttempts(0) may not clearly convey that intent. Consider adding a more detailed comment explaining that this configuration effectively drops unsynchronized frames to prevent queue buildup during recording.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/cpp/RecordReplay/holistic_record.cpp` around lines 53 - 59, The
existing comment "Discard frames" is ambiguous; update the comment above the
Sync node creation (pipeline.create<dai::node::Sync>() and
sync->setSyncAttempts(0)) to explicitly state that setting syncAttempts to 0
causes the Sync node to drop any frames that cannot be synchronized across camA,
camB, camC and imu, preventing queue buildup during recording; reference the
linked outputs (camAOut->link(sync->inputs["camA"]), camBOut->link(...),
camCOut->link(...), imu->out.link(...)) so readers understand this configuration
intentionally favors real-time recording over waiting for exact synchronization.
while(isRunning && pipeline.isRunning()) {
    auto frame = viewFinderQueue->get<dai::ImgFrame>();
    cv::imshow("Video", frame->getCvFrame());
    auto key = cv::waitKey(10);
    if(key == 'q') {
        break;
    }
} catch(...) {
}
🧹 Nitpick | 🔵 Trivial
Consider handling null frame from queue.
If the pipeline stops or times out, viewFinderQueue->get<dai::ImgFrame>() could return null. While pipeline.isRunning() check provides some protection, a null check would be more defensive.
💡 Suggested improvement
while(isRunning && pipeline.isRunning()) {
auto frame = viewFinderQueue->get<dai::ImgFrame>();
+ if(!frame) continue;
  cv::imshow("Video", frame->getCvFrame());

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
while(isRunning && pipeline.isRunning()) {
    auto frame = viewFinderQueue->get<dai::ImgFrame>();
    if(!frame) continue;
    cv::imshow("Video", frame->getCvFrame());
    auto key = cv::waitKey(10);
    if(key == 'q') {
        break;
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/cpp/RecordReplay/holistic_record.cpp` around lines 65 - 72, The loop
that calls viewFinderQueue->get<dai::ImgFrame>() can receive a null pointer even
if pipeline.isRunning() is true; update the loop around the call in
holistic_record.cpp to check that the returned frame is non-null before calling
frame->getCvFrame() (and handle the null case by continuing or breaking the loop
and/or logging) — specifically modify the block using
viewFinderQueue->get<dai::ImgFrame>() together with isRunning and
pipeline.isRunning() to defensively guard against a null frame pointer.
DeviceProperties& setFrom(const DeviceProperties& other) {
    leonCssFrequencyHz = other.leonCssFrequencyHz;
    leonMssFrequencyHz = other.leonMssFrequencyHz;
    pipelineName = other.pipelineName;
    pipelineVersion = other.pipelineVersion;
    if(other.calibData) {
        calibData = other.calibData;
        eepromId = other.eepromId;
    }
    if(other.cameraTuningBlobSize) cameraTuningBlobSize = other.cameraTuningBlobSize;
    cameraTuningBlobUri = other.cameraTuningBlobUri;
    cameraSocketTuningBlobSize = other.cameraSocketTuningBlobSize;
    cameraSocketTuningBlobUri = other.cameraSocketTuningBlobUri;
    for(const auto& [k, v] : other.cameraSocketTuningBlobSize) {
        cameraSocketTuningBlobSize[k] = v;
    }
    for(const auto& [k, v] : other.cameraSocketTuningBlobUri) {
        cameraSocketTuningBlobUri[k] = v;
    }
setFrom leaves stale optional/map state instead of replacing it.
Current logic merges map entries and only copies some optionals when set, so callers cannot clear previously configured tuning/calibration values.
🔧 Suggested fix
DeviceProperties& setFrom(const DeviceProperties& other) {
leonCssFrequencyHz = other.leonCssFrequencyHz;
leonMssFrequencyHz = other.leonMssFrequencyHz;
- if(other.calibData) {
- calibData = other.calibData;
- eepromId = other.eepromId;
- }
- if(other.cameraTuningBlobSize) cameraTuningBlobSize = other.cameraTuningBlobSize;
+ calibData = other.calibData;
+ eepromId = other.eepromId;
+ cameraTuningBlobSize = other.cameraTuningBlobSize;
cameraTuningBlobUri = other.cameraTuningBlobUri;
- for(const auto& [k, v] : other.cameraSocketTuningBlobSize) {
- cameraSocketTuningBlobSize[k] = v;
- }
- for(const auto& [k, v] : other.cameraSocketTuningBlobUri) {
- cameraSocketTuningBlobUri[k] = v;
- }
+ cameraSocketTuningBlobSize = other.cameraSocketTuningBlobSize;
+ cameraSocketTuningBlobUri = other.cameraSocketTuningBlobUri;
xlinkChunkSize = other.xlinkChunkSize;
sippBufferSize = other.sippBufferSize;
sippDmaBufferSize = other.sippDmaBufferSize;
return *this;
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@include/depthai/properties/GlobalProperties.hpp` around lines 91 - 105,
DeviceProperties::setFrom currently merges maps and only copies optionals when
set, leaving stale state; change it to replace state wholesale by assigning
optionals and maps directly: assign calibData and eepromId from other (do not
guard with if), assign cameraTuningBlobSize and cameraTuningBlobUri directly
from other, and replace cameraSocketTuningBlobSize and cameraSocketTuningBlobUri
by direct assignment (or clear then copy) instead of iterating/merging so
callers can clear values by passing empty optionals/maps.
if(!message) {
    logger->error("Received message is not a message group - skipping");
    continue;
}
🧹 Nitpick | 🔵 Trivial
Consider clarifying the error message for null message handling.
When input.get<dai::MessageGroup>() returns null, it typically indicates a timeout or pipeline shutdown rather than a type mismatch (since the input is already typed to accept MessageGroup). The current error message "not a message group" may be misleading.
💡 Suggested improvement
if(!message) {
- logger->error("Received message is not a message group - skipping");
+ logger->trace("No message received (timeout or shutdown) - skipping");
     continue;
 }

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if(!message) {
    logger->trace("No message received (timeout or shutdown) - skipping");
    continue;
}
🧰 Tools
🪛 Cppcheck (2.20.0)
[information] 26-26: Include file (missingIncludeSystem)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pipeline/node/MessageDemux.cpp` around lines 26 - 29, The log for the
null-check after input.get<dai::MessageGroup>() is misleading; update the
handling around the variable `message` (result of
input.get<dai::MessageGroup>()) to log a clearer message indicating a null
return typically means a timeout or pipeline shutdown rather than a type
mismatch, e.g., change the call site using `logger->error(...)` to something
like a warning/error that says "received null MessageGroup (possible timeout or
pipeline shutdown) - skipping" and optionally include contextual info (stream id
or timestamp) to aid debugging.
if(defaultDeviceProperties != nullptr && defaultDeviceProperties->calibData.has_value()) {
    std::lock_guard<std::mutex> lock(calibMtx);
    return CalibrationHandler(defaultDeviceProperties->calibData.value());
Lock before checking calibData to avoid a read/write race.
defaultDeviceProperties->calibData.has_value() is read before acquiring calibMtx, while writers update it under the mutex. This introduces an unsynchronized access path.
🔧 Suggested fix
- if(defaultDeviceProperties != nullptr && defaultDeviceProperties->calibData.has_value()) {
- std::lock_guard<std::mutex> lock(calibMtx);
- return CalibrationHandler(defaultDeviceProperties->calibData.value());
- }
+ if(defaultDeviceProperties != nullptr) {
+ std::lock_guard<std::mutex> lock(calibMtx);
+ if(defaultDeviceProperties->calibData.has_value()) {
+ return CalibrationHandler(defaultDeviceProperties->calibData.value());
+ }
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if(defaultDeviceProperties != nullptr) {
    std::lock_guard<std::mutex> lock(calibMtx);
    if(defaultDeviceProperties->calibData.has_value()) {
        return CalibrationHandler(defaultDeviceProperties->calibData.value());
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pipeline/Pipeline.cpp` around lines 565 - 567, Acquire calibMtx before
reading calibData to avoid the race: move the std::lock_guard<std::mutex>
lock(calibMtx) above the check of
defaultDeviceProperties->calibData.has_value(), then check has_value() and, if
true, construct and return the CalibrationHandler using
defaultDeviceProperties->calibData.value(); ensure all reads of calibData occur
while holding calibMtx to match writers that lock calibMtx.
if(getDefaultDevice() != nullptr) {
    getDefaultDevice()->mockCameraFeatures(pathToRecording);
}
Unconditional replay feature mocking breaks cross-device compatibility.
This call applies recorded camera features to the live device without compatibility fallback. It matches the reported failure path (RVC2 recording replayed on RVC4 leading to sensor mismatch at start).
🔧 Suggested mitigation in this call site
- if(getDefaultDevice() != nullptr) {
- getDefaultDevice()->mockCameraFeatures(pathToRecording);
- }
+ if(auto device = getDefaultDevice(); device != nullptr) {
+ try {
+ device->mockCameraFeatures(pathToRecording);
+ } catch(const std::exception& ex) {
+ Logging::getInstance().logger.warn(
+ "Failed to apply replay camera features from '{}': {}. Falling back to device-reported features.",
+ pathToRecording,
+ ex.what());
+ }
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if(auto device = getDefaultDevice(); device != nullptr) {
    try {
        device->mockCameraFeatures(pathToRecording);
    } catch(const std::exception& ex) {
        Logging::getInstance().logger.warn(
            "Failed to apply replay camera features from '{}': {}. Falling back to device-reported features.",
            pathToRecording,
            ex.what());
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/pipeline/Pipeline.cpp` around lines 1189 - 1191, The unconditional call
to getDefaultDevice()->mockCameraFeatures(pathToRecording) can apply
incompatible recorded sensor settings across devices; change the call site to
first acquire the device via getDefaultDevice(), verify compatibility with the
recording (e.g., device model/firmware/sensor IDs or a metadata check in the
recording), and only call mockCameraFeatures(pathToRecording) when compatible;
otherwise skip mocking and log a warning or apply a safe fallback (per-feature
mapping or no-op) so replaying an RVC2 recording on an RVC4 device does not
force incompatible sensor settings.
```diff
  std::vector<std::filesystem::path> filenames = {pipeline->recordReplayFilenames["record_config"], pipeline->recordReplayFilenames["camera_info"]};
  std::vector<std::string> outFiles = {"record_config.json", "camera_info.json"};
  filenames.reserve(pipeline->recordReplayFilenames.size() * 2 + 1);
  outFiles.reserve(pipeline->recordReplayFilenames.size() * 2 + 1);
  for(auto& rstr : pipeline->recordReplayFilenames) {
-     if(rstr.first != "record_config" && rstr.first != "calibration") {
+     if(rstr.first != "record_config" && rstr.first != "camera_info") {
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Description: Check if the replay setup handles both old (calibration.json) and new (camera_info.json) formats

# Search for calibration.json handling in replay-related code
echo "=== Checking for calibration.json backward compatibility in replay code ==="
rg -n -C3 'calibration' --type cpp src/opencv/HolisticRecordReplay.cpp src/utility/HolisticRecordReplay.cpp 2>/dev/null || echo "No matches in HolisticRecordReplay files"

# Check how camera_info is being loaded/used
echo ""
echo "=== Checking camera_info handling ==="
rg -n -C3 'camera_info' --type cpp src/opencv/ src/utility/

# Check mockCameraFeatures implementation for format handling
echo ""
echo "=== Checking mockCameraFeatures implementation ==="
ast-grep --pattern 'mockCameraFeatures($_, $_) {
  $$$
}'
```

Repository: luxonis/depthai-core

Length of output: 8322
Add backward compatibility to replay logic for old calibration.json format.
The recording flow now writes `camera_info.json` instead of the previous `calibration.json` format. However, the replay logic in `setupHolisticReplay()` (src/opencv/HolisticRecordReplay.cpp, lines 268–303) only attempts to load from `camera_info.json`. When this file is not found, the function returns false without attempting to fall back to the old `calibration.json` format.
This breaks replay of older recordings. Add a fallback mechanism to check for and load from `calibration.json` if `camera_info.json` is not present, ensuring users can replay recordings made before this format change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utility/PipelineImplHelper.cpp` around lines 116 - 121, The replay flow
must fall back to the old calibration.json when camera_info.json is absent: in
setupHolisticReplay() update the logic that loads camera info (which currently
only tries "camera_info.json") to attempt loading "calibration.json" if
"camera_info.json" is not found and convert/accept the old format; also ensure
recordReplayFilenames and the filenames/outFiles construction (the vectors
initialized with "camera_info" / "camera_info.json") tolerate or map the legacy
key "calibration" so the pipeline will pass the legacy path through to
setupHolisticReplay() when present. Reference symbols: setupHolisticReplay(),
recordReplayFilenames, "camera_info.json", "calibration.json".
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/pipeline/node/host/Replay.cpp (1)
256-262: ⚠️ Potential issue | 🟡 Minor

Inconsistent offset handling when metadata ends before video.

When metadata reaches EOF first (lines 256-262), both players restart but `seqNumOffset` and `tsOffset` are not updated. When video reaches EOF first (lines 279-280), the offsets are updated before the restart. If the metadata file contains fewer messages than video frames, this asymmetry causes metadata to replay from the beginning with stale offsets, resulting in duplicate sequence numbers and incorrect timestamps until the video also loops.

Consider updating offsets consistently when either source triggers a loop restart, or document the expected invariant that both sources must have matching frame counts.
🔧 Suggested fix for consistent offset handling

```diff
  if(loop) {
+     seqNumOffset = lastSeqNum + 1;
+     tsOffset = lastTs + lastInterval;
      bytePlayer.restart();
      if(hasVideo) {
          videoPlayer.restart();
      }
      continue;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/pipeline/node/host/Replay.cpp` around lines 256 - 262, When handling EOF in the metadata branch where bytePlayer restarts (the block checking if(loop) that calls bytePlayer.restart() and videoPlayer.restart()), you must update seqNumOffset and tsOffset the same way as done when video triggers an EOF restart so offsets remain consistent; locate the metadata EOF branch that uses bytePlayer and hasVideo and apply the same offset adjustment logic used in the video EOF branch (adjust seqNumOffset and tsOffset based on the last seen sequence/timestamp or loop-count deltas) before calling restart, or centralize the restart+offset update into a helper used by both branches to ensure identical behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: a342ea6b-9eda-4e64-ae37-bc01ecc5a991
📒 Files selected for processing (1)
src/pipeline/node/host/Replay.cpp
📜 Review details
🔇 Additional comments (3)
src/pipeline/node/host/Replay.cpp (3)
235-241: LGTM! The looping state variables are appropriately declared with correct types for tracking sequence numbers and timestamps across loop iterations.

323-340: LGTM! The first-frame initialization now correctly executes before timestamp calculations, addressing the previously flagged issue. The logic properly handles:

- First frame: initializes `firstTs`, `lastTs`, and `tsOffset` before timestamp math
- Subsequent frames: calculates `lastInterval` from the difference
- Loop restarts: uses the updated `tsOffset` with correct accumulated timing

388-440: LGTM! The `ReplayMetadataOnly::run()` implementation correctly mirrors the looping logic from `ReplayVideo::run()`:

- Proper first-frame initialization before timestamp calculations
- Correct offset updates on loop restart
- Consistent sequence number and timestamp handling across iterations
The previously flagged initialization order issue has been addressed.
Purpose
Specification
None / not applicable
Dependencies & Potential Impact
None / not applicable
Deployment Plan
None / not applicable
Testing & Validation
None / not applicable
Summary by CodeRabbit
New Features
Tests