feat(yscope-clp-core): Introduce ClpArchiveWriter for compression functionalities; set up basic integration tests system for the python distribution.
#1934
base: main
Conversation
Walkthrough

Adds a public open_archive API and a ClpArchiveWriter implementation with supporting types, exceptions, and utilities; updates packaging deps, CI/taskfiles, and adds integration tests for archive creation and decompression.
Sequence Diagram(s)

sequenceDiagram
participant Client
participant API as open_archive()
participant Writer as ClpArchiveWriter
participant FS as FileSystem
participant CLP as clp-s
rect rgba(200,200,255,0.5)
Client->>API: open_archive(file, mode="w", **kwargs)
API->>API: validate mode == "w"
API->>Writer: instantiate(target=file, **kwargs)
API-->>Client: return Writer
end
rect rgba(200,255,200,0.5)
Client->>Writer: add(source)
alt source is stream
Writer->>FS: _write_stream_to_temp_file(stream) -> temp file
FS-->>Writer: temp file path
else source is path
Writer-->>Writer: register path
end
end
rect rgba(255,200,200,0.5)
Client->>Writer: compress()
Writer->>CLP: run clp-s with collected sources -> produce archive
CLP->>FS: write archive output
CLP-->>Writer: exit status
Writer->>FS: cleanup temp files
Writer-->>Client: compress complete
end
Client->>Writer: close()
Writer->>Writer: ensure compressed & cleanup
Writer-->>Client: closed
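The lifecycle in the diagram can be illustrated with a toy stand-in. The class below is purely hypothetical: it only mirrors the open/add/compress/close states described above, not the real `ClpArchiveWriter` internals or the actual `open_archive` signature.

```python
from pathlib import Path


class ToyArchiveWriter:
    """Hypothetical stand-in mirroring the writer lifecycle in the diagram."""

    def __init__(self, target: str) -> None:
        self._target = Path(target)
        self._sources: list[str] = []
        self._open = True
        self.compressed = False

    def add(self, source: str) -> None:
        # The real writer registers paths directly and spools streams to temp files.
        self._sources.append(source)

    def compress(self) -> None:
        if not self._sources:
            raise ValueError("Nothing to compress.")
        # The real writer would shell out to clp-s here.
        self.compressed = True
        self._open = False

    def close(self) -> None:
        # Compress on close if the archive is still open, then mark closed.
        if self._open:
            self.compress()
        self._open = False


def open_archive_toy(target: str, mode: str = "w") -> ToyArchiveWriter:
    if mode != "w":
        raise ValueError("Only write mode is supported.")
    return ToyArchiveWriter(target)


writer = open_archive_toy("archive.clp", mode="w")
writer.add("app.log")
writer.close()  # triggers compress() because the writer is still open
```

Note the close-implies-compress step at the end, which matches the "ensure compressed & cleanup" arrow in the diagram.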
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 3 passed

Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Ruff (0.14.14): ruff failed on integration-tests/tests/python-wheels/test_yscope_clp_core/__init__.py and integration-tests/tests/python-wheels/test_yscope_clp_core/test_yscope_clp_core.py
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In `@integration-tests/tests/python-wheels/test_yscope_clp_core/__init__.py`:
- Line 1: The module-level docstring in __init__.py contains a grammatical error
("the correctness the CLP"); update that docstring string to read "Integration
tests verifying the correctness of the CLP core python binding library." so the
missing preposition "of" is added; modify the literal at the top of the file
accordingly.
In
`@integration-tests/tests/python-wheels/test_yscope_clp_core/test_yscope_clp_core.py`:
- Around line 57-59: Replace the hard-coded file paths by programmatically
discovering the PostgreSQL log under postgresql.extraction_dir and creating the
expected output directory: use pathlib globbing on postgresql.extraction_dir
(e.g., postgresql.extraction_dir.rglob("*.log") or a more specific pattern like
"**/postgresql*.log") to find the first matching file and assign it to
input_path, and derive output_path from test_paths.decompression_dir (ensure the
"original" directory is created if absent); update the test code where
input_path and output_path are set so it fails clearly if no log is found.
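A sketch of that discovery pattern (the directory layout and file names below are invented for illustration):

```python
import tempfile
from pathlib import Path

# Build a throwaway directory standing in for postgresql.extraction_dir.
extraction_dir = Path(tempfile.mkdtemp())
(extraction_dir / "postgresql").mkdir()
(extraction_dir / "postgresql" / "postgresql.log").write_text("log line\n")

# Discover the log programmatically instead of hard-coding its path.
matches = sorted(extraction_dir.rglob("postgresql*.log"))
assert matches, f"No PostgreSQL log found under {extraction_dir}"  # fail clearly
input_path = matches[0]

# Derive the output directory and create it if absent.
output_path = extraction_dir / "decompressed"
output_path.mkdir(parents=True, exist_ok=True)
```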
- Around line 1-31: Fix typos and grammar in the module and test docstrings:
change the module docstring to "Integration tests verifying the correctness of
the CLP core Python binding library." and update the test_archive_writer
docstring to clean English (e.g., "environment" instead of "envrionment",
"coarsely" instead of "coarsly", remove "we simply checks" -> "we simply check"
or rephrase to "This test verifies that an archive produced by the writer can be
successfully decompressed by clp-s with structurally identical output"), and
adjust any other minor wording (capitalize "Python", fix punctuation) in the
file and in the docstring for test_archive_writer to improve clarity; update
strings in this file and the docstrings inside the test function accordingly.
- Around line 42-64: Wrap the test body that uses open_archive and the assertion
in a try/finally and convert the archive writer to a context manager to
guarantee proper close and cleanup: replace the manual writer =
open_archive(...); writer.add(...); writer.compress(); writer.close() pattern
with a with open_archive(..., mode="w") as writer: block, perform add() and
compress() inside that block, run the decompress and assertion inside a try
block, and call test_paths.clear_test_outputs() in the finally block so cleanup
runs even if is_json_file_structurally_equal/assertion fails.
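A minimal sketch of the suggested shape, using a dummy context manager in place of the real `open_archive` (all helper names here are stand-ins, and the event list only makes the control flow observable):

```python
from contextlib import contextmanager

events = []


@contextmanager
def open_archive(path, mode="w"):  # stand-in for the real API
    class _Writer:
        def add(self, src):
            events.append(("add", src))

        def compress(self):
            events.append(("compress",))

    try:
        yield _Writer()
    finally:
        events.append(("close",))  # the real writer would close() here


def clear_test_outputs():  # stand-in for test_paths.clear_test_outputs()
    events.append(("cleanup",))


try:
    with open_archive("archive.clp", mode="w") as writer:
        writer.add("input.log")
        writer.compress()
    # the decompress-and-compare assertion would run here
finally:
    clear_test_outputs()  # cleanup runs even if the assertion above fails
```

The key property is that the `finally` block executes whether or not the assertion in the `try` body raises.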
In `@python-wheels/yscope-clp-core/yscope_clp_core/_api.py`:
- Around line 1-31: The module docstring and the open_archive docstring contain
typos: change "interfact" to "interface" in the module docstring and correct
"opeartions" to "operations" in the parameter description for the `open_archive`
function; update any nearby misspellings in the same docstring sentences (e.g.,
"The stream is then used directly without further seeking opeartions" → "without
further seeking operations") so the public API docs are spelled correctly and
read clearly.
In `@python-wheels/yscope-clp-core/yscope_clp_core/_archive_io.py`:
- Around line 172-195: The __exit__ method can still trigger compress via
close() when an exception occurred or after compress() failed; to fix, ensure
you flip the internal flag (self._open = False) before calling close() whenever
exc_type is not None or after catching an exception from self.compress() so
close() will not attempt compression again; update __exit__ to set self._open =
False in the exception branch and prior to the final close(), referencing
__exit__, compress(), close(), self._open and self._archive_dir.
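A sketch of that `__exit__` behavior, using a minimal class whose attribute names follow the review; the call counter is added only to make the behavior observable:

```python
class WriterSketch:
    """Minimal model of the suggested __exit__/close interaction."""

    def __init__(self):
        self._open = True
        self.compress_calls = 0

    def compress(self):
        self.compress_calls += 1
        self._open = False

    def close(self):
        # close() only compresses while the writer is still open.
        if self._open:
            self.compress()
        self._open = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # An exception occurred in the with-block: skip compression on close.
            self._open = False
        self.close()
        return False


# Error path: __exit__ must not trigger compress().
w = WriterSketch()
w.__exit__(RuntimeError, RuntimeError("boom"), None)

# Happy path: leaving the with-block compresses exactly once.
with WriterSketch() as ok:
    pass
```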
In `@python-wheels/yscope-clp-core/yscope_clp_core/_types.py`:
- Around line 1-51: Fix the typos in the module and class docstrings: update the
module docstring string "Commonly shard package class and data types." to
"Commonly shared package classes and data types." and update the
CompressionKwargs docstring phrase "Keyword arguments use by
`ClpArchiveWriter`..." to "Keyword arguments used by `ClpArchiveWriter`...";
also consider pluralizing "class" to "classes" as shown to match the rest of the
sentence.
In `@python-wheels/yscope-clp-core/yscope_clp_core/_utils.py`:
- Around line 68-86: The except block can reference temp_file_path before it's
assigned if NamedTemporaryFile raises; initialize temp_file_path to None before
the try (e.g. set temp_file_path = None above the try) so the except branch can
safely test it, leaving the rest of the cleanup/raise logic using
ClpCoreRuntimeError intact (affects tempfile.NamedTemporaryFile, temp_file_path,
shutil.copyfileobj, FILE_OBJ_COPY_CHUNK_SIZE).
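Sketched below under the same assumptions; the helper name and chunk size are illustrative, and a plain `RuntimeError` stands in for `ClpCoreRuntimeError`:

```python
import contextlib
import io
import os
import shutil
import tempfile

FILE_OBJ_COPY_CHUNK_SIZE = 64 * 1024  # illustrative chunk size


def write_stream_to_temp_file(stream):
    temp_file_path = None  # initialized before the try so the except branch can test it
    try:
        with tempfile.NamedTemporaryFile(delete=False) as temp_file:
            temp_file_path = temp_file.name
            shutil.copyfileobj(stream, temp_file, length=FILE_OBJ_COPY_CHUNK_SIZE)
        return temp_file_path
    except Exception as e:
        if temp_file_path is not None:  # only unlink a file that was actually created
            with contextlib.suppress(OSError):
                os.unlink(temp_file_path)
        raise RuntimeError("Failed to write stream to a temporary file.") from e


path = write_stream_to_temp_file(io.BytesIO(b"hello"))
```

If `NamedTemporaryFile` itself raises, `temp_file_path` is still bound (to `None`), so the cleanup branch no longer risks a `NameError`.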
In `@taskfiles/tests/integration.yaml`:
- Line 46: The pip install command uses a broad glob ("uv pip install
--force-reinstall {{.G_PYTHON_WHEELS_BUILD_DIR}}/*.whl") which may install
unintended wheel files; change the glob to target only the yscope-clp-core wheel
(e.g., use {{.G_PYTHON_WHEELS_BUILD_DIR}}/yscope-clp-core*.whl or the exact
wheel filename) so only the intended package is installed, or add a
filter/explicit filename resolution before passing to pip.
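A quick sketch of the difference between the two globs (paths and wheel names are invented):

```shell
# Simulate a build dir containing the intended wheel plus a stray one.
BUILD_DIR=$(mktemp -d)
touch "$BUILD_DIR/yscope_clp_core-0.1.0-py3-none-any.whl"
touch "$BUILD_DIR/stray_pkg-1.0-py3-none-any.whl"

# Broad glob: matches both wheels, so pip would install the stray one too.
broad=$(ls "$BUILD_DIR"/*.whl | wc -l)

# Narrowed glob: matches only the intended package.
narrow=$(ls "$BUILD_DIR"/yscope_clp_core-*.whl | wc -l)

echo "broad=$broad narrow=$narrow"
```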
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@python-wheels/yscope-clp-core/yscope_clp_core/_archive_io.py`:
- Around line 114-116: Replace the Yoda-style emptiness check in the method
using self._files_to_compress: change the condition "if 0 ==
len(self._files_to_compress):" to the more idiomatic "if not
self._files_to_compress:" so the code reads naturally in Python; keep the
existing error message and the ClpCoreValueError raise that references
self._archive_dir unchanged.
- Around line 127-167: close() currently re-invokes compress() and if compress()
raises it returns before clearing _open and removing temp artifacts; wrap the
compress() invocation in a try/finally (or try/except/finally) so that
regardless of success or failure you set self._open = False and run the cleanup
loop over self._temp_files_to_compress and cleanup self._temp_archive_dir, and
then re-raise the compress() exception so callers still see errors; modify the
close() implementation around the compress() call to ensure cleanup always runs
while preserving the original exception behavior, referencing the
methods/attributes compress(), close(), self._open, self._temp_files_to_compress
and self._temp_archive_dir.
- Around line 48-73: When self._file (set via
ArchiveSourceAdapter.validate_python) is a binary stream rather than a path,
validate it is writable during initialization to fail fast: check it has a write
method and attempt a harmless write+flush/seek roundtrip (or at minimum call
write(b"") and flush) inside the __init__ after self._file is assigned; if that
raises, wrap and raise ClpCoreRuntimeError with a clear message so compression
work is not started on a non‑writable stream.
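The check can be sketched as follows; the helper name and the plain `TypeError` are stand-ins for whatever the package actually raises:

```python
import io
import os
import tempfile


def ensure_writable(stream):
    """Fail fast when an archive target stream cannot be written to (sketch)."""
    if isinstance(stream, io.IOBase) and not stream.writable():
        raise TypeError("Archive output stream must be writable.")
    return stream


ensure_writable(io.BytesIO())  # writable in-memory stream: accepted

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "rb") as reader:  # read-only file handle
    try:
        ensure_writable(reader)
        rejected = False
    except TypeError:
        rejected = True
os.unlink(path)
```

`IOBase.writable()` is cheap and side-effect free, so it can run in `__init__` before any compression work starts.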
```python
        self._file: ArchiveSource = ArchiveSourceAdapter.validate_python(file)
        self._compression_level: int | None = kwargs.get("compression_level")
        self._timestamp_key: str | None = kwargs.get("timestamp_key")

        self._files_to_compress: list[Path] = []
        self._temp_files_to_compress: list[Path] = []

        self._archive_dir: Path
        self._temp_archive_dir: TemporaryDirectory[str] | None = None
        if isinstance(self._file, (str, os.PathLike)):
            self._archive_dir = Path(self._file)
            try:
                if self._archive_dir.is_dir():
                    shutil.rmtree(self._archive_dir)
                elif self._archive_dir.is_file():
                    self._archive_dir.unlink()
            except Exception as e:
                err_msg = f"Failed to overwrite existing archive at {self._archive_dir}."
                raise ClpCoreRuntimeError(err_msg) from e
        else:
            # Create a temporary directory for the archive to live in, and feed the archive into
            # the binary stream when done.
            self._temp_archive_dir = TemporaryDirectory()
            self._archive_dir = Path(self._temp_archive_dir.name)

        self._open: bool = True
```
Validate archive output streams are writable.

A non‑writable binary stream will fail only after compression; fail fast in __init__ to avoid wasted work.

🔧 Proposed fix

```diff
-        self._file: ArchiveSource = ArchiveSourceAdapter.validate_python(file)
+        self._file: ArchiveSource = ArchiveSourceAdapter.validate_python(file)
+        if isinstance(self._file, io.IOBase) and not self._file.writable():
+            err_msg = "Archive output stream must be writable."
+            raise ClpCoreTypeError(err_msg)
```
```python
        if 0 == len(self._files_to_compress):
            err_msg = f"Nothing to compress for archive {self._archive_dir}."
            raise ClpCoreValueError(err_msg)
```
🧹 Nitpick | 🔵 Trivial

Prefer a non‑Yoda emptiness check.

Using `if not self._files_to_compress:` reads more naturally in Python. Based on learnings: “The team prefers to avoid YODA conditions in Python for consistency.”
```python
        try:
            subprocess.run(compress_cmd, check=True, stderr=subprocess.PIPE, text=True)
        except subprocess.CalledProcessError as e:
            err_msg = f"Compression for archive {self._archive_dir} failed: {e.stderr.strip()}"
            raise ClpCoreRuntimeError(err_msg) from e

        try:
            if self._temp_archive_dir is not None:
                with _get_single_file_in_dir(self._archive_dir).open("rb") as archive_file:
                    shutil.copyfileobj(
                        archive_file, cast("IO[bytes]", self._file), length=FILE_OBJ_COPY_CHUNK_SIZE
                    )
        except Exception as e:
            err_msg = "Failed to write compressed contents to the provided archive stream."
            raise ClpCoreRuntimeError(err_msg) from e

        self._open = False
        self.close()

    def close(self) -> None:
        """
        Close the writer and clean up temporary resources.

        If the archive is still open, this method first performs compression before marking the
        writer as closed and performing cleanup. This method is idempotent and safe to call
        multiple times.

        :raise: Propagates `compress()`'s exceptions.
        """
        if self._open:
            self.compress()

        self._open = False

        for p in self._temp_files_to_compress:
            with suppress(OSError):
                p.unlink()

        if self._temp_archive_dir is not None:
            with suppress(OSError):
                self._temp_archive_dir.cleanup()
            self._temp_archive_dir = None
```
Ensure cleanup runs after compression failures.
If compress() raises, close() re‑runs compression and exits before cleanup, leaving temp files/dirs behind. Ensure _open is cleared and cleanup executes even when compression fails.
🛠️ Proposed fix

```diff
-        try:
-            subprocess.run(compress_cmd, check=True, stderr=subprocess.PIPE, text=True)
-        except subprocess.CalledProcessError as e:
-            err_msg = f"Compression for archive {self._archive_dir} failed: {e.stderr.strip()}"
-            raise ClpCoreRuntimeError(err_msg) from e
+        try:
+            subprocess.run(compress_cmd, check=True, stderr=subprocess.PIPE, text=True)
+        except subprocess.CalledProcessError as e:
+            self._open = False
+            err_msg = f"Compression for archive {self._archive_dir} failed: {e.stderr.strip()}"
+            raise ClpCoreRuntimeError(err_msg) from e
@@
-        except Exception as e:
-            err_msg = "Failed to write compressed contents to the provided archive stream."
-            raise ClpCoreRuntimeError(err_msg) from e
+        except Exception as e:
+            self._open = False
+            err_msg = "Failed to write compressed contents to the provided archive stream."
+            raise ClpCoreRuntimeError(err_msg) from e
@@
-        if self._open:
-            self.compress()
-
-        self._open = False
-
-        for p in self._temp_files_to_compress:
-            with suppress(OSError):
-                p.unlink()
-
-        if self._temp_archive_dir is not None:
-            with suppress(OSError):
-                self._temp_archive_dir.cleanup()
-            self._temp_archive_dir = None
+        try:
+            if self._open:
+                self.compress()
+        finally:
+            self._open = False
+            for p in self._temp_files_to_compress:
+                with suppress(OSError):
+                    p.unlink()
+
+            if self._temp_archive_dir is not None:
+                with suppress(OSError):
+                    self._temp_archive_dir.cleanup()
+                self._temp_archive_dir = None
```
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
taskfile.yaml (1)
600-612: ⚠️ Potential issue | 🟠 Major

Restore the `core` dependency to ensure `clp-s` is always built before the wheel.

This task exports `YSCOPE_CLP_CORE_CLP_S_EXE` pointing to the core build directory. With `deps: ["core"]` commented out, the task lacks an explicit dependency guarantee. While integration tests happen to list `core` as a prerequisite at the parent level, the root-level task definition can be invoked standalone without building `core`, risking build failures.

Proposed fix

```diff
 python-wheels-yscope-clp-core:
   vars:
     TMP_DIR: "{{.G_BUILD_DIR}}/tmp-{{.TASK}}"
   dir: "{{.G_PYTHON_WHEELS_YSCOPE_CLP_CORE_DIR}}"
-  #deps: ["core"]
+  deps: ["core"]
```
quinntaylormitchell
left a comment
preliminary review on integration-tests/* files only:
looks pretty good, only a few comments.
```python
        writer.compress()
        writer.close()
```

Suggested change:

```diff
-        writer.compress()
-        writer.close()
+        writer.close()
```
from the code in _archive_io.py it looks like `ClpArchiveWriter.compress()` gets called on `writer` automatically when calling `writer.close()`; do you want to keep both of these lines for readability, or just do it in one?
```python
        # TODO: Replace hard-coded paths with programmatic file discovery.
        input_path = postgresql.extraction_dir / "postgresql" / "postgresql.log"
        output_path = test_paths.decompression_dir / "original"
```
Suggested change:

```diff
-        output_path = test_paths.decompression_dir / "original"
+        output_path = test_paths.decompression_dir / "decompressed_postgresql"
```

it feels weird to call this "original" when it's not the original file
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@taskfile.yaml`:
- Around line 606-608: The task currently removes and recreates '{{.TMP_DIR}}'
but does not ensure the destination wheel output directory exists, so the
WITH_CORE=false branch that runs cp into '{{.G_PYTHON_WHEELS_BUILD_DIR}}' can
fail; update the task to create the wheel output directory before any cp by
adding a mkdir -p '{{.G_PYTHON_WHEELS_BUILD_DIR}}' step (or include it alongside
the existing mkdir for TMP_DIR) so that the subsequent cp into
G_PYTHON_WHEELS_BUILD_DIR succeeds.
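A sketch of the fix under invented paths (the variable names stand in for the taskfile's):

```shell
# Stand-ins for the task's variables.
SRC_DIR=$(mktemp -d)
WHEELS_BUILD_DIR="$(mktemp -d)/python-wheels"   # does not exist yet

touch "$SRC_DIR/pkg-0.1.0-py3-none-any.whl"

# Without mkdir -p, the cp below would fail because the destination is missing.
mkdir -p "$WHEELS_BUILD_DIR"
cp "$SRC_DIR"/*.whl "$WHEELS_BUILD_DIR"/
```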
- Around line 611-615: The taskfile currently exports YSCOPE_CLP_CORE_CLP_S_EXE
and proceeds to package when WITH_CORE=true but no longer depends on the core
build, so clp-s can be absent or stale; update the task to either add a
conditional dependency on the core target (when WITH_CORE == "true") or add an
explicit guard before the build step that checks the existence and executability
of "{{.G_CORE_COMPONENT_BUILD_DIR}}/clp-s" and fails with a clear message if
missing. Target the YSCOPE_CLP_CORE_CLP_S_EXE export/uv run block in
taskfile.yaml and ensure the guard/dependency triggers fast failure and
descriptive output when clp-s is not present.
```yaml
      - |-
        export YSCOPE_CLP_CORE_CLP_S_EXE="{{.G_CORE_COMPONENT_BUILD_DIR}}/clp-s"
        {{- if eq "true" .WITH_CORE}}
        export YSCOPE_CLP_CORE_CLP_S_EXE="{{.G_CORE_COMPONENT_BUILD_DIR}}/clp-s"
        {{- end}}
        uv run --no-cache python3 -m build --wheel --outdir "{{.TMP_DIR}}"
```
Ensure core binary availability when WITH_CORE=true.
WITH_CORE now defaults to true, but the task no longer depends on core, so clp-s can be missing or stale. Add a guard (or a conditional dep) so this fails fast and clearly when clp-s isn’t built.
✅ Suggested guard to avoid packaging a missing/stale core binary

```diff
 - |-
   {{- if eq "true" .WITH_CORE}}
+  if [ ! -x "{{.G_CORE_COMPONENT_BUILD_DIR}}/clp-s" ]; then
+    echo "clp-s not found; run 'task core' or set WITH_CORE=false" >&2
+    exit 1
+  fi
   export YSCOPE_CLP_CORE_CLP_S_EXE="{{.G_CORE_COMPONENT_BUILD_DIR}}/clp-s"
   {{- end}}
   uv run --no-cache python3 -m build --wheel --outdir "{{.TMP_DIR}}"
```
Force-pushed: 486b9ff to 0f49da2
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@taskfile.yaml`:
- Around line 615-618: The inline "task core" call fails when executed inside
subdirs because go-task only reads the current working directory; update the
invocation in taskfile.yaml so it explicitly points to the root Taskfile
(replace the plain "task core" with a call that passes the root Taskfile via the
--taskfile flag, e.g. "task --taskfile <path-to-root-Taskfile> core"), ensuring
the path chosen resolves correctly when invoked from
python-wheels/yscope-clp-core and keep the export of YSCOPE_CLP_CORE_CLP_S_EXE
unchanged.
In `@taskfiles/tests/integration.yaml`:
- Around line 50-57: Remove the temporary debug output from the install task's
cmd block by deleting the echo "AAA", echo "BBB", echo "CCC" lines and the two
ls -l lines that list "{{.G_PYTHON_WHEELS_BUILD_DIR}}" and the wheel file; leave
only the essential steps (uv venv --allow-existing and uv pip install
--force-reinstall "{{.G_PYTHON_WHEELS_BUILD_DIR}}"/yscope_clp_core-*.whl) so CI
logs are not cluttered by diagnostic output.
```yaml
        {{- if eq "true" .WITH_CORE}}
        task core
        export YSCOPE_CLP_CORE_CLP_S_EXE="{{.G_CORE_COMPONENT_BUILD_DIR}}/clp-s"
        {{- end}}
```
🧩 Analysis chain

🏁 Scripts executed against repository y-scope/clp:

```shell
# Locate and examine the taskfile.yaml file
fd -type f -name "taskfile.yaml" -o -name "Taskfile.yaml" | head -20
# Check the directory structure to understand the context
git ls-files | grep -i taskfile | head -20
# Read taskfile.yaml around lines 615-618 with context
sed -n '600,635p' taskfile.yaml
# Search for TASKFILE variable definition in the main taskfile
rg "TASKFILE" taskfile.yaml
# Check the structure of the python-wheels/yscope-clp-core directory
find . -type d -name "python-wheels" -o -name "yscope-clp-core" | head -10
# Check if there's a Taskfile in python-wheels/yscope-clp-core
ls -la python-wheels/yscope-clp-core/ | grep -i task
# Search for the "core" task definition in taskfile.yaml
rg "^ core:" taskfile.yaml
# Look for all task invocations in taskfile.yaml to see patterns
rg "^\s+task " taskfile.yaml | head -30
# Check if TASKFILE is defined in the main taskfile
head -100 taskfile.yaml
# Check the full python-wheels-yscope-clp-core task definition
rg -A 40 "python-wheels-yscope-clp-core:" taskfile.yaml | head -50
# Look for any Taskfile in the python-wheels directory
find python-wheels -name "Taskfile*" -o -name "taskfile*"
# Check if TASKFILE is a built-in go-task variable
rg "TASKFILE" taskfile.yaml taskfiles/ | grep -v "G_" | head -20
# Search for task calls made with the -t flag
rg 'task -t' taskfile.yaml taskfiles/
```
🌐 Web query: go-task -t flag TASKFILE variable documentation

💡 Result:

- `-t`/`--taskfile` flag: use this to run Task with a specific Taskfile path instead of auto-discovery, e.g. `task --taskfile Custom.yml`. You can also pass `-` to read the Taskfile from stdin: `cat Taskfile.yml | task -t -`. [1][2]
- `TASKFILE` environment variable: there isn't a documented `TASKFILE` env var for selecting the Taskfile. Task's own env vars are documented as being prefixed with `TASK_` (e.g., `TASK_TEMP_DIR`). [3]
- If you meant "Taskfile template variables": in templates you can access special variables such as `ROOT_TASKFILE` (absolute path of the root Taskfile), plus other `CLI_*`/`TASK` variables. [4]

Sources:
[1] https://taskfile.dev/docs/guide
[2] https://taskfile.dev/docs/getting-started
[3] https://taskfile.dev/docs/reference/environment
[4] https://taskfile.dev/next/reference/templating/

🌐 Web query: go-task parent taskfile search behavior

💡 Result:

go-task (Task) does not automatically search parent directories for a Taskfile. If you don't pass anything, it looks only in the current directory for one of the supported names (e.g. Taskfile.yml, Taskfile.yaml, etc.). (taskfile.dev)

To run a "root" Taskfile from a subfolder, you must point Task at it, e.g.:

- `task --dir .. <task>` (or a specific path to the repo root)
- `task --taskfile ../Taskfile.yml <task>`

Related (when composing Taskfiles): in includes, any relative `taskfile:` path is resolved relative to the directory containing the including (parent) Taskfile; and if `taskfile:` points to a directory, Task will look for Taskfile.yml / Taskfile.yaml inside that directory. (taskfile.dev)

Citations: https://taskfile.dev/usage/ ; https://taskfile.dev/next/reference/schema/
Explicitly reference the root Taskfile to ensure `task core` resolves correctly from the subdirectory.

When this task runs from python-wheels/yscope-clp-core, the `task core` call will fail because go-task does not search parent directories; it only looks in the current working directory for a Taskfile. The subdirectory contains no Taskfile, so the root task cannot be found.
🔧 Suggested hardening

```diff
-        task core
+        task -t "{{.TASKFILE}}" core
```
Co-authored-by: Quinn Taylor Mitchell <[email protected]>
Description
This is the first formal PR in the sequence to merge #1872 (making this PR a direct continuation of #1732), which provides the initial programmatic interface for users to operate on CLP archives within Python environments.
Specifically, this PR introduces the `ClpArchiveWriter` class and sets up a basic integration suite that verifies the correctness of the archive writing functions. Search/decompression features and more comprehensive testing will follow in future PRs.

Checklist

breaking change.
Validation performed
Summary by CodeRabbit
New Features
Tests
Dependencies
Chores