feat: add bedrock as supported llm provider #1830
Conversation
Signed-off-by: xdurawa <[email protected]>
- Resolved conflicts in test_suites.yml to include both OpenRouter and Bedrock tests
- Updated add.py to include bedrock provider with latest dev branch model defaults
- Resolved uv.lock conflicts by using dev branch version
- Reverted README.md to original state
- Reverted cognee-starter-kit/README.md to original state
- Documentation will be updated separately by maintainers
Please make sure all the checkboxes are checked:
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
|---|---|---|---|---|
| 9573981 | Triggered | Generic Password | 06a3458 | .github/workflows/temporal_graph_tests.yml |
| 8719688 | Triggered | Generic Password | 06a3458 | .github/workflows/temporal_graph_tests.yml |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely. Learn here the best practices.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation (a sketch of such a hook follows this list).
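As an illustration (the tool choice here is an assumption; GitGuardian's own ggshield hook or another scanner would work the same way), a minimal pre-commit setup with the gitleaks hook could look like:

```yaml
# .pre-commit-config.yaml: sketch of a secret-scanning hook; pin rev to a current release tag
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
```

Run `pre-commit install` once per clone so the hook runs before each commit.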
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Walkthrough
Adds AWS Bedrock as a new LLM provider to the system. Changes include three GitHub Actions jobs for Bedrock testing, a new BedrockAdapter class supporting multiple authentication methods, configuration extensions for Bedrock settings, and provider enum and settings updates to expose Bedrock as a selectable LLM option.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🧹 Nitpick comments (6)
cognee/api/v1/add/add.py (1)
153-165: Consider documenting Bedrock-specific authentication and config variables.
The new `"bedrock"` option in `LLM_PROVIDER` looks consistent, but it would help to briefly call out which env vars are required when `LLM_PROVIDER="bedrock"` (e.g., whether `LLM_API_KEY` is used vs AWS credentials/profile, plus any region/model settings) and to ensure the documented provider value matches exactly what the settings/enum accept, to avoid drift between docs and config behavior.
cognee/infrastructure/files/storage/s3_config.py (1)
12-13: Consider separating AWS/Bedrock config from S3-specific config.
Adding `aws_bedrock_runtime_endpoint` to `S3Config` couples Bedrock LLM configuration with S3 storage settings. While both share AWS credentials, the class name becomes semantically misleading.
A future refactor could extract shared AWS fields into a base `AWSConfig` class, with `S3Config` and `BedrockConfig` inheriting from it. This keeps the current PR non-breaking but improves cohesion long-term.
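A rough sketch of that refactor (field names other than `aws_bedrock_runtime_endpoint` and the credential fields mentioned in this review are illustrative assumptions, as is the pydantic-settings style):

```python
from typing import Optional

from pydantic_settings import BaseSettings, SettingsConfigDict


class AWSConfig(BaseSettings):
    """Shared AWS credential and profile settings."""

    aws_access_key_id: Optional[str] = None
    aws_secret_access_key: Optional[str] = None
    aws_profile_name: Optional[str] = None

    model_config = SettingsConfigDict(env_file=".env", extra="allow")


class S3Config(AWSConfig):
    """S3 storage-specific settings."""

    aws_endpoint_url: Optional[str] = None  # illustrative field


class BedrockConfig(AWSConfig):
    """Bedrock LLM-specific settings."""

    aws_bedrock_runtime_endpoint: Optional[str] = None
```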
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py (1)
174-188: Remove commented code and consider validating that at least one auth method is configured.
The commented-out API key check (lines 175-176) should be removed rather than left as dead code. Since Bedrock supports three authentication methods (API key, AWS credentials, AWS profile), consider adding validation that at least one is available:
```diff
     elif provider == LLMProvider.BEDROCK:
-        # if llm_config.llm_api_key is None and raise_api_key_error:
-        #     raise LLMAPIKeyNotSetError()
-
         from cognee.infrastructure.llm.structured_output_framework.litellm_instructor.llm.bedrock.adapter import (
             BedrockAdapter,
         )
+        from cognee.infrastructure.files.storage.s3_config import get_s3_config
+
+        s3_config = get_s3_config()
+        has_auth = (
+            llm_config.llm_api_key
+            or (s3_config.aws_access_key_id and s3_config.aws_secret_access_key)
+            or s3_config.aws_profile_name
+        )
+        if not has_auth and raise_api_key_error:
+            raise LLMAPIKeyNotSetError()

         return BedrockAdapter(
```

This ensures users get a clear early error if no authentication is configured, rather than a cryptic failure at request time.
.github/workflows/test_llms.yml (1)
156-163: Consider restricting file permissions on AWS credentials file.
The credentials file is created with default permissions. For defense-in-depth, explicitly set restrictive permissions:
```diff
       - name: Configure AWS Profile
         run: |
           mkdir -p ~/.aws
+          chmod 700 ~/.aws
           cat > ~/.aws/credentials << EOF
           [bedrock-test]
           aws_access_key_id = ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws_secret_access_key = ${{ secrets.AWS_SECRET_ACCESS_KEY }}
           EOF
+          chmod 600 ~/.aws/credentials
```

While GitHub-hosted runners are ephemeral, this follows AWS security best practices and prevents issues if the workflow is reused in self-hosted environments.
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py (2)
146-152: Redundant conditional after system prompt validation.
Since `MissingSystemPromptPathError` is raised at line 145 when `system_prompt` is falsy, the conditional at line 150 (`if system_prompt else None`) is redundant; `system_prompt` will always be truthy at that point. However, note that `LLMGateway.read_query_prompt` could potentially return an empty string, so you may want to validate its return value instead.

```diff
-        system_prompt = LLMGateway.read_query_prompt(system_prompt)
-
-        formatted_prompt = (
-            f"""System Prompt:\n{system_prompt}\n\nUser Input:\n{text_input}\n"""
-            if system_prompt
-            else None
-        )
-        return formatted_prompt
+        system_prompt_content = LLMGateway.read_query_prompt(system_prompt)
+        if not system_prompt_content:
+            raise MissingSystemPromptPathError()
+
+        return f"System Prompt:\n{system_prompt_content}\n\nUser Input:\n{text_input}\n"
```
1-26: Consider adding logging for debugging Bedrock requests.
Per coding guidelines, use shared logging utilities from `cognee.shared.logging_utils`. Logging would help debug authentication issues and API errors, especially given the multiple authentication methods supported.

```diff
+from cognee.shared.logging_utils import get_logger
+
+logger = get_logger(__name__)
```

Then add logging in key places like `_create_bedrock_request` to log which authentication method is being used. Based on coding guidelines.
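A minimal sketch of what that could look like (the helper name and the attributes it inspects are hypothetical; only `get_logger` and `_create_bedrock_request` are named in this review):

```python
from cognee.shared.logging_utils import get_logger

logger = get_logger(__name__)


def _log_bedrock_auth_method(api_key, aws_profile_name, aws_access_key_id):
    """Log which authentication path a Bedrock request will use (hypothetical helper)."""
    if api_key:
        logger.debug("Bedrock auth: API key")
    elif aws_profile_name:
        logger.debug("Bedrock auth: AWS profile %s", aws_profile_name)
    elif aws_access_key_id:
        logger.debug("Bedrock auth: explicit AWS access keys")
    else:
        logger.debug("Bedrock auth: default AWS credential chain")
```

The adapter could call such a helper at the top of `_create_bedrock_request` so failed requests are easier to attribute to the wrong credential source.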
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- .github/workflows/test_llms.yml (1 hunks)
- cognee/api/v1/add/add.py (1 hunks)
- cognee/infrastructure/files/storage/s3_config.py (1 hunks)
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py (1 hunks)
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py (1 hunks)
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py (4 hunks)
- cognee/modules/settings/get_settings.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Use 4-space indentation in Python code
Use snake_case for Python module and function names
Use PascalCase for Python class names
Use ruff format before committing Python code
Use ruff check for import hygiene and style enforcement with line-length 100 configured in pyproject.toml
Prefer explicit, structured error handling in Python code
Files:
- cognee/infrastructure/files/storage/s3_config.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py
- cognee/api/v1/add/add.py
- cognee/modules/settings/get_settings.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py
⚙️ CodeRabbit configuration file
**/*.py: When reviewing Python code for this project:
- Prioritize portability over clarity, especially when dealing with cross-Python compatibility. However, with the priority in mind, do still consider improvements to clarity when relevant.
- As a general guideline, consider the code style advocated in the PEP 8 standard (excluding the use of spaces for indentation) and evaluate suggested changes for code style compliance.
- As a style convention, consider the code style advocated in CEP-8 and evaluate suggested changes for code style compliance.
- As a general guideline, try to provide any relevant, official, and supporting documentation links to any tool's suggestions in review comments. This guideline is important for posterity.
- As a general rule, undocumented function definitions and class definitions in the project's Python code are assumed incomplete. Please consider suggesting a short summary of the code for any of these incomplete definitions as docstrings when reviewing.
Files:
- cognee/infrastructure/files/storage/s3_config.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py
- cognee/api/v1/add/add.py
- cognee/modules/settings/get_settings.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py
cognee/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use shared logging utilities from cognee.shared.logging_utils in Python code
Files:
- cognee/infrastructure/files/storage/s3_config.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py
- cognee/api/v1/add/add.py
- cognee/modules/settings/get_settings.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py
cognee/{modules,infrastructure,tasks}/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Co-locate feature-specific helpers under their respective package (modules/, infrastructure/, or tasks/)
Files:
- cognee/infrastructure/files/storage/s3_config.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py
- cognee/modules/settings/get_settings.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py
- cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py
.github/**
⚙️ CodeRabbit configuration file
.github/**: * When the project is hosted on GitHub: All GitHub-specific configurations, templates, and tools should be found in the '.github' directory tree.
- 'actionlint' erroneously generates false positives when dealing with GitHub's `${{ ... }}` syntax in conditionals.
- 'actionlint' erroneously generates incorrect solutions when suggesting the removal of valid `${{ ... }}` syntax.
Files:
.github/workflows/test_llms.yml
cognee/api/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Public APIs should be type-annotated in Python where practical
Files:
cognee/api/v1/add/add.py
🧬 Code graph analysis (2)
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py (1)
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py (1)
BedrockAdapter (28-153)
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py (1)
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py (1)
BedrockAdapter (28-153)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: End-to-End Tests / Concurrent Subprocess access test
- GitHub Check: End-to-End Tests / Test graph edge ingestion
- GitHub Check: CLI Tests / CLI Functionality Tests
- GitHub Check: CLI Tests / CLI Integration Tests
- GitHub Check: End-to-End Tests / S3 Bucket Test
- GitHub Check: End-to-End Tests / Test multi tenancy with different situations in Cognee
- GitHub Check: End-to-End Tests / Test permissions with different situations in Cognee
- GitHub Check: End-to-End Tests / Conversation sessions test (FS)
- GitHub Check: End-to-End Tests / Test Feedback Enrichment
- GitHub Check: End-to-End Tests / Conversation sessions test (Redis)
- GitHub Check: End-to-End Tests / Test Entity Extraction
- GitHub Check: Basic Tests / Run Integration Tests
- GitHub Check: End-to-End Tests / Server Start Test
- GitHub Check: End-to-End Tests / Deduplication Test
- GitHub Check: Basic Tests / Run Unit Tests
- GitHub Check: End-to-End Tests / Run Telemetry Pipeline Test
- GitHub Check: Basic Tests / Run Simple Examples
🔇 Additional comments (9)
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/__init__.py (1)
1-5: LGTM! Clean module initialization with proper docstring and explicit `__all__` export. This follows Python packaging conventions and provides a clear public API surface.
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/get_llm_client.py (2)
27-36: LGTM! The BEDROCK enum member and docstring update are properly aligned with the existing provider pattern.
158-160: LGTM! Good fix adding the `raise_api_key_error` condition to match other providers' patterns.
cognee/modules/settings/get_settings.py (2)
19-19: LGTM! The `bedrock` enum member follows the existing naming convention.
81-84: LGTM! Bedrock provider entry follows the established pattern for provider configuration.
.github/workflows/test_llms.yml (2)
88-113: LGTM! The API key authentication test job follows the established workflow pattern with proper setup and environment configuration.
115-141: Verify: EMBEDDING_API_KEY uses BEDROCK_API_KEY while the LLM uses AWS credentials.
Line 137 uses `BEDROCK_API_KEY` for embedding while the LLM relies on AWS credentials (lines 134-135). This appears intentional to test AWS credential authentication for the LLM provider, but ensure the embedding provider also supports this mixed authentication scenario.
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py (2)
103-127: LGTM! The async method properly implements rate limiting, retry logic, and observability, and has appropriate error handling that distinguishes between content policy violations and other retry exceptions.
71-74: Incorrect message ordering: system message should precede user message.
The messages array places the user message before the system message. Standard LLM API convention (including Bedrock) expects the system message first to properly set the context for the conversation.

```diff
             "messages": [
-                {"role": "user", "content": text_input},
                 {"role": "system", "content": system_prompt},
+                {"role": "user", "content": text_input},
             ],
```

Likely an incorrect or invalid review comment.
```python
model: str
api_key: str
```
Type hint mismatch for api_key.
The class attribute declares api_key: str, but the __init__ parameter defaults to None. Update the type hint for consistency with the Optional nature.
```diff
-    model: str
-    api_key: str
+    model: str
+    api_key: str | None
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
model: str
api_key: str | None
```
🤖 Prompt for AI Agents
In
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py
around lines 37 to 38, the class attribute declares api_key: str but the
__init__ parameter defaults to None; change the annotation to api_key:
Optional[str] and import Optional from typing (or use from typing import
Optional at the top) so the type matches the optional default, and update any
related type hints/usages in the file to accept Optional[str].
```python
self.aclient = instructor.from_litellm(
    litellm.acompletion, mode=instructor.Mode(self.instructor_mode)
)
self.client = instructor.from_litellm(litellm.completion)
```
Inconsistent instructor_mode between async and sync clients.
The async client (self.aclient) is configured with instructor.Mode(self.instructor_mode), but the sync client (self.client) does not use the mode parameter. This could cause different behavior between acreate_structured_output and create_structured_output.
```diff
         self.aclient = instructor.from_litellm(
             litellm.acompletion, mode=instructor.Mode(self.instructor_mode)
         )
-        self.client = instructor.from_litellm(litellm.completion)
+        self.client = instructor.from_litellm(
+            litellm.completion, mode=instructor.Mode(self.instructor_mode)
+        )
```

```python
@observe
@sleep_and_retry_sync()
@rate_limit_sync
def create_structured_output(
    self, text_input: str, system_prompt: str, response_model: Type[BaseModel]
) -> BaseModel:
    """Generate structured output from AWS Bedrock API (synchronous)."""

    request_params = self._create_bedrock_request(text_input, system_prompt, response_model)
    return self.client.chat.completions.create(**request_params)
```
Missing error handling in sync method.
The async version acreate_structured_output handles ContentPolicyViolationError and InstructorRetryException, but this sync version lacks equivalent handling. This inconsistency could expose raw exceptions to callers using the sync API.
```diff
 @observe
 @sleep_and_retry_sync()
 @rate_limit_sync
 def create_structured_output(
     self, text_input: str, system_prompt: str, response_model: Type[BaseModel]
 ) -> BaseModel:
     """Generate structured output from AWS Bedrock API (synchronous)."""
-    request_params = self._create_bedrock_request(text_input, system_prompt, response_model)
-    return self.client.chat.completions.create(**request_params)
+    try:
+        request_params = self._create_bedrock_request(text_input, system_prompt, response_model)
+        return self.client.chat.completions.create(**request_params)
+
+    except (
+        ContentPolicyViolationError,
+        InstructorRetryException,
+    ) as error:
+        if (
+            isinstance(error, InstructorRetryException)
+            and "content management policy" not in str(error).lower()
+        ):
+            raise error
+
+        raise ContentPolicyFilterError(
+            f"The provided input contains content that is not aligned with our content policy: {text_input}"
+        )
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
cognee/infrastructure/llm/structured_output_framework/litellm_instructor/llm/bedrock/adapter.py
around lines 129 to 138, the synchronous create_structured_output lacks the
exception handling present in acreate_structured_output; wrap the
client.chat.completions.create call in a try/except that mirrors the async
method: catch ContentPolicyViolationError and surface it the same way
(log/context and re-raise or convert to the same error type returned by the
async method), and catch InstructorRetryException to apply the same
retry/backoff logic used in acreate_structured_output (retry the request the
same number of times and finally raise if still failing); ensure logging and
error messages match the async handling so sync callers get consistent behavior.
| "bedrock": [ | ||
| { | ||
| "value": "eu.anthropic.claude-sonnet-4-5-20250929-v1:0", | ||
| "label": "Claude 4.5 Sonnet", | ||
| }, | ||
| { | ||
| "value": "eu.anthropic.claude-haiku-4-5-20251001-v1:0", | ||
| "label": "Claude 4.5 Haiku", | ||
| }, | ||
| { | ||
| "value": "eu.amazon.nova-lite-v1:0", | ||
| "label": "Amazon Nova Lite", | ||
| }, | ||
| ], |
🧩 Analysis chain
🌐 Web query:
AWS Bedrock model ID format eu. prefix cross-region availability
💡 Result:
Short answer:
- Bedrock model IDs are region-agnostic strings in the form provider.model-name-version:revision (e.g., anthropic.claude-3-haiku-20240307-v1:0 or amazon.nova-lite-v1:0). [1]
- To use cross-Region inference you use an inference-profile ID that prefixes the model ID with a geography or profile identifier (examples: us., eu., apac., or global.), e.g. eu.anthropic.claude-3-...:0 or us.anthropic.claude-3-haiku-20240307-v1:0. The prefix tells Bedrock which geographic set of destination Regions may be used for routing. [2][3]
- Notes: some models are only reachable via their inference-profile IDs (not directly by the base model ID); Global profiles can route to all commercial Regions while geographic profiles (us/eu/apac) keep routing within that geography. You can query GetInferenceProfile or consult the Bedrock docs to see exact source-to-destination Region mappings. [2][3]

Sources: AWS Bedrock model IDs & supported models, inference-profile docs, and geographic cross-Region inference docs. [1][2][3][4]
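To make the prefix behaviour concrete, here is a minimal sketch of calling one of the EU inference-profile IDs from this PR through litellm, which the adapter builds on (the region name is an assumption; credentials come from the usual AWS environment or profile):

```python
import litellm

# The "eu." prefix selects the EU cross-region inference profile rather than a single-region model ID.
response = litellm.completion(
    model="bedrock/eu.anthropic.claude-haiku-4-5-20251001-v1:0",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    aws_region_name="eu-central-1",  # assumed source region within the EU profile's geography
)
print(response.choices[0].message.content)
```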
🌐 Web query:
AWS Bedrock cross-region model identifiers documentation
💡 Result:
Here are the official AWS docs that describe Bedrock cross‑region (inference profile) model identifiers and how to use them:
- Amazon Bedrock — Supported Regions and models for inference profiles (lists inference‑profile IDs / prefixes and region groupings). [1]
- Amazon Bedrock — Foundation model reference / model IDs (shows region‑agnostic model IDs and notes which models are available only via cross‑Region inference). [2]
- AWS blog: Getting started with cross‑region inference in Amazon Bedrock (how inference profiles work and how to call them). [3]
- AWS What’s New: Amazon Bedrock now supports cross‑region inference (announcement with links to supported models list). [4]
If you want, I can open any of those pages and extract the exact inference‑profile ID examples (e.g., the geographic prefixes like "us.", "apac.", and specific model IDs) for the models you care about.
Sources:
[1] AWS Docs — Supported Regions and models for inference profiles. (docs.aws.amazon.com)
[2] AWS Docs — Foundation model reference / model IDs. (docs.aws.amazon.com)
[3] AWS Blog — Getting started with cross‑region inference in Amazon Bedrock. (aws.amazon.com)
[4] AWS What’s New — Amazon Bedrock now supports cross‑region inference. (aws.amazon.com)
Citations:
- 1: https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-supported.html?utm_source=openai
- 2: https://docs.aws.amazon.com/bedrock/latest/userguide/foundation-models-reference.html?utm_source=openai
- 3: https://aws.amazon.com/blogs/machine-learning/getting-started-with-cross-region-inference-in-amazon-bedrock/?utm_source=openai
- 4: https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-bedrock-cross-region-inference/?utm_source=openai
🏁 Script executed:
# Check the file context around lines 165-178 to understand the broader purpose
cd cognee && find . -name "get_settings.py" -path "*/settings/*"
Repository: topoteretes/cognee
Length of output: 96
🏁 Script executed:
cat -n cognee/modules/settings/get_settings.py | head -200 | tail -50
Repository: topoteretes/cognee
Length of output: 2390
🏁 Script executed:
# Check if there are other geographic profiles (us., global., etc.) defined elsewhere
rg "anthropic|amazon\.nova" cognee/modules/settings/get_settings.py -B2 -A2Repository: topoteretes/cognee
Length of output: 1051
Bedrock configuration uses only EU inference profiles.
These model IDs use the eu. prefix, which is an AWS Bedrock inference profile for cross-region routing within EU geography. This configuration provides no alternatives for US, APAC, or global inference profiles (e.g., us.anthropic.claude-..., apac.amazon.nova-lite-v1:0, or global.anthropic.claude-...).
Consider adding model IDs for other geographic profiles to support users in non-EU regions, or document that this configuration is EU-specific.
🤖 Prompt for AI Agents
In cognee/modules/settings/get_settings.py around lines 165 to 178, the Bedrock
model list only includes EU-prefixed inference profile IDs, which excludes US,
APAC, and global routing options; update the list to include equivalent model
IDs for other geographic inference profiles (e.g., add us., apac., and
global-prefixed variants for Anthropic and Amazon Nova models) or add a clear
comment/documentation string indicating the list is intentionally EU-specific;
ensure values follow the same format ("region.model-name:version") and keep
labels consistent with existing entries.
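For illustration, US-profile counterparts could sit alongside the existing EU entries; the IDs below follow the documented us. prefix convention but are assumptions that have not been verified against the Bedrock model catalog:

```python
"bedrock": [
    # ... existing eu.* entries ...
    {
        "value": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
        "label": "Claude 4.5 Sonnet (US)",
    },
    {
        "value": "us.amazon.nova-lite-v1:0",
        "label": "Amazon Nova Lite (US)",
    },
],
```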
Description
Added support for AWS Bedrock and the models available there. This started as a contributor PR that was never finished; I polished it up and made it work.
Type of Change
Screenshots/Videos (if applicable)
Pre-submission Checklist
DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.