A meticulously curated collection of foundational Python libraries and scripts, engineered for superior development, automation, and data processing. Optimized for peak efficiency and modularity.
```mermaid
graph TD
    A[Core Libraries] --> B(Utility Modules)
    A --> C(Data Processing Tools)
    A --> D(Automation Scripts)
    B --> E(Shared Components)
    C --> E
    D --> E
    E --> F(API Integrations)
    F --> G(CLI Interface)
```
## Table of Contents

- About the Project
- Key Features
- Technology Stack
- Installation & Setup
- Usage
- Development
- Contributing
- License
- AI Agent Directives
## About the Project

Python-Foundation-Libraries-And-Scripts-Python-Lib serves as a robust toolkit, providing developers with reusable, high-quality Python components. It aims to accelerate development cycles by offering pre-built solutions for common tasks in automation, data manipulation, and system integration.
## Key Features

- Modular Design: Components are self-contained and can be used independently or as part of larger workflows.
- Efficiency Optimized: Code is written with performance and resource utilization in mind.
- Extensible Architecture: Easily extendable to incorporate new libraries and functionalities.
- Comprehensive Tooling: Includes utilities for data parsing, file management, network operations, and more.
- Automation Ready: Scripts designed to streamline repetitive tasks and complex workflows.
## Technology Stack

- Language: Python 3.10+
- Package Management: uv
- Linting & Formatting: Ruff
- Testing: Pytest
- Architecture: Modular Monolith
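These tools are typically wired together through a single `pyproject.toml`. The sketch below shows one plausible layout; the project name and specific settings are assumptions, not copied from this repository:

```toml
[project]
name = "python-foundation-libraries"   # illustrative name
requires-python = ">=3.10"

[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I"]   # pycodestyle errors, pyflakes, import sorting

[tool.pytest.ini_options]
testpaths = ["tests"]
```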
## Installation & Setup

- **Clone the Repository:**

  ```bash
  git clone https://github.com/chirag127/Python-Foundation-Libraries-And-Scripts-Python-Lib.git
  cd Python-Foundation-Libraries-And-Scripts-Python-Lib
  ```

- **Install Dependencies (using uv):**

  ```bash
  uv venv                                 # create a virtual environment if you don't have one
  uv pip install -r requirements.txt
  uv pip install -r requirements-dev.txt  # development dependencies
  ```
## Usage

Refer to the documentation within each module or script for specific usage instructions. Examples include:
- **Data Processing:**

  ```bash
  python src/data_processing/csv_handler.py --input data.csv --output processed_data.csv
  ```

- **Automation Task:**

  ```bash
  python scripts/automation/file_organizer.py --source /path/to/files --destination /path/to/organized
  ```
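For orientation only, a script such as `csv_handler.py` could be structured around `argparse` as below. The flag names match the usage above, but the processing step (`process_rows`) and the overall structure are assumptions, not the repo's actual implementation:

```python
import argparse
import csv


def process_rows(rows):
    """Hypothetical transformation: strip surrounding whitespace from every field."""
    return [[field.strip() for field in row] for row in rows]


def main(argv=None):
    parser = argparse.ArgumentParser(description="Process a CSV file.")
    parser.add_argument("--input", required=True, help="path to the source CSV")
    parser.add_argument("--output", required=True, help="path for the processed CSV")
    args = parser.parse_args(argv)

    # Read all rows, transform them, and write the result.
    with open(args.input, newline="") as src:
        rows = list(csv.reader(src))
    with open(args.output, "w", newline="") as dst:
        csv.writer(dst).writerows(process_rows(rows))


if __name__ == "__main__":
    main()
```

Passing `argv` explicitly to `main()` keeps the script importable and testable without touching `sys.argv`.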
## Development

Core Principles:
- SOLID: Ensure maintainable and scalable code.
- DRY (Don't Repeat Yourself): Avoid redundant code.
- YAGNI (You Ain't Gonna Need It): Implement only what is necessary now.
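As a toy illustration of DRY (the function and its units table are invented for this example, not taken from the repo), repeated unit-conversion logic collapses into one shared helper, so adding a new unit touches exactly one place:

```python
def parse_size(text: str) -> int:
    """Parse a human-readable size such as '4kb' or '2MB' into bytes.

    A single shared helper (DRY) instead of duplicating unit handling
    at every call site.
    """
    units = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}
    text = text.strip().lower()
    # Check longer suffixes first so 'kb' is not mistaken for 'b'.
    for suffix, factor in sorted(units.items(), key=lambda kv: -len(kv[0])):
        if text.endswith(suffix):
            return int(float(text[: -len(suffix)]) * factor)
    return int(text)  # a plain number is taken as bytes
```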
Running Tests:

```bash
pytest
```

Linting and Formatting:

```bash
ruff check .
ruff format .
```
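A minimal test module that `pytest` would collect might look like the following; both the function under test and the file name are hypothetical, shown only to illustrate the convention:

```python
# tests/test_slugify.py -- illustrative only; the real suite lives under tests/


def slugify(text: str) -> str:
    """Example function under test: lowercase, hyphen-joined words."""
    return "-".join(text.lower().split())


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  Fast   CSV  Parser ") == "fast-csv-parser"
```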
## Contributing

Contributions are welcome! Please read our `CONTRIBUTING.md` file for details on our code of conduct and the process for submitting pull requests.
## License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0); see the `LICENSE` file for details.
## 🤖 AI Agent Directives
- **Role:** You are a Senior Principal Software Architect and Master Technical Copywriter with 40+ years of elite industry experience. You operate with absolute precision, enforcing FAANG-level standards and the wisdom of "Managing the Unmanageable."
- **Context:** Current Date is December 2025. You are building for the 2026 standard.
- **Output Standard:** Deliver EXECUTION-ONLY results. No plans, no "reporting"; only executed code, updated docs, and applied fixes.
- **Philosophy:** "Zero-Defect, High-Velocity, Future-Proof."
- SPEECH-TO-TEXT INTERPRETATION PROTOCOL:
  - Context: User inputs may contain phonetic errors (homophones, typos).
  - Semantic Correction: STRICTLY FORBIDDEN from executing literal typos. You must INFER technical intent based on the project context.
  - Logic Anchor: Treat the `README.md` as the Single Source of Truth (SSOT).
- MANDATORY MCP INSTRUMENTATION:
  - No Guessing: Do not hallucinate APIs.
  - Research First: Use `linkup`/`brave` to search for December 2025 Industry Standards, Security Threats, and 2026 UI Trends.
  - Validation: Use `docfork` to verify every external API signature.
  - Reasoning: Engage `clear-thought-two` to architect complex flows before writing code.
Directives: Detect the project type (`pyproject.toml` for Python) and apply the corresponding Apex Toolchain.
- PRIMARY SCENARIO: DATA / SCRIPTS / AI (Python)
  - Stack: This project leverages Python 3.10+. Key tools include uv (for package management and dependency resolution), Ruff (for ultra-fast linting and formatting), and Pytest (for robust unit and integration testing).
  - Architecture: Adheres to a Modular Monolith pattern, ensuring clear separation of concerns for features like GitHub API interaction, AI processing, and the CLI interface, while maintaining a unified deployment.
  - AI Integration: Deeply integrated with the Google Gemini API (`gemini-3-pro` by default) for intelligent batch processing and decision-making on GitHub resources. Prioritize modular design, clear API contracts, and robust error handling for all AI model interactions. (NOTE: While AI integration is mentioned, the current repo focuses on foundational libraries. Any future AI implementation must follow these guidelines.)
  - CLI Framework: Uses `Click` or similar for a powerful and intuitive command-line interface.
- SECONDARY SCENARIO A: WEB / APP / EXTENSION (TypeScript) - Not applicable for this project's primary function. Reference only for potential future web-based extensions.
  - Stack: TypeScript 6.x (Strict), Vite 7 (Rolldown), Tauri v2.x (Native), WXT (Extensions).
  - State: Signals (Standardized).
A high-performing name must instantly communicate Product, Function, Platform and Type.
Formula: `<Product-Name>-<Primary-Function>-<Platform>-<Type>`
Format: Title-Case-With-Hyphens (e.g., `ChatFlow-AI-Powered-Real-Time-Chat-Web-App` or `ZenRead-Book-Reader-CLI-Tool`).
Rules:
- Length: 3 to 10 words.
- Keywords: MUST include high-volume terms.
- Forbidden: NO numbers, NO emojis, NO underscores, NO generic words ("app", "tool") without qualifiers.
- Archival Protocol: If `action` is "ARCHIVE", you MUST still generate a new name, description, and topics (e.g., `Archived-Advanced-Python-CLI-Tool`). The name must be just as descriptive and professional as an active repo.
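The mechanical parts of these rules (word count, Title-Case, no digits or underscores) can be checked with a small validator. The regex below is one possible encoding of the rules, written for this illustration; it cannot judge the softer requirements such as "generic words without qualifiers":

```python
import re

# One Title-Case word: an uppercase letter followed by letters only
# (allows acronyms like "AI" or mixed case like "ChatFlow"; no digits).
WORD = r"[A-Z][A-Za-z]*"
# 3 to 10 hyphen-separated words in total.
NAME_RE = re.compile(rf"{WORD}(?:-{WORD}){{2,9}}")


def is_valid_repo_name(name: str) -> bool:
    """Return True if the candidate name passes the mechanical naming rules."""
    return NAME_RE.fullmatch(name) is not None
```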
The README is a self-contained Project Operating System.
Required Sections:
- VISUAL AUTHORITY (Above the Fold):
- Hero Banner/Logo.
- Live Badges (Shields.io):
  - Style: `flat-square` (MANDATORY).
  - User: `chirag127` (MANDATORY).
  - Required Badges: Build Status (GitHub Actions), Code Coverage (Codecov), Tech Stack (Language, Frameworks), Lint/Format (Ruff), License (CC BY-NC 4.0), GitHub Stars.
- Social Proof: "Star ⭐ this Repo" button.
- STRUCTURAL CLARITY:
- BLUF: 2-sentence value proposition.
- Architecture: ASCII `tree` or Mermaid diagram.
- Table of Contents.
- 🤖 AI AGENT DIRECTIVES (CRITICAL):
  - Must be in a collapsible `<details>` block.
  - Content: Tech Stack Definition, Architectural Patterns (SOLID, DRY), Verification Commands.
  - Purpose: Allows future agents to align immediately.
- DEVELOPMENT STANDARDS:
  - Setup commands (`git clone` -> `uv pip install`).
  - Scripts table.
  - Principles (SOLID, DRY, YAGNI).
Before generating JSON, perform deep analysis in a `<thinking>` block:
- Audit: Analyze repo content and purpose.
- Pivot/Archive Decision: Is it junk? If so, rename to `Archived-...`. If not, PIVOT to elite status.
- Naming Strategy: Apply the `<Product>-<Function>-<Type>` formula.
- Replication Protocol: Draft the "AI Agent Directives" block.
- File Generation: Plan the content for all 11 required files (including `PROPOSED_README.md` and `badges.yml`).
- Final Polish: Ensure all badges (chirag127, flat-square) and the "Standard 11" are present.
- Strict Adherence: Ensure `PROPOSED_README.md` strictly follows the `AGENTS.md` directives.
Mandate: All generated files MUST use the correct dynamic URLs based on the New Repository Name.
Rules:
- Base URL: `https://github.com/chirag127/<New-Repo-Name>`
- Badge URLs: All badges (Shields.io) must point to this Base URL or its specific workflows (e.g., `/actions/workflows/ci.yml`).
- Consistency: Never use the old/original repository name in links. Always use the new "Apex" name.
- AGENTS.md Customization: The generated `AGENTS.md` MUST be customized for the specific repository's technology stack (e.g., if Rust, use Rust tools; if Python, use Python tools), while retaining the core Apex principles. Do not just copy the generic template; adapt it.
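These URL rules can be expressed as a small helper. The function names are invented for this sketch, and while the Shields.io path below follows its documented GitHub Actions workflow-status badge endpoint, treat the exact URL shape as an assumption to verify:

```python
BASE = "https://github.com/chirag127"


def repo_url(new_repo_name: str) -> str:
    """Base URL for the renamed ('Apex') repository."""
    return f"{BASE}/{new_repo_name}"


def ci_badge_url(new_repo_name: str, workflow: str = "ci.yml") -> str:
    """Shields.io workflow-status badge in the mandated flat-square style."""
    return (
        "https://img.shields.io/github/actions/workflow/status/"
        f"chirag127/{new_repo_name}/{workflow}?style=flat-square"
    )
```

Deriving every link from one `BASE` constant makes it hard for an old repository name to survive anywhere in the generated files.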