diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 77572178..2c9a7cc4 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -44,6 +44,17 @@ In case of any conflicting instructions, the following hierarchy shall apply. If - Write concise, efficient, and well-documented code for a global audience. - Consider non-native English speakers in code comments and documentation, using clear and simple language. +## Consistency & Uniformity + +Uniformity, clarity, and ease of use are paramount across all infrastructures and samples. Every infrastructure and every sample should look and feel as alike as possible so that users maintain familiarity as they move between them. A user who has completed one sample should never feel like they are viewing something entirely new when they open the next. + +- **Follow the established templates.** New infrastructures must follow the structure of existing infrastructures. New samples must follow `samples/_TEMPLATE`. Deviations are permitted only when a sample has genuinely unique requirements, and those deviations should be minimal. +- **Use consistent naming, headings, and cell order.** Markdown headings, variable names, section labels (e.g. `USER CONFIGURATION`, `SYSTEM CONFIGURATION`), emoji usage, and code cell ordering must match the patterns established by the template and existing artefacts. +- **Keep README structure uniform.** Infrastructure READMEs and sample READMEs each follow their own standard layout (see the guidelines below). Readers should be able to predict where to find objectives, configuration steps, and execution instructions. +- **Reuse shared utilities.** Use `NotebookHelper`, `InfrastructureNotebookHelper`, `ApimRequests`, `ApimTesting`, and shared Bicep modules rather than inventing ad-hoc alternatives. Shared code is the single best tool for enforcing uniformity. 
+- **Mirror tone and depth.** Similar sections across artefacts should use similar levels of detail. If one sample's README explains configuration in three sentences, another sample of comparable complexity should do the same. +- **Validate against peers.** Before finalising a new infrastructure or sample, compare it side-by-side with at least one existing peer to identify structural or stylistic drift. + ## General Coding Guidelines - All code, scripts, and configuration must be cross-platform compatible, supporting Windows, Linux, and macOS. If any special adjustments are to be made, please clearly indicate so in comments. @@ -57,7 +68,13 @@ In case of any conflicting instructions, the following hierarchy shall apply. If - Break down complex logic into smaller, manageable functions or classes. - Use type annotations and docstrings where appropriate. - Prefer standard libraries and well-maintained dependencies. -- Use samples/_TEMPLATE as a baseline for new samples. This template provides a consistent structure and format for new samples, ensuring they are easy to understand and maintain. +- Use `samples/_TEMPLATE` as the baseline for every new sample. The template provides the canonical structure, cell order, and format. New samples must not deviate from this structure unless the sample has genuinely unique requirements. + +## Linting and Style + +- Ruff is the Python linter; follow `pyproject.toml` for line length and rule selection. +- Prefer explicit imports over `from module import *` to avoid `F403/F405`. +- Wrap long strings or function calls to stay within the configured line length. ## Repository Structure @@ -70,6 +87,265 @@ In case of any conflicting instructions, the following hierarchy shall apply. If - `shared/`: Shared resources, such as Bicep modules, Python libraries, and other reusable components. - `tests/`: Contains unit tests for Python code and Bicep modules. This folder should contain all tests for all code in the repository. 
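The explicit-import rule under _Linting and Style_ above can be illustrated with a small, hedged sketch (stdlib modules stand in for this repository's own):

```python
# Wildcard imports trigger ruff F403, and each name later used from them flags F405,
# because neither the linter nor a reader can tell where the name came from.
# from posixpath import *              # avoid

from posixpath import basename, join   # explicit names are traceable and lint-clean

path = join('samples', '_TEMPLATE', 'create.ipynb')
print(basename(path))  # create.ipynb
```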
+## Infrastructure Development Guidelines + +Infrastructures live in `infrastructure/[infra-name]/` and provide the foundational Azure environment that samples deploy onto. All infrastructures must follow the same structure and patterns so that users experience a consistent workflow regardless of which architecture they choose. + +### Infrastructure File Structure + +Each infrastructure in `infrastructure/[infra-name]/` must contain: +- `create.ipynb` - Jupyter notebook that deploys the infrastructure +- `create_infrastructure.py` - Python helper script for infrastructure creation logic +- `main.bicep` - Bicep template for deploying the infrastructure resources +- `params.json` - Bicep parameter file +- `clean-up.ipynb` - Jupyter notebook for tearing down the infrastructure +- `README.md` - Documentation explaining the architecture, objectives, and execution steps + +### Infrastructure Jupyter Notebook (`create.ipynb`) Structure + +All infrastructure notebooks must follow this exact cell pattern: + +#### Cell 1: Configure & Create (Markdown) +- Heading: `### 🛠️ Configure Infrastructure Parameters & Create the Infrastructure` +- One-sentence description naming the specific infrastructure +- Bold reminder: `❗️ **Modify entries under _User-defined parameters_**.` +- Optional: a short note if the infrastructure has unique deployment phases (e.g.
private link approval) + +#### Cell 2: Configure & Create (Python Code) +- Import only `APIM_SKU`, `INFRASTRUCTURE` from `apimtypes`, `InfrastructureNotebookHelper` from `utils`, and `print_ok` from `console` +- `USER CONFIGURATION` section with `rg_location`, `index`, and `apim_sku` (comment each with inline description) +- `SYSTEM CONFIGURATION` section: instantiate `InfrastructureNotebookHelper` and call `create_infrastructure()` +- Final line: `print_ok('All done!')` + +#### Cell 3: Clean Up (Markdown) +- Heading: `### 🗑️ Clean up resources` +- Standard text: "When you're finished experimenting, it's advisable to remove all associated resources from Azure to avoid unnecessary cost. Use the clean-up notebook for that." + +### Infrastructure README.md + +Use this consistent layout: +- **Title** - Name of the architecture (e.g. "Simple API Management Infrastructure") +- **Description** - One to two sentences summarising the architecture and its value +- **Architecture diagram** - `<img>` tag referencing the SVG in the infrastructure folder +- **🎯 Objectives** - Numbered list of what the infrastructure provides +- **⚙️ Configuration** - One-sentence reference to the notebook's initialise-variables section +- **▶️ Execution** - Expected runtime badge and numbered steps to run the notebook +- **Reference links** - Markdown reference-style links at the bottom + +--- + +## Sample Development Guidelines + +### Sample File Structure + +Each sample in `samples/[sample-name]/` must contain: +- `create.ipynb` - Jupyter notebook that deploys and demonstrates the sample +- `main.bicep` - Bicep template for deploying sample resources +- `README.md` - Documentation explaining the sample, use cases, and concepts +- `*.xml` - APIM policy files (if applicable to the sample) +- `*.kql` - KQL (Kusto Query Language) files (if applicable to the sample) + +### Jupyter Notebook (`create.ipynb`) Structure + +Follow this pattern for **all** sample `create.ipynb` files.
Consistency here is critical - users should recognise the layout immediately from having used any other sample: + +#### Cell 1: Title & Overview (Markdown) +- Notebook title and brief description +- Reference to README.md for detailed information + +#### Cell 2: What This Sample Does (Markdown) +- Bullet list of key actions/demonstrations +- Keep focused on user-facing outcomes + +#### Cell 3: Initialize Notebook Variables (Markdown) +- Heading with note that only USER CONFIGURATION should be modified + +#### Cell 4: Initialize Notebook Variables (Python Code) +**This cell should be straightforward configuration only. No Azure SDK calls here.** + +Structure: +1. Import statements at the top: + - Standard library imports (time, json, tempfile, pathlib, datetime) and the third-party `requests` library + - `utils`, `apimtypes`, `console`, `azure_resources` (including `az`, `get_infra_rg_name`, `get_account_info`) +2. USER CONFIGURATION section: + - `rg_location`: Azure region (default: 'eastus2') + - `index`: Deployment index for resource naming (default: 1) + - `deployment`: Selected infrastructure type (reference INFRASTRUCTURE enum options) + - `api_prefix`: Prefix for APIs to avoid naming collisions + - `tags`: List of descriptive tags + - Sample-specific configuration (e.g., SKU, feature flags, thresholds) +3. SYSTEM CONFIGURATION section: + - `sample_folder`: Folder name matching the sample directory + - `rg_name`: Computed using `get_infra_rg_name(deployment, index)` + - `supported_infras`: List of compatible infrastructure types + - `nb_helper`: Instance of `utils.NotebookHelper(...)` - **Do NOT check if resource group exists here** +4. Get account info: + - Call `get_account_info()` to retrieve subscription ID and user info +5. Final line: `print_ok('Notebook initialized')` + +**Important:** Do NOT call `az` commands in this cell. Do NOT create a config dictionary. Do NOT initialize deployment outputs.
All Azure operations and variable definitions should happen in subsequent operation cells. + +#### Cell 5+: Functional Cells (Markdown + Code pairs) +- Each logical operation gets a markdown heading cell followed by one or more code cells + +**First operation cell (typically deployment):** + +⚠️ **CRITICAL**: Use `nb_helper.deploy_sample()` for all sample deployments. This method: + - Automatically validates the infrastructure exists (checks resource group) + - Prompts user to select or create infrastructure if needed + - Handles all Azure availability checks internally + - Returns deployment outputs including the APIM service name + +**Process:** +1. Print configuration summary using variables from init cell +2. Build `bicep_parameters` dict with sample-specific parameters (e.g., `location`, `costExportFrequency`) + - **DO NOT** manually query for APIM services + - **DO NOT** pass `apimServiceName` to `bicep_parameters` if the infrastructure already provides it +3. Call `nb_helper.deploy_sample(bicep_parameters)` to deploy Bicep template +4. Extract deployment outputs and store as **individual variables** (not in a dictionary) + - Example: `apim_name = output.get('apimServiceName')`, `app_insights_name = output.get('applicationInsightsName')` + +**Invalid approach** (do NOT do this): +```python +# ❌ WRONG - Manual APIM service queries +apim_list_result = az.run(f'az apim list --resource-group {rg_name}...') +apim_name = apim_list_result.json_data[0]['name'] # WRONG!
+ +# โŒ WRONG - Passing APIM name in bicep parameters when it should come from output +bicep_parameters = {'apimServiceName': {'value': apim_name}} +``` + +**Valid approach** (do this): +```python +# โœ… CORRECT - Let deploy_sample() handle infrastructure validation +bicep_parameters = { + 'location': {'value': rg_location}, + 'costExportFrequency': {'value': cost_export_frequency} +} +output = nb_helper.deploy_sample(bicep_parameters) +apim_name = output.get('apimServiceName') # Get from output +``` + +**Subsequent cells:** +- Check prerequisites with `if 'variable_name' not in locals(): raise SystemExit(1)` +- Use variables directly in code (e.g., `rg_name`, `subscription_id`, `apim_name`) +- Do NOT recreate or duplicate variables from previous cells +- Follow pattern: Markdown description โ†’ Code implementation โ†’ Output validation + +### Variable Management + +**Do NOT use a config dictionary.** Use individual variables that flow naturally through cells: +- Init cell defines user and system configuration variables +- Deployment cell creates new variables for deployment outputs (e.g., `apim_name`, `app_insights_name`) +- Subsequent cells reference these variables directly +- Check prerequisites using `if 'variable_name' not in locals():` pattern +- Variables created in one cell are automatically available in all subsequent cells + +Example: +```python +# Init cell +apim_sku = APIM_SKU.BASICV2 +deployment = INFRASTRUCTURE.SIMPLE_APIM +subscription_id = get_account_info()[2] + +# Deployment cell +apim_name = apim_services[0]['name'] +app_insights_name = output.get('applicationInsightsName') + +# Cost export cell +if 'app_insights_name' not in locals(): + raise SystemExit(1) +storage_account_id = f'/subscriptions/{subscription_id}/...' 
+``` + +### NotebookHelper Usage + +**What NotebookHelper does:** +- `__init__()`: Initializes with sample folder, resource group name, location, infrastructure type, and supported infrastructure list +- `deploy_sample(bicep_parameters)`: Orchestrates the complete deployment process: + 1. Checks if the desired resource group/infrastructure exists + 2. If not found, queries all available infrastructures and prompts user to select or create new + 3. Executes the Bicep deployment with provided parameters + 4. Returns `Output` object containing deployment results (resource names, IDs, connection strings, endpoints) + +**How to use:** +1. Initialize in the configuration cell (Cell 4): + ```python + nb_helper = utils.NotebookHelper( + sample_folder, + rg_name, + rg_location, + deployment, + supported_infras, + index=index, + apim_sku=APIM_SKU.BASICV2 # Optional: default is BASICV2 + ) + ``` + +2. Call in the deployment cell (Cell 5+): + ```python + bicep_parameters = { + 'location': {'value': rg_location}, + # ... other sample-specific parameters + } + output = nb_helper.deploy_sample(bicep_parameters) + ``` + +3. Extract outputs: + ```python + apim_name = output.get('apimServiceName') + app_insights_name = output.get('applicationInsightsName') + # ... extract all needed resources + ``` + +**CRITICAL: Do not bypass NotebookHelper!** +- ❌ Do NOT manually check `az group exists` +- ❌ Do NOT manually query `az apim list` to find APIM services +- ❌ Do NOT check if resources exist before deployment +- ✅ Let `deploy_sample()` handle all infrastructure validation, selection, and existence checking + +### Bicep Template (`main.bicep`) + +- Deploy only resources specific to the sample (don't re-deploy APIM infrastructure) +- Accept parameters for APIM service name, location, sample-specific config +- Use `shared/bicep/` modules where available for reusable components +- Return outputs for all created resources (names, IDs, connection strings, etc.)
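A rough sketch of these rules follows. It is illustrative only: the resource names, API versions, and the `existing` APIM reference are assumptions, not the repository's canonical contract.

```bicep
// Sketch only - the sample accepts the APIM name from the infrastructure
param apimServiceName string
param location string = resourceGroup().location

// Reference the infrastructure's existing APIM instance; never redeploy it here
resource apim 'Microsoft.ApiManagement/service@2022-08-01' existing = {
  name: apimServiceName
}

// Sample-specific resource only (hypothetical example)
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stsample${uniqueString(resourceGroup().id)}'
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}

// Surface everything notebook cells need downstream
output apimServiceName string = apim.name
output storageAccountName string = storage.name
```

In a real sample, prefer a `shared/bicep/` module over an inline resource wherever one exists.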
+ +### Sample README.md + +Every sample README must follow this standard layout to maintain uniformity across the repository. Users should be able to predict where to find each piece of information: + +- **Title** - `# Samples: [Sample Name]` +- **Description** - One to two sentences summarising the sample +- **Supported infrastructures badge** - `⚙️ **Supported infrastructures**: ...` +- **Expected runtime badge** - `👟 **Expected *Run All* runtime (excl. infrastructure prerequisite): ~N minutes**` +- **🎯 Objectives** - Numbered list of learning or experimentation goals +- **📝 Scenario** (if applicable) - Use case or scenario context; omit if not relevant +- **🛩️ Lab Components** - What the lab deploys and how it benefits the user +- **⚙️ Configuration** - How to choose an infrastructure and run the notebook +- **🧹 Clean Up** (if applicable) - Reference to a clean-up notebook or manual steps +- **🔗 Additional Resources** (if applicable) - Links to relevant documentation + +Match the heading emojis, heading levels, and section ordering exactly. If a section is not applicable, omit it entirely rather than leaving it empty. + +### Testing and Traffic Generation + +- Use the `ApimRequests` and `ApimTesting` classes from `apimrequests.py` and `apimtesting.py` for all API testing and traffic generation in notebooks. +- Do not use the `requests` library directly for calling APIM endpoints. +- Use `utils.get_endpoint(deployment, rg_name, apim_gateway_url)` to determine the correct endpoint URL and headers based on the infrastructure type.
+- Example: + ```python + from apimrequests import ApimRequests + from apimtesting import ApimTesting + + tests = ApimTesting("Sample Tests", sample_folder, nb_helper.deployment) + endpoint_url, request_headers = utils.get_endpoint(deployment, rg_name, apim_gateway_url) + reqs = ApimRequests(endpoint_url, subscription_key, request_headers) + + output = reqs.singleGet('/api-path', msg='Calling API') + tests.verify('Expected String' in output, True) + ``` + ## Language-specific Instructions - Python: see `.github/python.instructions.md` @@ -111,6 +387,31 @@ In case of any conflicting instructions, the following hierarchy shall apply. If - Less is more. Don't be too verbose in the diagrams. - Never include subscription IDs, resource group names, or any other sensitive information in the diagrams. That data is not relevant. +### KQL (Kusto Query Language) Instructions + +- Store KQL queries in dedicated `.kql` files within the sample folder rather than embedding them inline in Python code. This keeps notebooks readable and lets users copy-paste the query directly into a Log Analytics or Azure Data Explorer query editor. +- Load `.kql` files at runtime using `utils.determine_policy_path()` and `Path.read_text()`: + ```python + from pathlib import Path + kql_path = utils.determine_policy_path('my-query.kql', sample_folder) + kql_query = Path(kql_path).read_text(encoding='utf-8') + ``` +- Parameterise KQL queries using native `let` bindings. 
Define parameters as `let` statements prepended to the query body at runtime, keeping the `.kql` file free of Python string interpolation: + ```python + kusto_query = f"let buName = '{bu_name}';\nlet threshold = {alert_threshold};\n{kql_template}" + ``` +- In the `.kql` file, document available parameters in a comment header so users know which `let` bindings to supply: + ```kql + // Parameters (prepend as KQL 'let' bindings before running): + // let buName = 'bu-hr'; // Business unit subscription ID + // let threshold = 1000; // Request count threshold + ApiManagementGatewayLogs + | where ApimSubscriptionId == buName + | summarize RequestCount = count() + | where RequestCount > threshold + ``` +- When executing KQL via `az rest` or `az monitor log-analytics query`, write the query body to a temporary JSON file and pass it with `--body @tempfile.json` to avoid shell pipe-character interpretation issues on Windows. + ### API Management Policy XML Instructions - Policies should use camelCase for all variable names. diff --git a/.github/python.instructions.md b/.github/python.instructions.md index 2ac6068a..1cc2d6e8 100644 --- a/.github/python.instructions.md +++ b/.github/python.instructions.md @@ -5,16 +5,22 @@ applyTo: '**/*.py' # Copilot Instructions (Python) -## Critical: Load Pylint Configuration First +## Critical: Load Ruff Configuration First -**BEFORE making any changes to Python files**, always load the pylint configuration into context: +**BEFORE making any changes to Python files**, always load the ruff configuration into context: -1. Use `read_file` to load `.pylintrc` -2. Review the disabled rules and enabled checks +1. Use `read_file` to load `pyproject.toml` and review the `[tool.ruff]` and `[tool.ruff.lint]` sections +2. Review the ignored rules and per-file exceptions 3. Apply these rules when writing or modifying Python code This ensures all code changes comply with the project's linting standards from the start. 
+## Ruff Expectations + +- Use explicit imports (avoid `from module import *`), especially in notebooks, to prevent `F403/F405`. +- Keep lines within the configured length limit (see `pyproject.toml`), and wrap long strings or calls. +- Avoid f-strings without placeholders (e.g., `F541`). + ## Goals - Make changes that are easy to review, test, and maintain. @@ -62,8 +68,8 @@ This ensures all code changes comply with the project's linting standards from t Before completing any Python code changes, verify: -- All pylint warnings and errors are resolved (`pylint --rcfile=.pylintrc `) - - Pylint rules cover these, but we don't see .pylintrc being added to the context. Therefore, please pay special attention to these common occurrences: +- All ruff warnings and errors are resolved (`ruff check `) + - Ruff rules cover these, but we don't see `pyproject.toml` being added to context. Therefore, please pay special attention to these common occurrences: - No trailing whitespace - No assertion of empty strings in tests (use `assert not`) - Code follows PEP 8 and the style guidelines in this file diff --git a/.github/workflows/python-tests.yml b/.github/workflows/python-tests.yml index 1baa9491..c1522e0a 100644 --- a/.github/workflows/python-tests.yml +++ b/.github/workflows/python-tests.yml @@ -42,18 +42,18 @@ jobs: uv run python -c "import coverage.html; print(coverage.html.__file__)" # Lint the Python files & upload the result statistics - - name: Run pylint analysis - id: pylint + - name: Run ruff analysis + id: ruff run: | - mkdir -p tests/python/pylint/reports - # Use python -m pylint and tee to ensure output is captured and visible in logs - uv run python -m pylint --rcfile .pylintrc infrastructure samples setup shared 2>&1 | tee tests/python/pylint/reports/latest.txt + mkdir -p tests/python/ruff/reports + uv run ruff check infrastructure samples setup shared 2>&1 | tee tests/python/ruff/reports/latest.txt + uv run ruff check --output-format json infrastructure samples 
setup shared > tests/python/ruff/reports/latest.json 2>/dev/null || true - - name: Upload pylint reports + - name: Upload ruff reports uses: actions/upload-artifact@v4 with: - name: pylint-reports-${{ matrix.python-version }} - path: tests/python/pylint/reports/ + name: ruff-reports-${{ matrix.python-version }} + path: tests/python/ruff/reports/ # Static code analysis through simple compilation to ensure code is syntactically sound - name: Verify bytecode compilation @@ -82,17 +82,13 @@ jobs: - name: Extract and Summarize Metrics id: metrics run: | - # Pylint Score - TEXT_REPORT="tests/python/pylint/reports/latest.txt" - if [ -s "$TEXT_REPORT" ]; then - PYLINT_SCORE=$(grep -Eo 'Your code has been rated at [0-9.]+/10' "$TEXT_REPORT" | grep -Eo '[0-9.]+/10' | head -n 1) - if [ -n "$PYLINT_SCORE" ]; then - echo "pylint_score=$PYLINT_SCORE" >> "$GITHUB_OUTPUT" - else - echo "pylint_score=N/A" >> "$GITHUB_OUTPUT" - fi + # Ruff Issue Count + JSON_REPORT="tests/python/ruff/reports/latest.json" + if [ -f "$JSON_REPORT" ] && command -v jq &> /dev/null; then + RUFF_ISSUES=$(jq 'length' "$JSON_REPORT" 2>/dev/null || echo "N/A") + echo "ruff_issues=$RUFF_ISSUES" >> "$GITHUB_OUTPUT" else - echo "pylint_score=N/A" >> "$GITHUB_OUTPUT" + echo "ruff_issues=N/A" >> "$GITHUB_OUTPUT" fi # Coverage Percentage @@ -114,7 +110,7 @@ jobs: | Metric | Status | Value | | :--- | :---: | :--- | - | **Pylint Score** | ${{ steps.pylint.outcome == 'success' && '✅' || '⚠️' }} | `${{ steps.metrics.outputs.pylint_score }}` | + | **Ruff** | ${{ steps.ruff.outcome == 'success' && '✅' || '⚠️' }} | `${{ steps.metrics.outputs.ruff_issues }} issue(s)` | | **Unit Tests** | ${{ steps.pytest.outcome == 'success' && '✅' || '❌' }} | `${{ steps.pytest.outcome }}` | | **Code Coverage** | 📊 | `${{ steps.metrics.outputs.coverage }}` | @@ -122,7 +118,7 @@ jobs: - name: Generate Job Summary run: | - PYLINT_SCORE="${{ steps.metrics.outputs.pylint_score }}" + RUFF_ISSUES="${{
steps.metrics.outputs.ruff_issues }}" PYTEST_OUTCOME="${{ steps.pytest.outcome }}" COVERAGE="${{ steps.metrics.outputs.coverage }}" @@ -130,7 +126,7 @@ jobs: echo "" >> $GITHUB_STEP_SUMMARY echo "| Category | Status | Detail |" >> $GITHUB_STEP_SUMMARY echo "| :--- | :---: | :--- |" >> $GITHUB_STEP_SUMMARY - echo "| **Pylint** | ${{ steps.pylint.outcome == 'success' && '✅' || '⚠️' }} | Score: \`${PYLINT_SCORE:-N/A}\` |" >> $GITHUB_STEP_SUMMARY + echo "| **Ruff** | ${{ steps.ruff.outcome == 'success' && '✅' || '⚠️' }} | Issues: \`${RUFF_ISSUES:-N/A}\` |" >> $GITHUB_STEP_SUMMARY echo "| **Pytest** | ${{ steps.pytest.outcome == 'success' && '✅' || '❌' }} | Outcome: \`${PYTEST_OUTCOME:-N/A}\` |" >> $GITHUB_STEP_SUMMARY echo "| **Coverage** | 📊 | Total: \`${COVERAGE:-N/A}\` |" >> $GITHUB_STEP_SUMMARY echo "" >> $GITHUB_STEP_SUMMARY diff --git a/.gitignore b/.gitignore index 31a157c5..823ed46a 100644 --- a/.gitignore +++ b/.gitignore @@ -33,7 +33,7 @@ htmlcov/ tests/python/htmlcov/ # Pylint reports -tests/python/pylint/reports/ +tests/python/ruff/reports/ tests/python/$JsonReport tests/python/$TextReport diff --git a/.pylintrc b/.pylintrc deleted file mode 100644 index fbe19985..00000000 --- a/.pylintrc +++ /dev/null @@ -1,41 +0,0 @@ -[MAIN] -jobs = 0 -persistent = no - -[MESSAGES CONTROL] -enable = all -disable = - C0103, # Invalid name - removal of this disabled rule will require deliberate and careful refactoring - C0302, # Too many lines in module - R0801, # Duplicate code - removal of this disabled rule will require deliberate and careful refactoring - R0902, # Too many instance attributes - R0903, # Too few public methods - R0911, # Too many return statements - R0912, # Too many branches - R0913, # Too many arguments - R0914, # Too many locals - R0915, # Too many statements - R0917, # Too many positional arguments - R1702, # Too many nested blocks - -[REPORTS] -output-format = colorized -reports = no -score = yes -msg-template = {path}:{line}:
{msg_id}: {msg} - -[FORMAT] -max-line-length = 150 -expected-line-ending-format = LF - -[DESIGN] -# Allow unused arguments in test fixtures (pytest mocks/fixtures commonly have unused params) -dummy-variables-rgx = ^_|^mock_|^fixture_|^suppress_|^monkeypatch|^temp_|^fake_ - -# Disable specific rules for test files as they need to operate specifically and differently from production code -[MESSAGES CONTROL:tests/python/test_*.py] -disable = - W0212, # Access to a protected member - W0613, # Unused argument - W0621, # Redefining name from outer scopes - W0718 # Catching too general exception (test mocks need broad catches) diff --git a/.vscode/extensions.json b/.vscode/extensions.json index 3a74d06a..0ddf859d 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -1,10 +1,14 @@ { + "unwantedRecommendations": [ + "ms-python.pylint" + ], "recommendations": [ "ms-python.python", "ms-python.debugpy", "ms-toolsai.jupyter", "ms-azuretools.vscode-bicep", "ms-vscode.azurecli", + "charliermarsh.ruff", "GitHub.copilot", "GitHub.copilot-chat", "donjayamanne.vscode-default-python-kernel" diff --git a/.vscode/settings.json b/.vscode/settings.json index 6dbea931..fcae7935 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -4,9 +4,10 @@ }, "[python]": { "editor.codeActionsOnSave": { - "source.organizeImports": "explicit", - "source.unusedImports": "explicit" + "source.fixAll.ruff": "explicit", + "source.organizeImports.ruff": "explicit" }, + "editor.defaultFormatter": "charliermarsh.ruff", "editor.formatOnSave": true }, "editor.renderWhitespace": "trailing", @@ -17,6 +18,7 @@ "jupyter.kernels.trusted": [ "./.venv/Scripts/python.exe" ], + "pylint.enabled": false, "python.analysis.exclude": [ "**/node_modules", "**/__pycache__", diff --git a/README.md b/README.md index 8b7494e2..d14b3127 100644 --- a/README.md +++ b/README.md @@ -50,11 +50,11 @@ It's quick and easy to get started! 
| Infrastructure Name | Description | |:-------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [Simple API Management][infra-simple-apim] | **Just the basics with a publicly accessible API Management instance** fronting your APIs. This is the innermost way to experience and experiment with the APIM policies. | | [API Management & Container Apps][infra-apim-aca] | APIs are often implemented in containers running in **Azure Container Apps**. This architecture accesses the container apps publicly. It's beneficial to test both APIM and container app URLs to contrast and compare experiences of API calls through and bypassing APIM. It is not intended to be a security baseline. | -| [Front Door & API Management & Container Apps][infra-afd-apim-pe] | **A secure implementation of Azure Front Door connecting to APIM via the new private link integration!** This traffic, once it traverses through Front Door, rides entirely on Microsoft-owned and operated networks. The connection from APIM to Container Apps is secured but through a VNet configuration (it is also entirely possible to do this via private link). **APIM Standard V2** is used here to accept a private link from Front Door. | | [Application Gateway (Private Endpoint) & API Management & Container Apps][infra-appgw-apim-pe] | **A secure implementation of Azure Application Gateway connecting to APIM via the new private link integration!** This traffic, once it traverses through App Gateway, uses a private endpoint set up in the VNet's private endpoint subnet. The connection from APIM to Container Apps is secured but through a VNet configuration (it is also entirely possible to do this via private link). APIM Standard V2 is used here to accept a private link from App Gateway. 
| | [Application Gateway (VNet) & API Management & Container Apps][infra-appgw-apim] | Full VNet injection of APIM and ACA! APIM is shielded from any type of traffic unless it comes through App Gateway. This offers maximum isolation for instances in which customers seek VNet injection. | +| [Front Door & API Management & Container Apps][infra-afd-apim-pe] | **A secure implementation of Azure Front Door connecting to APIM via the new private link integration!** This traffic, once it traverses through Front Door, rides entirely on Microsoft-owned and operated networks. The connection from APIM to Container Apps is secured but through a VNet configuration (it is also entirely possible to do this via private link). **APIM Standard V2** is used here to accept a private link from Front Door. | +| [Simple API Management][infra-simple-apim] | **Just the basics with a publicly accessible API Management instance** fronting your APIs. This is the innermost way to experience and experiment with the APIM policies. | ## 📝 List of Samples @@ -65,11 +65,12 @@ It's quick and easy to get started! |:------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------|:------------------------------| | [AuthX][sample-authx] | Authentication and role-based authorization in a mock HR API. | All infrastructures | | [AuthX Pro][sample-authx-pro] | Authentication and role-based authorization in a mock product with multiple APIs and policy fragments. | All infrastructures | +| [Azure Maps][sample-azure-maps] | Proxying calls to Azure Maps with APIM policies. | All infrastructures | +| [Costing & Showback][sample-costing] | Track and allocate API costs per business unit using APIM subscriptions, Log Analytics, and Cost Management.
| All infrastructures | +| [Credential Manager (with Spotify)][sample-oauth-3rd-party] | Authenticate with APIM which then uses its Credential Manager with Spotify's REST API. | All infrastructures | | [General][sample-general] | Basic demo of APIM sample setup and policy usage. | All infrastructures | | [Load Balancing][sample-load-balancing] | Priority and weighted load balancing across backends. | apim-aca, afd-apim (with ACA) | | [Secure Blob Access][sample-secure-blob-access] | Secure blob access via the [valet key pattern][valet-key-pattern]. | All infrastructures | -| [Credential Manager (with Spotify)][sample-oauth-3rd-party] | Authenticate with APIM which then uses its Credential Manager with Spotify's REST API. | All infrastructures | -| [Azure Maps][sample-azure-maps] | Proxying calls to Azure Maps with APIM policies. | All infrastructures | ### Compatibility Matrix @@ -96,7 +97,7 @@ Use the interactive APIM Samples Developer CLI to verify setup, run tests, and m This menu-driven interface provides quick access to: - **Setup**: Complete environment setup and verify local setup - **Verify**: Show Azure account info, list soft-deleted resources, and list deployed infrastructures -- **Tests**: Run pylint, pytest, and full Python checks +- **Tests**: Run ruff, pytest, and full Python checks APIM Samples Developer CLI showing final linting, test, and code coverage results @@ -252,7 +253,7 @@ The repo uses the bicep linter and has rules defined in `bicepconfig.json`. See ### 🔍 Code Quality & Linting -The repository uses [pylint][pylint-docs] to maintain Python code quality standards. The configuration is located in `.pylintrc`, and the APIM Samples Developer CLI supports linting. +The repository uses [Ruff][ruff-docs] to maintain Python code quality standards. The configuration is located in `pyproject.toml` under `[tool.ruff]`, and the APIM Samples Developer CLI supports linting.
### 🧪 Testing & Code Coverage @@ -279,6 +280,8 @@ Furthermore, [Houssem Dellai][houssem-dellai] was instrumental in setting up a w [Andrew Redman][andrew-redman] for contributing the _Azure Maps_ sample. +[Naga Venkata Cheruvu][naga-cheruvu] for contributing the _Costing & Showback_ sample. + The original author of this project is [Simon Kurtz][simon-kurtz]. @@ -315,6 +318,7 @@ _For much more API Management content, please also check out [APIM Love](https:/ [bicep-linter-docs]: https://learn.microsoft.com/azure/azure-resource-manager/bicep/bicep-config-linter [houssem-dellai]: https://github.com/HoussemDellai [import-troubleshooting]: .devcontainer/IMPORT-TROUBLESHOOTING.md +[naga-cheruvu]: https://github.com/ncheruvu-MSFT [infra-afd-apim-pe]: ./infrastructure/afd-apim-pe [infra-apim-aca]: ./infrastructure/apim-aca [infra-appgw-apim]: ./infrastructure/appgw-apim/ @@ -323,11 +327,12 @@ _For much more API Management content, please also check out [APIM Love](https:/ [openssf]: https://www.bestpractices.dev/projects/11057 [pytest-docs]: https://docs.pytest.org/ [pytest-docs-versioned]: https://docs.pytest.org/en/8.2.x/ -[pylint-docs]: https://pylint.pycqa.org/ +[ruff-docs]: https://docs.astral.sh/ruff/ [python]: https://www.python.org/ [sample-authx]: ./samples/authX/README.md [sample-authx-pro]: ./samples/authX-pro/README.md [sample-azure-maps]: ./samples/azure-maps/README.md +[sample-costing]: ./samples/costing/README.md [sample-general]: ./samples/general/README.md [sample-load-balancing]: ./samples/load-balancing/README.md [sample-oauth-3rd-party]: ./samples/oauth-3rd-party/README.md diff --git a/assets/APIM-Samples-Slide-Deck.html b/assets/APIM-Samples-Slide-Deck.html new file mode 100644 index 00000000..23f36045 --- /dev/null +++ b/assets/APIM-Samples-Slide-Deck.html @@ -0,0 +1,367 @@ + + + + + + Azure API Management Samples - Overview + + + + + + +
+
+
+ +
Azure · API Management
+

Azure API Management
Samples

+

+ Deploy high-fidelity APIM infrastructures in minutes and experiment with + real-world policy samples — an innovative a la carte approach + that is neither too much nor too little. +

+ +
+
+ 🎓 +
+ Educate + Common APIM architectures seen across industries +
+
+
+ +
+ Empower + Safely experiment with APIM policies +
+
+
+ 🚀 +
+ Accelerate + High-fidelity building blocks for integration +
+
+
+ + https://aka.ms/apim/samples +
+ + + + +
+
+
+

What's Inside

+
+ +
+ +
+

5 Production-Grade Infrastructures

+
    +
  • Simple APIM — Publicly accessible; fastest to deploy (~5 min)
  • +
  • APIM & Container Apps — APIs in ACA with public access
  • +
  • Front Door & APIM (PE) — Private Link via Microsoft backbone
  • +
  • App Gateway & APIM (PE) — Private endpoint to APIM Standard V2
  • +
  • App Gateway & APIM (VNet) — Full VNet injection, max isolation
  • +
+
+ + +
+

8 Real-World Policy Samples

+
    +
  • AuthX — Authentication & role-based authorization
  • +
  • AuthX Pro — Multi-API auth with policy fragments
  • +
  • Azure Maps — Proxying calls to Azure Maps
  • +
  • Costing & Showback — Cost allocation per business unit
  • +
  • OAuth 3rd Party — Credential Manager with Spotify
  • +
  • General — Basic APIM setup & policy usage
  • +
  • Load Balancing — Priority & weighted backends
  • +
  • Secure Blob Access — Valet key pattern
  • +
+
+ + +
+

Key Features

+
    +
  • A la carte — Mix any sample with any compatible infrastructure
  • +
  • Jupyter Notebooks — Guided, interactive deployment & experimentation
  • +
  • Bicep IaC — Repeatable, parameterized infrastructure as code
  • +
  • Dev Container — One-click Codespaces or local Dev Container setup
  • +
  • Developer CLI — Interactive menu for setup, testing & verification
  • +
  • Cross-Platform — Windows, Linux, macOS support
  • +
  • Tested — CI with pytest, ruff, and coverage
  • +
+
+
+ +
+ 5 Infrastructures +
+ 8 Samples +
+ Codespaces Ready +
+ OpenSSF Best Practices +
+
+ + + + +
+
+
+

Infrastructure × Sample Compatibility

+
+

+ Most samples work with all infrastructures — a truly a la carte experience. +

+ +
+ Infrastructure and Sample Compatibility Matrix +
+ + +
+ + + diff --git a/assets/diagrams/Infrastructure-Sample-Compatibility.svg b/assets/diagrams/Infrastructure-Sample-Compatibility.svg index a99bb526..3028de74 100644 --- a/assets/diagrams/Infrastructure-Sample-Compatibility.svg +++ b/assets/diagrams/Infrastructure-Sample-Compatibility.svg @@ -1,4 +1,4 @@ - + @@ -14,16 +14,17 @@ - - + + + - Infrastructure & Sample Compatibility - Many samples can be deployed onto many infrastructures (many-to-many) + Infrastructure & Sample Compatibility + Many samples can be deployed onto many infrastructures (many-to-many) - INFRASTRUCTURES - SAMPLES + INFRASTRUCTURES + SAMPLES @@ -72,33 +73,36 @@ Azure Maps - - + - General - - + Costing + - Load Balancing + General - + - OAuth 3rd-Party + Load Balancing - + - Secure Blob Access + OAuth 3rd-Party + + + + + Secure Blob Access - - - - - + + + + + @@ -136,8 +140,7 @@ - - + @@ -148,10 +151,9 @@ - - - - + + + @@ -161,9 +163,10 @@ - - - + + + + 1 @@ -173,7 +176,7 @@ - + @@ -185,16 +188,32 @@ + + + + + + + + + + + + - + - - - Compatible + + + Compatible + + + + Not compatible - - - Not compatible (requires ACA backends) + + 1 Requires ACA backends + diff --git a/pyproject.toml b/pyproject.toml index 988a2a94..58a93571 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -21,8 +21,44 @@ dependencies = [ [dependency-groups] # Developer tooling (installed automatically by default with `uv sync`) dev = [ - "pylint>=4.0.0", + "ruff>=0.9.0", "pytest>=9.0.0", "pytest-cov>=7.0.0", "coverage>=7.6.4", ] + +[tool.ruff] +line-length = 150 + +[tool.ruff.lint] +select = [ + "E", # pycodestyle errors + "W", # pycodestyle warnings + "F", # Pyflakes + "PLC", # Pylint convention + "PLE", # Pylint error + "PLR", # Pylint refactoring + "PLW", # Pylint warning +] +ignore = [ + "PLR0911", # Too many return statements + "PLR0912", # Too many branches + "PLR0913", # Too many arguments + "PLR0914", # Too many local variables + "PLR0915", # Too many statements + "PLR0917", # Too many positional arguments + 
"PLR1702", # Too many nested blocks +] +dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+_*)|mock_|fixture_|suppress_|temp_|fake_|monkeypatch.*)" + +[tool.ruff.lint.per-file-ignores] +"*.ipynb" = [ + "F821", # Undefined name โ€” notebook cells share state across cells + "F401", # Imported but unused โ€” imports in one cell are used in later cells +] +"tests/python/test_*.py" = [ + "SLF001", # Private member access + "ARG001", # Unused function argument + "ARG002", # Unused method argument + "BLE001", # Blind exception catch +] diff --git a/samples/_TEMPLATE/create.ipynb b/samples/_TEMPLATE/create.ipynb index 2cab293f..e1585619 100644 --- a/samples/_TEMPLATE/create.ipynb +++ b/samples/_TEMPLATE/create.ipynb @@ -16,7 +16,9 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "from typing import List\n", + "\n", + "from apimtypes import API, APIM_SKU, INFRASTRUCTURE\n", "from console import print_error, print_ok\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -39,8 +41,22 @@ "\n", "sample_folder = '_TEMPLATE'\n", "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, index = index, apim_sku = apim_sku)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " INFRASTRUCTURE.APPGW_APIM_PE,\n", + " INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", "\n", "# Define the APIs and their operations and policies\n", "\n", @@ -85,7 +101,10 @@ "\n", "if output.success:\n", " # Extract deployment 
outputs for testing\n", - " afd_endpoint_url = output.get('fdeSecureUrl', 'Front Door Endpoint URL') # may be deleted if Front Door is not part of a supported infrastructure\n", + " afd_endpoint_url = output.get(\n", + " 'fdeSecureUrl',\n", + " 'Front Door Endpoint URL'\n", + " ) # may be deleted if Front Door is not part of a supported infrastructure\n", " apim_name = output.get('apimServiceName', 'APIM Service Name')\n", " apim_gateway_url = output.get('apimResourceGatewayURL', 'APIM API Gateway URL')\n", " apim_apis = output.getJson('apiOutputs', 'APIs')\n", @@ -113,25 +132,12 @@ "metadata": {}, "outputs": [], "source": [ - "from apimrequests import ApimRequests\n", "from apimtesting import ApimTesting\n", "\n", "# Initialize testing framework\n", "tests = ApimTesting(\"Template Sample Tests\", sample_folder, nb_helper.deployment)\n", "\n", - "# Example API testing (uncomment and customize as needed)\n", - "# Determine endpoints, URLs, etc. prior to test execution\n", - "# endpoint_url, request_headers = utils.get_endpoint(deployment, rg_name, apim_gateway_url)\n", - "\n", - "# ********** TEST EXECUTIONS **********\n", - "\n", - "# reqs = ApimRequests(afd_endpoint_url, api_subscription_key)\n", - "# output = reqs.singleGet('/', msg = 'Calling API via Azure Front Door. 
Expect 200.')\n", - "# tests.verify('expected_value' in output, True)\n", - "\n", - "tests.print_summary()\n", - "\n", - "print_ok('All done!')" + "# Example API testing (uncomment and customize as needed)" ] } ], diff --git a/samples/authX-pro/create.ipynb b/samples/authX-pro/create.ipynb index 62da61e6..46a045c2 100644 --- a/samples/authX-pro/create.ipynb +++ b/samples/authX-pro/create.ipynb @@ -16,7 +16,9 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "from typing import List\n", + "\n", + "from apimtypes import API, APIM_SKU, GET_APIOperation, INFRASTRUCTURE, NamedValue, PolicyFragment, POST_APIOperation, Product, Role\n", "from console import print_error, print_ok\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -28,8 +30,8 @@ "index = 1\n", "apim_sku = APIM_SKU.BASICV2 # Options: 'BASICV2', 'STANDARDV2', 'PREMIUMV2'\n", "deployment = INFRASTRUCTURE.SIMPLE_APIM # Options: see supported_infras below\n", - "api_prefix = 'authX-pro-' # ENTER A PREFIX FOR THE APIS TO REDUCE COLLISION POTENTIAL WITH OTHER SAMPLES\n", - "tags = ['authX-pro', 'jwt', 'policy-fragment'] # ENTER DESCRIPTIVE TAG(S)\n", + "api_prefix = 'authX-pro-' # ENTER A PREFIX FOR THE APIS TO REDUCE COLLISION POTENTIAL WITH OTHER SAMPLES\n", + "tags = ['authX-pro', 'jwt', 'policy-fragment'] # ENTER DESCRIPTIVE TAG(S)\n", "\n", "\n", "\n", @@ -40,8 +42,23 @@ "# Create the notebook helper with JWT support\n", "sample_folder = 'authX-pro'\n", "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, True, index = index, apim_sku = apim_sku)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " 
INFRASTRUCTURE.APPGW_APIM_PE,\n", + " INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " True,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", "\n", "# Define the APIs and their operations and policies\n", "\n", @@ -71,9 +88,16 @@ "\n", "hr_product_name = 'hr'\n", "products: List[Product] = [\n", - " Product(hr_product_name, 'Human Resources',\n", - " 'Product for Human Resources APIs providing access to employee data, organizational structure, benefits information, and HR management services. Includes JWT-based authentication for HR members.',\n", - " 'published', True, False, pol_hr_product)\n", + " Product(\n", + " hr_product_name,\n", + " 'Human Resources',\n", + " 'Product for Human Resources APIs providing access to employee data, organizational structure, '\n", + " 'benefits information, and HR management services. Includes JWT-based authentication for HR members.',\n", + " 'published',\n", + " True,\n", + " False,\n", + " pol_hr_product\n", + " )\n", "]\n", "\n", "# Define the APIs and their operations and policies\n", @@ -85,15 +109,33 @@ "hr_employees_path = f'{api_prefix}employees'\n", "hr_employees_get = GET_APIOperation('Gets the employees', pol_hr_get,)\n", "hr_employees_post = POST_APIOperation('Creates a new employee', pol_hr_post)\n", - "hr_employees = API(hr_employees_path, 'Employees Pro', hr_employees_path, 'This is a Human Resources API for employee information', pol_hr_all_operations_pro,\n", - " operations = [hr_employees_get, hr_employees_post], tags = tags, productNames = [hr_product_name], subscriptionRequired = False)\n", + "hr_employees = API(\n", + " hr_employees_path,\n", + " 'Employees Pro',\n", + " hr_employees_path,\n", + " 'This is a Human Resources API for employee information',\n", + " pol_hr_all_operations_pro,\n", + " operations = [hr_employees_get, hr_employees_post],\n", 
+ " tags = tags,\n", + " productNames = [hr_product_name],\n", + " subscriptionRequired = False\n", + ")\n", "\n", "# API 2: Benefits (HR)\n", "hr_benefits_path = f'{api_prefix}benefits'\n", "hr_benefits_get = GET_APIOperation('Gets employee benefits', pol_hr_get)\n", "hr_benefits_post = POST_APIOperation('Creates employee benefits', pol_hr_post)\n", - "hr_benefits = API(hr_benefits_path, 'Benefits Pro', hr_benefits_path, 'This is a Human Resources API for employee benefits', pol_hr_all_operations_pro,\n", - " operations = [hr_benefits_get, hr_benefits_post], tags = tags, productNames = [hr_product_name], subscriptionRequired = False)\n", + "hr_benefits = API(\n", + " hr_benefits_path,\n", + " 'Benefits Pro',\n", + " hr_benefits_path,\n", + " 'This is a Human Resources API for employee benefits',\n", + " pol_hr_all_operations_pro,\n", + " operations = [hr_benefits_get, hr_benefits_post],\n", + " tags = tags,\n", + " productNames = [hr_product_name],\n", + " subscriptionRequired = False\n", + ")\n", "\n", "# APIs Array\n", "apis: List[API] = [hr_employees, hr_benefits]\n", @@ -173,7 +215,10 @@ "\n", "# 1) HR Administrator\n", "# Create a JSON Web Token with a payload and sign it with the symmetric key from above.\n", - "encoded_jwt_token_hr_admin = AuthFactory.create_symmetric_jwt_token_for_user(UserHelper.get_user_by_role(Role.HR_ADMINISTRATOR), nb_helper.jwt_key_value)\n", + "encoded_jwt_token_hr_admin = AuthFactory.create_symmetric_jwt_token_for_user(\n", + " UserHelper.get_user_by_role(Role.HR_ADMINISTRATOR),\n", + " nb_helper.jwt_key_value\n", + ")\n", "print(f'\\nJWT token for HR Admin:\\n{encoded_jwt_token_hr_admin}') # this value is used to call the APIs via APIM\n", "\n", "# Set up an APIM requests object with the JWT token\n", @@ -181,21 +226,36 @@ "reqsApimAdmin.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_admin}'\n", "\n", "# Call APIM\n", - "output = reqsApimAdmin.singleGet(hr_employees_path, msg = 'Calling GET Employees API via API 
Management Gateway URL. Expect 200.')\n", + "output = reqsApimAdmin.singleGet(\n", + " hr_employees_path,\n", + " msg = 'Calling GET Employees API via API Management Gateway URL. Expect 200.'\n", + ")\n", "tests.verify(output, 'Successful GET')\n", "\n", - "output = reqsApimAdmin.singlePost(hr_employees_path, msg = 'Calling POST Employees API via API Management Gateway URL. Expect 200.')\n", + "output = reqsApimAdmin.singlePost(\n", + " hr_employees_path,\n", + " msg = 'Calling POST Employees API via API Management Gateway URL. Expect 200.'\n", + ")\n", "tests.verify(output, 'Successful POST')\n", "\n", - "output = reqsApimAdmin.singleGet(hr_benefits_path, msg = 'Calling GET Benefits API via API Management Gateway URL. Expect 200.')\n", + "output = reqsApimAdmin.singleGet(\n", + " hr_benefits_path,\n", + " msg = 'Calling GET Benefits API via API Management Gateway URL. Expect 200.'\n", + ")\n", "tests.verify(output, 'Successful GET')\n", "\n", - "output = reqsApimAdmin.singlePost(hr_benefits_path, msg = 'Calling POST Benefits API via API Management Gateway URL. Expect 200.')\n", + "output = reqsApimAdmin.singlePost(\n", + " hr_benefits_path,\n", + " msg = 'Calling POST Benefits API via API Management Gateway URL. 
Expect 200.'\n", + ")\n", "tests.verify(output, 'Successful POST')\n", "\n", "# 2) HR Associate\n", "# Create a JSON Web Token with a payload and sign it with the symmetric key from above.\n", - "encoded_jwt_token_hr_associate = AuthFactory.create_symmetric_jwt_token_for_user(UserHelper.get_user_by_role(Role.HR_ASSOCIATE), nb_helper.jwt_key_value)\n", + "encoded_jwt_token_hr_associate = AuthFactory.create_symmetric_jwt_token_for_user(\n", + " UserHelper.get_user_by_role(Role.HR_ASSOCIATE),\n", + " nb_helper.jwt_key_value\n", + ")\n", "print(f'\\nJWT token for HR Associate:\\n{encoded_jwt_token_hr_associate}') # this value is used to call the APIs via APIM\n", "\n", "# Set up an APIM requests object with the JWT token\n", @@ -203,16 +263,28 @@ "reqsApimAssociate.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_associate}'\n", "\n", "# Call APIM\n", - "output = reqsApimAssociate.singleGet(hr_employees_path, msg = 'Calling GET Employees API via API Management Gateway URL. Expect 200.')\n", + "output = reqsApimAssociate.singleGet(\n", + " hr_employees_path,\n", + " msg = 'Calling GET Employees API via API Management Gateway URL. Expect 200.'\n", + ")\n", "tests.verify(output, 'Successful GET')\n", "\n", - "output = reqsApimAssociate.singlePost(hr_employees_path, msg = 'Calling POST Employees API via API Management Gateway URL. Expect 403.')\n", + "output = reqsApimAssociate.singlePost(\n", + " hr_employees_path,\n", + " msg = 'Calling POST Employees API via API Management Gateway URL. Expect 403.'\n", + ")\n", "tests.verify(output, 'Access denied - no matching roles found')\n", "\n", - "output = reqsApimAssociate.singleGet(hr_benefits_path, msg = 'Calling GET Benefits API via API Management Gateway URL. Expect 200.')\n", + "output = reqsApimAssociate.singleGet(\n", + " hr_benefits_path,\n", + " msg = 'Calling GET Benefits API via API Management Gateway URL. 
Expect 200.'\n", + ")\n", "tests.verify(output, 'Successful GET')\n", "\n", - "output = reqsApimAssociate.singlePost(hr_benefits_path, msg = 'Calling POST Benefits API via API Management Gateway URL. Expect 403.')\n", + "output = reqsApimAssociate.singlePost(\n", + " hr_benefits_path,\n", + " msg = 'Calling POST Benefits API via API Management Gateway URL. Expect 403.'\n", + ")\n", "tests.verify(output, 'Access denied - no matching roles found')\n", "\n", "# 3) HR Administrator but no HR product subscription key (api-key)\n", @@ -221,7 +293,13 @@ "reqsApimAdminNoHrProduct.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_admin}'\n", "\n", "# Call APIM\n", - "output = reqsApimAdminNoHrProduct.singleGet(hr_employees_path, msg = 'Calling GET Employees API via API Management Gateway URL but with no HR product subscription key. Expect 403.')\n", + "output = reqsApimAdminNoHrProduct.singleGet(\n", + " hr_employees_path,\n", + " msg = (\n", + " 'Calling GET Employees API via API Management Gateway URL '\n", + " 'but with no HR product subscription key. Expect 403.'\n", + " )\n", + ")\n", "tests.verify(output, 'Access denied - no matching product found')\n", "\n", "# 4) HR Associate but no HR product subscription key (api-key)\n", @@ -230,7 +308,13 @@ "reqsApimAssociateNoHrProduct.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_associate}'\n", "\n", "# Call APIM\n", - "output = reqsApimAssociateNoHrProduct.singleGet(hr_employees_path, msg = 'Calling GET Employees API via API Management Gateway URL but with no HR product subscription key. Expect 403.')\n", + "output = reqsApimAssociateNoHrProduct.singleGet(\n", + " hr_employees_path,\n", + " msg = (\n", + " 'Calling GET Employees API via API Management Gateway URL '\n", + " 'but with no HR product subscription key. 
Expect 403.'\n", + " )\n", + ")\n", "tests.verify(output, 'Access denied - no matching product found')\n", "\n", "tests.print_summary()\n", diff --git a/samples/authX/create.ipynb b/samples/authX/create.ipynb index c09d042e..e51c5787 100644 --- a/samples/authX/create.ipynb +++ b/samples/authX/create.ipynb @@ -16,7 +16,9 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "from typing import List\n", + "\n", + "from apimtypes import API, APIM_SKU, GET_APIOperation, INFRASTRUCTURE, NamedValue, POST_APIOperation, Role\n", "from console import print_error, print_ok\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -40,8 +42,23 @@ "# Create the notebook helper with JWT support\n", "sample_folder = 'authX'\n", "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, True, index = index, apim_sku = apim_sku)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " INFRASTRUCTURE.APPGW_APIM_PE,\n", + " INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " True,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", "\n", "# Define the APIs and their operations and policies\n", "\n", @@ -70,7 +87,16 @@ "hr_employees_path = f'{api_prefix}employees'\n", "hr_employees_get = GET_APIOperation('Gets the employees', pol_hr_get)\n", "hr_employees_post = POST_APIOperation('Creates a new employee', pol_hr_post)\n", - "hr_employees = API(hr_employees_path, 'Employees', hr_employees_path, 'This is a Human Resources API to obtain employee information', 
pol_hr_all_operations, operations = [hr_employees_get, hr_employees_post], tags = tags, subscriptionRequired = True)\n", + "hr_employees = API(\n", + " hr_employees_path,\n", + " 'Employees',\n", + " hr_employees_path,\n", + " 'This is a Human Resources API to obtain employee information',\n", + " pol_hr_all_operations,\n", + " operations = [hr_employees_get, hr_employees_post],\n", + " tags = tags,\n", + " subscriptionRequired = True\n", + ")\n", "\n", "# APIs Array\n", "apis: List[API] = [hr_employees]\n", @@ -147,39 +173,64 @@ "# ********** TEST EXECUTIONS **********\n", "\n", "# 1) HR Administrator - Full access\n", - "encoded_jwt_token_hr_admin = AuthFactory.create_symmetric_jwt_token_for_user(UserHelper.get_user_by_role(Role.HR_ADMINISTRATOR), nb_helper.jwt_key_value)\n", + "encoded_jwt_token_hr_admin = AuthFactory.create_symmetric_jwt_token_for_user(\n", + " UserHelper.get_user_by_role(Role.HR_ADMINISTRATOR),\n", + " nb_helper.jwt_key_value\n", + ")\n", "print(f'\\nJWT token for HR Admin:\\n{encoded_jwt_token_hr_admin}')\n", "\n", "reqsApimAdmin = ApimRequests(endpoint_url, hr_api_apim_subscription_key, request_headers)\n", "reqsApimAdmin.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_admin}'\n", "\n", - "output = reqsApimAdmin.singleGet(hr_employees_path, msg = 'Calling GET Employees API as HR Admin. Expect 200.')\n", + "output = reqsApimAdmin.singleGet(\n", + " hr_employees_path,\n", + " msg = 'Calling GET Employees API as HR Admin. Expect 200.'\n", + ")\n", "tests.verify(output, 'Returning a mock employee')\n", "\n", - "output = reqsApimAdmin.singlePost(hr_employees_path, msg = 'Calling POST Employees API as HR Admin. Expect 200.')\n", + "output = reqsApimAdmin.singlePost(\n", + " hr_employees_path,\n", + " msg = 'Calling POST Employees API as HR Admin. 
Expect 200.'\n", + ")\n", "tests.verify(output, 'A mock employee has been created.')\n", "\n", "# 2) HR Associate - Read-only access\n", - "encoded_jwt_token_hr_associate = AuthFactory.create_symmetric_jwt_token_for_user(UserHelper.get_user_by_role(Role.HR_ASSOCIATE), nb_helper.jwt_key_value)\n", + "encoded_jwt_token_hr_associate = AuthFactory.create_symmetric_jwt_token_for_user(\n", + " UserHelper.get_user_by_role(Role.HR_ASSOCIATE),\n", + " nb_helper.jwt_key_value\n", + ")\n", "print(f'\\nJWT token for HR Associate:\\n{encoded_jwt_token_hr_associate}')\n", "\n", "reqsApimAssociate = ApimRequests(endpoint_url, hr_api_apim_subscription_key, request_headers)\n", "reqsApimAssociate.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_associate}'\n", "\n", - "output = reqsApimAssociate.singleGet(hr_employees_path, msg = 'Calling GET Employees API as HR Associate. Expect 200.')\n", + "output = reqsApimAssociate.singleGet(\n", + " hr_employees_path,\n", + " msg = 'Calling GET Employees API as HR Associate. Expect 200.'\n", + ")\n", "tests.verify(output, 'Returning a mock employee')\n", "\n", - "output = reqsApimAssociate.singlePost(hr_employees_path, msg = 'Calling POST Employees API as HR Associate. Expect 403.')\n", + "output = reqsApimAssociate.singlePost(\n", + " hr_employees_path,\n", + " msg = 'Calling POST Employees API as HR Associate. Expect 403.'\n", + ")\n", "tests.verify(output, '')\n", "\n", "# 3) Missing API subscription key\n", "reqsNoApiSubscription = ApimRequests(endpoint_url, None, request_headers)\n", "reqsNoApiSubscription.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_admin}'\n", "\n", - "output = reqsNoApiSubscription.singleGet(hr_employees_path, msg = 'Calling GET Employees API without API subscription key. Expect 401.')\n", + "output = reqsNoApiSubscription.singleGet(\n", + " hr_employees_path,\n", + " msg = 'Calling GET Employees API without API subscription key. 
Expect 401.'\n", + ")\n", "outputJson = utils.get_json(output)\n", "tests.verify(outputJson['statusCode'], 401)\n", - "tests.verify(outputJson['message'], 'Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API.')\n", + "tests.verify(\n", + " outputJson['message'],\n", + " 'Access denied due to missing subscription key. '\n", + " 'Make sure to include subscription key when making requests to an API.'\n", + ")\n", "\n", "tests.print_summary()\n", "\n", diff --git a/samples/azure-maps/create.ipynb b/samples/azure-maps/create.ipynb index 49a8e2a7..19123e0f 100644 --- a/samples/azure-maps/create.ipynb +++ b/samples/azure-maps/create.ipynb @@ -20,7 +20,9 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "from typing import List\n", + "\n", + "from apimtypes import API, APIM_SKU, APIOperation, GET_APIOperation2, HTTP_VERB, INFRASTRUCTURE, NamedValue\n", "from console import print_error, print_info, print_ok\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -43,8 +45,22 @@ "\n", "sample_folder = 'azure-maps'\n", "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, index = index, apim_sku = apim_sku)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " INFRASTRUCTURE.APPGW_APIM_PE,\n", + " INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", "azure_maps_url = 'https://atlas.microsoft.com'\n", "\n", "# Define the APIs and their 
operations and policies\n", @@ -61,12 +77,39 @@ "\n", "# API 1: Maps\n", "map_path = f'{api_prefix}map'\n", - "mapApi_v2_default_get = GET_APIOperation2('get-default-route', 'Get default route', '/default/*', 'This is the default route that will allow all requests to go through to the backend api', pol_map_default_route_v2_aad_get)\n", - "mapApi_v1_async_post = APIOperation('async-geocode-batch', 'Async Geocode Batch', '/geocode/batch/async', HTTP_VERB.POST, 'Post geocode batch async endpoint', pol_map_async_geocode_batch_v1_keyauth_post)\n", - "mapApi_v2_geocode_get = GET_APIOperation2('get-geocode', 'Get Geocode', '/geocode', 'Get geocode endpoint', pol_map_geocode_v2_aad_get)\n", - "\n", - "maps = API(map_path, 'Map API', map_path, 'This is the proxy for Azure Maps',\n", - " operations = [mapApi_v2_default_get, mapApi_v1_async_post, mapApi_v2_geocode_get], tags = tags, serviceUrl = azure_maps_url)\n", + "mapApi_v2_default_get = GET_APIOperation2(\n", + " 'get-default-route',\n", + " 'Get default route',\n", + " '/default/*',\n", + " 'This is the default route that will allow all requests to go through '\n", + " 'to the backend api',\n", + " pol_map_default_route_v2_aad_get\n", + ")\n", + "mapApi_v1_async_post = APIOperation(\n", + " 'async-geocode-batch',\n", + " 'Async Geocode Batch',\n", + " '/geocode/batch/async',\n", + " HTTP_VERB.POST,\n", + " 'Post geocode batch async endpoint',\n", + " pol_map_async_geocode_batch_v1_keyauth_post\n", + ")\n", + "mapApi_v2_geocode_get = GET_APIOperation2(\n", + " 'get-geocode',\n", + " 'Get Geocode',\n", + " '/geocode',\n", + " 'Get geocode endpoint',\n", + " pol_map_geocode_v2_aad_get\n", + ")\n", + "\n", + "maps = API(\n", + " map_path,\n", + " 'Map API',\n", + " map_path,\n", + " 'This is the proxy for Azure Maps',\n", + " operations = [mapApi_v2_default_get, mapApi_v1_async_post, mapApi_v2_geocode_get],\n", + " tags = tags,\n", + " serviceUrl = azure_maps_url\n", + ")\n", "\n", "# APIs Array\n", "apis: List[API] = 
[maps]\n", @@ -128,7 +171,6 @@ "source": [ "from apimtesting import ApimTesting\n", "from apimrequests import ApimRequests\n", - "import json\n", "\n", "# Initialize testing framework\n", "tests = ApimTesting(\"Azure Maps Sample Tests\", sample_folder, deployment)\n", @@ -144,12 +186,20 @@ "# Test Azure Maps API endpoints\n", "print_info(\"Testing Azure Maps API operations...\")\n", "\n", + "location_query = '15127%20NE%2024th%20Street%20Redmond%20WA'\n", + "\n", "# Test default route with SAS token auth\n", - "output = reqs.singleGet(f'{map_path}/default/geocode?query=15127%20NE%2024th%20Street%20Redmond%20WA', msg = 'Calling Default Route API with SAS Token Auth. Expect 200.')\n", + "output = reqs.singleGet(\n", + " f'{map_path}/default/geocode?query={location_query}',\n", + " msg = 'Calling Default Route API with SAS Token Auth. Expect 200.'\n", + ")\n", "tests.verify('address' in output, True)\n", "\n", "# Test geocode v2 with AAD auth\n", - "output = reqs.singleGet(f'{map_path}/geocode?query=15127%20NE%2024th%20Street%20Redmond%20WA', msg = 'Calling Geocode v2 API with AAD Auth. Expect 200.')\n", + "output = reqs.singleGet(\n", + " f'{map_path}/geocode?query={location_query}',\n", + " msg = 'Calling Geocode v2 API with AAD Auth. Expect 200.'\n", + ")\n", "tests.verify('address' in output, True)\n", "\n", "# TODO: 12/05/25 - SJK: Need to fix the implementation for this as it presently fails.\n", @@ -162,7 +212,10 @@ "# {\"query\": \"?query=Pike Pl, Seattle, WA 98101&lat=47.610970&lon=-122.342469&radius=1000\"},\n", "# {\"query\": \"?query=Champ de Mars, 5 Avenue Anatole France, 75007 Paris, France&limit=1\"}\n", "# ]\n", - "# }, msg = 'Calling Async Geocode Batch v1 API with Share Key Auth. Expect initial 202, then a 200 on the polling response', timeout = 120, poll_interval = 3)\n", + "# }, msg = (\n", + "# 'Calling Async Geocode Batch v1 API with Share Key Auth. 
'\n", + "# 'Expect initial 202, then a 200 on the polling response'\n", + "# ), timeout = 120, poll_interval = 3)\n", "\n", "# # Verify batch response contains successful requests\n", "# tests.verify('summary' in output and 'successfulRequests' in output and\n", @@ -170,11 +223,17 @@ "\n", "# Test unauthorized access (should fail with 401)\n", "reqsNoApiSubscription = ApimRequests(endpoint_url, None, request_headers)\n", - "output = reqsNoApiSubscription.singleGet(f'{map_path}/geocode?query=15127%20NE%2024th%20Street%20Redmond%20WA',\n", - " msg='Calling Geocode v2 API without API subscription key. Expect 401.')\n", + "output = reqsNoApiSubscription.singleGet(\n", + " f'{map_path}/geocode?query={location_query}',\n", + " msg = 'Calling Geocode v2 API without API subscription key. Expect 401.'\n", + ")\n", "outputJson = utils.get_json(output)\n", "tests.verify(outputJson['statusCode'], 401)\n", - "tests.verify(outputJson['message'], 'Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API.')\n", + "tests.verify(\n", + " outputJson['message'],\n", + " 'Access denied due to missing subscription key. '\n", + " 'Make sure to include subscription key when making requests to an API.'\n", + ")\n", "\n", "tests.print_summary()\n", "print_ok('โœ… All tests completed successfully!')" diff --git a/samples/costing/README.md b/samples/costing/README.md new file mode 100644 index 00000000..6e5f78c0 --- /dev/null +++ b/samples/costing/README.md @@ -0,0 +1,205 @@ +# Samples: APIM Costing & Showback + +This sample demonstrates how to track and allocate API costs using Azure API Management with Azure Monitor, Application Insights, Log Analytics, and Cost Management. This setup enables organizations to determine the cost of API consumption per business unit, department, or application. 
+ +โš™๏ธ **Supported infrastructures**: All infrastructures (or bring your own existing APIM deployment) + +๐Ÿ‘Ÿ **Expected *Run All* runtime (excl. infrastructure prerequisite): ~15 minutes** + +## ๐ŸŽฏ Objectives + +1. **Track API usage by caller** - Use APIM subscription keys to identify business units, departments, or applications +2. **Capture request metrics** - Log subscriptionId, apiName, operationName, and status codes +3. **Aggregate cost data** - Combine API usage metrics with Azure Cost Management data +4. **Visualize showback data** - Create Azure Monitor Workbooks to display cost allocation by caller +5. **Enable cost governance** - Establish patterns for consistent tagging and naming conventions + +## โœ… Prerequisites + +Before running this sample, ensure you have the following: + +### Required + +| Prerequisite | Description | +|---|---| +| **Azure subscription** | An active Azure subscription with Owner or Contributor access | +| **Azure CLI** | Logged in (`az login`) with the correct subscription selected (`az account set -s `) | +| **APIM instance** | Either deploy one via this repo's infrastructure, or bring your own (see below) | +| **Python environment** | Python 3.12+ with dependencies installed (`uv sync` or `pip install -r requirements.txt`) | + +### Azure RBAC Permissions + +The signed-in user needs the following role assignments: + +| Role | Scope | Purpose | +|---|---|---| +| **Contributor** | Resource Group | Deploy Bicep resources (App Insights, Log Analytics, Storage, Workbook, Diagnostic Settings) | +| **Cost Management Contributor** | Subscription | Create Cost Management export | +| **Storage Blob Data Contributor** | Storage Account | Write cost export data (auto-assigned by the notebook) | + +### For Workbook Consumers + +Users who only need to **view** the deployed Azure Monitor Workbook (not deploy the sample) need: + +| Role | Scope | Purpose | +|---|---|---| +| **Monitoring Reader** | Resource Group | Open and view the 
workbook | +| **Log Analytics Reader** | Log Analytics Workspace | Execute the Kusto queries that power the workbook | + +> ๐Ÿ’ก If a user can open the workbook but sees empty visualizations, they are likely missing **Log Analytics Reader** on the workspace. + +## โš™๏ธ Configuration + +### Important: Sample Index + +The `create.ipynb` notebook passes a **`sampleIndex` parameter** to the Bicep template. This parameter ensures unique resource naming when deploying multiple instances of this sample. The notebook automatically provides this value; you only need to verify it matches your deployment scenario: + +```python +sample_index = 2 # Increment this for multiple sample deployments +``` + +This index is used in resource names (e.g., `appi-cost-2-xxxx`, `log-cost-2-xxxx`) to avoid naming conflicts when running multiple instances of the sample. + +### Option A: Use a repository infrastructure (recommended) + +1. Navigate to the desired [infrastructure](../../infrastructure/) folder (e.g., [simple-apim](../../infrastructure/simple-apim/)) and follow its README.md to deploy. +2. Open `create.ipynb` and set: + ```python + infrastructure = INFRASTRUCTURE.SIMPLE_APIM # Match your deployed infra + index = 1 # Match your infra index + sample_index = 1 # Increment for multiple sample deployments + ``` +3. Run All Cells. + +### Option B: Bring your own existing APIM + +You can use any existing Azure API Management instance. The sample only adds diagnostic settings and sample resources to your APIM - it does **not** modify your existing APIs or policies. + +1. Open `create.ipynb` and **uncomment** the two lines in the User Configuration section: + ```python + existing_rg_name = 'your-resource-group-name' + existing_apim_name = 'your-apim-service-name' + ``` +2. Set the correct Azure subscription: `az account set -s ` +3. Run All Cells. 
+ +**What the sample deploys into your resource group:** +- Application Insights instance +- Log Analytics Workspace +- Storage Account (for cost exports) +- Diagnostic Settings on your APIM (routes gateway logs to Log Analytics) +- Azure Monitor Workbook +- A sample API (`cost-tracking-api`) with 5 business unit subscriptions + +**What it does NOT touch:** +- Your existing APIs, policies, or subscriptions +- Your APIM SKU or networking configuration +- Any resources outside the specified resource group (except the subscription-scoped Cost Management export) + +## ๐Ÿ“ Scenario + +Organizations often need to allocate the cost of shared API Management infrastructure to different consumers (business units, departments, applications, or customers). This sample addresses: + +- **Cost Transparency**: Understanding which teams or applications drive API consumption +- **Chargeback/Showback**: Producing data that can inform internal billing or cost awareness +- **Resource Optimization**: Identifying high-cost consumers and opportunities for optimization +- **Budget Planning**: Historical usage patterns to forecast future costs + +### Key Principle: Cost Determination, Not Billing + +This sample focuses on **producing cost data**, not implementing billing processes. You determine costs; how you use that information (showback reports, chargeback, budgeting) is a separate business decision. 
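
To make "cost determination" concrete: the sample allocates a fixed monthly APIM base cost proportionally by request volume, then adds a per-request variable cost. A minimal sketch of that calculation (the rates are illustrative defaults matching the notebook's placeholder values, not live pricing):

```python
def allocate_costs(base_monthly_cost: float, rate_per_1k: float,
                   requests_by_bu: dict[str, int]) -> dict[str, float]:
    """Split a shared base cost by request share, then add per-request variable cost."""
    total_requests = sum(requests_by_bu.values())
    allocations = {}
    for bu, count in requests_by_bu.items():
        base_share = base_monthly_cost * (count / total_requests)  # proportional fixed cost
        variable = count * (rate_per_1k / 1000)                    # per-request overage
        allocations[bu] = round(base_share + variable, 2)
    return allocations

# Illustrative defaults: $150/month base cost, $0.003 per 1K requests
print(allocate_costs(150.00, 0.003, {'bu-hr': 50, 'bu-finance': 125, 'bu-engineering': 150}))
```

The allocations always sum back to the base cost plus total variable cost, so the showback data reconciles against the actual Azure bill.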
+ +## ๐Ÿ›ฉ๏ธ Lab Components + +This lab deploys and configures: + +- **Application Insights** - Receives APIM diagnostic logs for request tracking +- **Log Analytics Workspace** - Stores `ApiManagementGatewayLogs` with detailed request metadata (resource-specific mode) +- **Storage Account** - Receives Azure Cost Management exports +- **Cost Management Export** - Automated export of cost data (configurable frequency) +- **Diagnostic Settings** - Links APIM to Log Analytics with `logAnalyticsDestinationType: Dedicated` for resource-specific tables +- **Sample API & Subscriptions** - 5 subscriptions representing different business units +- **Azure Monitor Workbook** - Pre-built dashboard with: + - Cost allocation table (base + variable cost per BU) + - Base vs variable cost stacked bar chart + - Cost breakdown by API + - Request count and distribution charts + - Success/error rate analysis + - Response code distribution +- **Live Pricing Integration** - Auto-detects your APIM SKU and fetches current pricing from the [Azure Retail Prices API](https://learn.microsoft.com/rest/api/cost-management/retail-prices/azure-retail-prices) +- **Budget Alerts** (optional) - Per-BU scheduled query alerts when request thresholds are exceeded + +### Cost Allocation Model + +| Component | Formula | +|---|---| +| **Base Cost Share** | `Base Monthly Cost x (BU Requests / Total Requests)` | +| **Variable Cost** | `BU Requests x (Rate per 1K / 1000)` | +| **Total Allocated** | `Base Cost Share + Variable Cost` | + +### What Gets Logged + +| Field | Description | +|---|---| +| `ApimSubscriptionId` | Identifies the caller (BU / department / app) | +| `ApiId` | Which API was called | +| `OperationId` | Specific operation within the API | +| `ResponseCode` | Success / failure indication | +| Request count | Number of requests (primary cost metric) | + +> **Important**: The API must have `subscriptionRequired: true` for `ApimSubscriptionId` to be populated in logs. 
This sample configures it automatically. + +## ๐Ÿ–ผ๏ธ Expected Results + +After running the notebook, you will have: + +1. **Application Insights** showing real-time API requests +2. **Log Analytics** with queryable `ApiManagementGatewayLogs` (resource-specific table) +3. **Storage Account** receiving cost export data +4. **Azure Monitor Workbook** displaying cost allocation and usage analytics +5. **Portal links** printed in the notebook's final cell for quick access + +### Cost Management Export + +The cost export is configured automatically using a system-assigned managed identity with **Storage Blob Data Contributor** access. + +![Cost Report - Export Overview](screenshots/costreport-01.png) + +![Cost Report - Export Details](screenshots/costreport-02.png) + +### Azure Monitor Workbook Dashboard + +The deployed workbook provides a comprehensive view of API cost allocation and usage analytics across business units. + +![Dashboard - Cost Allocation Overview](screenshots/Dashboard-01.png) + +![Dashboard - Cost Breakdown by Business Unit](screenshots/Dashboard-02.png) + +![Dashboard - Request Distribution](screenshots/Dashboard-03.png) + +![Dashboard - Usage Analytics](screenshots/Dashboard-04.png) + +![Dashboard - Response Code Analysis](screenshots/Dashboard-05.png) + +## ๐Ÿงน Clean Up + +To remove all resources created by this sample, open and run `clean-up.ipynb`. This deletes: +- Sample API and subscriptions from APIM +- Application Insights, Log Analytics, Storage Account +- Azure Monitor Workbook +- Cost Management export + +> The clean-up notebook does **not** delete your APIM instance or resource group. 
+ +## ๐Ÿ”— Additional Resources + +- [Azure API Management Pricing](https://azure.microsoft.com/pricing/details/api-management/) +- [Azure Retail Prices API](https://learn.microsoft.com/rest/api/cost-management/retail-prices/azure-retail-prices) +- [Azure Cost Management Documentation](https://learn.microsoft.com/azure/cost-management-billing/) +- [Log Analytics Kusto Query Language](https://learn.microsoft.com/azure/data-explorer/kusto/query/) +- [Azure Monitor Workbooks](https://learn.microsoft.com/azure/azure-monitor/visualize/workbooks-overview) +- [APIM Diagnostic Settings](https://learn.microsoft.com/azure/api-management/api-management-howto-use-azure-monitor) + +[infrastructure-architectures]: ../../README.md#infrastructure-architectures +[infrastructure-folder]: ../../infrastructure/ +[simple-apim-infra]: ../../infrastructure/simple-apim/ diff --git a/samples/costing/budget-alert-threshold.kql b/samples/costing/budget-alert-threshold.kql new file mode 100644 index 00000000..0f557ebf --- /dev/null +++ b/samples/costing/budget-alert-threshold.kql @@ -0,0 +1,9 @@ +// Fires when a business unit exceeds a request threshold in a 1-hour window. +// +// Parameters (prepend as KQL 'let' bindings before running): +// let buName = 'bu-hr'; // Business unit subscription ID +// let threshold = 1000; // Request count threshold +ApiManagementGatewayLogs +| where TimeGenerated > ago(1h) and ApimSubscriptionId == buName +| summarize RequestCount = count() +| where RequestCount > threshold diff --git a/samples/costing/cost-export.bicep b/samples/costing/cost-export.bicep new file mode 100644 index 00000000..184f9b9a --- /dev/null +++ b/samples/costing/cost-export.bicep @@ -0,0 +1,79 @@ +// ------------------------------ +// COST MANAGEMENT EXPORT MODULE +// ------------------------------ +// This module deploys a Cost Management export at subscription scope. 
+// It must be called from a resource-group-scoped template using: +// scope: subscription() + +targetScope = 'subscription' + + +// ------------------------------ +// PARAMETERS +// ------------------------------ + +@description('Name of the cost export') +param costExportName string + +@description('Resource ID of the storage account for export delivery') +param storageAccountId string + +@description('Container name for cost export data') +param containerName string = 'cost-exports' + +@description('Root folder path within the container') +param rootFolderPath string = 'apim-costing' + +@description('Export recurrence frequency') +@allowed([ + 'Daily' + 'Weekly' + 'Monthly' +]) +param recurrence string = 'Daily' + +@description('Start date for the export schedule (UTC)') +param startDate string + + +// ------------------------------ +// RESOURCES +// ------------------------------ + +// https://learn.microsoft.com/azure/templates/microsoft.costmanagement/exports +resource costExport 'Microsoft.CostManagement/exports@2023-11-01' = { + name: costExportName + properties: { + definition: { + type: 'ActualCost' + timeframe: 'MonthToDate' + dataSet: { + granularity: 'Daily' + } + } + deliveryInfo: { + destination: { + resourceId: storageAccountId + container: containerName + rootFolderPath: rootFolderPath + } + } + format: 'Csv' + schedule: { + status: 'Active' + recurrence: recurrence + recurrencePeriod: { + from: startDate + to: '2099-12-31T00:00:00Z' + } + } + } +} + + +// ------------------------------ +// OUTPUTS +// ------------------------------ + +@description('Name of the deployed cost export') +output costExportName string = costExport.name diff --git a/samples/costing/create.ipynb b/samples/costing/create.ipynb new file mode 100644 index 00000000..1109f11b --- /dev/null +++ b/samples/costing/create.ipynb @@ -0,0 +1,791 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "655130e3", + "metadata": {}, + "source": [ + "### ๐Ÿ› ๏ธ Initialize Notebook 
Variables\n", + "\n", + "**Only modify entries under _USER CONFIGURATION_.**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2cceefff", + "metadata": {}, + "outputs": [], + "source": [ + "import utils\n", + "\n", + "from apimtypes import API, APIM_SKU, GET_APIOperation2, INFRASTRUCTURE\n", + "from console import print_error, print_info, print_ok, print_val, print_warning\n", + "from azure_resources import get_infra_rg_name, get_account_info\n", + "\n", + "# ------------------------------\n", + "# USER CONFIGURATION\n", + "# ------------------------------\n", + "\n", + "rg_location = 'eastus2'\n", + "index = 1\n", + "apim_sku = APIM_SKU.BASICV2 # Options: 'BASICV2', 'STANDARDV2', 'PREMIUMV2'\n", + "deployment = INFRASTRUCTURE.SIMPLE_APIM # Options: see supported_infras below\n", + "api_prefix = 'costing-' # ENTER A PREFIX FOR THE APIS TO REDUCE COLLISION POTENTIAL WITH OTHER SAMPLES\n", + "tags = ['costing', 'cost-management', 'observability'] # ENTER DESCRIPTIVE TAG(S)\n", + "\n", + "# Cost export configuration\n", + "cost_export_frequency = 'Daily' # Options: 'Daily', 'Weekly', 'Monthly'\n", + "\n", + "# Sample data generation\n", + "generate_sample_load = True # Generate sample API calls to demonstrate cost tracking\n", + "sample_requests_per_subscription = 50 # Base request count per business unit (multiplied by each BU's weight)\n", + "\n", + "# Budget alerts\n", + "alert_threshold = 1000 # Request count threshold per BU per hour\n", + "alert_email = 'alerts@contoso.com' # Email for alert notifications (leave empty to skip)\n", + "\n", + "\n", + "\n", + "# ------------------------------\n", + "# SYSTEM CONFIGURATION\n", + "# ------------------------------\n", + "\n", + "sample_folder = 'costing'\n", + "rg_name = get_infra_rg_name(deployment, index)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " INFRASTRUCTURE.APPGW_APIM_PE,\n", + " 
INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " True,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", + "\n", + "# Define the API and its operations\n", + "api_path = 'cost-demo'\n", + "cost_demo_get = GET_APIOperation2('get-status', 'Get Status', '/get', 'Get Status')\n", + "\n", + "apis = [\n", + " API(\n", + " f'{api_prefix}cost-tracking-api',\n", + " 'Cost Tracking Demo API',\n", + " api_path,\n", + " 'API for demonstrating cost tracking and allocation',\n", + " operations = [cost_demo_get],\n", + " tags = tags,\n", + " subscriptionRequired = True,\n", + " serviceUrl = 'https://httpbin.org'\n", + " )\n", + "]\n", + "\n", + "# Define business units\n", + "business_units = [\n", + " {'name': 'bu-hr', 'display': 'Business Unit - Human Resources', 'request_weight': 1.0},\n", + " {'name': 'bu-finance', 'display': 'Business Unit - Finance', 'request_weight': 2.5},\n", + " {'name': 'bu-marketing', 'display': 'Business Unit - Marketing', 'request_weight': 0.5},\n", + " {'name': 'bu-engineering', 'display': 'Business Unit - Engineering', 'request_weight': 3.0}\n", + "]\n", + "\n", + "# Get Azure account information\n", + "current_user, current_user_id, tenant_id, subscription_id = get_account_info()\n", + "\n", + "if not subscription_id:\n", + " print_error('Could not determine Azure subscription ID. Run: az login')\n", + " raise SystemExit(1)\n", + "\n", + "print_ok('Notebook initialized')" + ] + }, + { + "cell_type": "markdown", + "id": "3fa3d77e", + "metadata": {}, + "source": [ + "### ๐Ÿš€ Deploy Infrastructure and APIs\n", + "\n", + "Creates the bicep deployment into the previously-specified resource group. A bicep parameters, `params.json`, file will be created prior to execution." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fc7d872c", + "metadata": {}, + "outputs": [], + "source": [ + "# Build the bicep parameters\n", + "bicep_parameters = {\n", + " 'location' : {'value': rg_location},\n", + " 'costExportFrequency' : {'value': cost_export_frequency},\n", + " 'index' : {'value': index},\n", + " 'apis' : {'value': [api.to_dict() for api in apis]},\n", + " 'businessUnits' : {'value': [{'name': bu['name'], 'displayName': bu['display']} for bu in business_units]}\n", + "}\n", + "\n", + "# Deploy the sample\n", + "output = nb_helper.deploy_sample(bicep_parameters)\n", + "\n", + "if output.success:\n", + " # Extract deployment outputs\n", + " apim_name = output.get('apimServiceName', 'APIM Service Name')\n", + " apim_gateway_url = output.get('apimResourceGatewayURL', 'APIM API Gateway URL')\n", + " app_insights_name = output.get('applicationInsightsName', 'Application Insights Name')\n", + " app_insights_connection_string = output.get('applicationInsightsConnectionString', '')\n", + " log_analytics_name = output.get('logAnalyticsWorkspaceName', 'Log Analytics Workspace Name')\n", + " storage_account_name = output.get('storageAccountName', 'Storage Account Name')\n", + " workbook_name = output.get('workbookName', 'Workbook Name')\n", + " workbook_id = output.get('workbookId', '')\n", + " cost_export_name = f'apim-cost-export-{index}-{rg_name}'\n", + "\n", + " # Extract subscription keys\n", + " subscription_keys_output = output.getJson('subscriptionKeys', 'Subscription Keys', secure=True)\n", + "\n", + " # Map keys to business units\n", + " subscriptions = {}\n", + " if subscription_keys_output:\n", + " for bu in business_units:\n", + " sub_id = bu['name']\n", + " primary_key = next((item['primaryKey'] for item in subscription_keys_output if item['name'] == sub_id), None)\n", + "\n", + " subscriptions[sub_id] = {\n", + " 'display_name': bu['display'],\n", + " 'primary_key': primary_key,\n", + " 'request_weight': 
bu.get('request_weight', 1.0)\n", + " }\n", + "\n", + " print_ok('Deployment completed successfully')\n", + "else:\n", + " print_error(\"Deployment failed!\")\n", + " raise SystemExit(1)" + ] + }, + { + "cell_type": "markdown", + "id": "2b6c16c3", + "metadata": {}, + "source": [ + "### ๐Ÿ”ง Configure Cost Management Export\n", + "\n", + "Automatically set up cost data export to the storage account." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "075b180c", + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import tempfile\n", + "from datetime import datetime, timedelta, timezone\n", + "from pathlib import Path\n", + "\n", + "from azure_resources import run\n", + "\n", + "if 'storage_account_name' not in locals():\n", + " print_error('Please run the deployment cell first')\n", + " raise SystemExit(1)\n", + "\n", + "print_info('Configuring automated Cost Management export (managed identity)...')\n", + "\n", + "# Get storage account resource ID\n", + "storage_account_id = (\n", + " f'/subscriptions/{subscription_id}'\n", + " f'/resourceGroups/{rg_name}'\n", + " f'/providers/Microsoft.Storage/storageAccounts/{storage_account_name}'\n", + ")\n", + "\n", + "# Export scope and name\n", + "export_scope = f'/subscriptions/{subscription_id}'\n", + "api_version = '2025-03-01'\n", + "\n", + "# Register required resource provider\n", + "print_info('Registering Microsoft.CostManagementExports resource provider...')\n", + "register_result = run(\n", + " 'az provider register --namespace Microsoft.CostManagementExports --wait',\n", + " log_command=False\n", + ")\n", + "\n", + "if register_result.success:\n", + " print_ok('Resource provider registered successfully')\n", + "\n", + "# Check if export already exists\n", + "existing_export = run(\n", + " f'az rest --method GET '\n", + " f'--url \"{export_scope}/providers/Microsoft.CostManagement/exports/{cost_export_name}'\n", + " f'?api-version={api_version}\" -o json',\n", + " 
log_command=False\n", + ")\n", + "\n", + "if existing_export.success:\n", + " print_warning(f'Cost export \"{cost_export_name}\" already exists - recreating...')\n", + " run(\n", + " f'az rest --method DELETE '\n", + " f'--url \"{export_scope}/providers/Microsoft.CostManagement/exports/{cost_export_name}'\n", + " f'?api-version={api_version}\"',\n", + " log_command=False\n", + " )\n", + "\n", + "# Build recurrence settings\n", + "recurrence_map = {'Daily': 'Daily', 'Weekly': 'Weekly', 'Monthly': 'Monthly'}\n", + "recurrence = recurrence_map.get(cost_export_frequency, 'Daily')\n", + "\n", + "start_date = (datetime.now(timezone.utc) + timedelta(days=1)).strftime('%Y-%m-%dT00:00:00Z')\n", + "end_date = (datetime.now(timezone.utc) + timedelta(days=365)).strftime('%Y-%m-%dT00:00:00Z')\n", + "\n", + "# Build the export body with system-assigned managed identity\n", + "export_body = {\n", + " 'identity': {\n", + " 'type': 'systemAssigned'\n", + " },\n", + " 'location': 'global',\n", + " 'properties': {\n", + " 'definition': {\n", + " 'type': 'ActualCost',\n", + " 'timeframe': 'MonthToDate',\n", + " 'dataSet': {\n", + " 'granularity': 'Daily'\n", + " }\n", + " },\n", + " 'deliveryInfo': {\n", + " 'destination': {\n", + " 'type': 'AzureBlob',\n", + " 'container': 'cost-exports',\n", + " 'rootFolderPath': 'apim-costing',\n", + " 'resourceId': storage_account_id\n", + " }\n", + " },\n", + " 'schedule': {\n", + " 'status': 'Active',\n", + " 'recurrence': recurrence,\n", + " 'recurrencePeriod': {\n", + " 'from': start_date,\n", + " 'to': end_date\n", + " }\n", + " },\n", + " 'format': 'Csv'\n", + " }\n", + "}\n", + "\n", + "print_info('Creating cost export with managed identity...')\n", + "\n", + "# Write body to a temp file for cross-platform compatibility\n", + "with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as body_file:\n", + " json.dump(export_body, body_file)\n", + " body_file_path = body_file.name\n", + "\n", + "try:\n", + " export_result = 
run(\n", + " f'az rest --method PUT '\n", + " f'--url \"{export_scope}/providers/Microsoft.CostManagement/exports/{cost_export_name}'\n", + " f'?api-version={api_version}\" '\n", + " f'--body @{body_file_path} -o json',\n", + " log_command=False\n", + " )\n", + "finally:\n", + " Path(body_file_path).unlink(missing_ok=True)\n", + "\n", + "if export_result and export_result.success:\n", + " print_ok(f'Cost export created: {cost_export_name}')\n", + " print_val('Export frequency', recurrence)\n", + " print_val('Authentication', 'System-assigned managed identity')\n", + " cost_export_configured = True\n", + "\n", + " # Extract the managed identity principal ID from the response\n", + " export_data = json.loads(export_result.text)\n", + " principal_id = export_data.get('identity', {}).get('principalId')\n", + "\n", + " if principal_id:\n", + " print_info('Assigning Storage Blob Data Contributor role to export identity...')\n", + "\n", + " role_assignment = run(\n", + " f'az role assignment create '\n", + " f'--assignee-object-id {principal_id} '\n", + " f'--assignee-principal-type ServicePrincipal '\n", + " f'--role \"Storage Blob Data Contributor\" '\n", + " f'--scope {storage_account_id}',\n", + " log_command=False\n", + " )\n", + "\n", + " if role_assignment.success:\n", + " print_ok('Storage Blob Data Contributor role assigned to export identity')\n", + " else:\n", + " print_warning('Could not assign role - you may need to do this manually')\n", + " else:\n", + " print_warning('Could not retrieve export identity principal ID')\n", + "\n", + " print_info('Cost data will be exported automatically starting tomorrow')\n", + "else:\n", + " print_error('Failed to create cost export')\n", + " print_warning('Continuing without cost export - you can configure it manually later')\n", + " cost_export_configured = False" + ] + }, + { + "cell_type": "markdown", + "id": "69d3d7da", + "metadata": {}, + "source": [ + "### ๐Ÿ“ค Trigger Initial Cost Export\n", + "\n", + "Run the 
first cost export manually to populate data immediately." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "80be9404", + "metadata": {}, + "outputs": [], + "source": [ + "from azure_resources import run\n", + "\n", + "if 'cost_export_configured' not in locals():\n", + " print_error('Please run the cost export configuration cell first')\n", + " raise SystemExit(1)\n", + "\n", + "if cost_export_configured:\n", + " export_scope = f'/subscriptions/{subscription_id}'\n", + " api_version = '2025-03-01'\n", + "\n", + " print_info(f'Triggering first cost export run for \"{cost_export_name}\"...')\n", + "\n", + " run_result = run(\n", + " f'az rest --method POST '\n", + " f'--url \"{export_scope}/providers/Microsoft.CostManagement/exports/{cost_export_name}'\n", + " f'/run?api-version={api_version}\"',\n", + " log_command=False\n", + " )\n", + "\n", + " if run_result.success:\n", + " print_ok('Cost export run triggered successfully')\n", + " print_info('Data will appear in the storage container within a few minutes')\n", + " else:\n", + " print_warning('Could not trigger export run - it will run on its next scheduled recurrence')\n", + "else:\n", + " print_warning('Cost export was not configured - skipping manual run')" + ] + }, + { + "cell_type": "markdown", + "id": "8048503a", + "metadata": {}, + "source": [ + "### ๐Ÿš€ Generate Sample API Traffic\n", + "\n", + "Generate sample API calls from each business unit subscription to demonstrate cost tracking and allocation.\n", + "\n", + "This will create request logs in Application Insights and Log Analytics that can be used for cost analysis." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bc3d366c", + "metadata": {}, + "outputs": [], + "source": [ + "from apimrequests import ApimRequests\n", + "\n", + "if 'apim_gateway_url' not in locals():\n", + " print_error('Please run the deployment cell first')\n", + " raise SystemExit(1)\n", + "\n", + "if generate_sample_load:\n", + " print_info('Generating sample API traffic...')\n", + "\n", + " # Determine endpoints, URLs, etc. prior to test execution\n", + " endpoint_url, request_headers = utils.get_endpoint(deployment, rg_name, apim_gateway_url)\n", + "\n", + " # Send requests for each business unit, weighted by its configured request_weight\n", + " for subscription_id_sub, sub_info in subscriptions.items():\n", + " bu_request_count = max(1, int(sample_requests_per_subscription * sub_info.get('request_weight', 1.0)))\n", + "\n", + " reqs = ApimRequests(endpoint_url, sub_info['primary_key'], request_headers)\n", + " reqs.multiGet(\n", + " f'/{api_path}/get',\n", + " bu_request_count,\n", + " msg = f'Generating {bu_request_count} requests for {subscription_id_sub}',\n", + " printResponse = False,\n", + " sleepMs = 10\n", + " )\n", + "\n", + " print_info('Note: It may take 2-5 minutes for logs to appear in Application Insights and Log Analytics')\n", + "else:\n", + " print_info('Sample load generation skipped (generate_sample_load = False)')" + ] + }, + { + "cell_type": "markdown", + "id": "179f9e77", + "metadata": {}, + "source": [ + "### ๐Ÿ” Verify Log Ingestion\n", + "\n", + "Waits for diagnostic logs to arrive in Log Analytics (auto-retries for up to 10 minutes)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9db5899e", + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import tempfile\n", + "import time\n", + "from pathlib import Path\n", + "\n", + "from azure_resources import run\n", + "\n", + "if 'log_analytics_name' not in locals():\n", + " print_error('Please run the deployment cell first')\n", + " raise SystemExit(1)\n", + "\n", + "print_info('Waiting for APIM logs to arrive in Log Analytics...')\n", + "print_info('Log ingestion typically takes 2-5 minutes after generating traffic')\n", + "print()\n", + "\n", + "print_val('Workspace', log_analytics_name)\n", + "\n", + "# Build the workspace resource ID for the ARM query endpoint\n", + "workspace_resource_id = (\n", + " f'/subscriptions/{subscription_id}'\n", + " f'/resourceGroups/{rg_name}'\n", + " f'/providers/Microsoft.OperationalInsights/workspaces/{log_analytics_name}'\n", + ")\n", + "\n", + "# Load KQL from external file and wrap it in a JSON body\n", + "kql_path = utils.determine_policy_path('verify-log-ingestion.kql', sample_folder)\n", + "kql_query = Path(kql_path).read_text(encoding='utf-8')\n", + "\n", + "query_body = {'query': kql_query}\n", + "\n", + "with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:\n", + " json.dump(query_body, f)\n", + " query_file_path = f.name\n", + "\n", + "# Poll Log Analytics until gateway logs with subscription IDs appear\n", + "max_wait_minutes = 10\n", + "poll_interval_seconds = 30\n", + "max_attempts = (max_wait_minutes * 60) // poll_interval_seconds\n", + "logs_found = False\n", + "\n", + "try:\n", + " for attempt in range(1, max_attempts + 1):\n", + " result = run(\n", + " f'az rest --method POST '\n", + " f'--url \"https://management.azure.com{workspace_resource_id}/api/query?api-version=2020-08-01\" '\n", + " f'--body @{query_file_path} -o json',\n", + " log_command=False\n", + " )\n", + "\n", + " # A non-transient error (e.g. 
bad API version, auth failure) should stop immediately\n", + " if not result.success:\n", + " print_error(f'Query failed: {result.text[:300]}')\n", + " break\n", + "\n", + " # Parse the tabular response and check whether any rows were returned\n", + " # The Log Analytics REST API returns PascalCase keys (Tables, Rows)\n", + " if result.json_data:\n", + " tables = result.json_data.get('Tables', [])\n", + " if tables:\n", + " rows = tables[0].get('Rows', [])\n", + " if rows and len(rows) > 0:\n", + " row_count = int(rows[0][0])\n", + " if row_count > 0:\n", + " print_ok(f'Found {row_count} log entries with subscription IDs')\n", + " logs_found = True\n", + " break\n", + "\n", + " elapsed = attempt * poll_interval_seconds\n", + " remaining = (max_wait_minutes * 60) - elapsed\n", + " print_info(f' No logs yet... retrying in {poll_interval_seconds}s ({remaining}s remaining)')\n", + " time.sleep(poll_interval_seconds)\n", + "finally:\n", + " Path(query_file_path).unlink(missing_ok=True)\n", + "\n", + "if logs_found:\n", + " print_ok('Log ingestion verified - workbook should now display data')\n", + "elif result.success:\n", + " print_warning(f'Logs did not appear within {max_wait_minutes} minutes')\n", + " print_info('This can happen with newly created workspaces. Tips:')\n", + " print_info(' 1. Wait a few more minutes and re-run this cell')\n", + " print_info(' 2. Verify diagnostic settings in Azure Portal')\n", + " print_info(' 3. 
Re-run the traffic generation cell to send more requests')" + ] + }, + { + "cell_type": "markdown", + "id": "6ec1ac38", + "metadata": {}, + "source": [ + "### ๐Ÿ“Š Cost Analysis & Sample Kusto Queries\n", + "\n", + "#### Cost Allocation Model\n", + "\n", + "| Component | Formula |\n", + "|---|---|\n", + "| **Base Cost** | Monthly platform cost for the APIM SKU |\n", + "| **Base Cost Share** | `Base Monthly Cost ร— (BU Requests รท Total Requests)` |\n", + "| **Variable Cost** | `BU Requests ร— (Rate per 1K รท 1000)` |\n", + "| **Total Allocated** | `Base Cost Share + Variable Cost` |\n", + "\n", + "The next cell uses reasonable defaults. For current pricing, see the [Azure Retail Prices API](https://learn.microsoft.com/rest/api/cost-management/retail-prices/azure-retail-prices)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5a426e0c", + "metadata": {}, + "outputs": [], + "source": [ + "# Store in variables for use by other cells\n", + "base_monthly_cost = 150.00\n", + "per_k_rate = 0.003\n", + "included_requests_k = 10000\n", + "print_val('Base monthly cost (default)', f'${base_monthly_cost:.2f}')\n", + "print_val('Overage rate per 1K (default)', f'${per_k_rate}')\n", + "\n", + "# Sample Kusto queries for cost analysis\n", + "print()\n", + "print_info('Sample Kusto Queries for Log Analytics:')\n", + "print_info('These queries use the ApiManagementGatewayLogs table (resource-specific mode).')\n", + "print()" + ] + }, + { + "cell_type": "markdown", + "id": "d1908112", + "metadata": {}, + "source": [ + "### ๐Ÿ”” Set Up Budget Alerts per Business Unit\n", + "\n", + "Create Azure Monitor scheduled query alerts that fire when a business unit subscription exceeds a configurable request threshold.\n", + "\n", + "Each alert:\n", + "- Runs a Kusto query every **5 minutes** against the Log Analytics workspace\n", + "- Triggers when a business unit exceeds the threshold in a **1-hour** rolling window\n", + "- Sends notifications via an **Action 
Group** (email)\n", + "\n", + "> Adjust `alert_threshold` and `alert_email` in the initialization cell to match your requirements." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f49f9785", + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import tempfile\n", + "from pathlib import Path\n", + "\n", + "from azure_resources import run\n", + "\n", + "if not alert_email:\n", + " print_warning('No alert_email configured - skipping budget alert setup')\n", + " print_info('Set alert_email above to enable budget alerts per business unit')\n", + "else:\n", + " print_info('Setting up budget alerts per business unit subscription...')\n", + "\n", + " # Get Log Analytics workspace resource ID\n", + " workspace_result = run(\n", + " f'az monitor log-analytics workspace show '\n", + " f'--resource-group {rg_name} '\n", + " f'--workspace-name {log_analytics_name} '\n", + " f'--query id -o tsv',\n", + " log_command=False\n", + " )\n", + " workspace_id = workspace_result.text.strip()\n", + "\n", + " # Create an Action Group for alert notifications\n", + " action_group_name = f'ag-apim-cost-alerts-{index}'\n", + " print_info(f'Creating action group: {action_group_name}...')\n", + "\n", + " ag_result = run(\n", + " f'az monitor action-group create '\n", + " f'--resource-group {rg_name} '\n", + " f'--name {action_group_name} '\n", + " f'--short-name apimcost '\n", + " f'--action email cost-alert-email {alert_email} '\n", + " f'-o json',\n", + " log_command=False\n", + " )\n", + "\n", + " if ag_result.success:\n", + " action_group_id = ag_result.json_data.get('id', '')\n", + " print_ok(f'Action group created: {action_group_name}')\n", + " else:\n", + " print_error(f'Failed to create action group: {ag_result.text}')\n", + " action_group_id = None\n", + "\n", + " if action_group_id:\n", + " # Load the KQL template from an external file\n", + " kql_path = utils.determine_policy_path('budget-alert-threshold.kql', sample_folder)\n", + " 
kql_template = Path(kql_path).read_text(encoding='utf-8')\n", + "\n", + " bu_list = list(subscriptions.keys())\n", + " if not bu_list:\n", + " bu_list = ['bu-hr', 'bu-finance', 'bu-marketing', 'bu-engineering']\n", + "\n", + " print_info(f'Creating alerts for {len(bu_list)} business units (threshold: {alert_threshold} requests/hour)...')\n", + "\n", + " for bu_name in bu_list:\n", + " alert_name = f'apim-budget-{bu_name}-{index}'\n", + "\n", + " # Prepend KQL let bindings to parameterise the query\n", + " kusto_query = f\"let buName = '{bu_name}';\\nlet threshold = {alert_threshold};\\n{kql_template}\"\n", + "\n", + " alert_body = {\n", + " 'location': rg_location,\n", + " 'properties': {\n", + " 'displayName': f'APIM Budget Alert: {bu_name}',\n", + " 'description': f'Fires when {bu_name} exceeds {alert_threshold} API requests per hour',\n", + " 'severity': 2,\n", + " 'enabled': True,\n", + " 'evaluationFrequency': 'PT5M',\n", + " 'windowSize': 'PT1H',\n", + " 'scopes': [workspace_id],\n", + " 'criteria': {\n", + " 'allOf': [\n", + " {\n", + " 'query': kusto_query,\n", + " 'timeAggregation': 'Count',\n", + " 'operator': 'GreaterThan',\n", + " 'threshold': 0,\n", + " 'failingPeriods': {\n", + " 'numberOfEvaluationPeriods': 1,\n", + " 'minFailingPeriodsToAlert': 1\n", + " }\n", + " }\n", + " ]\n", + " },\n", + " 'actions': {\n", + " 'actionGroups': [action_group_id]\n", + " }\n", + " }\n", + " }\n", + "\n", + " alert_id = (\n", + " f'/subscriptions/{subscription_id}'\n", + " f'/resourceGroups/{rg_name}'\n", + " f'/providers/Microsoft.Insights/scheduledQueryRules/{alert_name}'\n", + " )\n", + "\n", + " # Write body to a temp file to avoid shell quoting issues on Windows\n", + " with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:\n", + " json.dump(alert_body, f)\n", + " alert_body_path = f.name\n", + "\n", + " try:\n", + " result = run(\n", + " f'az rest --method PUT '\n", + " f'--uri 
https://management.azure.com{alert_id}?api-version=2023-03-15-preview '\n", + "                    f'--body @{alert_body_path}',\n", + "                    log_command=False\n", + "                )\n", + "            finally:\n", + "                Path(alert_body_path).unlink(missing_ok=True)\n", + "\n", + "            if result.success:\n", + "                print_ok(f'  Alert created: {alert_name}')\n", + "            else:\n", + "                print_error(f'  Failed to create alert for {bu_name}: {result.text[:200]}')\n", + "\n", + "        print()\n", + "        print_ok('Budget alerts configured!')\n", + "        print_val('Action Group', action_group_name)\n", + "        print_val('Alert Email', alert_email)\n", + "        print_val('Threshold', f'{alert_threshold} requests per hour per BU')\n", + "        print_val('Evaluation', 'Every 5 minutes, 1-hour rolling window')" + ] + }, + { + "cell_type": "markdown", + "id": "83ac6281", + "metadata": {}, + "source": [ + "### 🔗 Verify Costing Setup\n", + "\n", + "Open these resources in the Azure Portal to verify the sample is working. They are listed in priority order.\n", + "\n", + "1. **Azure Monitor Workbook** - Confirm the cost dashboard renders and shows per-business-unit request breakdowns.\n", + "2. **Log Analytics Workspace** - Open the **Logs** blade and run a query against `ApiManagementGatewayLogs` to verify subscription-level data is flowing.\n", + "3. **Cost Management Exports** - Check that the scheduled export exists and, after its first run, that CSV files appear in the storage container.\n", + "4. **APIM Service** - Review the subscriptions blade to confirm all business-unit subscriptions are active." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4d8c1864", + "metadata": {}, + "outputs": [], + "source": [ + "from console import print_plain\n", + "\n", + "base_url = 'https://portal.azure.com/#@/resource'\n", + "rg_path = f'/subscriptions/{subscription_id}/resourceGroups/{rg_name}'\n", + "\n", + "# Priority-ordered links for verifying the costing sample\n", + "print_info('1. 
Azure Monitor Workbook (cost dashboard)')\n", + "if 'workbook_id' in locals() and workbook_id:\n", + " print_plain(f' {base_url}{workbook_id}/workbook')\n", + "else:\n", + " print_plain(' (not deployed)')\n", + "print_plain()\n", + "\n", + "print_info('2. Log Analytics Workspace (run KQL queries)')\n", + "print_plain(f' {base_url}{rg_path}/providers/Microsoft.OperationalInsights/workspaces/{log_analytics_name}/Overview')\n", + "print_plain()\n", + "\n", + "print_info('3. Cost Management Exports (verify cost data)')\n", + "print_plain(' https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu/~/exports')\n", + "print_plain()\n", + "\n", + "print_info('4. APIM Service (subscriptions & APIs)')\n", + "print_plain(f' {base_url}{rg_path}/providers/Microsoft.ApiManagement/service/{apim_name}/overview')\n", + "print_plain()\n", + "\n", + "print_ok('Setup complete!')\n", + "print_info('To clean up resources, open and run: clean-up.ipynb')" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python (.venv)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/samples/costing/main.bicep b/samples/costing/main.bicep new file mode 100644 index 00000000..5dbee471 --- /dev/null +++ b/samples/costing/main.bicep @@ -0,0 +1,267 @@ +// ------------------ +// PARAMETERS +// ------------------ + +@description('Location to be used for resources. Defaults to the resource group location') +param location string = resourceGroup().location + +@description('The unique suffix to append. 
Defaults to a unique string based on subscription and resource group IDs.') +param resourceSuffix string = uniqueString(subscription().id, resourceGroup().id) + +@description('Name of the API Management service') +param apimName string = 'apim-${resourceSuffix}' + +@description('Deployment index for unique resource naming') +param index int + +@description('Enable Application Insights for APIM diagnostics') +param enableApplicationInsights bool = true + +@description('Enable Log Analytics for APIM diagnostics') +param enableLogAnalytics bool = true + +@description('Storage account SKU for cost exports') +@allowed([ + 'Standard_LRS' + 'Standard_GRS' + 'Standard_ZRS' +]) +param storageAccountSku string = 'Standard_LRS' + +@description('Cost export frequency') +@allowed([ + 'Daily' + 'Weekly' + 'Monthly' +]) +param costExportFrequency string = 'Daily' + +@description('Start date for cost export schedule. Defaults to current deployment time.') +param costExportStartDate string = utcNow('yyyy-MM-ddT00:00:00Z') + +@description('Deploy the Cost Management export from Bicep. 
When false (default), the notebook handles export creation with retry logic to avoid key-access propagation failures.') +param enableCostExport bool = false + +@description('Array of APIs to deploy') +param apis array = [] + +@description('Array of business units to create subscriptions for') +param businessUnits array = [] + + +// ------------------ +// VARIABLES +// ------------------ + +var applicationInsightsName = 'appi-cost-${index}-${take(resourceSuffix, 4)}' +var logAnalyticsWorkspaceName = 'log-cost-${index}-${take(resourceSuffix, 4)}' +var storageAccountName = 'stcost${take(string(index), 1)}${take(replace(resourceSuffix, '-', ''), 12)}' +var workbookName = 'APIM Cost Tracking ${index}' +var costExportName = 'apim-cost-export' +var diagnosticSettingsNameSuffix = 'costing-diagnostics-${index}' + + +// ------------------ +// RESOURCES +// ------------------ + +// https://learn.microsoft.com/azure/templates/microsoft.apimanagement/service +resource apimService 'Microsoft.ApiManagement/service@2024-06-01-preview' existing = { + name: apimName +} + +// APIM APIs +module apisModule '../../shared/bicep/modules/apim/v1/api.bicep' = [for api in apis: if(!empty(apis)) { + name: 'api-${api.name}' + params: { + apimName: apimName + appInsightsInstrumentationKey: appInsightsInstrKey + appInsightsId: appInsightsResourceId + api: api + } +}] + +// Create subscriptions for different business units +resource subscriptions 'Microsoft.ApiManagement/service/subscriptions@2024-06-01-preview' = [for bu in businessUnits: { + name: bu.name + parent: apimService + properties: { + displayName: bu.displayName + scope: '/apis/${apis[0].name}' + state: 'active' + } + dependsOn: [ + apisModule + ] +}] + +// Deploy Log Analytics Workspace using shared module +// https://learn.microsoft.com/azure/templates/microsoft.operationalinsights/workspaces +module logAnalyticsModule '../../shared/bicep/modules/operational-insights/v1/workspaces.bicep' = if (enableLogAnalytics) { + name: 
'logAnalytics' + params: { + location: location + resourceSuffix: resourceSuffix + logAnalyticsName: logAnalyticsWorkspaceName + } +} + +// Deploy Application Insights using shared module +// https://learn.microsoft.com/azure/templates/microsoft.insights/components +module applicationInsightsModule '../../shared/bicep/modules/monitor/v1/appinsights.bicep' = if (enableApplicationInsights) { + name: 'applicationInsights' + params: { + location: location + resourceSuffix: resourceSuffix + applicationInsightsName: applicationInsightsName + applicationInsightsLocation: location + customMetricsOptedInType: 'WithDimensions' + useWorkbook: false + #disable-next-line BCP318 + lawId: enableLogAnalytics ? logAnalyticsModule.outputs.id : '' + } +} + + +// https://learn.microsoft.com/azure/templates/microsoft.storage/storageaccounts +resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = { + name: storageAccountName + location: location + sku: { + name: storageAccountSku + } + kind: 'StorageV2' + properties: { + accessTier: 'Hot' + minimumTlsVersion: 'TLS1_2' + supportsHttpsTrafficOnly: true + allowBlobPublicAccess: false + allowSharedKeyAccess: false + } + + resource blobService 'blobServices' = { + name: 'default' + + resource costExportsContainer 'containers' = { + name: 'cost-exports' + properties: { + publicAccess: 'None' + } + } + } +} + +// Helper variables to safely access properties from conditionally deployed resources +#disable-next-line BCP318 +var appInsightsInstrKey = enableApplicationInsights ? applicationInsightsModule.outputs.instrumentationKey : '' +var appInsightsConnectionStr = enableApplicationInsights ? 'InstrumentationKey=${appInsightsInstrKey}' : '' + +// Helper variables for diagnostics module +#disable-next-line BCP318 +var logAnalyticsWorkspaceId = enableLogAnalytics ? logAnalyticsModule.outputs.id : '' +#disable-next-line BCP318 +var appInsightsResourceId = enableApplicationInsights ? 
applicationInsightsModule.outputs.id : '' + +// Deploy APIM diagnostics using shared module +module apimDiagnosticsModule '../../shared/bicep/modules/apim/v1/diagnostics.bicep' = if (!empty(apimName)) { + name: 'apimDiagnostics' + params: { + location: location + apimServiceName: apimName + apimResourceGroupName: resourceGroup().name + enableLogAnalytics: enableLogAnalytics + logAnalyticsWorkspaceId: logAnalyticsWorkspaceId + enableApplicationInsights: enableApplicationInsights + appInsightsInstrumentationKey: appInsightsInstrKey + appInsightsResourceId: appInsightsResourceId + diagnosticSettingsNameSuffix: diagnosticSettingsNameSuffix + } +} + + +// https://learn.microsoft.com/azure/templates/microsoft.insights/workbooks +resource workbook 'Microsoft.Insights/workbooks@2023-06-01' = if (enableLogAnalytics) { + name: guid(resourceGroup().id, 'apim-costing-workbook', string(index)) + location: location + kind: 'shared' + properties: { + displayName: workbookName + serializedData: string(loadJsonContent('workbook.json')) + version: '1.0' + #disable-next-line BCP318 + sourceId: enableLogAnalytics ? logAnalyticsModule.outputs.id : '' + category: 'APIM' + } +} + +// Cost Management exports are subscription-scoped and must be deployed via a module. +// https://learn.microsoft.com/azure/templates/microsoft.costmanagement/exports +module costExportModule './cost-export.bicep' = if (enableCostExport) { + name: 'costExportDeployment' + scope: subscription() + params: { + costExportName: costExportName + storageAccountId: storageAccount.id + recurrence: costExportFrequency + startDate: costExportStartDate + } + dependsOn: [ + storageAccount::blobService::costExportsContainer + ] +} + +#disable-next-line BCP318 +var costExportOutputName = enableCostExport ? costExportModule.outputs.costExportName : costExportName + +// Variables for output values +var workbookDisplayName = workbookName +var workbookIdOutput = enableLogAnalytics ? 
workbook.id : '' + +// ------------------ +// OUTPUTS +// ------------------ + +output apimServiceId string = apimService.id +output apimServiceName string = apimService.name +output apimResourceGatewayURL string = apimService.properties.gatewayUrl + +@description('Name of the Application Insights resource') +#disable-next-line BCP318 +output applicationInsightsName string = enableApplicationInsights ? applicationInsightsModule.outputs.applicationInsightsName : '' + +@description('Application Insights instrumentation key') +output applicationInsightsInstrumentationKey string = appInsightsInstrKey + +@description('Application Insights connection string') +output applicationInsightsConnectionString string = appInsightsConnectionStr + +@description('Name of the Log Analytics Workspace') +output logAnalyticsWorkspaceName string = enableLogAnalytics ? logAnalyticsWorkspaceName : '' + +@description('Log Analytics Workspace ID') +#disable-next-line BCP318 +output logAnalyticsWorkspaceId string = enableLogAnalytics ? 
logAnalyticsModule.outputs.id : '' + +@description('Name of the Storage Account for cost exports') +output storageAccountName string = storageAccount.name + +@description('Storage Account ID') +output storageAccountId string = storageAccount.id + +@description('Cost exports container name') +output costExportsContainerName string = 'cost-exports' + +@description('Name of the Azure Monitor Workbook') +output workbookName string = workbookDisplayName + +@description('Workbook ID') +output workbookId string = workbookIdOutput + +@description('Name of the Cost Management export') +output costExportName string = costExportOutputName + +@description('Subscription keys for the business units') +output subscriptionKeys array = [for (bu, i) in businessUnits: { + name: bu.name + primaryKey: listSecrets(subscriptions[i].id, '2024-06-01-preview').primaryKey +}] diff --git a/samples/costing/screenshots/Dashboard-01.png b/samples/costing/screenshots/Dashboard-01.png new file mode 100644 index 00000000..ecd3b4da Binary files /dev/null and b/samples/costing/screenshots/Dashboard-01.png differ diff --git a/samples/costing/screenshots/Dashboard-02.png b/samples/costing/screenshots/Dashboard-02.png new file mode 100644 index 00000000..bbce14c9 Binary files /dev/null and b/samples/costing/screenshots/Dashboard-02.png differ diff --git a/samples/costing/screenshots/Dashboard-03.png b/samples/costing/screenshots/Dashboard-03.png new file mode 100644 index 00000000..6355e0b4 Binary files /dev/null and b/samples/costing/screenshots/Dashboard-03.png differ diff --git a/samples/costing/screenshots/Dashboard-04.png b/samples/costing/screenshots/Dashboard-04.png new file mode 100644 index 00000000..b8b8ff30 Binary files /dev/null and b/samples/costing/screenshots/Dashboard-04.png differ diff --git a/samples/costing/screenshots/Dashboard-05.png b/samples/costing/screenshots/Dashboard-05.png new file mode 100644 index 00000000..3cab610e Binary files /dev/null and 
b/samples/costing/screenshots/Dashboard-05.png differ diff --git a/samples/costing/screenshots/README.md b/samples/costing/screenshots/README.md new file mode 100644 index 00000000..0c21cb0a --- /dev/null +++ b/samples/costing/screenshots/README.md @@ -0,0 +1,35 @@ +# Screenshots for APIM Costing Sample + +This directory contains screenshots showing expected results after running the costing sample. + +## Cost Management Export + +### Cost Report - Export Overview + +![Cost Report - Export Overview](costreport-01.png) + +### Cost Report - Export Details + +![Cost Report - Export Details](costreport-02.png) + +## Azure Monitor Workbook Dashboard + +### Cost Allocation Overview + +![Dashboard - Cost Allocation Overview](Dashboard-01.png) + +### Cost Breakdown by Business Unit + +![Dashboard - Cost Breakdown by Business Unit](Dashboard-02.png) + +### Request Distribution + +![Dashboard - Request Distribution](Dashboard-03.png) + +### Usage Analytics + +![Dashboard - Usage Analytics](Dashboard-04.png) + +### Response Code Analysis + +![Dashboard - Response Code Analysis](Dashboard-05.png) diff --git a/samples/costing/screenshots/costreport-01.png b/samples/costing/screenshots/costreport-01.png new file mode 100644 index 00000000..405f5354 Binary files /dev/null and b/samples/costing/screenshots/costreport-01.png differ diff --git a/samples/costing/screenshots/costreport-02.png b/samples/costing/screenshots/costreport-02.png new file mode 100644 index 00000000..ae9afe51 Binary files /dev/null and b/samples/costing/screenshots/costreport-02.png differ diff --git a/samples/costing/verify-log-ingestion.kql b/samples/costing/verify-log-ingestion.kql new file mode 100644 index 00000000..5d81d23d --- /dev/null +++ b/samples/costing/verify-log-ingestion.kql @@ -0,0 +1,5 @@ +// Counts APIM gateway log entries that carry a subscription ID. +// Used to verify that diagnostic logs are flowing into Log Analytics. 
+ApiManagementGatewayLogs +| where ApimSubscriptionId != '' +| summarize Count = count() diff --git a/samples/costing/workbook.json b/samples/costing/workbook.json new file mode 100644 index 00000000..70a02a38 --- /dev/null +++ b/samples/costing/workbook.json @@ -0,0 +1,449 @@ +{ + "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json", + "fallbackResourceIds": [], + "fromTemplateId": "sentinel-UserWorkbook", + "items": [ + { + "content": { + "parameters": [ + { + "id": "b859a101-1fbb-4c30-a1df-97d7d4b0d6f2", + "isRequired": true, + "label": "Time Range", + "name": "TimeRange", + "type": 4, + "typeSettings": { + "allowCustom": true, + "selectableValues": [ + { "durationMs": 86400000 }, + { "durationMs": 172800000 }, + { "durationMs": 604800000 }, + { "durationMs": 1209600000 }, + { "durationMs": 2592000000 }, + { "durationMs": 5184000000 }, + { "durationMs": 7776000000 } + ] + }, + "value": { + "durationMs": 2592000000 + }, + "version": "KqlParameterItem/1.0" + }, + { + "id": "c1a2b3d4-e5f6-7890-abcd-ef1234567890", + "isRequired": true, + "label": "Base Monthly APIM Cost ($)", + "name": "BaseMonthlyCost", + "type": 1, + "typeSettings": { + "paramValidationRules": [ + { + "match": true, + "message": "Enter a valid dollar amount (e.g. 150.00)", + "regExp": "^\\d+(\\.\\d{1,2})?$" + } + ] + }, + "value": "150.00", + "version": "KqlParameterItem/1.0" + }, + { + "id": "d2b3c4e5-f6a7-8901-bcde-f12345678901", + "isRequired": true, + "label": "Variable Cost per 1000 Requests ($)", + "name": "PerRequestRate", + "type": 1, + "typeSettings": { + "paramValidationRules": [ + { + "match": true, + "message": "Enter a valid rate (e.g. 
0.003)", + "regExp": "^\\d+(\\.\\d{1,6})?$" + } + ] + }, + "value": "0.003", + "version": "KqlParameterItem/1.0" + }, + { + "id": "e3c4d5f6-a7b8-9012-cdef-234567890abc", + "isHiddenWhenLocked": true, + "label": "Selected Business Unit", + "name": "SelectedBusinessUnit", + "type": 1, + "value": "*", + "version": "KqlParameterItem/1.0" + } + ], + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "style": "pills", + "version": "KqlParameterItem/1.0" + }, + "name": "parameters - 0", + "type": 9 + }, + { + "content": { + "json": "## APIM Cost Allocation & Showback Dashboard\n\nThis workbook splits the **base APIM infrastructure cost** across business units proportionally by usage, then adds **variable per-API costs** based on request volume.\n\n| Parameter | Description |\n|---|---|\n| **Base Monthly APIM Cost** | Fixed platform cost (SKU, networking, etc.) split proportionally by request share |\n| **Variable Cost per 1K Requests** | Usage-based rate applied on top of the base allocation |\n\n> Adjust parameters above to model different pricing scenarios." + }, + "name": "text - header", + "type": 1 + }, + { + "content": { + "expandable": true, + "expanded": true, + "groupType": "editable", + "items": [ + { + "content": { + "json": "| Component | Formula |\n|---|---|\n| **Base Cost** | Monthly platform cost for the APIM SKU (see parameter above): **${BaseMonthlyCost}** |\n| **Base Cost Share** | `Base Monthly Cost x (BU Requests / Total Requests)` |\n| **Variable Cost** | `BU Requests x (Rate per 1K / 1000)` |\n| **Total Allocated** | `Base Cost Share + Variable Cost` |\n\n> The base monthly cost and variable rate parameters are editable above. Use the notebook's pricing lookup cell to auto-detect values from the [Azure Retail Prices API](https://learn.microsoft.com/rest/api/cost-management/retail-prices/azure-retail-prices) and keep them in sync with your APIM SKU." 
+ }, + "name": "text - cost-model-detail", + "type": 1 + } + ], + "loadType": "always", + "title": "Cost Allocation Model", + "version": "NotebookGroup/1.0" + }, + "name": "group - cost-model", + "type": 12 + }, + { + "content": { + "expandable": true, + "expanded": true, + "groupType": "editable", + "items": [ + { + "content": { + "exportDefaultValue": "*", + "exportFieldName": "Business Unit", + "exportParameterName": "SelectedBusinessUnit", + "gridSettings": { + "formatters": [ + { + "columnMatch": "Usage Share (%)", + "formatOptions": { + "max": 100, + "min": 0, + "palette": "blue" + }, + "formatter": 8 + }, + { + "columnMatch": "Base Cost Share ($)", + "formatOptions": { + "min": 0, + "palette": "blue" + }, + "formatter": 8 + }, + { + "columnMatch": "Total Allocated ($)", + "formatOptions": { + "min": 0, + "palette": "turquoise" + }, + "formatter": 8 + } + ] + }, + "query": "let baseCost = todouble('{BaseMonthlyCost}');\r\nlet perKRate = todouble('{PerRequestRate}');\r\nlet logs = ApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != '';\r\nlet totalRequests = toscalar(logs | summarize count());\r\nlogs\r\n| summarize RequestCount = count() by ApimSubscriptionId\r\n| extend UsageShare = round(RequestCount * 100.0 / totalRequests, 2)\r\n| extend BaseCostShare = round(baseCost * RequestCount / totalRequests, 2)\r\n| extend VariableCost = round(RequestCount * perKRate / 1000.0, 2)\r\n| extend TotalAllocatedCost = round(BaseCostShare + VariableCost, 2)\r\n| order by TotalAllocatedCost desc\r\n| project\r\n ['Business Unit'] = ApimSubscriptionId,\r\n ['Requests'] = RequestCount,\r\n ['Usage Share (%)'] = UsageShare,\r\n ['Base Cost ($)'] = baseCost,\r\n ['Base Cost Share ($)'] = BaseCostShare,\r\n ['Variable Cost ($)'] = VariableCost,\r\n ['Total Allocated ($)'] = TotalAllocatedCost", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + 
"title": "Cost Allocation by Business Unit (click a row to filter charts below)", + "version": "KqlItem/1.0", + "visualization": "table" + }, + "name": "query - cost-allocation-table", + "type": 3 + }, + { + "content": { + "chartSettings": { + "customThresholdLine": "{BaseMonthlyCost}", + "customThresholdLineStyle": 1, + "seriesLabelSettings": [ + { "color": "blue", "label": "Base Cost ($)", "seriesName": "BaseCostShare" }, + { "color": "orange", "label": "Variable Cost ($)", "seriesName": "VariableCost" } + ], + "xAxis": "ApimSubscriptionId", + "ySettings": { + "min": 0 + } + }, + "query": "let baseCost = todouble('{BaseMonthlyCost}');\r\nlet perKRate = todouble('{PerRequestRate}');\r\nlet logs = ApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != '';\r\nlet totalRequests = toscalar(logs | summarize count());\r\nlogs\r\n| summarize RequestCount = count() by ApimSubscriptionId\r\n| extend BaseCostShare = round(baseCost * RequestCount / totalRequests, 2)\r\n| extend VariableCost = round(RequestCount * perKRate / 1000.0, 2)\r\n| project ApimSubscriptionId, BaseCostShare, VariableCost", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Base vs Variable Cost Split by Business Unit", + "version": "KqlItem/1.0", + "visualization": "barchart" + }, + "name": "query - cost-allocation-chart", + "type": 3 + }, + { + "content": { + "gridSettings": { + "formatters": [ + { + "columnMatch": "Total ($)", + "formatOptions": { + "min": 0, + "palette": "turquoise" + }, + "formatter": 8 + } + ] + }, + "query": "let baseCost = todouble('{BaseMonthlyCost}');\r\nlet perKRate = todouble('{PerRequestRate}');\r\nlet selectedBU = '{SelectedBusinessUnit}';\r\nlet logs = ApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != '';\r\nlet filteredLogs = logs\r\n| where selectedBU == '*' or ApimSubscriptionId == 
selectedBU;\r\nlet totalRequests = toscalar(logs | summarize count());\r\nfilteredLogs\r\n| summarize RequestCount = count() by ApimSubscriptionId, ApiId\r\n| extend BaseCostShare = round(baseCost * RequestCount / totalRequests, 2)\r\n| extend VariableCost = round(RequestCount * perKRate / 1000.0, 2)\r\n| extend TotalCost = round(BaseCostShare + VariableCost, 2)\r\n| order by TotalCost desc\r\n| project\r\n ['Business Unit'] = ApimSubscriptionId,\r\n ['API'] = ApiId,\r\n ['Requests'] = RequestCount,\r\n ['Base Share ($)'] = BaseCostShare,\r\n ['Variable ($)'] = VariableCost,\r\n ['Total ($)'] = TotalCost\r\n| take 25", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Cost Breakdown by Business Unit & API (Top 25)", + "version": "KqlItem/1.0", + "visualization": "table" + }, + "name": "query - cost-per-api", + "type": 3 + } + ], + "loadType": "always", + "title": "Cost Allocation Summary", + "version": "NotebookGroup/1.0" + }, + "name": "group - cost-allocation", + "type": 12 + }, + { + "content": { + "expandable": true, + "expanded": true, + "groupType": "editable", + "items": [ + { + "content": { + "chartSettings": { + "ySettings": { + "min": 0 + } + }, + "query": "ApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != ''\r\n| summarize RequestCount = count() by ApimSubscriptionId\r\n| order by RequestCount desc\r\n| project BusinessUnit = ApimSubscriptionId, RequestCount", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Request Count by Business Unit", + "version": "KqlItem/1.0", + "visualization": "barchart" + }, + "name": "query - request-count", + "type": 3 + }, + { + "content": { + "chartSettings": { + "seriesLabelSettings": [ + { "color": "blue", "seriesName": "RequestCount" } + ] + }, + "query": "let apimLogs = 
ApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != '';\r\nlet totalCount = toscalar(apimLogs | count);\r\napimLogs\r\n| summarize RequestCount = count() by ApimSubscriptionId\r\n| extend Percentage = round(RequestCount * 100.0 / totalCount, 2)\r\n| order by RequestCount desc\r\n| project BusinessUnit = ApimSubscriptionId, RequestCount, ['Percentage (%)'] = Percentage", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Request Distribution Across Business Units", + "version": "KqlItem/1.0", + "visualization": "piechart" + }, + "name": "query - distribution", + "type": 3 + }, + { + "content": { + "query": "let selectedBU = '{SelectedBusinessUnit}';\r\nApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != ''\r\n| where selectedBU == '*' or ApimSubscriptionId == selectedBU\r\n| summarize RequestCount = count() by bin(TimeGenerated, 1h), ApimSubscriptionId\r\n| project TimeGenerated, BusinessUnit = ApimSubscriptionId, RequestCount", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Request Trends Over Time by Business Unit", + "version": "KqlItem/1.0", + "visualization": "timechart" + }, + "name": "query - trends", + "type": 3 + } + ], + "loadType": "always", + "title": "Usage Analytics", + "version": "NotebookGroup/1.0" + }, + "name": "group - usage-analytics", + "type": 12 + }, + { + "content": { + "expandable": true, + "expanded": true, + "groupType": "editable", + "items": [ + { + "content": { + "gridSettings": { + "formatters": [ + { + "columnMatch": "Success Rate (%)", + "formatOptions": { + "max": 100, + "min": 0, + "palette": "redGreen" + }, + "formatter": 8 + }, + { + "columnMatch": "Error Rate (%)", + "formatOptions": { + "max": 100, + "min": 0, + "palette": "greenRed" + }, + 
"formatter": 8 + } + ] + }, + "query": "let selectedBU = '{SelectedBusinessUnit}';\r\nApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != ''\r\n| where selectedBU == '*' or ApimSubscriptionId == selectedBU\r\n| summarize \r\n TotalRequests = count(),\r\n SuccessRequests = countif(ResponseCode < 400),\r\n ClientErrors = countif(ResponseCode >= 400 and ResponseCode < 500),\r\n ServerErrors = countif(ResponseCode >= 500)\r\n by ApimSubscriptionId\r\n| extend SuccessRate = round(SuccessRequests * 100.0 / TotalRequests, 2)\r\n| extend ErrorRate = round((ClientErrors + ServerErrors) * 100.0 / TotalRequests, 2)\r\n| project \r\n BusinessUnit = ApimSubscriptionId, \r\n TotalRequests, \r\n SuccessRequests, \r\n ClientErrors, \r\n ServerErrors, \r\n ['Success Rate (%)'] = SuccessRate,\r\n ['Error Rate (%)'] = ErrorRate\r\n| order by TotalRequests desc", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Success & Error Metrics by Business Unit", + "version": "KqlItem/1.0", + "visualization": "table" + }, + "name": "query - success-errors", + "type": 3 + }, + { + "content": { + "chartSettings": { + "seriesLabelSettings": [ + { "color": "blue", "label": "2xx Success", "seriesName": "2xx" }, + { "color": "turquoise", "label": "3xx Redirect", "seriesName": "3xx" }, + { "color": "orange", "label": "4xx Client Error", "seriesName": "4xx" }, + { "color": "redBright", "label": "5xx Server Error", "seriesName": "5xx" } + ] + }, + "query": "let selectedBU = '{SelectedBusinessUnit}';\r\nApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != ''\r\n| where selectedBU == '*' or ApimSubscriptionId == selectedBU\r\n| extend ResponseClass = case(\r\n ResponseCode >= 200 and ResponseCode < 300, '2xx',\r\n ResponseCode >= 300 and ResponseCode < 400, '3xx',\r\n ResponseCode >= 400 and ResponseCode < 500, '4xx',\r\n 
ResponseCode >= 500, '5xx',\r\n 'Other')\r\n| summarize RequestCount = count() by ApimSubscriptionId, ResponseClass\r\n| order by ApimSubscriptionId, ResponseClass\r\n| project BusinessUnit = ApimSubscriptionId, ResponseClass, RequestCount", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Response Code Distribution by Business Unit", + "version": "KqlItem/1.0", + "visualization": "categoricalbar" + }, + "name": "query - response-codes", + "type": 3 + } + ], + "loadType": "always", + "title": "Health & Reliability", + "version": "NotebookGroup/1.0" + }, + "name": "group - health", + "type": 12 + }, + { + "content": { + "expandable": true, + "expanded": false, + "groupType": "editable", + "items": [ + { + "content": { + "json": "Select a **Business Unit** row in the Cost Allocation table above to see that unit's cost trend over time.\n\nThe chart shows daily base cost share and variable cost for the selected business unit. The horizontal line represents the total base monthly cost (${BaseMonthlyCost}) for reference." 
+ }, + "name": "text - drilldown-help", + "type": 1 + }, + { + "content": { + "chartSettings": { + "customThresholdLine": "{BaseMonthlyCost}", + "customThresholdLineStyle": 1, + "seriesLabelSettings": [ + { "color": "blue", "label": "Base Cost Share ($)", "seriesName": "BaseCostShare" }, + { "color": "orange", "label": "Variable Cost ($)", "seriesName": "VariableCost" }, + { "color": "purple", "label": "Total Allocated ($)", "seriesName": "TotalAllocatedCost" } + ], + "ySettings": { + "min": 0 + } + }, + "query": "let baseCost = todouble('{BaseMonthlyCost}');\r\nlet perKRate = todouble('{PerRequestRate}');\r\nlet selectedBU = '{SelectedBusinessUnit}';\r\nlet allLogs = ApiManagementGatewayLogs\r\n| where TimeGenerated {TimeRange} and ApimSubscriptionId != '';\r\nlet dailyTotal = allLogs\r\n| summarize DayTotal = count() by bin(TimeGenerated, 1d);\r\nallLogs\r\n| where selectedBU != '*' and ApimSubscriptionId == selectedBU\r\n| summarize RequestCount = count() by bin(TimeGenerated, 1d)\r\n| join kind=inner dailyTotal on TimeGenerated\r\n| extend BaseCostShare = round(baseCost * RequestCount / DayTotal, 2)\r\n| extend VariableCost = round(RequestCount * perKRate / 1000.0, 4)\r\n| extend TotalAllocatedCost = round(BaseCostShare + VariableCost, 2)\r\n| project TimeGenerated, BaseCostShare, VariableCost, TotalAllocatedCost", + "queryType": 0, + "resourceType": "microsoft.operationalinsights/workspaces", + "size": 0, + "timeContext": { + "durationMs": 2592000000 + }, + "title": "Cost Trend for: {SelectedBusinessUnit}", + "version": "KqlItem/1.0", + "visualization": "linechart" + }, + "conditionalVisibility": { + "comparison": "isNotEqualTo", + "parameterName": "SelectedBusinessUnit", + "value": "*" + }, + "name": "query - drilldown-cost-trend", + "type": 3 + }, + { + "content": { + "json": "> Select a business unit row in the Cost Allocation table to view its cost trend.", + "style": "info" + }, + "conditionalVisibility": { + "comparison": "isEqualTo", + "parameterName": 
"SelectedBusinessUnit", + "value": "*" + }, + "name": "text - drilldown-placeholder", + "type": 1 + } + ], + "loadType": "always", + "title": "Business Unit Drill-Down (over time)", + "version": "NotebookGroup/1.0" + }, + "name": "group - drilldown", + "type": 12 + } + ], + "version": "Notebook/1.0" +} diff --git a/samples/general/create.ipynb b/samples/general/create.ipynb index b46d3abb..f7c201f2 100644 --- a/samples/general/create.ipynb +++ b/samples/general/create.ipynb @@ -16,7 +16,9 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "from typing import List\n", + "\n", + "from apimtypes import API, API_ID_XML_POLICY_PATH, APIM_SKU, GET_APIOperation, INFRASTRUCTURE, REQUEST_HEADERS_XML_POLICY_PATH\n", "from console import print_error, print_ok\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -39,22 +41,56 @@ "\n", "sample_folder = 'general'\n", "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, index = index, apim_sku = apim_sku)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " INFRASTRUCTURE.APPGW_APIM_PE,\n", + " INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", "\n", "# Define the APIs and their operations and policies\n", "\n", "# API 1: Request Headers\n", "rh_path = f'{api_prefix}request-headers'\n", "pol_request_headers_get = utils.read_policy_xml(REQUEST_HEADERS_XML_POLICY_PATH)\n", - "request_headers_get = GET_APIOperation('Gets the request headers for the 
current request and returns them. Great for troubleshooting.', pol_request_headers_get)\n", - "request_headers = API(rh_path, 'Request Headers', rh_path, 'API for request headers', operations = [request_headers_get], tags = tags)\n", + "request_headers_get = GET_APIOperation(\n", + " 'Gets the request headers for the current request and returns them. Great for troubleshooting.',\n", + " pol_request_headers_get\n", + ")\n", + "request_headers = API(\n", + " rh_path,\n", + " 'Request Headers',\n", + " rh_path,\n", + " 'API for request headers',\n", + " operations = [request_headers_get],\n", + " tags = tags\n", + ")\n", "\n", "# API 2: API ID: Extract and trace API Identifier\n", "api_id_path = f'{api_prefix}api-id'\n", "pol_api_id_get = utils.read_policy_xml(API_ID_XML_POLICY_PATH)\n", - "api_id_get = GET_APIOperation('Gets the API identifier for the current request and traces it', pol_api_id_get)\n", - "api_id = API(api_id_path, 'API Identifier (api-42)', api_id_path, 'API for extracting and tracing API identifier', operations = [api_id_get], tags = tags)\n", + "api_id_get = GET_APIOperation(\n", + " 'Gets the API identifier for the current request and traces it',\n", + " pol_api_id_get\n", + ")\n", + "api_id = API(\n", + " api_id_path,\n", + " 'API Identifier (api-42)',\n", + " api_id_path,\n", + " 'API for extracting and tracing API identifier',\n", + " operations = [api_id_get],\n", + " tags = tags\n", + ")\n", "\n", "# APIs Array\n", "apis: List[API] = [request_headers, api_id]\n", diff --git a/samples/load-balancing/create.ipynb b/samples/load-balancing/create.ipynb index 7688f085..691e3482 100644 --- a/samples/load-balancing/create.ipynb +++ b/samples/load-balancing/create.ipynb @@ -16,7 +16,8 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "\n", + "from apimtypes import API, APIM_SKU, GET_APIOperation, HttpStatusCode, INFRASTRUCTURE\n", "from console import print_error, print_info, print_message, print_ok\n", "from 
azure_resources import get_infra_rg_name\n", "\n", @@ -29,49 +30,7 @@ "apim_sku = APIM_SKU.BASICV2 # Options: 'BASICV2', 'STANDARDV2', 'PREMIUMV2'\n", "deployment = INFRASTRUCTURE.APIM_ACA # Options: see supported_infras below\n", "api_prefix = 'lb-' # ENTER A PREFIX FOR THE APIS TO REDUCE COLLISION POTENTIAL WITH OTHER SAMPLES\n", - "tags = ['load-balancing'] # ENTER DESCRIPTIVE TAG(S)\n", - "\n", - "\n", - "\n", - "# ------------------------------\n", - "# SYSTEM CONFIGURATION\n", - "# ------------------------------\n", - "\n", - "sample_folder = 'load-balancing'\n", - "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, index = index, apim_sku = apim_sku)\n", - "\n", - "# Define the APIs and their operations and policies\n", - "\n", - "# Load and configure backend pool policies\n", - "pol_aca_backend_pool_load_balancing = utils.read_policy_xml('aca-backend-pool-load-balancing.xml', sample_name = sample_folder)\n", - "pol_aca_backend_pool_load_balancing_429 = utils.read_policy_xml('aca-backend-pool-load-balancing-with-429.xml', sample_name = sample_folder)\n", - "pol_aca_backend_pool_prioritized = pol_aca_backend_pool_load_balancing.format(retry_count = 1, backend_id = 'aca-backend-pool-web-api-429-prioritized')\n", - "pol_aca_backend_pool_prioritized_and_weighted = pol_aca_backend_pool_load_balancing.format(retry_count = 2, backend_id = 'aca-backend-pool-web-api-429-prioritized-and-weighted')\n", - "pol_aca_backend_pool_weighted_equal = pol_aca_backend_pool_load_balancing.format(retry_count = 1, backend_id = 'aca-backend-pool-web-api-429-weighted-50-50')\n", - "pol_aca_backend_pool_weighted_unequal = pol_aca_backend_pool_load_balancing.format(retry_count = 1, backend_id = 'aca-backend-pool-web-api-429-weighted-80-20')\n", - 
"pol_aca_backend_pool_429_prioritized = pol_aca_backend_pool_load_balancing_429.format(retry_count = 1, backend_id = 'aca-backend-pool-web-api-429-prioritized')\n", - "\n", - "# Standard GET operation for all APIs\n", - "get = GET_APIOperation('This is a standard GET')\n", - "\n", - "# API 1: Prioritized backend pool\n", - "lb_prioritized = API(f'{api_prefix}prioritized-aca-pool', 'Prioritized backend pool', f'/{api_prefix}prioritized', 'This is the API for the prioritized backend pool.', pol_aca_backend_pool_prioritized, [get], tags)\n", - "# API 2: Prioritized & weighted backend pool\n", - "lb_prioritized_weighted = API(f'{api_prefix}prioritized-weighted-aca-pool', 'Prioritized & weighted backend pool', f'/{api_prefix}prioritized-weighted', 'This is the API for the prioritized & weighted backend pool.', pol_aca_backend_pool_prioritized_and_weighted, [get], tags)\n", - "# API 3: Weighted backend pool (equal distribution)\n", - "lb_equal_weight = API(f'{api_prefix}weighted-equal-aca-pool', 'Weighted backend pool (equal)', f'/{api_prefix}weighted-equal', 'This is the API for the weighted (equal) backend pool.', pol_aca_backend_pool_weighted_equal, [get], tags)\n", - "# API 4: Weighted backend pool (unequal distribution)\n", - "lb_unequal_weight = API(f'{api_prefix}weighted-unequal-aca-pool', 'Weighted backend pool (unequal)', f'/{api_prefix}weighted-unequal', 'This is the API for the weighted (unequal) backend pool.', pol_aca_backend_pool_weighted_unequal, [get], tags)\n", - "# API 5: Prioritized backend pool with 503-to-429 error handling\n", - "lb_429_prioritized = API(f'{api_prefix}429-prioritized-aca-pool', 'Prioritized backend pool (503โ†’429)', f'/{api_prefix}429-prioritized', 'This is the API for the prioritized backend pool with enhanced error handling that converts 503 to 429.', pol_aca_backend_pool_429_prioritized, [get], tags)\n", - "\n", - "# APIs Array\n", - "apis: List[API] = [lb_prioritized, lb_prioritized_weighted, lb_equal_weight, lb_unequal_weight, 
lb_429_prioritized]\n", - "\n", - "\n", - "print_ok('Notebook initialized')" + "tags = ['load-balancing'] # ENTER DESCRIPTIVE TAG(S)" ] }, { @@ -125,17 +84,12 @@ "metadata": {}, "outputs": [], "source": [ - "# Test and verify load balancing behavior\n", "import json\n", "import time\n", + "\n", "from apimrequests import ApimRequests\n", "from apimtesting import ApimTesting\n", "\n", - "def zzzs():\n", - " sleep_in_s = 5\n", - " print_message(f'Waiting for {sleep_in_s} seconds for the backend timeouts to reset before starting the next set of calls', blank_above=True)\n", - " time.sleep(sleep_in_s)\n", - "\n", "tests = ApimTesting(\"Load Balancing Sample Tests\", sample_folder, deployment)\n", "\n", "# Determine endpoints, URLs, etc. prior to test execution\n", @@ -160,30 +114,59 @@ "tests.verify(len(api_results_prioritized), 15)\n", "\n", "# 2) Weighted equal distribution\n", - "zzzs()\n", - "print_message('2/5: Starting API calls for weighted distribution (50/50)', blank_above = True)\n", + "time.sleep(2)\n", + "print_message(\n", + " '2/5: Starting API calls for weighted distribution (50/50)',\n", + " blank_above = True\n", + ")\n", "reqs.subscriptionKey = apim_apis[2]['subscriptionPrimaryKey']\n", - "api_results_weighted_equal = reqs.multiGet('/lb-weighted-equal', runs = 15, msg='Calling weighted (equal) APIs')\n", + "api_results_weighted_equal = reqs.multiGet(\n", + " '/lb-weighted-equal',\n", + " runs = 15,\n", + " msg = 'Calling weighted (equal) APIs'\n", + ")\n", "tests.verify(len(api_results_weighted_equal), 15)\n", "\n", "# 3) Weighted unequal distribution\n", - "zzzs()\n", - "print_message('3/5: Starting API calls for weighted distribution (80/20)', blank_above = True)\n", + "time.sleep(2)\n", + "print_message(\n", + " '3/5: Starting API calls for weighted distribution (80/20)',\n", + " blank_above = True\n", + ")\n", "reqs.subscriptionKey = apim_apis[3]['subscriptionPrimaryKey']\n", - "api_results_weighted_unequal = reqs.multiGet('/lb-weighted-unequal', 
runs = 15, msg = 'Calling weighted (unequal) APIs')\n", + "api_results_weighted_unequal = reqs.multiGet(\n", + " '/lb-weighted-unequal',\n", + " runs = 15,\n", + " msg = 'Calling weighted (unequal) APIs'\n", + ")\n", "tests.verify(len(api_results_weighted_unequal), 15)\n", "\n", "# 4) Prioritized and weighted distribution\n", - "zzzs()\n", - "print_message('4/5: Starting API calls for prioritized & weighted distribution', blank_above=True)\n", + "time.sleep(2)\n", + "print_message(\n", + " '4/5: Starting API calls for prioritized & weighted distribution',\n", + " blank_above = True\n", + ")\n", "reqs.subscriptionKey = apim_apis[1]['subscriptionPrimaryKey']\n", - "api_results_prioritized_and_weighted = reqs.multiGet('/lb-prioritized-weighted', runs=20, msg='Calling prioritized & weighted APIs')\n", + "api_results_prioritized_and_weighted = reqs.multiGet(\n", + " '/lb-prioritized-weighted',\n", + " runs = 20,\n", + " msg = 'Calling prioritized & weighted APIs'\n", + ")\n", "tests.verify(len(api_results_prioritized_and_weighted), 20)\n", "\n", "# 5) Prioritized and weighted with recovery time\n", - "zzzs()\n", - "print_message('5/5: Starting API calls for prioritized & weighted distribution (500ms sleep)', blank_above = True)\n", - "api_results_prioritized_and_weighted_sleep = reqs.multiGet('/lb-prioritized-weighted', runs = 20, msg = 'Calling prioritized & weighted APIs', sleepMs=500)\n", + "time.sleep(2)\n", + "print_message(\n", + " '5/5: Starting API calls for prioritized & weighted distribution (500ms sleep)',\n", + " blank_above = True\n", + ")\n", + "api_results_prioritized_and_weighted_sleep = reqs.multiGet(\n", + " '/lb-prioritized-weighted',\n", + " runs = 20,\n", + " msg = 'Calling prioritized & weighted APIs',\n", + " sleepMs = 500\n", + ")\n", "tests.verify(len(api_results_prioritized_and_weighted_sleep), 20)\n", "\n", "tests.print_summary()\n", @@ -215,9 +198,13 @@ " title = 'Prioritized Distribution',\n", " x_label = 'Run #',\n", " y_label = 'Response 
Time (ms)',\n", - " fig_text = 'The chart shows a total of 15 requests across a prioritized backend pool with two backends.\\n' \\\n", - " 'Each backend, in sequence, was able to serve five requests for a total of ten requests until the pool became unhealthy (all backends were exhausted).\\n' \\\n", - " 'The average response time is calculated excluding statistical outliers above the 95th percentile (the first request usually takes longer).'\n", + " fig_text = (\n", + " 'The chart shows a total of 15 requests across a prioritized backend pool with two backends.\\n'\n", + " 'Each backend, in sequence, was able to serve five requests for a total of ten requests '\n", + " 'until the pool became unhealthy (all backends were exhausted).\\n'\n", + " 'The average response time is calculated excluding statistical outliers above the 95th percentile '\n", + " '(the first request usually takes longer).'\n", + " )\n", ").plot()\n", "\n", "charts.BarChart(\n", @@ -225,9 +212,13 @@ " title = 'Weighted Distribution (50/50)',\n", " x_label = 'Run #',\n", " y_label = 'Response Time (ms)',\n", - " fig_text = 'The chart shows a total of 15 requests across an equally-weighted backend pool with two backends.\\n' \\\n", - " 'Each backend, alternatingly, was able to serve five requests for a total of ten requests until the pool became unhealthy (all backends were exhausted).\\n' \\\n", - " 'The average response time is calculated excluding statistical outliers above the 95th percentile (the first request usually takes longer).'\n", + " fig_text = (\n", + " 'The chart shows a total of 15 requests across an equally-weighted backend pool with two backends.\\n'\n", + " 'Each backend, alternatingly, was able to serve five requests for a total of ten requests '\n", + " 'until the pool became unhealthy (all backends were exhausted).\\n'\n", + " 'The average response time is calculated excluding statistical outliers above the 95th percentile '\n", + " '(the first request usually takes 
longer).'\n", + " )\n", ").plot()\n", "\n", "charts.BarChart(\n", @@ -235,9 +226,13 @@ " title = 'Weighted Distribution (80/20)',\n", " x_label = 'Run #',\n", " y_label = 'Response Time (ms)',\n", - " fig_text = 'The chart shows a total of 15 requests across an unequally-weighted backend pool with two backends.\\n' \\\n", - " 'Each backend was able to serve requests for a total of ten requests until the pool became unhealthy (all backends were exhausted).\\n' \\\n", - " 'The average response time is calculated excluding statistical outliers above the 95th percentile (the first request usually takes longer).'\n", + " fig_text = (\n", + " 'The chart shows a total of 15 requests across an unequally-weighted backend pool with two backends.\\n'\n", + " 'Each backend was able to serve requests for a total of ten requests until the pool became unhealthy '\n", + " '(all backends were exhausted).\\n'\n", + " 'The average response time is calculated excluding statistical outliers above the 95th percentile '\n", + " '(the first request usually takes longer).'\n", + " )\n", ").plot()\n", "\n", "charts.BarChart(\n", @@ -245,10 +240,13 @@ " title = 'Prioritized & Weighted Distribution',\n", " x_label = 'Run #',\n", " y_label = 'Response Time (ms)',\n", - " fig_text = 'The chart shows a total of 20 requests across a prioritized and equally-weighted backend pool with three backends.\\n' \\\n", - " 'The first backend is set up as the only priority 1 backend. It serves its five requests before the second and third backends - each part of\\n' \\\n", - " 'priority 2 and weight equally - commence taking requests.\\n' \\\n", - " 'The average response time is calculated excluding statistical outliers above the 95th percentile (the first request usually takes longer).'\n", + " fig_text = (\n", + " 'The chart shows a total of 20 requests across a prioritized and equally-weighted backend pool with three backends.\\n'\n", + " 'The first backend is set up as the only priority 1 backend. 
It serves its five requests before the '\n", + " 'second and third backends - each part of priority 2 and weight equally - commence taking requests.\\n'\n", + " 'The average response time is calculated excluding statistical outliers above the 95th percentile '\n", + " '(the first request usually takes longer).'\n", + " )\n", ").plot()\n", "\n", "charts.BarChart(\n", @@ -256,10 +254,15 @@ " title = 'Prioritized & Weighted Distribution (500ms sleep)',\n", " x_label = 'Run #',\n", " y_label = 'Response Time (ms)',\n", - " fig_text = 'The chart shows a total of 20 requests across a prioritized and equally-weighted backend pool with three backends (same as previously).\\n' \\\n", - " 'The key difference to the previous chart is that each request is now followed by a 500ms sleep, which allows timed-out backends to recover.\\n' \\\n", - " 'The average response time is calculated excluding statistical outliers above the 95th percentile (the first request usually takes longer).'\n", - ").plot()\n" + " fig_text = (\n", + " 'The chart shows a total of 20 requests across a prioritized and equally-weighted backend pool with three backends '\n", + " '(same as previously).\\n'\n", + " 'The key difference to the previous chart is that each request is now followed by a 500ms sleep, '\n", + " 'which allows timed-out backends to recover.\\n'\n", + " 'The average response time is calculated excluding statistical outliers above the 95th percentile '\n", + " '(the first request usually takes longer).'\n", + " )\n", + ").plot()" ] }, { @@ -277,7 +280,6 @@ "metadata": {}, "outputs": [], "source": [ - "# Test the 503-to-429 error handling API\n", "print_message('Testing 503-to-429 error handling...', blank_above=True)\n", "\n", "# Use the 429 API\n", @@ -287,7 +289,7 @@ "api_results_429_handling = reqs.multiGet('/lb-429-prioritized', runs=12, msg='Calling 429-prioritized API to trigger error handling')\n", "\n", "# Count 429 responses by status code\n", - "count_429 = sum(1 for result in 
api_results_429_handling if result['status_code'] == 429)\n", + "count_429 = sum(1 for result in api_results_429_handling if result['status_code'] == HttpStatusCode.TOO_MANY_REQUESTS)\n", "\n", "# Verify that at least one 429 response was returned\n", "tests.verify(count_429 > 0, True)\n", @@ -295,7 +297,7 @@ "# Verify Retry-After header is present for 429 responses and absent for others\n", "for result in api_results_429_handling:\n", " has_retry_after = 'Retry-After' in result['headers']\n", - " if result['status_code'] == 429:\n", + " if result['status_code'] == HttpStatusCode.TOO_MANY_REQUESTS:\n", " # 429 responses should have Retry-After header\n", " tests.verify(has_retry_after, True, 'has Retry-After header')\n", " else:\n", @@ -304,7 +306,7 @@ "\n", "tests.print_summary()\n", "\n", - "print_ok('Error handling test completed!')\n" + "print_ok('Error handling test completed!')" ] } ], diff --git a/samples/oauth-3rd-party/create.ipynb b/samples/oauth-3rd-party/create.ipynb index 4a81dd26..c2b883b8 100644 --- a/samples/oauth-3rd-party/create.ipynb +++ b/samples/oauth-3rd-party/create.ipynb @@ -24,9 +24,11 @@ "metadata": {}, "outputs": [], "source": [ - "import utils\n", - "from apimtypes import *\n", "import os\n", + "import utils\n", + "from typing import List\n", + "\n", + "from apimtypes import API, APIM_SKU, GET_APIOperation2, INFRASTRUCTURE, NamedValue, Role\n", "from console import print_error, print_info, print_ok\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -38,8 +40,8 @@ "index = 1\n", "apim_sku = APIM_SKU.BASICV2 # Options: 'BASICV2', 'STANDARDV2', 'PREMIUMV2'\n", "deployment = INFRASTRUCTURE.SIMPLE_APIM # Options: see supported_infras below\n", - "api_prefix = 'oauth-' # Prefix for API names # ENTER A PREFIX FOR THE APIS TO REDUCE COLLISION POTENTIAL WITH OTHER SAMPLES\n", - "tags = ['oauth-3rd-party', 'jwt', 'credential-manager', 'policy-fragment'] # ENTER DESCRIPTIVE TAG(S)\n", + "api_prefix = 'oauth-' # ENTER A PREFIX FOR THE 
APIS TO REDUCE COLLISION POTENTIAL WITH OTHER SAMPLES\n", + "tags = ['oauth-3rd-party', 'jwt', 'credential-manager', 'policy-fragment'] # ENTER DESCRIPTIVE TAG(S)\n", "\n", "\n", "\n", @@ -50,8 +52,23 @@ "# Create the notebook helper with JWT support\n", "sample_folder = 'oauth-3rd-party'\n", "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, True, index = index, apim_sku = apim_sku)\n", + "supported_infras = [\n", + " INFRASTRUCTURE.AFD_APIM_PE,\n", + " INFRASTRUCTURE.APIM_ACA,\n", + " INFRASTRUCTURE.APPGW_APIM,\n", + " INFRASTRUCTURE.APPGW_APIM_PE,\n", + " INFRASTRUCTURE.SIMPLE_APIM\n", + "]\n", + "nb_helper = utils.NotebookHelper(\n", + " sample_folder,\n", + " rg_name,\n", + " rg_location,\n", + " deployment,\n", + " supported_infras,\n", + " True,\n", + " index = index,\n", + " apim_sku = apim_sku\n", + ")\n", "\n", "# OAuth credentials (required environment variables)\n", "client_id = os.getenv('SPOTIFY_CLIENT_ID')\n", @@ -59,7 +76,10 @@ "\n", "# Validate OAuth credentials\n", "if not client_id or not client_secret:\n", - " print_error('Please set the SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET environment variables in the root .env file before running this notebook.')\n", + " print_error(\n", + " 'Please set the SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET environment variables '\n", + " 'in the root .env file before running this notebook.'\n", + " )\n", " raise ValueError('Missing Spotify OAuth credentials')\n", "\n", "# Define the APIs and their operations and policies\n", @@ -91,8 +111,23 @@ "\n", "# API 1: Spotify\n", "spotify_path = f'{api_prefix}spotify'\n", - "spotify_artist_get = GET_APIOperation2('artists-get', 'Artists', '/artists/{id}', 'Gets the artist by their ID', 
pol_artist_get_xml, templateParameters = blob_template_parameters)\n", - "spotify = API(spotify_path, 'Spotify', spotify_path, 'This is the API for interactions with the Spotify REST API', pol_spotify_api_xml, [spotify_artist_get], tags)\n", + "spotify_artist_get = GET_APIOperation2(\n", + " 'artists-get',\n", + " 'Artists',\n", + " '/artists/{id}',\n", + " 'Gets the artist by their ID',\n", + " pol_artist_get_xml,\n", + " templateParameters = blob_template_parameters\n", + ")\n", + "spotify = API(\n", + " spotify_path,\n", + " 'Spotify',\n", + " spotify_path,\n", + " 'This is the API for interactions with the Spotify REST API',\n", + " pol_spotify_api_xml,\n", + " [spotify_artist_get],\n", + " tags\n", + ")\n", "\n", "# APIs Array\n", "apis: List[API] = [spotify]\n", @@ -208,18 +243,31 @@ "\n", "# Test artist lookup (Taylor Swift's Spotify Artist ID)\n", "artist_id = '06HL4z0CvFAxyc27GXpf02'\n", - "output = reqs.singleGet(f'{spotify_path}/artists/{artist_id}', msg = 'Calling the Spotify Artist API via API Management Gateway URL.')\n", + "output = reqs.singleGet(\n", + " f'{spotify_path}/artists/{artist_id}',\n", + " msg = 'Calling the Spotify Artist API via API Management Gateway URL.'\n", + ")\n", "\n", "artist = json.loads(output)\n", "tests.verify(artist['name'], 'Taylor Swift')\n", - "print_info(f'{artist[\"name\"]} has a popularity rating of {artist[\"popularity\"]} with {artist[\"followers\"][\"total\"]:,} followers on Spotify.')\n", + "print_info(\n", + " f'{artist[\"name\"]} has a popularity rating of {artist[\"popularity\"]} '\n", + " f'with {artist[\"followers\"][\"total\"]:,} followers on Spotify.'\n", + ")\n", "\n", "# Test unauthorized access (should fail with 401)\n", "reqsNoApiSubscription = ApimRequests(endpoint_url, None, request_headers)\n", - "output = reqsNoApiSubscription.singleGet(f'{spotify_path}/artists/{artist_id}', msg = 'Calling the Spotify Artist API without API subscription key. 
Expect 401.')\n", + "output = reqsNoApiSubscription.singleGet(\n", + " f'{spotify_path}/artists/{artist_id}',\n", + " msg = 'Calling the Spotify Artist API without API subscription key. Expect 401.'\n", + ")\n", "outputJson = utils.get_json(output)\n", "tests.verify(outputJson['statusCode'], 401)\n", - "tests.verify(outputJson['message'], 'Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API.')\n", + "tests.verify(\n", + " outputJson['message'],\n", + " 'Access denied due to missing subscription key. '\n", + " 'Make sure to include subscription key when making requests to an API.'\n", + ")\n", "\n", "tests.print_summary()\n", "print_ok('✅ All OAuth integration tests completed successfully!')" diff --git a/samples/secure-blob-access/README.md b/samples/secure-blob-access/README.md index cc707f6d..b82156e4 100644 --- a/samples/secure-blob-access/README.md +++ b/samples/secure-blob-access/README.md @@ -1,6 +1,6 @@ # 🔐 Samples: Secure Blob Access via API Management -This sample demonstrates implementing the **valet key pattern** with Azure API Management (APIM) to provide direct, secure, time-limited access to blob storage without exposing storage account keys. While APIM provides the key, it is deliberately not the conduit for downloading the actual blob. +This sample demonstrates implementing the **valet key pattern** with Azure API Management (APIM) to provide direct, secure, time-limited access to blob storage using **User Delegation SAS** tokens. Shared key access is disabled on the storage account entirely. While APIM provides the SAS token, it is deliberately not the conduit for downloading the actual blob. ⚙️ **Supported infrastructures**: All infrastructures ## 🎯 Objectives 1. Learn how the [valet key pattern][valet-key-pattern] works. -1. 
Understand how APIM provides the SAS token for direct download from storage. +1. Understand how APIM generates a User Delegation SAS token for direct download from storage. 1. Experience how you can secure the caller from APIM with your own mechanisms and use APIM's managed identity to interact with Azure Storage. +1. Learn why User Delegation SAS is preferred over account-key SAS for security. ## 📝 Scenario -This sample demonstrates how a Human Resources (HR) application or user can securely gain access to an HR file. The authentication and authorization between the application or the user is with APIM. Once verified, APIM then uses its own managed identity to verify the blob exists and creates a SAS token for direct, secure, time-limited access to the blob. This token is then combined with the URL to the blob before it is returned to the API caller. Once received, the caller can then _directly_ access the blob on storage. +This sample demonstrates how a Human Resources (HR) application or user can securely gain access to an HR file. Authentication and authorization of the application or user are handled by APIM. Once verified, APIM then uses its own managed identity to verify the blob exists, obtains a user delegation key from Azure Storage, and creates a User Delegation SAS token for direct, secure, time-limited access to the blob. This token is then combined with the URL to the blob before it is returned to the API caller. Once received, the caller can then _directly_ access the blob on storage. This is an implementation of the valet key pattern, which ensures that APIM is not used as the download (or upload) conduit of the blob, which could potentially be quite large. Instead, APIM is used very appropriately for facilitating means of secure access to the resource only. 
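The valet-key flow described above — the gateway authenticates the caller, then hands back a signed, time-limited URL that storage can verify on its own — can be sketched in miniature. This is a hedged illustration of the pattern only: it uses a plain HMAC-signed URL with a hypothetical issuer (`issue_valet_url`) and verifier (`storage_accepts`), not the actual User Delegation SAS format that Azure Storage and the sample's APIM policy use.

```python
import hashlib
import hmac
import time

# Hypothetical signing key; in the sample this role is played by the
# user delegation key that APIM obtains via its managed identity.
SIGNING_KEY = b"demo-secret"

def issue_valet_url(blob_path: str, ttl_seconds: int = 600) -> str:
    """Gateway side (APIM's role): after authenticating the caller,
    return a time-limited, signed URL for direct storage access."""
    expiry = int(time.time()) + ttl_seconds
    signature = hmac.new(SIGNING_KEY, f"{blob_path}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"https://storage.example/{blob_path}?se={expiry}&sig={signature}"

def storage_accepts(url: str) -> bool:
    """Storage side: recompute the signature and check the expiry.
    No storage account key ever reaches the caller."""
    path, _, query = url.removeprefix("https://storage.example/").partition("?")
    params = dict(pair.split("=", 1) for pair in query.split("&"))
    expected = hmac.new(SIGNING_KEY, f"{path}|{params['se']}".encode(), hashlib.sha256).hexdigest()
    return int(params["se"]) > time.time() and hmac.compare_digest(expected, params["sig"])

url = issue_valet_url("hr-assets/hr.txt")
print(storage_accepts(url))                                 # a fresh, untampered URL is accepted
print(storage_accepts(url.replace("hr.txt", "other.txt")))  # tampering invalidates the signature
```

The design point is the same as in the sample: the issuer and the verifier share trust (here a symmetric key, in Azure a user delegation key scoped to APIM's identity), so the caller can download directly from storage without ever holding a long-lived credential.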
@@ -23,10 +24,10 @@ This sample builds upon knowledge gained from the _AuthX_ and _AuthX-Pro_ sample ## 🛩️ Lab Components This lab sets up: -- A simple Azure Storage account with LRS redundancy +- A simple Azure Storage account with LRS redundancy and shared key access disabled - A blob container with a sample text file -- APIM managed identity with Storage Blob Data Reader permissions -- An API that generates secure blob access URLs using the valet key pattern +- APIM managed identity with Storage Blob Data Reader and Storage Blob Delegator permissions +- An API that generates User Delegation SAS tokens for secure blob access URLs using the valet key pattern - Sample files: a text file for testing diff --git a/samples/secure-blob-access/blob-get-operation.xml b/samples/secure-blob-access/blob-get-operation.xml index 76a8ace9..8e8d4d5c 100644 --- a/samples/secure-blob-access/blob-get-operation.xml +++ b/samples/secure-blob-access/blob-get-operation.xml @@ -2,7 +2,7 @@ - + @@ -10,9 +10,9 @@ - + - + @@ -24,14 +24,13 @@ - - + - + @@ -43,10 +42,11 @@ - + - + + @@ -69,7 +69,7 @@ timestamp = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"), expire_at = (string)context.Variables["expiry"] }; - + return Newtonsoft.Json.JsonConvert.SerializeObject(response, Newtonsoft.Json.Formatting.Indented); } diff --git a/samples/secure-blob-access/create.ipynb b/samples/secure-blob-access/create.ipynb index 93f1f802..d59745ab 100644 --- a/samples/secure-blob-access/create.ipynb +++ b/samples/secure-blob-access/create.ipynb @@ -18,7 +18,8 @@ "outputs": [], "source": [ "import utils\n", - "from apimtypes import *\n", + "\n", + "from apimtypes import API, APIM_SKU, GET_APIOperation2, HttpStatusCode, INFRASTRUCTURE, NamedValue, PolicyFragment, Role\n", "from console import print_error, print_info, print_ok, print_val, print_warning\n", "from azure_resources import get_infra_rg_name\n", "\n", @@ -28,77 +29,10 @@ "\n", "rg_location = 'eastus2'\n", "index = 1\n", - "apim_sku = 
APIM_SKU.BASICV2 # Options: 'BASICV2', 'STANDARDV2', 'PREMIUMV2'\n", + "apim_sku = APIM_SKU.STANDARDV2 # Options: 'BASICV2', 'STANDARDV2', 'PREMIUMV2'\n", "deployment = INFRASTRUCTURE.SIMPLE_APIM # Options: see supported_infras below\n", "api_prefix = 'blob-'\n", - "tags = ['secure-blob-access', 'valet-key', 'storage', 'jwt', 'authz']\n", - "\n", - "\n", - "\n", - "# ------------------------------\n", - "# SYSTEM CONFIGURATION\n", - "# ------------------------------\n", - "\n", - "# Create the notebook helper with JWT support\n", - "sample_folder = 'secure-blob-access'\n", - "rg_name = get_infra_rg_name(deployment, index)\n", - "supported_infras = [INFRASTRUCTURE.AFD_APIM_PE, INFRASTRUCTURE.APIM_ACA, INFRASTRUCTURE.APPGW_APIM, INFRASTRUCTURE.APPGW_APIM_PE, INFRASTRUCTURE.SIMPLE_APIM]\n", - "nb_helper = utils.NotebookHelper(sample_folder, rg_name, rg_location, deployment, supported_infras, True, index = index, apim_sku = apim_sku)\n", - "\n", - "# Blob storage configuration\n", - "container_name = 'hr-assets'\n", - "file_name = 'hr.txt'\n", - "\n", - "# Define the APIs and their operations and policies\n", - "\n", - "# Set up the named values\n", - "nvs: List[NamedValue] = [\n", - " NamedValue(nb_helper.jwt_key_name, nb_helper.jwt_key_value_bytes_b64, True),\n", - " NamedValue('HRMemberRoleId', Role.HR_MEMBER)\n", - "]\n", - "\n", - "# Load policy fragment definitions\n", - "pf_authx_hr_member_xml = utils.read_policy_xml('pf-authx-hr-member.xml', {\n", - " 'jwt_signing_key' : nb_helper.jwt_key_name,\n", - " 'hr_member_role_id' : 'HRMemberRoleId'\n", - "}, sample_folder)\n", - "\n", - "pf_create_sas_token_xml = utils.read_policy_xml('pf-create-sas-token.xml', sample_name = sample_folder)\n", - "pf_check_blob_existence_via_mi = utils.read_policy_xml('pf-check-blob-existence-via-managed-identity.xml', sample_name = sample_folder)\n", - "\n", - "# Define policy fragments\n", - "pfs: List[PolicyFragment] = [\n", - " PolicyFragment('AuthX-HR-Member', 
pf_authx_hr_member_xml, 'Authenticates and authorizes users with HR Member role.'),\n", - " PolicyFragment('Create-Sas-Token', pf_create_sas_token_xml, 'Creates a SAS token to use with access to a blob.'),\n", - " PolicyFragment('Check-Blob-Existence-via-Managed-Identity', pf_check_blob_existence_via_mi, 'Checks whether the specified blob exists at the blobUrl. A boolean value for blobExists will be available afterwards.')\n", - "]\n", - "\n", - "# Load API policy\n", - "pol_blob_get = utils.read_and_modify_policy_xml('blob-get-operation.xml', {\n", - " 'container_name': container_name\n", - "}, sample_folder)\n", - "\n", - "# Define template parameters for blob name\n", - "blob_template_parameters = [\n", - " {\n", - " \"name\": \"blob-name\",\n", - " \"description\": \"The name of the blob to access\",\n", - " \"type\": \"string\",\n", - " \"required\": True\n", - " }\n", - "]\n", - "\n", - "# Define API operations\n", - "\n", - "# API 1: Secure Blob Access API\n", - "secure_blob_path = f'/{api_prefix}secure-files'\n", - "secure_blob_get = GET_APIOperation2('GET', 'GET', '/{blob-name}', 'Gets the blob access valet key info', pol_blob_get, templateParameters=blob_template_parameters)\n", - "secure_blob = API('secure-blob-access', 'Secure Blob Access API', f'/{api_prefix}secure-files', 'API for secure access to blob storage using the valet key pattern', operations = [secure_blob_get], tags = tags)\n", - "\n", - "# APIs Array\n", - "apis: List[API] = [secure_blob]\n", - "\n", - "print_ok('Notebook initialized')" + "tags = ['secure-blob-access', 'valet-key', 'storage', 'jwt', 'authz']" ] }, { @@ -123,8 +57,7 @@ " 'apis' : {'value': [api.to_dict() for api in apis]},\n", " 'namedValues' : {'value': [nv.to_dict() for nv in nvs]},\n", " 'policyFragments' : {'value': [pf.to_dict() for pf in pfs]},\n", - " 'containerName' : {'value': container_name},\n", - " 'blobName' : {'value': file_name}\n", + " 'containerName' : {'value': container_name}\n", "}\n", "\n", "# Deploy the 
sample\n", @@ -145,6 +78,87 @@ " raise SystemExit(1)" ] }, + { + "cell_type": "markdown", + "id": "e090095c", + "metadata": {}, + "source": [ + "### 📦 Upload Sample Blob File\n", + "\n", + "Uploads a sample HR document to the blob container. This uses the current user's Azure CLI credentials (`--auth-mode login`) instead of a deployment script, avoiding key-based storage authentication." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "74f5025e", + "metadata": {}, + "outputs": [], + "source": [ + "import tempfile\n", + "import os\n", + "import time\n", + "from azure_resources import run, get_account_info\n", + "\n", + "if 'storage_account_name' not in locals():\n", + " raise SystemExit(1)\n", + "\n", + "# Get current user's object ID for role assignment\n", + "_, current_user_id, _, subscription_id = get_account_info()\n", + "\n", + "# Assign Storage Blob Data Contributor to the current user on the storage account\n", + "storage_account_resource_id = (\n", + " f'/subscriptions/{subscription_id}/resourceGroups/{rg_name}'\n", + " f'/providers/Microsoft.Storage/storageAccounts/{storage_account_name}'\n", + ")\n", + "blob_contributor_role = 'ba92f5b4-2d11-453d-a403-e96b0029c9fe'\n", + "\n", + "print_info(f'Assigning Storage Blob Data Contributor role to current user on {storage_account_name}...')\n", + "role_result = run(\n", + " f'az role assignment create'\n", + " f' --assignee-object-id {current_user_id}'\n", + " f' --assignee-principal-type User'\n", + " f' --role {blob_contributor_role}'\n", + " f' --scope {storage_account_resource_id}'\n", + ")\n", + "\n", + "if not role_result.success:\n", + " print_warning('Role assignment may already exist or failed.')\n", + "\n", + "# Allow time for role assignment propagation\n", + "print_info('Waiting for role assignment to propagate...')\n", + "time.sleep(30)\n", + "\n", + "# Create a temporary file with sample content\n", + "temp_dir = tempfile.mkdtemp()\n", + "temp_file = os.path.join(temp_dir, 
file_name)\n", + "\n", + "with open(temp_file, 'w', encoding='utf-8') as f:\n", + " f.write('This is an HR document.')\n", + "\n", + "# Upload the blob using the current user's CLI credentials\n", + "print_info(f'Uploading {file_name} to {container_name} in {storage_account_name}...')\n", + "result = run(\n", + " f'az storage blob upload'\n", + " f' --account-name {storage_account_name}'\n", + " f' --container-name {container_name}'\n", + " f' --name {file_name}'\n", + " f' --file \"{temp_file}\"'\n", + " f' --auth-mode login'\n", + " f' --overwrite'\n", + ")\n", + "\n", + "# Clean up temp file\n", + "os.remove(temp_file)\n", + "os.rmdir(temp_dir)\n", + "\n", + "if result.success:\n", + " print_ok(f'Successfully uploaded {file_name} to {container_name}')\n", + "else:\n", + " print_error('Failed to upload blob.')\n", + " raise SystemExit(1)" + ] + }, { "cell_type": "markdown", "id": "7e96a588", @@ -194,45 +208,14 @@ "source": [ "# Test and verify secure blob access using valet key pattern\n", "import json\n", + "import sys\n", "import requests\n", "from apimrequests import ApimRequests\n", "from apimtesting import ApimTesting\n", "from users import UserHelper\n", "from authfactory import AuthFactory\n", "\n", - "def handleResponse(response):\n", - " \"\"\"Handle blob access response and test direct blob access.\"\"\"\n", - " if isinstance(response, str):\n", - " try:\n", - " access_info = json.loads(response)\n", - " sas_url = access_info.get('sas_url', 'N/A')\n", - "\n", - " if sas_url == 'N/A':\n", - " return response\n", - "\n", - " print_info(f\"Secure Blob URL: {sas_url}\")\n", - " print_info(f\"Expires At: {access_info.get('expire_at', 'N/A')}\")\n", - "\n", - " # Test direct blob access using the valet key (SAS URL)\n", - " print_info(\"🧪 Testing direct blob access...\")\n", - "\n", - " try:\n", - " blob_response = requests.get(access_info['sas_url'])\n", - " if blob_response.status_code == 200:\n", - " print_info(\"✅ Direct blob access 
successful!\")\n", - " content_preview = blob_response.text[:200] + \"...\" if len(blob_response.text) > 200 else blob_response.text\n", - " print_val(\"Content preview:\", content_preview.strip(), True)\n", - " return content_preview.strip()\n", - " else:\n", - " print_error(f\"❌ Direct blob access failed: {blob_response.status_code}\")\n", - " return blob_response.status_code\n", - " except Exception as e:\n", - " print_error(f\"Error accessing blob directly: {str(e)}\")\n", - " except (json.JSONDecodeError, AttributeError):\n", - " print_error(\"Failed to parse JSON response or response is not in expected format.\")\n", - " return response\n", - "\n", - "tests = ApimTesting(\"Secure Blob Access Sample Tests\", sample_folder, deployment)\n", + "tests = ApimTesting('Secure Blob Access Sample Tests', sample_folder, deployment)\n", "\n", "# Determine endpoints, URLs, etc. prior to test execution\n", "endpoint_url, request_headers = utils.get_endpoint(deployment, rg_name, apim_gateway_url)\n", @@ -242,7 +225,7 @@ "api_subscription_key = apim_apis[0]['subscriptionPrimaryKey']\n", "\n", "# Test 1: Authorized user with HR Member role\n", - "print_info(\"1️⃣ Testing with Authorized User (HR Member role)\")\n", + "print_info('1️⃣ Testing with Authorized User (HR Member role)')\n", "\n", "# Create JWT token for HR Member role\n", "encoded_jwt_token_hr_member = AuthFactory.create_symmetric_jwt_token_for_user(\n", @@ -251,18 +234,50 @@ ")\n", "print_info(f'JWT token for HR Member:\\n{encoded_jwt_token_hr_member}')\n", "\n", - "# Test secure blob access with authorization\n", + "# Request SAS URL from APIM\n", "reqsApimAuthorized = ApimRequests(endpoint_url, api_subscription_key, request_headers)\n", "reqsApimAuthorized.headers['Authorization'] = f'Bearer {encoded_jwt_token_hr_member}'\n", "\n", - "print_info(f\"🔒 Getting secure access for {file_name} with authorized user...\")\n", - "response = 
reqsApimAuthorized.singleGet(f'/{api_prefix}secure-files/{file_name}',\n", - " msg=f'Requesting secure access for {file_name} (authorized)')\n", - "output = handleResponse(response)\n", - "tests.verify(output, 'This is an HR document.')\n", + "response = reqsApimAuthorized.singleGet(\n", + " f'/{api_prefix}secure-files/{file_name}',\n", + " msg=f'Requesting secure access for {file_name} (authorized)',\n", + " printResponse=False\n", + ")\n", + "\n", + "# Parse the SAS URL response and test direct blob access\n", + "sas_url = None\n", + "blob_content = None\n", + "\n", + "try:\n", + " access_info = json.loads(response)\n", + " sas_url = access_info.get('sas_url')\n", + " print_ok('Received valet key response from APIM')\n", + " print_val('Secure Blob URL', sas_url, True)\n", + " print_val('Expires At', access_info.get('expire_at', 'N/A'))\n", + "except (json.JSONDecodeError, TypeError):\n", + " print_error('Failed to parse APIM response as JSON')\n", + " print_val('Raw response', str(response), True)\n", + "\n", + "if sas_url:\n", + " print_info('Testing direct blob access using SAS URL...')\n", + " try:\n", + " blob_response = requests.get(sas_url, timeout=30)\n", + " if blob_response.status_code == HttpStatusCode.OK:\n", + " blob_content = blob_response.text.strip()\n", + " print_ok('Direct blob access successful!')\n", + " print_val('Content', blob_content)\n", + " else:\n", + " print_error(f'Direct blob access failed with status {blob_response.status_code}')\n", + " print_val('Response body', blob_response.text[:500], True)\n", + " except requests.exceptions.RequestException as e:\n", + " print_error(f'Direct blob access request failed: {e}')\n", + "\n", + "sys.stderr.flush()\n", + "tests.verify(blob_content, 'This is an HR document.')\n", "\n", "# Test 2: Unauthorized user without required role\n", - "print_info(\"2️⃣ Testing with Unauthorized User (no role)\")\n", + "sys.stderr.flush()\n", + "print_info('2️⃣ Testing with Unauthorized User (no 
role)')\n", "\n", "# Create JWT token for user with no role\n", "encoded_jwt_token_no_role = AuthFactory.create_symmetric_jwt_token_for_user(\n", @@ -275,12 +290,14 @@ "reqsApimUnauthorized = ApimRequests(endpoint_url, api_subscription_key, request_headers)\n", "reqsApimUnauthorized.headers['Authorization'] = f'Bearer {encoded_jwt_token_no_role}'\n", "\n", - "print_info(f\"🔒 Attempting to obtain secure access for {file_name} with unauthorized user (expect 401/403)...\")\n", - "response = reqsApimUnauthorized.singleGet(f'/{api_prefix}secure-files/{file_name}',\n", - " msg=f'Requesting secure access for {file_name} (unauthorized)')\n", - "output = handleResponse(response)\n", - "tests.verify(json.loads(output)['statusCode'], 401)\n", + "response = reqsApimUnauthorized.singleGet(\n", + " f'/{api_prefix}secure-files/{file_name}',\n", + " msg=f'Requesting secure access for {file_name} (unauthorized, expect 401)'\n", + ")\n", + "sys.stderr.flush()\n", + "tests.verify(json.loads(response)['statusCode'], HttpStatusCode.UNAUTHORIZED)\n", "\n", + "sys.stderr.flush()\n", "tests.print_summary()\n", "\n", "print_ok('All done!')" diff --git a/samples/secure-blob-access/main.bicep b/samples/secure-blob-access/main.bicep index 15b027af..fdb6d99c 100644 --- a/samples/secure-blob-access/main.bicep +++ b/samples/secure-blob-access/main.bicep @@ -20,8 +20,6 @@ param policyFragments array = [] @maxLength(63) param containerName string -param blobName string - // ------------------------------ // RESOURCES @@ -51,7 +49,7 @@ resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = { properties: { accessTier: 'Hot' allowBlobPublicAccess: false - allowSharedKeyAccess: true + allowSharedKeyAccess: false minimumTlsVersion: 'TLS1_2' supportsHttpsTrafficOnly: true } @@ -72,21 +70,6 @@ resource blobContainer 'Microsoft.Storage/storageAccounts/blobServices/container } } -// Upload sample files to blob storage using deployment script -module uploadSampleFilesModule 
'upload-sample-files.bicep' = { - name: 'upload-sample-files' - params: { - location: location - resourceSuffix: resourceSuffix - storageAccountName: storageAccount.name - containerName: containerName - blobName: blobName - } - dependsOn: [ - blobContainer - ] -} - // https://learn.microsoft.com/azure/templates/microsoft.authorization/roleassignments resource apimStorageBlobDataReaderRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = { name: guid(storageAccount.id, apimService.id, 'Storage Blob Data Reader') @@ -98,6 +81,17 @@ resource apimStorageBlobDataReaderRole 'Microsoft.Authorization/roleAssignments@ } } +// https://learn.microsoft.com/azure/templates/microsoft.authorization/roleassignments +resource apimStorageBlobDelegatorRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = { + name: guid(storageAccount.id, apimService.id, 'Storage Blob Delegator') + scope: storageAccount + properties: { + roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'db58b8e5-c6ad-4a2a-8342-4190687cbf4a') // Storage Blob Delegator + principalId: apimService.identity.principalId + principalType: 'ServicePrincipal' + } +} + // Add storage account name as APIM named value resource storageAccountNamedValue 'Microsoft.ApiManagement/service/namedValues@2024-06-01-preview' = { parent: apimService @@ -109,16 +103,6 @@ resource storageAccountNamedValue 'Microsoft.ApiManagement/service/namedValues@2 } } -// Add storage account key as APIM named value (for demo purposes - use Key Vault in production) -resource storageAccountKeyNamedValue 'Microsoft.ApiManagement/service/namedValues@2024-06-01-preview' = { - parent: apimService - name: 'storage-account-key' - properties: { - displayName: 'storage-account-key' - value: storageAccount.listKeys().keys[0].value - secret: true - } -} // APIM Named Values module namedValueModule '../../shared/bicep/modules/apim/v1/named-value.bicep' = [for nv in namedValues: { @@ -160,8 +144,8 @@ module apisModule 
'../../shared/bicep/modules/apim/v1/api.bicep' = [for api in a namedValueModule // ensure all named values are created before APIs policyFragmentModule // ensure all policy fragments are created before APIs storageAccountNamedValue - storageAccountKeyNamedValue - apimStorageBlobDataReaderRole // ensure role assignment is complete before APIs + apimStorageBlobDataReaderRole // ensure role assignments are complete before APIs + apimStorageBlobDelegatorRole ] }] @@ -177,7 +161,6 @@ output storageAccountName string = storageAccount.name output storageAccountId string = storageAccount.id output blobContainerName string = containerName output storageAccountEndpoint string = storageAccount.properties.primaryEndpoints.blob -output uploadedFiles array = uploadSampleFilesModule.outputs.uploadedFiles // API outputs output apiOutputs array = [for i in range(0, length(apis)): { diff --git a/samples/secure-blob-access/pf-create-sas-token.xml b/samples/secure-blob-access/pf-create-sas-token.xml index d506a2a6..1d6eb7fc 100644 --- a/samples/secure-blob-access/pf-create-sas-token.xml +++ b/samples/secure-blob-access/pf-create-sas-token.xml @@ -1,18 +1,27 @@ - - + @@ -21,74 +30,106 @@ - - + + + @($"https://{(string)context.Variables["storageAccount"]}.blob.core.windows.net/?restype=service&comp=userdelegationkey") + POST + + 2020-12-06 + + @{ + var start = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"); + var expiry = DateTime.UtcNow.AddMinutes(10).ToString("yyyy-MM-ddTHH:mm:ssZ"); + return $"{start}{expiry}"; + } + + - - + (); + var xml = System.Xml.Linq.XDocument.Parse(body); + var root = xml.Root; + + var signedOid = root.Element("SignedOid")?.Value ?? ""; + var signedTid = root.Element("SignedTid")?.Value ?? ""; + var signedKeyStart = root.Element("SignedStart")?.Value ?? ""; + var signedKeyExpiry = root.Element("SignedExpiry")?.Value ?? ""; + var signedKeyService = root.Element("SignedService")?.Value ?? ""; + var signedKeyVersion = root.Element("SignedVersion")?.Value ?? 
""; + var keyValue = root.Element("Value")?.Value ?? ""; + + var permissions = (string)context.Variables["permissions"]; + var expiry = (string)context.Variables["sasExpiry"]; + var canonicalizedResource = (string)context.Variables["canonicalizedResource"]; + var signedProtocol = (string)context.Variables["signedProtocol"]; + var signedVersion = (string)context.Variables["signedVersion"]; + var signedResource = (string)context.Variables["signedResource"]; + + // User Delegation SAS string-to-sign (v2020-12-06) + var stringToSign = + permissions + "\n" + + "" + "\n" + // signedStart (empty = now) + expiry + "\n" + + canonicalizedResource + "\n" + + signedOid + "\n" + + signedTid + "\n" + + signedKeyStart + "\n" + + signedKeyExpiry + "\n" + + signedKeyService + "\n" + + signedKeyVersion + "\n" + + "" + "\n" + // signedAuthorizedUserObjectId + "" + "\n" + // signedUnauthorizedUserObjectId + "" + "\n" + // signedCorrelationId + "" + "\n" + // signedIP + signedProtocol + "\n" + + signedVersion + "\n" + + signedResource + "\n" + + "" + "\n" + // signedSnapshotTime + "" + "\n" + // signedEncryptionScope + "" + "\n" + // rscc + "" + "\n" + // rscd + "" + "\n" + // rsce + "" + "\n" + // rscl + ""; // rsct + + // Sign with the user delegation key using HMAC-SHA256 + var key = Convert.FromBase64String(keyValue); + string signature; using (var hmac = new System.Security.Cryptography.HMACSHA256(key)) { var hash = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(stringToSign)); - return Convert.ToBase64String(hash); + signature = Convert.ToBase64String(hash); } + + // Build User Delegation SAS token + return $"sp={permissions}" + + $"&se={expiry}" + + $"&spr={signedProtocol}" + + $"&sv={signedVersion}" + + $"&sr={signedResource}" + + $"&skoid={signedOid}" + + $"&sktid={signedTid}" + + $"&skt={signedKeyStart}" + + $"&ske={signedKeyExpiry}" + + $"&sks={signedKeyService}" + + $"&skv={signedKeyVersion}" + + $"&sig={System.Net.WebUtility.UrlEncode(signature)}"; } catch (Exception 
ex) { - context.Trace("SAS signature generation failed: " + ex.Message); - return "SIGNATURE_GENERATION_FAILED"; + context.Trace("User delegation SAS generation failed: " + ex.Message); + return ""; } }" /> - - - - - - - - - - - - \ No newline at end of file + diff --git a/samples/secure-blob-access/upload-sample-files.bicep b/samples/secure-blob-access/upload-sample-files.bicep index 1010962b..4c0a6eb7 100644 --- a/samples/secure-blob-access/upload-sample-files.bicep +++ b/samples/secure-blob-access/upload-sample-files.bicep @@ -54,6 +54,18 @@ resource uploadIdentityBlobContributorRole 'Microsoft.Authorization/roleAssignme } } +// Grant the managed identity Storage File Data Privileged Contributor role (required for deployment script backing storage) +// https://learn.microsoft.com/azure/templates/microsoft.authorization/roleassignments +resource uploadIdentityFileContributorRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = { + name: guid(storageAccount.id, uploadManagedIdentity.id, 'Storage File Data Privileged Contributor') + scope: storageAccount + properties: { + roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', azureRoles.StorageFileDataPrivilegedContributor) + principalId: uploadManagedIdentity.properties.principalId + principalType: 'ServicePrincipal' + } +} + // https://learn.microsoft.com/azure/templates/microsoft.storage/storageaccounts/blobservices/containers resource blobContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2024-01-01' = { name: '${storageAccount.name}/default/${containerName}' @@ -104,9 +116,13 @@ resource deploymentScript 'Microsoft.Resources/deploymentScripts@2023-08-01' = { } ] retentionInterval: 'PT1H' + storageAccountSettings: { + storageAccountName: storageAccountName + } } dependsOn: [ uploadIdentityBlobContributorRole + uploadIdentityFileContributorRole blobContainer ] } diff --git a/shared/azure-roles.json b/shared/azure-roles.json index 9febc5ad..935501d2 100644 --- 
a/shared/azure-roles.json +++ b/shared/azure-roles.json @@ -6,5 +6,6 @@ "KeyVaultCertificateUser": "db79e9a7-68ee-4b58-9aeb-b90e7c24fcba", "KeyVaultSecretsUser": "4633458b-17de-408a-b874-0445c86b69e6", "StorageBlobDataContributor": "ba92f5b4-2d11-453d-a403-e96b0029c9fe", - "StorageBlobDataReader": "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1" + "StorageBlobDataReader": "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1", + "StorageFileDataPrivilegedContributor": "69566ab7-960f-475b-8e7c-b3118f30c6bd" } diff --git a/shared/bicep/modules/apim/v1/api.bicep b/shared/bicep/modules/apim/v1/api.bicep index 49faf73d..7b1dea69 100644 --- a/shared/bicep/modules/apim/v1/api.bicep +++ b/shared/bicep/modules/apim/v1/api.bicep @@ -88,6 +88,9 @@ resource apimTags 'Microsoft.ApiManagement/service/tags@2024-06-01-preview' = [f resource apimApiTags 'Microsoft.ApiManagement/service/apis/tags@2024-06-01-preview' = [for tag in tagList: { name: tag parent: apimApi + dependsOn: [ + apimTags + ] }] // https://learn.microsoft.com/azure/templates/microsoft.apimanagement/service/apis/policies diff --git a/shared/bicep/modules/apim/v1/diagnostics.bicep b/shared/bicep/modules/apim/v1/diagnostics.bicep new file mode 100644 index 00000000..1fa84346 --- /dev/null +++ b/shared/bicep/modules/apim/v1/diagnostics.bicep @@ -0,0 +1,199 @@ +/** + * @module apim-diagnostics-v1 + * @description This module configures observability for an existing Azure API Management (APIM) service. + * It sets up diagnostic settings, loggers, and diagnostic policies for both Log Analytics and Application Insights. + */ + + +// ------------------ +// PARAMETERS +// ------------------ + +@description('Location to be used for resources. 
Defaults to the resource group location') +param location string = resourceGroup().location + +@description('Name of the existing API Management service') +param apimServiceName string + +@description('Resource group name where the APIM service is deployed') +param apimResourceGroupName string = resourceGroup().name + +@description('Enable Log Analytics diagnostic settings for APIM') +param enableLogAnalytics bool = true + +@description('Log Analytics Workspace ID for diagnostic settings') +param logAnalyticsWorkspaceId string = '' + +@description('Enable Application Insights logger and diagnostic policy for APIM') +param enableApplicationInsights bool = true + +@description('Application Insights instrumentation key') +param appInsightsInstrumentationKey string = '' + +@description('Application Insights resource ID') +param appInsightsResourceId string = '' + +@description('Name suffix for the diagnostic settings resource') +param diagnosticSettingsNameSuffix string = 'diagnostics' + +@description('Name of the APIM logger resource') +param apimLoggerName string = 'applicationinsights-logger' + +@description('Description of the APIM logger') +param apimLoggerDescription string = 'Application Insights logger for APIM diagnostics' + + +// ------------------ +// VARIABLES +// ------------------ + +var diagnosticSettingsName = 'apim-${diagnosticSettingsNameSuffix}' + + +// ------------------ +// RESOURCES +// ------------------ + +// Reference the existing APIM service +// https://learn.microsoft.com/azure/templates/microsoft.apimanagement/service +resource apimService 'Microsoft.ApiManagement/service@2024-06-01-preview' existing = { + name: apimServiceName +} + +// Configure diagnostic settings to send logs to Log Analytics Workspace +// https://learn.microsoft.com/azure/templates/microsoft.insights/diagnosticsettings +resource apimDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (enableLogAnalytics && !empty(logAnalyticsWorkspaceId)) 
{ + name: diagnosticSettingsName + scope: apimService + properties: { + workspaceId: logAnalyticsWorkspaceId + logAnalyticsDestinationType: 'Dedicated' + logs: [ + { + category: 'GatewayLogs' + enabled: true + retentionPolicy: { + enabled: false + days: 0 + } + } + { + category: 'WebSocketConnectionLogs' + enabled: true + retentionPolicy: { + enabled: false + days: 0 + } + } + ] + metrics: [ + { + category: 'AllMetrics' + enabled: true + retentionPolicy: { + enabled: false + days: 0 + } + } + ] + } +} + +// Configure APIM logger for Application Insights +// https://learn.microsoft.com/azure/templates/microsoft.apimanagement/service/loggers +#disable-next-line BCP318 +resource apimLogger 'Microsoft.ApiManagement/service/loggers@2024-06-01-preview' = if (enableApplicationInsights && !empty(appInsightsInstrumentationKey) && !empty(appInsightsResourceId)) { + parent: apimService + name: apimLoggerName + properties: { + loggerType: 'applicationInsights' + description: apimLoggerDescription + credentials: { + instrumentationKey: appInsightsInstrumentationKey + } + isBuffered: true + #disable-next-line BCP318 + resourceId: appInsightsResourceId + } +} + +// Configure diagnostic policy for Application Insights +// https://learn.microsoft.com/azure/templates/microsoft.apimanagement/service/diagnostics +#disable-next-line BCP318 +resource apimDiagnostic 'Microsoft.ApiManagement/service/diagnostics@2024-06-01-preview' = if (enableApplicationInsights && !empty(appInsightsInstrumentationKey) && !empty(appInsightsResourceId)) { + parent: apimService + name: 'applicationinsights' + properties: { + alwaysLog: 'allErrors' + loggerId: enableApplicationInsights && !empty(appInsightsInstrumentationKey) && !empty(appInsightsResourceId) ? 
apimLogger.id : '' + sampling: { + samplingType: 'fixed' + percentage: 100 + } + frontend: { + request: { + headers: [] + body: { + bytes: 0 + } + } + response: { + headers: [] + body: { + bytes: 0 + } + } + } + backend: { + request: { + headers: [] + body: { + bytes: 0 + } + } + response: { + headers: [] + body: { + bytes: 0 + } + } + } + logClientIp: true + httpCorrelationProtocol: 'W3C' + verbosity: 'information' + } +} + +// Configure Azure Monitor diagnostic settings for APIM +// This ensures gateway logs include subscription IDs and other details +// https://learn.microsoft.com/azure/templates/microsoft.apimanagement/service/diagnostics +resource apimAzureMonitorDiagnostic 'Microsoft.ApiManagement/service/diagnostics@2024-06-01-preview' = if (enableLogAnalytics) { + parent: apimService + name: 'azuremonitor' + properties: { + loggerId: '/subscriptions/${subscription().subscriptionId}/resourceGroups/${apimResourceGroupName}/providers/Microsoft.ApiManagement/service/${apimServiceName}/loggers/azuremonitor' + sampling: { + samplingType: 'fixed' + percentage: 100 + } + logClientIp: true + verbosity: 'information' + } +} + + +// ------------------ +// OUTPUTS +// ------------------ + +@description('APIM diagnostic settings resource ID') +output diagnosticSettingsId string = enableLogAnalytics && !empty(logAnalyticsWorkspaceId) ? apimDiagnosticSettings.id : '' + +@description('APIM logger resource ID') +output apimLoggerId string = enableApplicationInsights && !empty(appInsightsInstrumentationKey) && !empty(appInsightsResourceId) ? apimLogger.id : '' + +@description('APIM Application Insights diagnostic resource ID') +output apimDiagnosticId string = enableApplicationInsights && !empty(appInsightsInstrumentationKey) && !empty(appInsightsResourceId) ? apimDiagnostic.id : '' + +@description('APIM Azure Monitor diagnostic resource ID') +output apimAzureMonitorDiagnosticId string = enableLogAnalytics ? 
apimAzureMonitorDiagnostic.id : '' diff --git a/shared/python/apimrequests.py b/shared/python/apimrequests.py index 61e513dd..f91ba1e0 100644 --- a/shared/python/apimrequests.py +++ b/shared/python/apimrequests.py @@ -5,11 +5,12 @@ import json import time from typing import Any + import requests import urllib3 # APIM Samples imports -from apimtypes import HTTP_VERB, SUBSCRIPTION_KEY_PARAMETER_NAME, SLEEP_TIME_BETWEEN_REQUESTS_MS +from apimtypes import HTTP_VERB, SLEEP_TIME_BETWEEN_REQUESTS_MS, SUBSCRIPTION_KEY_PARAMETER_NAME, HttpStatusCode from console import BOLD_G, BOLD_R, RESET, print_error, print_info, print_message, print_ok, print_val # Disable SSL warnings for self-signed certificates @@ -253,7 +254,7 @@ def _print_response(self, response) -> None: self._print_response_code(response) print_val('Response headers', response.headers, True) - if response.status_code == 200: + if response.status_code == HttpStatusCode.OK: try: data = json.loads(response.text) print_val('Response body', json.dumps(data, indent = 4), True) @@ -267,9 +268,9 @@ def _print_response_code(self, response) -> None: Print the response status code with color formatting. 
""" - if 200 <= response.status_code < 300: + if HttpStatusCode.OK <= response.status_code < HttpStatusCode.MULTIPLE_CHOICES: status_code_str = f'{BOLD_G}{response.status_code} - {response.reason}{RESET}' - elif response.status_code >= 400: + elif response.status_code >= HttpStatusCode.BAD_REQUEST: status_code_str = f'{BOLD_R}{response.status_code} - {response.reason}{RESET}' else: status_code_str = str(response.status_code) @@ -299,11 +300,11 @@ def _poll_async_operation(self, location_url: str, headers: dict = None, timeout print_info(f'Polling operation - Status: {response.status_code}') - if response.status_code == 200: + if response.status_code == HttpStatusCode.OK: print_ok('Async operation completed successfully!') return response - if response.status_code == 202: + if response.status_code == HttpStatusCode.ACCEPTED: print_info(f'Operation still in progress, waiting {poll_interval} seconds...') time.sleep(poll_interval) else: @@ -423,7 +424,7 @@ def singlePostAsync( print_info(f'Initial response status: {response.status_code}') - if response.status_code == 202: # Accepted - async operation started + if response.status_code == HttpStatusCode.ACCEPTED: # Accepted - async operation started location_header = response.headers.get('Location') if location_header: @@ -432,7 +433,7 @@ def singlePostAsync( # Poll the location URL until completion final_response = self._poll_async_operation(location_header, timeout = timeout, poll_interval = poll_interval ) - if final_response and final_response.status_code == 200: + if final_response and final_response.status_code == HttpStatusCode.OK: if printResponse: self._print_response(final_response) diff --git a/shared/python/apimtypes.py b/shared/python/apimtypes.py index 1efe7ffd..0a7d6cda 100644 --- a/shared/python/apimtypes.py +++ b/shared/python/apimtypes.py @@ -2,28 +2,28 @@ Types and constants for Azure API Management automation and deployment. 
""" -import os -import json import ast -from enum import StrEnum +import json +import os from dataclasses import dataclass +from enum import IntEnum, StrEnum from pathlib import Path -from typing import List, Optional, Any +from typing import Any, List, Optional # APIM Samples imports from console import print_error, print_val -from json_utils import is_string_json, extract_json +from json_utils import extract_json, is_string_json def get_project_root() -> Path: """Get the project root directory path.""" # Try to get from environment variable first (set by .env file) - if 'PROJECT_ROOT' in os.environ: - return Path(os.environ['PROJECT_ROOT']) + if "PROJECT_ROOT" in os.environ: + return Path(os.environ["PROJECT_ROOT"]) # Fallback: detect project root by walking up from this file current_path = Path(__file__).resolve().parent.parent.parent # Go up from shared/python/ - indicators = ['README.md', 'pyproject.toml', 'bicepconfig.json'] + indicators = ["README.md", "pyproject.toml", "bicepconfig.json"] while current_path != current_path.parent: if all((current_path / indicator).exists() for indicator in indicators): @@ -33,25 +33,27 @@ def get_project_root() -> Path: # Ultimate fallback return Path(__file__).resolve().parent.parent.parent + # Get project root and construct absolute paths to policy files _PROJECT_ROOT = get_project_root() -_SHARED_XML_POLICY_BASE_PATH = _PROJECT_ROOT / 'shared' / 'apim-policies' +_SHARED_XML_POLICY_BASE_PATH = _PROJECT_ROOT / "shared" / "apim-policies" # Policy file paths (now absolute and platform-independent) -DEFAULT_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / 'default.xml') -HELLO_WORLD_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / 'hello-world.xml') -REQUEST_HEADERS_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / 'request-headers.xml') -BACKEND_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / 'backend.xml') -API_ID_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / 'api-id.xml') +DEFAULT_XML_POLICY_PATH = 
str(_SHARED_XML_POLICY_BASE_PATH / "default.xml") +HELLO_WORLD_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / "hello-world.xml") +REQUEST_HEADERS_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / "request-headers.xml") +BACKEND_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / "backend.xml") +API_ID_XML_POLICY_PATH = str(_SHARED_XML_POLICY_BASE_PATH / "api-id.xml") -SUBSCRIPTION_KEY_PARAMETER_NAME = 'api-key' -SLEEP_TIME_BETWEEN_REQUESTS_MS = 50 +SUBSCRIPTION_KEY_PARAMETER_NAME = "api-key" +SLEEP_TIME_BETWEEN_REQUESTS_MS = 50 # ------------------------------ # PRIVATE METHODS # ------------------------------ + # Placing this here privately as putting it into the utils module would constitute a circular import def _read_policy_xml(policy_xml_filepath: str) -> str: """ @@ -65,7 +67,7 @@ def _read_policy_xml(policy_xml_filepath: str) -> str: """ # Read the specified policy XML file with explicit UTF-8 encoding - with open(policy_xml_filepath, 'r', encoding = 'utf-8') as policy_xml_file: + with open(policy_xml_filepath, "r", encoding="utf-8") as policy_xml_file: policy_template_xml = policy_xml_file.read() return policy_template_xml @@ -75,27 +77,106 @@ def _read_policy_xml(policy_xml_filepath: str) -> str: # CLASSES # ------------------------------ + # Mock role IDs for testing purposes class Role: """ Predefined roles and their GUIDs (mocked for testing purposes). 
""" - NONE = '00000000-0000-0000-0000-000000000000' # No role assigned - HR_MEMBER = '316790bc-fbd3-4a14-8867-d1388ffbc195' - HR_ASSOCIATE = 'd3c1b0f2-4a5e-4c8b-9f6d-7c8e1f2a3b4c' - HR_ADMINISTRATOR = 'a1b2c3d4-e5f6-7g8h-9i0j-k1l2m3n4o5p6' - MARKETING_MEMBER = 'b2c3d4e5-f6g7-8h9i-0j1k-2l3m4n5o6p7q' + NONE = "00000000-0000-0000-0000-000000000000" # No role assigned + HR_MEMBER = "316790bc-fbd3-4a14-8867-d1388ffbc195" + HR_ASSOCIATE = "d3c1b0f2-4a5e-4c8b-9f6d-7c8e1f2a3b4c" + HR_ADMINISTRATOR = "a1b2c3d4-e5f6-7g8h-9i0j-k1l2m3n4o5p6" + MARKETING_MEMBER = "b2c3d4e5-f6g7-8h9i-0j1k-2l3m4n5o6p7q" + + +class HttpStatusCode(IntEnum): + """ + HTTP status codes for API responses. + """ + + # 1xx Informational + CONTINUE = 100 + SWITCHING_PROTOCOLS = 101 + PROCESSING = 102 + EARLY_HINTS = 103 + + # 2xx Success + OK = 200 + CREATED = 201 + ACCEPTED = 202 + NON_AUTHORITATIVE_INFORMATION = 203 + NO_CONTENT = 204 + RESET_CONTENT = 205 + PARTIAL_CONTENT = 206 + MULTI_STATUS = 207 + ALREADY_REPORTED = 208 + IM_USED = 226 + + # 3xx Redirection + MULTIPLE_CHOICES = 300 + MOVED_PERMANENTLY = 301 + FOUND = 302 + SEE_OTHER = 303 + NOT_MODIFIED = 304 + TEMPORARY_REDIRECT = 307 + PERMANENT_REDIRECT = 308 + + # 4xx Client Errors + BAD_REQUEST = 400 + UNAUTHORIZED = 401 + PAYMENT_REQUIRED = 402 + FORBIDDEN = 403 + NOT_FOUND = 404 + METHOD_NOT_ALLOWED = 405 + NOT_ACCEPTABLE = 406 + PROXY_AUTHENTICATION_REQUIRED = 407 + REQUEST_TIMEOUT = 408 + CONFLICT = 409 + GONE = 410 + LENGTH_REQUIRED = 411 + PRECONDITION_FAILED = 412 + CONTENT_TOO_LARGE = 413 + URI_TOO_LONG = 414 + UNSUPPORTED_MEDIA_TYPE = 415 + RANGE_NOT_SATISFIABLE = 416 + EXPECTATION_FAILED = 417 + IM_A_TEAPOT = 418 + MISDIRECTED_REQUEST = 421 + UNPROCESSABLE_CONTENT = 422 + LOCKED = 423 + FAILED_DEPENDENCY = 424 + TOO_EARLY = 425 + UPGRADE_REQUIRED = 426 + PRECONDITION_REQUIRED = 428 + TOO_MANY_REQUESTS = 429 + REQUEST_HEADER_FIELDS_TOO_LARGE = 431 + UNAVAILABLE_FOR_LEGAL_REASONS = 451 + + # 5xx Server Errors + INTERNAL_SERVER_ERROR = 
500 + NOT_IMPLEMENTED = 501 + BAD_GATEWAY = 502 + SERVICE_UNAVAILABLE = 503 + GATEWAY_TIMEOUT = 504 + HTTP_VERSION_NOT_SUPPORTED = 505 + VARIANT_ALSO_NEGOTIATES = 506 + INSUFFICIENT_STORAGE = 507 + LOOP_DETECTED = 508 + NOT_EXTENDED = 510 + NETWORK_AUTHENTICATION_REQUIRED = 511 + class APIMNetworkMode(StrEnum): """ Networking configuration modes for Azure API Management (APIM). """ - PUBLIC = 'Public' # APIM is accessible from the public internet - EXTERNAL_VNET = 'External' # APIM is deployed in a VNet with external (public) access - INTERNAL_VNET = 'Internal' # APIM is deployed in a VNet with only internal (private) access - NONE = 'None' # No explicit network configuration (legacy or default) + PUBLIC = "Public" # APIM is accessible from the public internet + EXTERNAL_VNET = "External" # APIM is deployed in a VNet with external (public) access + INTERNAL_VNET = "Internal" # APIM is deployed in a VNet with only internal (private) access + NONE = "None" # No explicit network configuration (legacy or default) class APIM_SKU(StrEnum): @@ -103,13 +184,13 @@ class APIM_SKU(StrEnum): APIM SKU types. """ - DEVELOPER = 'Developer' - BASIC = 'Basic' - STANDARD = 'Standard' - PREMIUM = 'Premium' - BASICV2 = 'Basicv2' - STANDARDV2 = 'Standardv2' - PREMIUMV2 = 'Premiumv2' + DEVELOPER = "Developer" + BASIC = "Basic" + STANDARD = "Standard" + PREMIUM = "Premium" + BASICV2 = "Basicv2" + STANDARDV2 = "Standardv2" + PREMIUMV2 = "Premiumv2" def is_v1(self): """Check if the SKU is a v1 tier.""" @@ -119,18 +200,19 @@ def is_v2(self): """Check if the SKU is a v2 tier.""" return self in (APIM_SKU.BASICV2, APIM_SKU.STANDARDV2, APIM_SKU.PREMIUMV2) + class HTTP_VERB(StrEnum): """ HTTP verbs that can be used for API operations. 
""" - GET = 'GET' - POST = 'POST' - PUT = 'PUT' - DELETE = 'DELETE' - PATCH = 'PATCH' - OPTIONS = 'OPTIONS' - HEAD = 'HEAD' + GET = "GET" + POST = "POST" + PUT = "PUT" + DELETE = "DELETE" + PATCH = "PATCH" + OPTIONS = "OPTIONS" + HEAD = "HEAD" class INFRASTRUCTURE(StrEnum): @@ -138,11 +220,11 @@ class INFRASTRUCTURE(StrEnum): Infrastructure types for APIM automation scenarios. """ - SIMPLE_APIM = 'simple-apim' # Simple API Management with no dependencies - APIM_ACA = 'apim-aca' # Azure API Management connected to Azure Container Apps - AFD_APIM_PE = 'afd-apim-pe' # Azure Front Door Premium connected to Azure API Management (Standard V2) via Private Link - APPGW_APIM_PE = 'appgw-apim-pe' # Application Gateway connected to Azure API Management (Standard V2) via Private Link - APPGW_APIM = 'appgw-apim' # Application Gateway connected to Azure API Management (Developer SKU) via VNet (Internal) + SIMPLE_APIM = "simple-apim" # Simple API Management with no dependencies + APIM_ACA = "apim-aca" # Azure API Management connected to Azure Container Apps + AFD_APIM_PE = "afd-apim-pe" # Azure Front Door Premium connected to Azure API Management (Standard V2) via Private Link + APPGW_APIM_PE = "appgw-apim-pe" # Application Gateway connected to Azure API Management (Standard V2) via Private Link + APPGW_APIM = "appgw-apim" # Application Gateway connected to Azure API Management (Developer SKU) via VNet (Internal) class Endpoints: @@ -172,6 +254,8 @@ class Output: Represents the output of a command or deployment, including success status, raw text, and parsed JSON data. 
""" + _SECURE_MASK_MIN_LENGTH = 4 + # ------------------------------ # CONSTRUCTOR # ------------------------------ @@ -198,7 +282,7 @@ def __init__(self, success: bool, text: str): self.is_json = self.json_data is not None - def get(self, key: str, label: str = '', secure: bool = False, suppress_logging: bool = False) -> str | None: + def get(self, key: str, label: str = "", secure: bool = False, suppress_logging: bool = False) -> str | None: """ Retrieve a deployment output property by key, with optional label and secure masking. @@ -215,30 +299,30 @@ def get(self, key: str, label: str = '', secure: bool = False, suppress_logging: deployment_output: Any if not isinstance(self.json_data, dict): - raise KeyError('json_data is not a dict') + raise KeyError("json_data is not a dict") - if 'properties' in self.json_data: - properties = self.json_data.get('properties') + if "properties" in self.json_data: + properties = self.json_data.get("properties") if not isinstance(properties, dict): raise KeyError("'properties' is not a dict in deployment result") - outputs = properties.get('outputs') + outputs = properties.get("outputs") if not isinstance(outputs, dict): raise KeyError("'outputs' is missing or not a dict in deployment result") output_entry = outputs.get(key) - if not isinstance(output_entry, dict) or 'value' not in output_entry: + if not isinstance(output_entry, dict) or "value" not in output_entry: raise KeyError(f"Output key '{key}' not found in deployment outputs") - deployment_output = output_entry['value'] + deployment_output = output_entry["value"] elif key in self.json_data: - deployment_output = self.json_data[key]['value'] + deployment_output = self.json_data[key]["value"] else: raise KeyError(f"Output key '{key}' not found in deployment outputs") if not suppress_logging and label: - if secure and isinstance(deployment_output, str) and len(deployment_output) >= 4: - print_val(label, f'****{deployment_output[-4:]}') + if secure and 
isinstance(deployment_output, str) and len(deployment_output) >= self._SECURE_MASK_MIN_LENGTH: + print_val(label, f"****{deployment_output[-4:]}") else: print_val(label, deployment_output) @@ -253,7 +337,7 @@ def get(self, key: str, label: str = '', secure: bool = False, suppress_logging: return None - def getJson(self, key: str, label: str = '', secure: bool = False, suppress_logging: bool = False) -> Any: + def getJson(self, key: str, label: str = "", secure: bool = False, suppress_logging: bool = False) -> Any: """ Retrieve a deployment output property by key and return it as a JSON object. This method is independent from get() and retrieves the raw deployment output value. @@ -271,30 +355,30 @@ def getJson(self, key: str, label: str = '', secure: bool = False, suppress_logg deployment_output: Any if not isinstance(self.json_data, dict): - raise KeyError('json_data is not a dict') + raise KeyError("json_data is not a dict") - if 'properties' in self.json_data: - properties = self.json_data.get('properties') + if "properties" in self.json_data: + properties = self.json_data.get("properties") if not isinstance(properties, dict): raise KeyError("'properties' is not a dict in deployment result") - outputs = properties.get('outputs') + outputs = properties.get("outputs") if not isinstance(outputs, dict): raise KeyError("'outputs' is missing or not a dict in deployment result") output_entry = outputs.get(key) - if not isinstance(output_entry, dict) or 'value' not in output_entry: + if not isinstance(output_entry, dict) or "value" not in output_entry: raise KeyError(f"Output key '{key}' not found in deployment outputs") - deployment_output = output_entry['value'] + deployment_output = output_entry["value"] elif key in self.json_data: - deployment_output = self.json_data[key]['value'] + deployment_output = self.json_data[key]["value"] else: raise KeyError(f"Output key '{key}' not found in deployment outputs") # pragma: no cover if not suppress_logging and label: - if 
secure and isinstance(deployment_output, str) and len(deployment_output) >= 4: - print_val(label, f'****{deployment_output[-4:]}') + if secure and isinstance(deployment_output, str) and len(deployment_output) >= self._SECURE_MASK_MIN_LENGTH: + print_val(label, f"****{deployment_output[-4:]}") else: print_val(label, deployment_output) @@ -310,7 +394,7 @@ def getJson(self, key: str, label: str = '', secure: bool = False, suppress_logg try: return ast.literal_eval(deployment_output) except (ValueError, SyntaxError) as e: - print_error(f'Failed to parse deployment output as Python literal. Error: {e}') + print_error(f"Failed to parse deployment output as Python literal. Error: {e}") # Return the original result if it's not a string or can't be parsed return deployment_output @@ -324,6 +408,7 @@ def getJson(self, key: str, label: str = '', secure: bool = False, suppress_logg return None + @dataclass class API: """ @@ -335,7 +420,7 @@ class API: path: str description: str policyXml: Optional[str] = None - operations: Optional[List['APIOperation']] = None + operations: Optional[List["APIOperation"]] = None tags: Optional[List[str]] = None productNames: Optional[List[str]] = None subscriptionRequired: bool = True @@ -346,10 +431,17 @@ class API: # ------------------------------ def __init__( - self, name: str, displayName: str, path: str, description: str, - policyXml: Optional[str] = None, operations: Optional[List['APIOperation']] = None, - tags: Optional[List[str]] = None, productNames: Optional[List[str]] = None, - subscriptionRequired: bool = True, serviceUrl: Optional[str] = None, + self, + name: str, + displayName: str, + path: str, + description: str, + policyXml: Optional[str] = None, + operations: Optional[List["APIOperation"]] = None, + tags: Optional[List[str]] = None, + productNames: Optional[List[str]] = None, + subscriptionRequired: bool = True, + serviceUrl: Optional[str] = None, ): self.name = name self.displayName = displayName @@ -369,16 +461,16 @@ def 
__init__( def to_dict(self) -> dict: """Convert the API instance to a dictionary.""" return { - 'name': self.name, - 'displayName': self.displayName, - 'path': self.path, - 'description': self.description, - 'operations': [op.to_dict() for op in self.operations] if self.operations else [], - 'serviceUrl': self.serviceUrl, - 'subscriptionRequired': self.subscriptionRequired, - 'policyXml': self.policyXml, - 'tags': self.tags, - 'productNames': self.productNames + "name": self.name, + "displayName": self.displayName, + "path": self.path, + "description": self.description, + "operations": [op.to_dict() for op in self.operations] if self.operations else [], + "serviceUrl": self.serviceUrl, + "subscriptionRequired": self.subscriptionRequired, + "policyXml": self.policyXml, + "tags": self.tags, + "productNames": self.productNames, } @@ -400,8 +492,13 @@ class APIOperation: # ------------------------------ def __init__( - self, name: str, displayName: str, urlTemplate: str, method: HTTP_VERB, - description: str, policyXml: Optional[str] = None, + self, + name: str, + displayName: str, + urlTemplate: str, + method: HTTP_VERB, + description: str, + policyXml: Optional[str] = None, templateParameters: Optional[List[dict[str, Any]]] = None, ) -> None: # Validate that method is a valid HTTP_VERB @@ -409,7 +506,7 @@ def __init__( try: method = HTTP_VERB(method).value except Exception as exc: - raise ValueError(f'Invalid HTTP_VERB: {method}') from exc + raise ValueError(f"Invalid HTTP_VERB: {method}") from exc self.name = name self.displayName = displayName @@ -426,13 +523,13 @@ def __init__( def to_dict(self) -> dict: """Convert the API operation to a dictionary.""" return { - 'name': self.name, - 'displayName': self.displayName, - 'urlTemplate': self.urlTemplate, - 'description': self.description, - 'method': self.method, - 'policyXml': self.policyXml, - 'templateParameters': self.templateParameters + "name": self.name, + "displayName": self.displayName, + "urlTemplate": 
self.urlTemplate, + "description": self.description, + "method": self.method, + "policyXml": self.policyXml, + "templateParameters": self.templateParameters, } @@ -447,7 +544,7 @@ class GET_APIOperation(APIOperation): # ------------------------------ def __init__(self, description: str, policyXml: Optional[str] = None, templateParameters: Optional[List[dict[str, Any]]] = None): - super().__init__('GET', 'GET', '/', HTTP_VERB.GET, description, policyXml, templateParameters) + super().__init__("GET", "GET", "/", HTTP_VERB.GET, description, policyXml, templateParameters) @dataclass @@ -461,7 +558,11 @@ class GET_APIOperation2(APIOperation): # ------------------------------ def __init__( - self, name: str, displayName: str, urlTemplate: str, description: str, + self, + name: str, + displayName: str, + urlTemplate: str, + description: str, policyXml: Optional[str] = None, templateParameters: Optional[List[dict[str, Any]]] = None, ) -> None: @@ -479,7 +580,7 @@ class POST_APIOperation(APIOperation): # ------------------------------ def __init__(self, description: str, policyXml: Optional[str] = None, templateParameters: Optional[List[dict[str, Any]]] = None) -> None: - super().__init__('POST', 'POST', '/', HTTP_VERB.POST, description, policyXml, templateParameters) + super().__init__("POST", "POST", "/", HTTP_VERB.POST, description, policyXml, templateParameters) @dataclass @@ -501,18 +602,13 @@ def __init__(self, name: str, value: str, isSecret: bool = False) -> None: self.value = value self.isSecret = isSecret - # ------------------------------ # PUBLIC METHODS # ------------------------------ def to_dict(self) -> dict: """Convert the named value to a dictionary.""" - nv_dict = { - 'name': self.name, - 'value': self.value, - 'isSecret': self.isSecret - } + nv_dict = {"name": self.name, "value": self.value, "isSecret": self.isSecret} return nv_dict @@ -531,23 +627,18 @@ class PolicyFragment: # CONSTRUCTOR # ------------------------------ - def __init__(self, name: str, 
policyXml: str, description: str = '') -> None: + def __init__(self, name: str, policyXml: str, description: str = "") -> None: self.name = name self.policyXml = policyXml self.description = description - # ------------------------------ # PUBLIC METHODS # ------------------------------ def to_dict(self) -> dict: """Convert the policy fragment to a dictionary.""" - pf_dict = { - 'name': self.name, - 'policyXml': self.policyXml, - 'description': self.description - } + pf_dict = {"name": self.name, "policyXml": self.policyXml, "description": self.description} return pf_dict @@ -563,7 +654,7 @@ class Product: name: str displayName: str description: str - state: str = 'published' # 'published' or 'notPublished' + state: str = "published" # 'published' or 'notPublished' subscriptionRequired: bool = True approvalRequired: bool = False policyXml: Optional[str] = None @@ -573,9 +664,14 @@ class Product: # ------------------------------ def __init__( - self, name: str, displayName: str, description: str, - state: str = 'published', subscriptionRequired: bool = True, - approvalRequired: bool = False, policyXml: Optional[str] = None, + self, + name: str, + displayName: str, + description: str, + state: str = "published", + subscriptionRequired: bool = True, + approvalRequired: bool = False, + policyXml: Optional[str] = None, ) -> None: self.name = name self.displayName = displayName @@ -613,11 +709,11 @@ def __init__( def to_dict(self) -> dict: """Convert the product to a dictionary.""" return { - 'name': self.name, - 'displayName': self.displayName, - 'description': self.description, - 'state': self.state, - 'subscriptionRequired': self.subscriptionRequired, - 'approvalRequired': self.approvalRequired, - 'policyXml': self.policyXml + "name": self.name, + "displayName": self.displayName, + "description": self.description, + "state": self.state, + "subscriptionRequired": self.subscriptionRequired, + "approvalRequired": self.approvalRequired, + "policyXml": self.policyXml, } 
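The hunks above replace bare status-code literals with members of the new `HttpStatusCode` `IntEnum`. A minimal sketch (only a few members reproduced here, not the full enum) of why that substitution is behavior-preserving: `IntEnum` members compare equal to, and order against, plain integers, so existing call sites such as `response.status_code == 200` keep working when either side becomes an enum member:

```python
from enum import IntEnum

class HttpStatusCode(IntEnum):
    """Trimmed copy of the enum added in apimtypes.py."""
    OK = 200
    ACCEPTED = 202
    MULTIPLE_CHOICES = 300
    BAD_REQUEST = 400

# Equality and ordering against plain ints work because IntEnum subclasses int.
assert HttpStatusCode.OK == 200
assert HttpStatusCode.OK <= 204 < HttpStatusCode.MULTIPLE_CHOICES

def classify(status: int) -> str:
    """Mirror the range checks used by _print_response_code."""
    if HttpStatusCode.OK <= status < HttpStatusCode.MULTIPLE_CHOICES:
        return 'success'
    if status >= HttpStatusCode.BAD_REQUEST:
        return 'error'
    return 'other'

print(classify(202))  # success
print(classify(429))  # error
```

Because the enum subclasses `int`, downstream uses such as the status-code comparisons against pandas columns in `charts.py` also keep their behavior unchanged.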
diff --git a/shared/python/charts.py b/shared/python/charts.py index e9e4a73a..9b8cacc1 100644 --- a/shared/python/charts.py +++ b/shared/python/charts.py @@ -5,11 +5,12 @@ """ import json -import pandas as pd + +import matplotlib as mpl import matplotlib.pyplot as plt +import pandas as pd +from apimtypes import HttpStatusCode from matplotlib.patches import Rectangle as pltRectangle -import matplotlib as mpl - # ------------------------------ # CLASSES @@ -72,7 +73,7 @@ def _plot_barchart(self, api_results: list[dict]) -> None: response_time = entry['response_time'] status_code = entry['status_code'] - if status_code == 200 and entry['response']: + if status_code == HttpStatusCode.OK and entry['response']: try: resp = json.loads(entry['response']) backend_index = resp.get('index', 99) @@ -91,14 +92,14 @@ def _plot_barchart(self, api_results: list[dict]) -> None: mpl.rcParams['figure.figsize'] = [15, 7] - # Define a color map for each backend index (200) and errors (non-200 always lightcoral) - backend_indexes_200 = sorted(df[df['Status Code'] == 200]['Backend Index'].unique()) + # Define a color map for each backend index (OK) and errors (non-OK always lightcoral) + backend_indexes_200 = sorted(df[df['Status Code'] == HttpStatusCode.OK]['Backend Index'].unique()) color_palette = ['lightyellow', 'lightblue', 'lightgreen', 'plum', 'orange'] color_map_200 = {idx: color_palette[i % len(color_palette)] for i, idx in enumerate(backend_indexes_200)} bar_colors = [] for _, row in df.iterrows(): - if row['Status Code'] == 200: + if row['Status Code'] == HttpStatusCode.OK: bar_colors.append(color_map_200.get(row['Backend Index'], 'gray')) else: bar_colors.append('lightcoral') @@ -129,7 +130,7 @@ def _plot_barchart(self, api_results: list[dict]) -> None: plt.xticks(rotation = 0) # Exclude high outliers for average calculation - valid_200 = df[(df['Status Code'] == 200)].copy() + valid_200 = df[(df['Status Code'] == HttpStatusCode.OK)].copy() # Exclude high outliers (e.g., 
above 95th percentile) if not valid_200.empty: diff --git a/shared/python/console.py b/shared/python/console.py index 1fb925dd..9b4ebb8d 100644 --- a/shared/python/console.py +++ b/shared/python/console.py @@ -15,6 +15,7 @@ import threading from logging_config import configure_logging + configure_logging() # ------------------------------ @@ -22,22 +23,23 @@ # ------------------------------ # ANSI escape code constants for colored console output -BOLD_B = '\x1b[1;34m' # blue -BOLD_G = '\x1b[1;32m' # green -BOLD_R = '\x1b[1;31m' # red -BOLD_Y = '\x1b[1;33m' # yellow -BOLD_C = '\x1b[1;36m' # cyan -BOLD_M = '\x1b[1;35m' # magenta -BOLD_W = '\x1b[1;37m' # white -RESET = '\x1b[0m' +BOLD_B = "\x1b[1;34m" # blue +BOLD_G = "\x1b[1;32m" # green +BOLD_R = "\x1b[1;31m" # red +BOLD_Y = "\x1b[1;33m" # yellow +BOLD_C = "\x1b[1;36m" # cyan +BOLD_M = "\x1b[1;35m" # magenta +BOLD_W = "\x1b[1;37m" # white +RESET = "\x1b[0m" # Thread colors for parallel operations THREAD_COLORS = [BOLD_B, BOLD_G, BOLD_Y, BOLD_C, BOLD_M, BOLD_W] CONSOLE_WIDTH = 220 -_CONSOLE_WIDTH_ENV = 'APIM_SAMPLES_CONSOLE_WIDTH' +_CONSOLE_WIDTH_ENV = "APIM_SAMPLES_CONSOLE_WIDTH" _DEFAULT_CONSOLE_WIDTH = 220 +_MIN_CONSOLE_WIDTH = 20 # Thread-safe print lock _print_lock = threading.Lock() @@ -48,6 +50,7 @@ # PRIVATE METHODS # ------------------------------ + def _get_console_width() -> int: """Return configured console width for line wrapping.""" @@ -56,42 +59,44 @@ def _get_console_width() -> int: return _DEFAULT_CONSOLE_WIDTH try: value = int(raw) - return value if value > 20 else _DEFAULT_CONSOLE_WIDTH + return value if value > _MIN_CONSOLE_WIDTH else _DEFAULT_CONSOLE_WIDTH except ValueError: return _DEFAULT_CONSOLE_WIDTH + def _infer_level_from_message(message: str, default: int = logging.INFO) -> int: stripped = message.lstrip() if not stripped: return default # Heuristic mappings for existing emoji/prefix styles. 
- if stripped.startswith('❌'): + if stripped.startswith("❌"): return logging.ERROR - if stripped.startswith('⚠️'): + if stripped.startswith("⚠️"): return logging.WARNING - if stripped.startswith(('✅', '🎉')): + if stripped.startswith(("✅", "🎉")): return logging.INFO - if stripped.lower().startswith('debug') or stripped.startswith(('🐞')): + if stripped.lower().startswith("debug") or stripped.startswith(("🐞")): return logging.DEBUG lowered = stripped.lower() - if lowered.startswith('error:'): + if lowered.startswith("error:"): return logging.ERROR - if lowered.startswith('warning:'): + if lowered.startswith("warning:"): return logging.WARNING - if lowered.startswith('command output:'): + if lowered.startswith("command output:"): return logging.DEBUG # Default return default + def _wrap_line(line: str, width: int) -> str: if not line or width <= 0: return line # Preserve leading whitespace for tables/indented output. - leading_len = len(line) - len(line.lstrip(' ')) + leading_len = len(line) - len(line.lstrip(" ")) leading = line[:leading_len] content = line[leading_len:] if not content: @@ -106,10 +111,18 @@ def _wrap_line(line: str, width: int) -> str: break_on_hyphens=False, ) + def _print_log( - message: str, prefix: str = '', color: str = '', output: str = '', - duration: str = '', show_time: bool = False, blank_above: bool = False, - blank_below: bool = False, wrap_lines: bool = False, level: int | None = None, + message: str, + prefix: str = "", + color: str = "", + output: str = "", + duration: str = "", + show_time: bool = False, + blank_above: bool = False, + blank_below: bool = False, + wrap_lines: bool = False, + level: int | None = None, ) -> None: """ Print a formatted log message with optional prefix, color, output, duration, and time. @@ -127,66 +140,74 @@ def _print_log( wrap_lines (bool, optional): Whether to wrap lines to fit console width.
""" - time_str = f' โŒš {datetime.datetime.now().time()}' if show_time else '' - output_str = f' {output}' if output else '' + time_str = f" โŒš {datetime.datetime.now().time()}" if show_time else "" + output_str = f" {output}" if output else "" resolved_level = level if level is not None else _infer_level_from_message(message) if blank_above: - _logger.log(resolved_level, '') + _logger.log(resolved_level, "") # To preserve explicit newlines in the message (e.g., from print_val with val_below=True), # split the message on actual newlines and wrap each line separately, preserving blank lines and indentation. - full_message = f'{prefix}{color}{message}{RESET}{time_str} {duration}{output_str}'.rstrip() - lines = full_message.splitlines(keepends = False) + full_message = f"{prefix}{color}{message}{RESET}{time_str} {duration}{output_str}".rstrip() + lines = full_message.splitlines(keepends=False) width = _get_console_width() for line in lines: if wrap_lines: wrapped = _wrap_line(line, width) - for wrapped_line in wrapped.splitlines() or ['']: + for wrapped_line in wrapped.splitlines() or [""]: _logger.log(resolved_level, wrapped_line) else: _logger.log(resolved_level, line) if blank_below: - _logger.log(resolved_level, '') + _logger.log(resolved_level, "") # ------------------------------ # PUBLIC METHODS # ------------------------------ -def print_command(cmd: str = '') -> None: + +def print_command(cmd: str = "") -> None: """Print a command message.""" - _print_log(cmd, 'โš™๏ธ ', BOLD_B, blank_above = True, blank_below = True, level = logging.INFO) + _print_log(cmd, "โš™๏ธ ", BOLD_B, blank_above=True, blank_below=True, level=logging.INFO) + -def print_error(msg: str, output: str = '', duration: str = '') -> None: +def print_error(msg: str, output: str = "", duration: str = "") -> None: """Print an error message.""" - _print_log(msg, 'โŒ ', BOLD_R, output, duration, True, True, True, wrap_lines = True, level = logging.ERROR) + _print_log(msg, "โŒ ", BOLD_R, output, 
duration, True, True, True, wrap_lines=True, level=logging.ERROR) + def print_info(msg: str, blank_above: bool = False) -> None: """Print an informational message.""" - _print_log(msg, 'โ„น๏ธ ', BOLD_B, blank_above = blank_above, level = logging.INFO) + _print_log(msg, "โ„น๏ธ ", BOLD_B, blank_above=blank_above, level=logging.INFO) + -def print_message(msg: str, output: str = '', duration: str = '', blank_above: bool = False, blank_below: bool = False) -> None: +def print_message(msg: str, output: str = "", duration: str = "", blank_above: bool = False, blank_below: bool = False) -> None: """Print a general message.""" - _print_log(msg, 'โ„น๏ธ ', BOLD_G, output, duration, True, blank_above, blank_below, level = logging.INFO) + _print_log(msg, "โ„น๏ธ ", BOLD_G, output, duration, True, blank_above, blank_below, level=logging.INFO) -def print_ok(msg: str, output: str = '', duration: str = '', blank_above: bool = False) -> None: + +def print_ok(msg: str, output: str = "", duration: str = "", blank_above: bool = False) -> None: """Print an OK/success message.""" - _print_log(msg, 'โœ… ', BOLD_G, output, duration, True, blank_above, level = logging.INFO) + _print_log(msg, "โœ… ", BOLD_G, output, duration, True, blank_above, level=logging.INFO) + -def print_warning(msg: str, output: str = '', duration: str = '') -> None: +def print_warning(msg: str, output: str = "", duration: str = "") -> None: """Print a warning message.""" - _print_log(msg, 'โš ๏ธ ', BOLD_Y, output, duration, True, wrap_lines = True, level = logging.WARNING) + _print_log(msg, "โš ๏ธ ", BOLD_Y, output, duration, True, wrap_lines=True, level=logging.WARNING) + def print_val(name: str, value: str, val_below: bool = False) -> None: """Print a key-value pair.""" - _print_log(f"{name:<25}:{'\n' if val_below else ' '}{value}", '๐Ÿ‘‰ ', BOLD_B, wrap_lines = True, level = logging.INFO) + _print_log(f"{name:<25}:{'\n' if val_below else ' '}{value}", "๐Ÿ‘‰ ", BOLD_B, wrap_lines=True, level=logging.INFO) + 
-def print_plain(msg: str = '', *, level: int | None = None, wrap_lines: bool = True, blank_above: bool = False, blank_below: bool = False) -> None: +def print_plain(msg: str = "", *, level: int | None = None, wrap_lines: bool = True, blank_above: bool = False, blank_below: bool = False) -> None: """Log a message without any icon/prefix. Useful for tables, separators, and other formatted output where adding an @@ -194,9 +215,10 @@ def print_plain(msg: str = '', *, level: int | None = None, wrap_lines: bool = T """ resolved_level = level if level is not None else _infer_level_from_message(msg, default=logging.INFO) - _print_log(msg, prefix='', color = '', blank_above = blank_above, blank_below=blank_below, wrap_lines = wrap_lines, level = resolved_level) + _print_log(msg, prefix="", color="", blank_above=blank_above, blank_below=blank_below, wrap_lines=wrap_lines, level=resolved_level) + -def print_debug(msg: str = '', *, wrap_lines: bool = True, blank_above: bool = False, blank_below: bool = False) -> None: +def print_debug(msg: str = "", *, wrap_lines: bool = True, blank_above: bool = False, blank_below: bool = False) -> None: """Log a debug message.""" - _print_log(msg, prefix='🐞 ', color = '', blank_above = blank_above, blank_below=blank_below, wrap_lines = wrap_lines, level = logging.DEBUG) + _print_log(msg, prefix="🐞 ", color="", blank_above=blank_above, blank_below=blank_below, wrap_lines=wrap_lines, level=logging.DEBUG) diff --git a/shared/python/infrastructures.py b/shared/python/infrastructures.py index 4615cb6c..7796f0ef 100644 --- a/shared/python/infrastructures.py +++ b/shared/python/infrastructures.py @@ -6,23 +6,33 @@ import os import time import traceback +from concurrent.futures import ThreadPoolExecutor, as_completed from pathlib import Path from typing import List -from concurrent.futures import ThreadPoolExecutor, as_completed + +import azure_resources as az import requests +import utils # APIM Samples imports -from apimtypes import API,
APIM_SKU, APIMNetworkMode, GET_APIOperation, HELLO_WORLD_XML_POLICY_PATH, INFRASTRUCTURE, PolicyFragment +from apimtypes import API, APIM_SKU, HELLO_WORLD_XML_POLICY_PATH, INFRASTRUCTURE, APIMNetworkMode, GET_APIOperation, HttpStatusCode, PolicyFragment from console import ( - BOLD_R, BOLD_Y, RESET, THREAD_COLORS, - _print_lock, _print_log, - print_command, print_error, print_info, print_message, - print_ok, print_plain, print_warning, print_val, + BOLD_R, + BOLD_Y, + RESET, + THREAD_COLORS, + _print_lock, + _print_log, + print_command, + print_error, + print_info, + print_message, + print_ok, + print_plain, + print_val, + print_warning, ) from logging_config import should_print_traceback -import azure_resources as az -import utils - # ------------------------------ # INFRASTRUCTURE CLASSES @@ -319,7 +329,7 @@ def _verify_apim_connectivity(self, apim_gateway_url: str) -> bool: response = requests.get(healthcheck_url, timeout=30) - if response.status_code == 200: + if response.status_code == HttpStatusCode.OK: print_ok('APIM connectivity verified - Health check returned 200') return True diff --git a/shared/python/show_infrastructures.py b/shared/python/show_infrastructures.py index d7a7bfa9..5a56eaf3 100644 --- a/shared/python/show_infrastructures.py +++ b/shared/python/show_infrastructures.py @@ -13,16 +13,16 @@ def _format_index(index: int | None) -> str: - return str(index) if index is not None else 'N/A' + return str(index) if index is not None else "N/A" def _format_location(location: str | None) -> str: - return location if location else 'Unknown' + return location if location else "Unknown" def _sort_key(entry: dict[str, Any]) -> tuple[str, int]: - index_value = entry.get('index') - return entry.get('infrastructure', ''), index_value if index_value is not None else 0 + index_value = entry.get("index") + return entry.get("infrastructure", ""), index_value if index_value is not None else 0 def gather_infrastructures(include_location: bool = True) -> 
list[dict[str, str | int | None]]: @@ -41,10 +41,10 @@ def gather_infrastructures(include_location: bool = True) -> list[dict[str, str discovered.append( { - 'infrastructure': infra_type.value, - 'index': index, - 'resource_group': rg_name, - 'location': location, + "infrastructure": infra_type.value, + "index": index, + "resource_group": rg_name, + "location": location, } ) @@ -55,87 +55,86 @@ def gather_infrastructures(include_location: bool = True) -> list[dict[str, str def display_infrastructures(infrastructures: list[dict[str, str | int | None]], include_location: bool = True) -> None: """Render a simple table summarizing deployed infrastructures.""" - print('Deployed infrastructures') - print('------------------------') + print("Deployed infrastructures") + print("------------------------") if not infrastructures: - print('\nNo deployed infrastructures found with the infrastructure tag.\n') + print("\nNo deployed infrastructures found with the infrastructure tag.\n") return - headers = ['#', 'Infrastructure', 'Index', 'Resource Group'] + headers = ["#", "Infrastructure", "Index", "Resource Group"] if include_location: - headers.append('Location') + headers.append("Location") rows: list[list[str]] = [] for idx, entry in enumerate(infrastructures, 1): row = [ str(idx), - entry.get('infrastructure', ''), - _format_index(entry.get('index')), - entry.get('resource_group', ''), + entry.get("infrastructure", ""), + _format_index(entry.get("index")), + entry.get("resource_group", ""), ] if include_location: - row.append(_format_location(entry.get('location'))) + row.append(_format_location(entry.get("location"))) rows.append(row) widths = [max(len(str(value)) for value in column) for column in zip(headers, *rows)] - header_line = ' '.join(str(value).ljust(widths[i]) for i, value in enumerate(headers)) - separator_line = ' '.join('-' * width for width in widths) + header_line = " ".join(str(value).ljust(widths[i]) for i, value in enumerate(headers)) + separator_line = 
" ".join("-" * width for width in widths) - print('\n' + header_line) + print("\n" + header_line) print(separator_line) # Index column (column 2) is right-aligned; others are left-aligned + index_column = 2 for row in rows: formatted_row = [] for i, value in enumerate(row): - if i == 2: # Index column + if i == index_column: formatted_row.append(str(value).rjust(widths[i])) else: formatted_row.append(str(value).ljust(widths[i])) - print(' '.join(formatted_row)) + print(" ".join(formatted_row)) infra_totals: dict[str, int] = {} for entry in infrastructures: - infra_name = entry.get('infrastructure', '') + infra_name = entry.get("infrastructure", "") infra_totals[infra_name] = infra_totals.get(infra_name, 0) + 1 - print('\nSummary:') + print("\nSummary:") print(f" Resource groups found : {len(infrastructures)}") print(f" Infrastructure types : {len(infra_totals)}") - print('\n') + print("\n") def show_subscription() -> None: """Display the current Azure subscription information.""" - account_output = az.run('az account show -o json') + account_output = az.run("az account show -o json") - print('Current subscription') - print('---------------------') + print("Current subscription") + print("---------------------") if account_output.success and account_output.json_data: - name = account_output.json_data.get('name', 'Unknown') - subscription_id = account_output.json_data.get('id', 'Unknown') + name = account_output.json_data.get("name", "Unknown") + subscription_id = account_output.json_data.get("id", "Unknown") - print(f'Name : {name}') - print(f'ID : {subscription_id}\n') + print(f"Name : {name}") + print(f"ID : {subscription_id}\n") else: - print('Unable to read subscription details. Ensure Azure CLI is logged in.\n') + print("Unable to read subscription details. 
Ensure Azure CLI is logged in.\n") def main() -> int: """List all deployed APIM infrastructures in the current Azure subscription.""" - parser = argparse.ArgumentParser( - description='List all deployed APIM infrastructures in the current Azure subscription' - ) + parser = argparse.ArgumentParser(description="List all deployed APIM infrastructures in the current Azure subscription") parser.add_argument( - '--no-location', - action='store_true', - help='Skip resource group location lookup for faster execution.', + "--no-location", + action="store_true", + help="Skip resource group location lookup for faster execution.", ) args = parser.parse_args() @@ -151,5 +150,5 @@ def main() -> int: return 0 -if __name__ == '__main__': # pragma: no cover +if __name__ == "__main__": # pragma: no cover raise SystemExit(main()) diff --git a/shared/python/utils.py b/shared/python/utils.py index aeefb099..a630cb00 100644 --- a/shared/python/utils.py +++ b/shared/python/utils.py @@ -887,8 +887,8 @@ def read_and_modify_policy_xml(policy_xml_filepath: str, replacements: dict[str, if replacements is not None and isinstance(replacements, dict): # Replace placeholders in the policy XML with provided values - for placeholder, value in replacements.items(): - placeholder = '{' + placeholder + '}' + for key, value in replacements.items(): + placeholder = '{' + key + '}' if placeholder in policy_template_xml: policy_template_xml = policy_template_xml.replace(placeholder, value) diff --git a/start.ps1 b/start.ps1 index 57770ca7..4f79df32 100755 --- a/start.ps1 +++ b/start.ps1 @@ -130,7 +130,7 @@ while ($true) { Write-Host " 6) Show all deployed infrastructures" Write-Host "" Write-Host "Tests" -ForegroundColor Yellow - Write-Host " 7) Run pylint" + Write-Host " 7) Run ruff" Write-Host " 8) Run tests (shows detailed test results)" Write-Host " 9) Run full Python checks (most statistics)" Write-Host "" @@ -174,7 +174,7 @@ while ($true) { PyRun "$RepoRoot/shared/python/show_infrastructures.py" | 
Out-Null } '7' { - Invoke-Cmd "$RepoRoot/tests/python/run_pylint.ps1" | Out-Null + Invoke-Cmd "$RepoRoot/tests/python/run_ruff.ps1" | Out-Null } '8' { Invoke-Cmd "$RepoRoot/tests/python/run_tests.ps1" | Out-Null diff --git a/start.sh b/start.sh index 8b414449..0cb170d2 100755 --- a/start.sh +++ b/start.sh @@ -98,7 +98,7 @@ while true; do echo " 6) Show all deployed infrastructures" echo "" echo "Tests" - echo " 7) Run pylint" + echo " 7) Run ruff" echo " 8) Run tests (shows detailed test results)" echo " 9) Run full Python checks" echo "" @@ -143,7 +143,7 @@ while true; do run_cmd pyrun "${REPO_ROOT}/shared/python/show_infrastructures.py" ;; 7) - run_cmd bash "${REPO_ROOT}/tests/python/run_pylint.sh" + run_cmd bash "${REPO_ROOT}/tests/python/run_ruff.sh" ;; 8) run_cmd bash "${REPO_ROOT}/tests/python/run_tests.sh" diff --git a/tests/README.md b/tests/README.md index b4582495..5b343813 100644 --- a/tests/README.md +++ b/tests/README.md @@ -18,7 +18,7 @@ The fastest way to validate your code changes: ./tests/python/check_python.sh ``` -This runs both pylint (code linting) and pytest (unit tests) with a single command. +This runs both ruff (code linting) and pytest (unit tests) with a single command. 
 
 ## Code Quality Tools
 
@@ -29,38 +29,38 @@ This runs both pylint (code linting) and pytest (unit tests) with a single comma
 ```powershell
 # Windows
 .\tests\python\check_python.ps1                    # Run all checks
-.\tests\python\check_python.ps1 -ShowLintReport    # Include detailed pylint report
+.\tests\python\check_python.ps1 -ShowLintReport    # Include detailed ruff report
 ```
 
 ```bash
 # Linux/macOS
 ./tests/python/check_python.sh                  # Run all checks
-./tests/python/check_python.sh --show-report    # Include detailed pylint report
+./tests/python/check_python.sh --show-report    # Include detailed ruff report
 ```
 
-### Linting Only (pylint)
+### Linting Only (ruff)
 
-Run pylint separately when you only need linting:
+Run ruff separately when you only need linting:
 
 ```powershell
 # Windows - from repository root
-.\tests\python\run_pylint.ps1                      # Default: all Python code
-.\tests\python\run_pylint.ps1 -ShowReport          # Show detailed report
-.\tests\python\run_pylint.ps1 -Target "samples"    # Lint specific folder
+.\tests\python\run_ruff.ps1                        # Default: all Python code
+.\tests\python\run_ruff.ps1 -ShowReport            # Show detailed report
+.\tests\python\run_ruff.ps1 -Target "samples"      # Lint specific folder
 ```
 
 ```bash
 # Linux/macOS - from repository root
-./tests/python/run_pylint.sh                         # Default: all Python code
-./tests/python/run_pylint.sh samples --show-report   # Lint specific folder with report
+./tests/python/run_ruff.sh                           # Default: all Python code
+./tests/python/run_ruff.sh samples --show-report     # Lint specific folder with report
 ```
 
-#### Pylint Reports
+#### Ruff Reports
 
-All pylint runs generate timestamped reports in `tests/python/pylint/reports/`:
+All ruff runs generate timestamped reports in `tests/python/ruff/reports/`:
 
 - **JSON format**: Machine-readable for CI/CD integration
 - **Text format**: Human-readable detailed analysis
-- **Latest symlinks**: `latest.json` and `latest.txt` always point to the most recent run
+- **Latest files**: `latest.json` and `latest.txt` always reflect the most recent run
 
 The script automatically displays a **Top 10 Issues Summary** showing the most frequent code quality issues.
 
@@ -97,7 +97,7 @@ Both scripts:
   ```powershell
   pip install coverage pytest-cov
   ```
-- Note: Running pytest only from the terminal won’t decorate the Explorer. Use the Testing UI to see coverage overlays.
+- Note: Running pytest only from the terminal won't decorate the Explorer. Use the Testing UI to see coverage overlays.
 
 **In Browser:**
 - Open `htmlcov/index.html` in your browser for detailed coverage information
@@ -106,7 +106,7 @@ Both scripts:
 
 ### Configuration Files
 
-- `.pylintrc` - Pylint configuration and rules (in repository root)
+- `pyproject.toml` - Ruff linting configuration and rules (in repository root, under `[tool.ruff]`)
 - `.coveragerc` - Coverage.py configuration
 - `pytest.ini` - Pytest configuration and markers (in repository root)
 - `conftest.py` - Shared pytest fixtures
@@ -127,7 +127,7 @@ Markers are registered in `pytest.ini`.
 On every push or pull request, GitHub Actions will:
 - Install dependencies
 - Run all Python tests with coverage
-- Run pylint on all Python code
+- Run ruff on all Python code
 - Upload coverage reports as artifacts
 
 ## Sample Test Matrix
diff --git a/tests/python/check_python.ps1 b/tests/python/check_python.ps1
index cdc9a9f4..6ae59c23 100644
--- a/tests/python/check_python.ps1
+++ b/tests/python/check_python.ps1
@@ -4,20 +4,20 @@
     Run comprehensive Python code quality checks (linting and testing).
 
 .DESCRIPTION
-    This script executes both pylint linting and pytest testing in sequence,
+    This script executes both ruff linting and pytest testing in sequence,
     providing a complete code quality assessment. It's the recommended way
     to validate Python code changes before committing.
 
     The script can be run from anywhere in the repository and will:
-    - Execute pylint on all Python code with detailed reporting
+    - Execute ruff on all Python code with detailed reporting
     - Run the full test suite with coverage analysis
     - Display combined results and exit with appropriate status code
 
 .PARAMETER ShowLintReport
-    Display the full pylint text report after completion.
+    Display the full ruff text report after completion.
 
 .PARAMETER Target
-    Path to analyze for pylint. Defaults to all Python files in the repository.
+    Path to analyze for ruff. Defaults to all Python files in the repository.
 
 .EXAMPLE
     .\check_python.ps1
@@ -25,7 +25,7 @@
 
 .EXAMPLE
     .\check_python.ps1 -ShowLintReport
-    Run checks and show detailed pylint report
+    Run checks and show detailed ruff report
 
 .EXAMPLE
     .\check_python.ps1 -Target "samples"
@@ -48,12 +48,12 @@
 Write-Host ""
 
 # ------------------------------
-# STEP 1: RUN PYLINT
+# STEP 1: RUN RUFF
# ------------------------------
-Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
-Write-Host "  Step 1/2: Running Pylint  " -ForegroundColor Yellow
-Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
+Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
+Write-Host "          Step 1/2: Running Ruff             " -ForegroundColor Yellow
+Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
 Write-Host ""
 
 $LintArgs = @{
@@ -63,7 +63,7 @@
 if ($ShowLintReport) {
     $LintArgs.ShowReport = $true
 }
 
-& "$ScriptDir\run_pylint.ps1" @LintArgs
+& "$ScriptDir\run_ruff.ps1" @LintArgs
 $LintExitCode = $LASTEXITCODE
 
 Write-Host ""
@@ -72,9 +72,9 @@
 
 # ------------------------------
 # STEP 2: RUN TESTS
 # ------------------------------
-Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
-Write-Host "  Step 2/2: Running Tests  " -ForegroundColor Yellow
-Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
+Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
+Write-Host "          Step 2/2: Running Tests            " -ForegroundColor Yellow
+Write-Host "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" -ForegroundColor Yellow
 Write-Host ""
 
 # Capture test output and pass it through to console while also capturing it
@@ -156,37 +156,28 @@
 Write-Host ""
 
 $LintStatus = if ($LintExitCode -eq 0) { "✅ PASSED" } else { "⚠️  ISSUES FOUND" } # leave two spaces after yellow triangle to display correctly
 $TestStatus = if ($FailedTests -eq 0 -and $TestExitCode -eq 0) { "✅ PASSED" } else { "❌ FAILED" }
 
-# Get pylint score
-$PylintScore = $null
-$PylintIssueCount = $null
-$LatestPylintText = Join-Path $ScriptDir "pylint/reports/latest.txt"
-$LatestPylintJson = Join-Path $ScriptDir "pylint/reports/latest.json"
+# Get ruff issue count
+$RuffIssueCount = $null
+$LatestRuffJson = Join-Path $ScriptDir "ruff/reports/latest.json"
 
-if (Test-Path $LatestPylintText) {
-    $ScoreMatch = Select-String -Path $LatestPylintText -Pattern 'rated at (\d+(?:\.\d+)?/10)' | Select-Object -First 1
-    if ($ScoreMatch -and $ScoreMatch.Matches.Count -gt 0) {
-        $PylintScore = $ScoreMatch.Matches[0].Groups[1].Value
-    }
-}
-
-if (Test-Path $LatestPylintJson) {
+if (Test-Path $LatestRuffJson) {
     try {
-        $RawJson = Get-Content $LatestPylintJson -Raw
+        $RawJson = Get-Content $LatestRuffJson -Raw
         if ($RawJson -and $RawJson.Trim()) {
             $Issues = $RawJson | ConvertFrom-Json
             if ($null -eq $Issues) {
-                $PylintIssueCount = 0
+                $RuffIssueCount = 0
             } elseif ($Issues -is [System.Array]) {
-                $PylintIssueCount = $Issues.Count
+                $RuffIssueCount = $Issues.Count
             } else {
-                $PylintIssueCount = 1
+                $RuffIssueCount = 1
             }
         }
     } catch {
-        $PylintIssueCount = $null
+        $RuffIssueCount = $null
     }
 }
@@ -194,21 +185,13 @@
 $LintColor = if ($LintExitCode -eq 0) { "Green" } else { "Yellow" }
 $TestColor = if ($FailedTests -eq 0 -and $TestExitCode -eq 0) { "Green" } else { "Red" }
 
-# Display Pylint status with score
-Write-Host "Pylint : " -NoNewline
+# Display Ruff status with issue count
+Write-Host "Ruff   : " -NoNewline
 Write-Host $LintStatus -ForegroundColor $LintColor -NoNewline
 
-$PylintDetails = @()
-if ($PylintScore) {
-    $PylintDetails += $PylintScore
-}
-if ($PylintIssueCount -ne $null) {
-    $IssueLabel = if ($PylintIssueCount -eq 1) { "1 issue" } else { "$PylintIssueCount issues" }
-    $PylintDetails += $IssueLabel
-}
-
-if ($PylintDetails.Count -gt 0) {
+if ($RuffIssueCount -ne $null) {
+    $IssueLabel = if ($RuffIssueCount -eq 1) { "1 issue" } else { "$RuffIssueCount issues" }
     Write-Host " (" -ForegroundColor Gray -NoNewline
-    Write-Host ($PylintDetails -join " | ") -ForegroundColor Gray -NoNewline
+    Write-Host $IssueLabel -ForegroundColor Gray -NoNewline
     Write-Host ")" -ForegroundColor Gray
 } else {
     Write-Host ""
diff --git a/tests/python/check_python.sh b/tests/python/check_python.sh
index 73a4ad19..39295db6 100755
--- a/tests/python/check_python.sh
+++ b/tests/python/check_python.sh
@@ -1,13 +1,13 @@
 #!/bin/bash
 # Run comprehensive Python code quality checks (linting and testing)
 #
-# This script executes both pylint linting and pytest testing in sequence,
+# This script executes both ruff linting and pytest testing in sequence,
 # providing a complete code quality assessment. It's the recommended way
 # to validate Python code changes before committing.
 #
 # Usage:
 #   ./check_python.sh                 # Run with default settings
-#   ./check_python.sh --show-report   # Include detailed pylint report
+#   ./check_python.sh --show-report   # Include detailed ruff report
 #   ./check_python.sh samples         # Only lint the samples folder
 
 set -e
@@ -17,7 +17,7 @@
 REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
 SHOW_REPORT=""
 TARGET="${1:-infrastructure samples setup shared}"
-PYLINT_SCORE=""
+RUFF_ISSUE_COUNT=""
 
 # Parse arguments
 if [ "$1" = "--show-report" ]; then
@@ -35,23 +35,23 @@
 echo ""
 
 # ------------------------------
-# STEP 1: RUN PYLINT
+# STEP 1: RUN RUFF
 # ------------------------------
 echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-echo " Step 1/2: Running Pylint"
+echo " Step 1/2: Running Ruff"
 echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
 echo ""
 
 set +e
-bash "$SCRIPT_DIR/run_pylint.sh" "$TARGET" $SHOW_REPORT
+bash "$SCRIPT_DIR/run_ruff.sh" "$TARGET" $SHOW_REPORT
 LINT_EXIT_CODE=$?
 set -e
 
-# Extract pylint score from the latest report, if available
-PYLINT_LATEST_TEXT="$SCRIPT_DIR/pylint/reports/latest.txt"
-if [ -f "$PYLINT_LATEST_TEXT" ]; then
-    PYLINT_SCORE=$(grep -Eo 'rated at [0-9]+(\.[0-9]+)?/10' "$PYLINT_LATEST_TEXT" | head -n 1 | awk '{print $3}')
+# Extract ruff issue count from the latest JSON report, if available
+RUFF_LATEST_JSON="$SCRIPT_DIR/ruff/reports/latest.json"
+if [ -f "$RUFF_LATEST_JSON" ] && command -v jq &> /dev/null; then
+    RUFF_ISSUE_COUNT=$(jq 'length' "$RUFF_LATEST_JSON" 2>/dev/null || echo "")
 fi
 
 echo ""
@@ -113,7 +113,7 @@
 echo "║                      Final Results                       ║"
 echo "╚══════════════════════════════════════════════════════════╝"
 echo ""
 
-# Determine Pylint status
+# Determine Ruff status
 if [ $LINT_EXIT_CODE -eq 0 ]; then
     LINT_STATUS="✅ PASSED"
 else
@@ -128,9 +128,9 @@
 fi
 
 # Display results with proper alignment
-echo "Pylint : $LINT_STATUS"
-if [ -n "$PYLINT_SCORE" ]; then
-    echo "         ($PYLINT_SCORE)"
+echo "Ruff   : $LINT_STATUS"
+if [ -n "$RUFF_ISSUE_COUNT" ]; then
+    echo "         ($RUFF_ISSUE_COUNT issues)"
 fi
 
 if [ $FAILED_TESTS -eq 0 ] && [ $TEST_EXIT_CODE -eq 0 ]; then
@@ -182,3 +182,4 @@
 fi
 
 echo ""
 exit $OVERALL_EXIT_CODE
+
diff --git a/tests/python/conftest.py b/tests/python/conftest.py
index c8339265..3e2a5865 100644
--- a/tests/python/conftest.py
+++ b/tests/python/conftest.py
@@ -13,8 +13,8 @@
 # Add the tests/python directory to import test_helpers
 sys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))
 
-# APIM Samples imports (must come after the sys path inserts, so we disable the offending pylint rule C0413 (wrong-import-position) below)
-from test_helpers import (  # pylint: disable=wrong-import-position
+# APIM Samples imports (must come after the sys path inserts)
+from test_helpers import (
     create_mock_http_response,
     create_mock_output,
     create_sample_apis,
diff --git a/tests/python/run_pylint.ps1 b/tests/python/run_pylint.ps1
deleted file mode 100644
index 78095ab2..00000000
--- a/tests/python/run_pylint.ps1
+++ /dev/null
@@ -1,119 +0,0 @@
-#!/usr/bin/env pwsh
-<#
-.SYNOPSIS
-    Run pylint on the Apim-Samples project with comprehensive reporting.
-
-.DESCRIPTION
-    Executes pylint with multiple output formats for better visibility:
-    - Colorized console output
-    - JSON report for automated processing
-    - Text report for detailed analysis
-    - Statistics summary
-
-.PARAMETER Target
-    Path to analyze. Defaults to all Python files in infrastructure, samples, setup, shared, and tests.
-
-.PARAMETER ShowReport
-    Display the full text report after completion.
-
-.EXAMPLE
-    .\run_pylint.ps1
-    Run pylint on all repository Python files with default settings
-
-.EXAMPLE
-    .\run_pylint.ps1 -Target "../../samples" -ShowReport
-    Run on samples folder and show detailed report
-#>
-
-param(
-    [string]$Target = "infrastructure samples setup shared",
-    [switch]$ShowReport
-)
-
-$ErrorActionPreference = "Continue"
-$ScriptDir = $PSScriptRoot
-$RepoRoot = Split-Path (Split-Path $ScriptDir -Parent) -Parent
-$ReportDir = Join-Path $ScriptDir "pylint/reports"
-$PylintRc = Join-Path $RepoRoot ".pylintrc"
-$Timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
-
-# Set UTF-8 encoding for Python and console output
-$env:PYTHONIOENCODING = "utf-8"
-[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
-
-# Ensure report directory exists
-if (-not (Test-Path $ReportDir)) {
-    New-Item -ItemType Directory -Path $ReportDir -Force | Out-Null
-}
-
-Write-Host "`n🔍 Running pylint analysis...`n" -ForegroundColor Cyan
-Write-Host "  Target            : $Target" -ForegroundColor Gray
-Write-Host "  Reports           : $ReportDir" -ForegroundColor Gray
-Write-Host "  Working Directory : $RepoRoot" -ForegroundColor Gray
-Write-Host "  Pylint Config     : $PylintRc`n" -ForegroundColor Gray
-
-# Run pylint with multiple output formats
-$JsonReport = Join-Path $ReportDir "pylint_${Timestamp}.json"
-$TextReport = Join-Path $ReportDir "pylint_${Timestamp}.txt"
-$LatestJson = Join-Path $ReportDir "latest.json"
-$LatestText = Join-Path $ReportDir "latest.txt"
-
-# Change to repository root and execute pylint
-Push-Location $RepoRoot
-try {
-    pylint --rcfile "$PylintRc" `
-        --output-format=json `
-        $Target.Split(' ') `
-        | Tee-Object -FilePath $JsonReport | Out-Null
-    $JsonExitCode = $LASTEXITCODE
-
-    pylint --rcfile "$PylintRc" `
-        --output-format=text `
-        $Target.Split(' ') `
-        | Tee-Object -FilePath $TextReport
-    $TextExitCode = $LASTEXITCODE
-
-    $ExitCode = if ($JsonExitCode -ne 0) { $JsonExitCode } else { $TextExitCode }
-} finally {
-    Pop-Location
-}
-
-# Create symlinks to latest reports
-if (Test-Path $JsonReport) {
-    Copy-Item $JsonReport $LatestJson -Force
-    Copy-Item $TextReport $LatestText -Force
-}
-
-# Display summary
-Write-Host "`n📊 Pylint Summary`n" -ForegroundColor Cyan
-Write-Host "  Exit code: $ExitCode" -ForegroundColor $(if ($ExitCode -eq 0) { "Green" } else { "Yellow" })
-Write-Host "  JSON report : $JsonReport" -ForegroundColor Gray
-Write-Host "  Text report : $TextReport" -ForegroundColor Gray
-
-# Parse and display top issues from JSON
-if (Test-Path $JsonReport) {
-    $Issues = Get-Content $JsonReport | ConvertFrom-Json
-    $GroupedIssues = $Issues | Group-Object -Property symbol | Sort-Object Count -Descending | Select-Object -First 10
-
-    if ($GroupedIssues) {
-        Write-Host "`n🔍 Top 10 Issues:" -ForegroundColor Cyan
-        foreach ($Group in $GroupedIssues) {
-            $Sample = $Issues | Where-Object { $_.symbol -eq $Group.Name } | Select-Object -First 1
-            Write-Host "  [$($Group.Count.ToString().PadLeft(3))] " -NoNewline -ForegroundColor Yellow
-            Write-Host "$($Group.Name) " -NoNewline -ForegroundColor White
-            Write-Host "($($Sample.'message-id'))" -ForegroundColor Gray
-            Write-Host "      $($Sample.message)" -ForegroundColor DarkGray
-        }
-    } else {
-        Write-Host "`n✅ No issues found!" -ForegroundColor Green
-    }
-}
-
-# Show full report if requested
-if ($ShowReport -and (Test-Path $TextReport)) {
-    Write-Host "`n📄 Full Report:" -ForegroundColor Cyan
-    Get-Content $TextReport
-}
-
-Write-Host ""
-exit $ExitCode
diff --git a/tests/python/run_ruff.ps1 b/tests/python/run_ruff.ps1
new file mode 100644
index 00000000..eb3274d5
--- /dev/null
+++ b/tests/python/run_ruff.ps1
@@ -0,0 +1,116 @@
+#!/usr/bin/env pwsh
+<#
+.SYNOPSIS
+    Run ruff on the Apim-Samples project with comprehensive reporting.
+
+.DESCRIPTION
+    Executes ruff with multiple output formats for better visibility:
+    - Colorized console output
+    - JSON report for automated processing
+    - Text report for detailed analysis
+
+.PARAMETER Target
+    Path to analyze. Defaults to all Python files in infrastructure, samples, setup, and shared.
+
+.PARAMETER ShowReport
+    Display the full text report after completion.
+
+.EXAMPLE
+    .\run_ruff.ps1
+    Run ruff on all repository Python files with default settings
+
+.EXAMPLE
+    .\run_ruff.ps1 -Target "../../samples" -ShowReport
+    Run on samples folder and show detailed report
+#>
+
+param(
+    [string]$Target = "infrastructure samples setup shared",
+    [switch]$ShowReport
+)
+
+$ErrorActionPreference = "Continue"
+$ScriptDir = $PSScriptRoot
+$RepoRoot = Split-Path (Split-Path $ScriptDir -Parent) -Parent
+$ReportDir = Join-Path $ScriptDir "ruff/reports"
+$Timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
+
+# Set UTF-8 encoding for Python and console output
+$env:PYTHONIOENCODING = "utf-8"
+[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
+
+# Ensure report directory exists
+if (-not (Test-Path $ReportDir)) {
+    New-Item -ItemType Directory -Path $ReportDir -Force | Out-Null
+}
+
+Write-Host "`n🔍 Running ruff analysis...`n" -ForegroundColor Cyan
+Write-Host "  Target            : $Target" -ForegroundColor Gray
+Write-Host "  Reports           : $ReportDir" -ForegroundColor Gray
+Write-Host "  Working Directory : $RepoRoot`n" -ForegroundColor Gray
+
+# Run ruff with multiple output formats
+$TextReport = Join-Path $ReportDir "ruff_${Timestamp}.txt"
+$JsonReport = Join-Path $ReportDir "ruff_${Timestamp}.json"
+$LatestText = Join-Path $ReportDir "latest.txt"
+$LatestJson = Join-Path $ReportDir "latest.json"
+
+# Change to repository root and execute ruff
+Push-Location $RepoRoot
+try {
+    ruff check $Target.Split(' ') `
+        | Tee-Object -FilePath $TextReport
+    $TextExitCode = $LASTEXITCODE
+
+    ruff check --output-format json $Target.Split(' ') `
+        | Tee-Object -FilePath $JsonReport | Out-Null
+    $JsonExitCode = $LASTEXITCODE
+
+    $ExitCode = if ($TextExitCode -ne 0) { $TextExitCode } else { $JsonExitCode }
+} finally {
+    Pop-Location
+}
+
+# Copy to latest reports
+if (Test-Path $TextReport) {
+    Copy-Item $TextReport $LatestText -Force
+}
+if (Test-Path $JsonReport) {
+    Copy-Item $JsonReport $LatestJson -Force
+}
+
+# Display summary
+Write-Host "`n📊 Ruff Summary`n" -ForegroundColor Cyan
+Write-Host "  Exit code: $ExitCode" -ForegroundColor $(if ($ExitCode -eq 0) { "Green" } else { "Yellow" })
+Write-Host "  Text report : $TextReport" -ForegroundColor Gray
+Write-Host "  JSON report : $JsonReport" -ForegroundColor Gray
+
+# Parse and display top issues from JSON
+if (Test-Path $JsonReport) {
+    $RawJson = Get-Content $JsonReport -Raw
+    if ($RawJson -and $RawJson.Trim()) {
+        $Issues = $RawJson | ConvertFrom-Json
+        $GroupedIssues = $Issues | Group-Object -Property code | Sort-Object Count -Descending | Select-Object -First 10
+
+        if ($GroupedIssues) {
+            Write-Host "`n🔍 Top 10 Issues:" -ForegroundColor Cyan
+            foreach ($Group in $GroupedIssues) {
+                $Sample = $Issues | Where-Object { $_.code -eq $Group.Name } | Select-Object -First 1
+                Write-Host "  [$($Group.Count.ToString().PadLeft(3))] " -NoNewline -ForegroundColor Yellow
+                Write-Host "$($Group.Name)" -NoNewline -ForegroundColor White
+                Write-Host " - $($Sample.message)" -ForegroundColor DarkGray
+            }
+        } else {
+            Write-Host "`n✅ No issues found!" -ForegroundColor Green
+        }
+    }
+}
+
+# Show full report if requested
+if ($ShowReport -and (Test-Path $TextReport)) {
+    Write-Host "`n📄 Full Report:" -ForegroundColor Cyan
+    Get-Content $TextReport
+}
+
+Write-Host ""
+exit $ExitCode
diff --git a/tests/python/run_pylint.sh b/tests/python/run_ruff.sh
similarity index 53%
rename from tests/python/run_pylint.sh
rename to tests/python/run_ruff.sh
index b44a322e..8d6981b2 100755
--- a/tests/python/run_pylint.sh
+++ b/tests/python/run_ruff.sh
@@ -1,13 +1,12 @@
 #!/bin/bash
-# Run pylint on the Apim-Samples project with comprehensive reporting
+# Run ruff on the Apim-Samples project with comprehensive reporting
 
 set -e
 
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
 TARGET="${1:-infrastructure samples setup shared}"
-REPORT_DIR="$SCRIPT_DIR/pylint/reports"
-PYLINT_RC="$REPO_ROOT/.pylintrc"
+REPORT_DIR="$SCRIPT_DIR/ruff/reports"
 TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
 
 # Set UTF-8 encoding for Python and console output
@@ -19,53 +18,63 @@
 mkdir -p "$REPORT_DIR"
 
 echo ""
-echo "🔍 Running pylint analysis..."
+echo "🔍 Running ruff analysis..."
echo "" echo " Target : $TARGET" echo " Reports : $REPORT_DIR" echo " Working Directory : $REPO_ROOT" echo "" -# Run pylint with multiple output formats -JSON_REPORT="$REPORT_DIR/pylint_${TIMESTAMP}.json" -TEXT_REPORT="$REPORT_DIR/pylint_${TIMESTAMP}.txt" -LATEST_JSON="$REPORT_DIR/latest.json" +# Run ruff with multiple output formats +TEXT_REPORT="$REPORT_DIR/ruff_${TIMESTAMP}.txt" +JSON_REPORT="$REPORT_DIR/ruff_${TIMESTAMP}.json" LATEST_TEXT="$REPORT_DIR/latest.txt" +LATEST_JSON="$REPORT_DIR/latest.json" -# Change to repository root and execute pylint (allow non-zero exit for reporting) +# Change to repository root and execute ruff (allow non-zero exit for reporting) cd "$REPO_ROOT" set +e -pylint --rcfile "$PYLINT_RC" \ - --output-format=json:"$JSON_REPORT",colorized,text:"$TEXT_REPORT" \ - $TARGET -EXIT_CODE=$? +# shellcheck disable=SC2086 +ruff check $TARGET 2>&1 | tee "$TEXT_REPORT" +EXIT_CODE=${PIPESTATUS[0]} +# shellcheck disable=SC2086 +ruff check --output-format json $TARGET > "$JSON_REPORT" 2>/dev/null || true set -e -# Create symlinks to latest reports +# Copy to latest reports +if [ -f "$TEXT_REPORT" ]; then + cp "$TEXT_REPORT" "$LATEST_TEXT" +fi if [ -f "$JSON_REPORT" ]; then cp "$JSON_REPORT" "$LATEST_JSON" - cp "$TEXT_REPORT" "$LATEST_TEXT" fi # Display summary echo "" -echo "๐Ÿ“Š Pylint Summary" +echo "๐Ÿ“Š Ruff Summary" echo "" if [ $EXIT_CODE -eq 0 ]; then echo " Exit code: $EXIT_CODE โœ…" else echo " Exit code: $EXIT_CODE โš ๏ธ" fi -echo " JSON report : $JSON_REPORT" -echo " Text report : $TEXT_REPORT" +echo " Text report : $TEXT_REPORT" +echo " JSON report : $JSON_REPORT" # Parse and display top issues from JSON if [ -f "$JSON_REPORT" ] && command -v jq &> /dev/null; then + ISSUE_COUNT=$(jq 'length' "$JSON_REPORT" 2>/dev/null || echo "0") echo "" - echo "๐Ÿ” Top 10 Issues:" - jq -r 'group_by(.symbol) | map({symbol: .[0].symbol, msgid: .[0]."message-id", msg: .[0].message, count: length}) | sort_by(-.count) | limit(10; .[]) | " [\(.count | 
tostring | tonumber)] \(.symbol) (\(.msgid))\n \(.msg)"' "$JSON_REPORT" + if [ "$ISSUE_COUNT" -eq 0 ]; then + echo "✅ No issues found!" + else + echo " $ISSUE_COUNT issue(s) found." + echo "" + echo "🔍 Top 10 Issues:" + jq -r 'group_by(.code) | map({code: .[0].code, message: .[0].message, count: length}) | sort_by(-.count) | limit(10; .[]) | " [\(.count | tostring)] \(.code)\n \(.message)"' "$JSON_REPORT" + fi elif [ -f "$JSON_REPORT" ]; then - ISSUE_COUNT=$(grep -c '"symbol"' "$JSON_REPORT" || true) + ISSUE_COUNT=$(grep -c '"code"' "$JSON_REPORT" || true) echo "" if [ "$ISSUE_COUNT" -eq 0 ]; then echo "✅ No issues found!" diff --git a/tests/python/test_apimrequests.py b/tests/python/test_apimrequests.py index 37efd44c..629740fd 100644 --- a/tests/python/test_apimrequests.py +++ b/tests/python/test_apimrequests.py @@ -1,20 +1,22 @@ """Tests for apimrequests helpers and request behavior.""" -from unittest.mock import patch, MagicMock -import requests +from unittest.mock import MagicMock, patch + import pytest +import requests # APIM Samples imports from apimrequests import ApimRequests -from apimtypes import SUBSCRIPTION_KEY_PARAMETER_NAME, HTTP_VERB, SLEEP_TIME_BETWEEN_REQUESTS_MS +from apimtypes import HTTP_VERB, SLEEP_TIME_BETWEEN_REQUESTS_MS, SUBSCRIPTION_KEY_PARAMETER_NAME, HttpStatusCode from test_helpers import create_mock_http_response, create_mock_session_with_response # Sample values for tests -DEFAULT_URL = 'https://example.com/apim/' -DEFAULT_KEY = 'test-KEY' -DEFAULT_PATH = '/test' -DEFAULT_HEADERS = {'Custom-Header': 'Value'} -DEFAULT_DATA = {'foo': 'bar'} +DEFAULT_URL = "https://example.com/apim/" +DEFAULT_KEY = "test-KEY" +DEFAULT_PATH = "/test" +DEFAULT_HEADERS = {"Custom-Header": "Value"} +DEFAULT_DATA = {"foo": "bar"} + @pytest.fixture def apim(): @@ -36,122 +38,128 @@ def test_init_no_key(): apim = ApimRequests(DEFAULT_URL) assert apim._url == DEFAULT_URL assert apim.subscriptionKey is None - assert 'Ocp-Apim-Subscription-Key' not in 
apim.headers - assert apim.headers['Accept'] == 'application/json' + assert "Ocp-Apim-Subscription-Key" not in apim.headers + assert apim.headers["Accept"] == "application/json" + @pytest.mark.http def test_single_get_success(apim, apimrequests_patches, mock_http_response_200): apimrequests_patches.request.return_value = mock_http_response_200 - with patch.object(apim, '_print_response') as mock_print_response: + with patch.object(apim, "_print_response") as mock_print_response: result = apim.singleGet(DEFAULT_PATH, printResponse=True) assert result == '{\n "result": "ok"\n}' mock_print_response.assert_called_once_with(mock_http_response_200) apimrequests_patches.print_error.assert_not_called() + @pytest.mark.http def test_single_get_error(apim, apimrequests_patches): - apimrequests_patches.request.side_effect = requests.exceptions.RequestException('fail') + apimrequests_patches.request.side_effect = requests.exceptions.RequestException("fail") result = apim.singleGet(DEFAULT_PATH, printResponse=True) assert result is None apimrequests_patches.print_error.assert_called_once() + @pytest.mark.http def test_single_post_success(apim, apimrequests_patches): - response = create_mock_http_response( - status_code=201, - json_data={'created': True} - ) + response = create_mock_http_response(status_code=201, json_data={"created": True}) apimrequests_patches.request.return_value = response - with patch.object(apim, '_print_response') as mock_print_response: + with patch.object(apim, "_print_response") as mock_print_response: result = apim.singlePost(DEFAULT_PATH, data=DEFAULT_DATA, printResponse=True) assert result == '{\n "created": true\n}' mock_print_response.assert_called_once_with(response) apimrequests_patches.print_error.assert_not_called() + @pytest.mark.http def test_multi_get_success(apim, apimrequests_patches, mock_http_response_200): - with patch('apimrequests.requests.Session') as session_cls: + with patch("apimrequests.requests.Session") as session_cls: session 
= create_mock_session_with_response(mock_http_response_200) session_cls.return_value = session - with patch.object(apim, '_print_response_code') as mock_print_code: + with patch.object(apim, "_print_response_code") as mock_print_code: result = apim.multiGet(DEFAULT_PATH, runs=2, printResponse=True) assert len(result) == 2 for run in result: - assert run['status_code'] == 200 - assert run['response'] == '{\n "result": "ok"\n}' + assert run["status_code"] == HttpStatusCode.OK + assert run["response"] == '{\n "result": "ok"\n}' assert session.request.call_count == 2 mock_print_code.assert_called() + @pytest.mark.http def test_multi_get_error(apim, apimrequests_patches): - with patch('apimrequests.requests.Session') as session_cls: + with patch("apimrequests.requests.Session") as session_cls: session = MagicMock() - session.request.side_effect = requests.exceptions.RequestException('fail') + session.request.side_effect = requests.exceptions.RequestException("fail") session_cls.return_value = session - with patch.object(apim, '_print_response_code'): + with patch.object(apim, "_print_response_code"): with pytest.raises(requests.exceptions.RequestException): apim.multiGet(DEFAULT_PATH, runs=1, printResponse=True) # Sample values for tests -URL = 'https://example.com/apim/' -KEY = 'test-KEY' -PATH = '/test' +URL = "https://example.com/apim/" +KEY = "test-KEY" +PATH = "/test" + def make_apim(): return ApimRequests(URL, KEY) + @pytest.mark.http def test_single_post_error(): apim = make_apim() - with patch('apimrequests.requests.request') as mock_request, \ - patch('apimrequests.print_error') as mock_print_error: - mock_request.side_effect = requests.RequestException('fail') - result = apim.singlePost(PATH, data={'foo': 'bar'}, printResponse=True) + with patch("apimrequests.requests.request") as mock_request, patch("apimrequests.print_error") as mock_print_error: + mock_request.side_effect = requests.RequestException("fail") + result = apim.singlePost(PATH, data={"foo": 
"bar"}, printResponse=True) assert result is None mock_print_error.assert_called() + @pytest.mark.http def test_multi_get_non_json(): apim = make_apim() - with patch('apimrequests.requests.Session') as mock_session: + with patch("apimrequests.requests.Session") as mock_session: mock_sess = MagicMock() mock_response = MagicMock() mock_response.status_code = 200 - mock_response.headers = {'Content-Type': 'text/plain'} - mock_response.text = 'not json' + mock_response.headers = {"Content-Type": "text/plain"} + mock_response.text = "not json" mock_response.raise_for_status.return_value = None mock_sess.request.return_value = mock_response mock_session.return_value = mock_sess - with patch.object(apim, '_print_response_code'): + with patch.object(apim, "_print_response_code"): result = apim.multiGet(PATH, runs=1, printResponse=True) - assert result[0]['response'] == 'not json' + assert result[0]["response"] == "not json" + @pytest.mark.http def test_request_header_merging(): apim = make_apim() - with patch('apimrequests.requests.request') as mock_request: + with patch("apimrequests.requests.request") as mock_request: mock_response = MagicMock() mock_response.status_code = 200 - mock_response.headers = {'Content-Type': 'application/json'} - mock_response.json.return_value = {'ok': True} + mock_response.headers = {"Content-Type": "application/json"} + mock_response.json.return_value = {"ok": True} mock_response.text = '{"ok": true}' mock_response.raise_for_status.return_value = None mock_request.return_value = mock_response # Custom header should override default - custom_headers = {'Accept': 'application/xml', 'X-Test': '1'} - with patch.object(apim, '_print_response'): + custom_headers = {"Accept": "application/xml", "X-Test": "1"} + with patch.object(apim, "_print_response"): apim.singleGet(PATH, headers=custom_headers, printResponse=True) - called_headers = mock_request.call_args[1]['headers'] - assert called_headers['Accept'] == 'application/xml' - assert 
called_headers['X-Test'] == '1' + called_headers = mock_request.call_args[1]["headers"] + assert called_headers["Accept"] == "application/xml" + assert called_headers["X-Test"] == "1" + @pytest.mark.http def test_init_missing_url(): @@ -160,61 +168,68 @@ def test_init_missing_url(): with pytest.raises(TypeError): init(ApimRequests) + @pytest.mark.http def test_print_response_code_edge(): apim = make_apim() + class DummyResponse: """Response stub for non-2xx status formatting.""" + status_code = 302 - reason = 'Found' - with patch('apimrequests.print_val') as mock_print_val: + reason = "Found" + + with patch("apimrequests.print_val") as mock_print_val: apim._print_response_code(DummyResponse()) - mock_print_val.assert_called_with('Response status', '302') + mock_print_val.assert_called_with("Response status", "302") + # ------------------------------ # HEADERS PROPERTY # ------------------------------ + def test_headers_property_allows_external_modification(): apim = ApimRequests(DEFAULT_URL, DEFAULT_KEY) - apim.headers['X-Test'] = 'value' - assert apim.headers['X-Test'] == 'value' + apim.headers["X-Test"] = "value" + assert apim.headers["X-Test"] == "value" + def test_headers_property_is_dict_reference(): apim = ApimRequests(DEFAULT_URL, DEFAULT_KEY) h = apim.headers - h['X-Ref'] = 'ref' - assert apim.headers['X-Ref'] == 'ref' + h["X-Ref"] = "ref" + assert apim.headers["X-Ref"] == "ref" def test_subscription_key_setter_updates_and_clears_header(): apim = ApimRequests(DEFAULT_URL, DEFAULT_KEY) - apim.subscriptionKey = 'new-key' - assert apim.headers[SUBSCRIPTION_KEY_PARAMETER_NAME] == 'new-key' + apim.subscriptionKey = "new-key" + assert apim.headers[SUBSCRIPTION_KEY_PARAMETER_NAME] == "new-key" apim.subscriptionKey = None assert SUBSCRIPTION_KEY_PARAMETER_NAME not in apim.headers + # ------------------------------ # COVERAGE TESTS FOR APIMREQUESTS # ------------------------------ + @pytest.mark.unit def test_request_with_custom_headers(apim, apimrequests_patches): 
"""Test request with custom headers merged with default headers.""" - apimrequests_patches.request.return_value = create_mock_http_response( - status_code=200, - json_data={'result': 'ok'} - ) + apimrequests_patches.request.return_value = create_mock_http_response(status_code=200, json_data={"result": "ok"}) - custom_headers = {'Custom': 'value'} + custom_headers = {"Custom": "value"} apim.singleGet(DEFAULT_PATH, headers=custom_headers) # Verify custom headers were merged with default headers call_kwargs = apimrequests_patches.request.call_args[1] - assert 'Custom' in call_kwargs['headers'] - assert SUBSCRIPTION_KEY_PARAMETER_NAME in call_kwargs['headers'] + assert "Custom" in call_kwargs["headers"] + assert SUBSCRIPTION_KEY_PARAMETER_NAME in call_kwargs["headers"] + @pytest.mark.unit def test_request_timeout_error(apim, apimrequests_patches): @@ -225,6 +240,7 @@ def test_request_timeout_error(apim, apimrequests_patches): assert result is None + @pytest.mark.unit def test_request_connection_error(apim, apimrequests_patches): """Test request with connection error.""" @@ -234,54 +250,47 @@ def test_request_connection_error(apim, apimrequests_patches): assert result is None + @pytest.mark.unit def test_request_http_error(apim, apimrequests_patches): """Test request with HTTP error response.""" - response = create_mock_http_response( - status_code=404, - headers={'Content-Type': 'text/plain'}, - text='Resource not found' - ) + response = create_mock_http_response(status_code=404, headers={"Content-Type": "text/plain"}, text="Resource not found") apimrequests_patches.request.return_value = response result = apim.singleGet(DEFAULT_PATH) # The method returns the response body even for error status codes - assert result == 'Resource not found' + assert result == "Resource not found" + @pytest.mark.unit def test_request_non_json_response(apim, apimrequests_patches): """Test request with non-JSON response.""" - response = create_mock_http_response( - status_code=200, - 
headers={'Content-Type': 'text/plain'}, - text='Plain text response' - ) - response.json.side_effect = ValueError('Not JSON') + response = create_mock_http_response(status_code=200, headers={"Content-Type": "text/plain"}, text="Plain text response") + response.json.side_effect = ValueError("Not JSON") apimrequests_patches.request.return_value = response result = apim.singleGet(DEFAULT_PATH) # Should return text response when JSON parsing fails - assert result == 'Plain text response' + assert result == "Plain text response" + @pytest.mark.unit def test_request_with_data(apim, apimrequests_patches): """Test POST request with data.""" - apimrequests_patches.request.return_value = create_mock_http_response( - status_code=201, - json_data={'created': True} - ) + apimrequests_patches.request.return_value = create_mock_http_response(status_code=201, json_data={"created": True}) - data = {'name': 'test', 'value': 'data'} + data = {"name": "test", "value": "data"} result = apim.singlePost(DEFAULT_PATH, data=data) # Verify data was passed correctly call_kwargs = apimrequests_patches.request.call_args[1] - assert call_kwargs['json'] == data + assert call_kwargs["json"] == data # The method returns JSON-formatted string for application/json content assert result == '{\n "created": true\n}' + @pytest.mark.unit def test_apim_requests_without_subscription_key(): """Test ApimRequests initialization without subscription KEY.""" @@ -290,13 +299,13 @@ def test_apim_requests_without_subscription_key(): assert apim._url == DEFAULT_URL assert apim.subscriptionKey is None assert SUBSCRIPTION_KEY_PARAMETER_NAME not in apim.headers - assert apim.headers['Accept'] == 'application/json' + assert apim.headers["Accept"] == "application/json" @pytest.mark.unit def test_headers_setter(apim): """Test the headers setter property.""" - new_headers = {'Authorization': 'Bearer token', 'Custom': 'value'} + new_headers = {"Authorization": "Bearer token", "Custom": "value"} apim.headers = new_headers 
assert apim.headers == new_headers @@ -304,62 +313,57 @@ def test_headers_setter(apim): @pytest.mark.unit def test_request_with_message(apim, apimrequests_patches): """Test _request method with message parameter.""" - apimrequests_patches.request.return_value = create_mock_http_response( - status_code=200, - json_data={'result': 'ok'} - ) + apimrequests_patches.request.return_value = create_mock_http_response(status_code=200, json_data={"result": "ok"}) - with patch.object(apim, '_print_response'): - apim._request(HTTP_VERB.GET, '/test', msg='Test message') + with patch.object(apim, "_print_response"): + apim._request(HTTP_VERB.GET, "/test", msg="Test message") - apimrequests_patches.print_message.assert_called_once_with('Test message', blank_above=True) + apimrequests_patches.print_message.assert_called_once_with("Test message", blank_above=True) @pytest.mark.unit def test_request_path_without_leading_slash(apim, apimrequests_patches): """Test _request method with PATH without leading slash.""" - apimrequests_patches.request.return_value = create_mock_http_response( - status_code=200, - json_data={'result': 'ok'} - ) + apimrequests_patches.request.return_value = create_mock_http_response(status_code=200, json_data={"result": "ok"}) - with patch.object(apim, '_print_response'): - apim._request(HTTP_VERB.GET, 'test') + with patch.object(apim, "_print_response"): + apim._request(HTTP_VERB.GET, "test") # Should call with the corrected URL - expected_url = DEFAULT_URL + '/test' + expected_url = DEFAULT_URL + "/test" apimrequests_patches.request.assert_called_once() args, _kwargs = apimrequests_patches.request.call_args assert args[1] == expected_url + @pytest.mark.unit def test_multi_request_with_message(apim, apimrequests_patches): """Test _multiRequest supports optional message output.""" - response = create_mock_http_response(json_data={'result': 'ok'}) - with patch('apimrequests.requests.Session') as mock_session_cls: + response = 
create_mock_http_response(json_data={"result": "ok"}) + with patch("apimrequests.requests.Session") as mock_session_cls: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - result = apim._multiRequest(HTTP_VERB.GET, '/test', 1, msg='Multi-request message') + with patch.object(apim, "_print_response_code"): + result = apim._multiRequest(HTTP_VERB.GET, "/test", 1, msg="Multi-request message") - apimrequests_patches.print_message.assert_called_once_with('Multi-request message', blank_above=True) + apimrequests_patches.print_message.assert_called_once_with("Multi-request message", blank_above=True) assert len(result) == 1 @pytest.mark.unit def test_multi_request_path_without_leading_slash(apim, apimrequests_patches): """Test _multiRequest method with PATH without leading slash.""" - response = create_mock_http_response(json_data={'result': 'ok'}) - with patch('apimrequests.requests.Session') as mock_session_cls: + response = create_mock_http_response(json_data={"result": "ok"}) + with patch("apimrequests.requests.Session") as mock_session_cls: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - apim._multiRequest(HTTP_VERB.GET, 'test', 1) + with patch.object(apim, "_print_response_code"): + apim._multiRequest(HTTP_VERB.GET, "test", 1) # Should call with the corrected URL - expected_url = DEFAULT_URL + '/test' + expected_url = DEFAULT_URL + "/test" mock_session.request.assert_called_once() args, _kwargs = mock_session.request.call_args assert args[1] == expected_url @@ -368,38 +372,33 @@ def test_multi_request_path_without_leading_slash(apim, apimrequests_patches): @pytest.mark.unit def test_multi_request_non_json_response(apim): """Test _multiRequest method with non-JSON response.""" - response = create_mock_http_response( - status_code=200, - 
headers={'Content-Type': 'text/plain'}, - text='Plain text response' - ) + response = create_mock_http_response(status_code=200, headers={"Content-Type": "text/plain"}, text="Plain text response") - with patch('apimrequests.requests.Session') as mock_session_cls: + with patch("apimrequests.requests.Session") as mock_session_cls: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - result = apim._multiRequest(HTTP_VERB.GET, '/test', 1) + with patch.object(apim, "_print_response_code"): + result = apim._multiRequest(HTTP_VERB.GET, "/test", 1) assert len(result) == 1 - assert result[0]['response'] == 'Plain text response' + assert result[0]["response"] == "Plain text response" @pytest.mark.unit def test_multi_request_sleep_zero(apim): """Test _multiRequest respects sleepMs=0 without sleeping.""" - response = create_mock_http_response(json_data={'ok': True}) + response = create_mock_http_response(json_data={"ok": True}) - with patch('apimrequests.requests.Session') as mock_session_cls, \ - patch('apimrequests.time.sleep') as mock_sleep: + with patch("apimrequests.requests.Session") as mock_session_cls, patch("apimrequests.time.sleep") as mock_sleep: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - result = apim._multiRequest(HTTP_VERB.GET, '/sleep', 2, sleepMs=0) + with patch.object(apim, "_print_response_code"): + result = apim._multiRequest(HTTP_VERB.GET, "/sleep", 2, sleepMs=0) assert len(result) == 2 - assert result[0]['status_code'] == 200 + assert result[0]["status_code"] == HttpStatusCode.OK mock_sleep.assert_not_called() @@ -407,15 +406,14 @@ def test_multi_request_sleep_zero(apim): def test_multi_request_default_sleep_interval(apim): """Test _multiRequest uses default sleep interval when sleepMs is None.""" - response = 
create_mock_http_response(json_data={'ok': True}) + response = create_mock_http_response(json_data={"ok": True}) - with patch('apimrequests.requests.Session') as mock_session_cls, \ - patch('apimrequests.time.sleep') as mock_sleep: + with patch("apimrequests.requests.Session") as mock_session_cls, patch("apimrequests.time.sleep") as mock_sleep: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - apim._multiRequest(HTTP_VERB.GET, '/sleep-default', runs=2, sleepMs=None) + with patch.object(apim, "_print_response_code"): + apim._multiRequest(HTTP_VERB.GET, "/sleep-default", runs=2, sleepMs=None) mock_sleep.assert_called_once_with(SLEEP_TIME_BETWEEN_REQUESTS_MS / 1000) @@ -423,15 +421,14 @@ def test_multi_request_default_sleep_interval(apim): @pytest.mark.unit def test_multi_request_sleep_positive(apim): """Test _multiRequest sleeps when sleepMs is positive.""" - response = create_mock_http_response(json_data={'ok': True}) + response = create_mock_http_response(json_data={"ok": True}) - with patch('apimrequests.requests.Session') as mock_session_cls, \ - patch('apimrequests.time.sleep') as mock_sleep: + with patch("apimrequests.requests.Session") as mock_session_cls, patch("apimrequests.time.sleep") as mock_sleep: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - result = apim._multiRequest(HTTP_VERB.GET, '/sleep', 2, sleepMs=150) + with patch.object(apim, "_print_response_code"): + result = apim._multiRequest(HTTP_VERB.GET, "/sleep", 2, sleepMs=150) # Verify sleep was called between the two requests (only once, not after last run) mock_sleep.assert_called_once_with(0.15) @@ -442,15 +439,14 @@ def test_multi_request_sleep_positive(apim): @pytest.mark.unit def test_multi_request_sleep_positive_multiple_runs(apim): """Test _multiRequest with sleepMs > 0 and 
multiple runs verifies sleep behavior.""" - response = create_mock_http_response(json_data={'ok': True}) + response = create_mock_http_response(json_data={"ok": True}) - with patch('apimrequests.requests.Session') as mock_session_cls, \ - patch('apimrequests.time.sleep') as mock_sleep: + with patch("apimrequests.requests.Session") as mock_session_cls, patch("apimrequests.time.sleep") as mock_sleep: mock_session = create_mock_session_with_response(response) mock_session_cls.return_value = mock_session - with patch.object(apim, '_print_response_code'): - result = apim._multiRequest(HTTP_VERB.GET, '/test', runs=3, sleepMs=250) + with patch.object(apim, "_print_response_code"): + result = apim._multiRequest(HTTP_VERB.GET, "/test", runs=3, sleepMs=250) # With 3 runs, sleep should be called 2 times (between runs, not after the last) assert mock_sleep.call_count == 2 @@ -459,103 +455,91 @@ def test_multi_request_sleep_positive_multiple_runs(apim): assert len(result) == 3 # Verify responses are in order for i, run in enumerate(result): - assert run['run'] == i + 1 + assert run["run"] == i + 1 @pytest.mark.unit def test_print_response_non_200_status(apim, apimrequests_patches): """Test _print_response method with non-200 status code.""" - mock_response = create_mock_http_response( - status_code=404, - headers={'Content-Type': 'application/json'}, - text='{"error": "not found"}' - ) - mock_response.reason = 'Not Found' - - with patch.object(apim, '_print_response_code'): + mock_response = create_mock_http_response(status_code=404, headers={"Content-Type": "application/json"}, text='{"error": "not found"}') + mock_response.reason = "Not Found" + + with patch.object(apim, "_print_response_code"): apim._print_response(mock_response) # Should print response body directly for non-200 status - apimrequests_patches.print_val.assert_any_call('Response body', '{"error": "not found"}', True) + apimrequests_patches.print_val.assert_any_call("Response body", '{"error": "not found"}', 
True) @pytest.mark.unit def test_print_response_200_invalid_json(apim, apimrequests_patches): """Test _print_response handles invalid JSON body for 200 responses.""" - mock_response = create_mock_http_response( - status_code=200, - headers={'Content-Type': 'application/json'}, - text='not valid json' - ) - mock_response.reason = 'OK' - - with patch.object(apim, '_print_response_code'): + mock_response = create_mock_http_response(status_code=200, headers={"Content-Type": "application/json"}, text="not valid json") + mock_response.reason = "OK" + + with patch.object(apim, "_print_response_code"): apim._print_response(mock_response) - apimrequests_patches.print_val.assert_any_call('Response body', 'not valid json', True) + apimrequests_patches.print_val.assert_any_call("Response body", "not valid json", True) @pytest.mark.unit def test_print_response_200_valid_json(apim, apimrequests_patches): """Test _print_response prints formatted JSON when parse succeeds.""" - mock_response = create_mock_http_response( - status_code=200, - json_data={'alpha': 1} - ) - mock_response.reason = 'OK' + mock_response = create_mock_http_response(status_code=200, json_data={"alpha": 1}) + mock_response.reason = "OK" - with patch.object(apim, '_print_response_code'): + with patch.object(apim, "_print_response_code"): apim._print_response(mock_response) - apimrequests_patches.print_val.assert_any_call('Response body', '{\n "alpha": 1\n}', True) + apimrequests_patches.print_val.assert_any_call("Response body", '{\n "alpha": 1\n}', True) @pytest.mark.unit def test_print_response_code_success_and_error(apim, apimrequests_patches): """Test _print_response_code color formatting for success and error codes.""" + class DummyResponse: """Response stub for successful status formatting.""" + status_code = 200 - reason = 'OK' + reason = "OK" apim._print_response_code(DummyResponse()) class ErrorResponse: """Response stub for error status formatting.""" + status_code = 500 - reason = 'Server Error' + 
reason = "Server Error" apim._print_response_code(ErrorResponse()) messages = [record.args[1] for record in apimrequests_patches.print_val.call_args_list] - assert any('200 - OK' in msg for msg in messages) - assert any('500 - Server Error' in msg for msg in messages) + assert any("200 - OK" in msg for msg in messages) + assert any("500 - Server Error" in msg for msg in messages) @pytest.mark.unit def test_poll_async_operation_success(apim, apimrequests_patches): """Test _poll_async_operation method with successful completion.""" mock_response = create_mock_http_response(status_code=200) - with patch('apimrequests.requests.get', return_value=mock_response): - with patch('apimrequests.time.sleep'): - result = apim._poll_async_operation('http://example.com/operation/123') + with patch("apimrequests.requests.get", return_value=mock_response): + with patch("apimrequests.time.sleep"): + result = apim._poll_async_operation("http://example.com/operation/123") assert result == mock_response - apimrequests_patches.print_ok.assert_called_once_with('Async operation completed successfully!') + apimrequests_patches.print_ok.assert_called_once_with("Async operation completed successfully!") @pytest.mark.unit def test_poll_async_operation_in_progress_then_success(apim, apimrequests_patches): """Test _poll_async_operation method with in-progress then success.""" # First call returns 202 (in progress), second call returns 200 (complete) - responses = [ - MagicMock(status_code=202), - MagicMock(status_code=200) - ] - with patch('apimrequests.requests.get', side_effect=responses) as mock_get, \ - patch('apimrequests.time.sleep') as mock_sleep: - result = apim._poll_async_operation('http://example.com/operation/123', poll_interval=1) + responses = [MagicMock(status_code=202), MagicMock(status_code=200)] + with patch("apimrequests.requests.get", side_effect=responses) as mock_get, patch("apimrequests.time.sleep") as mock_sleep: + result = 
apim._poll_async_operation("http://example.com/operation/123", poll_interval=1) assert result == responses[1] # Should return the final success response assert mock_get.call_count == 2 @@ -566,21 +550,21 @@ def test_poll_async_operation_in_progress_then_success(apim, apimrequests_patche def test_poll_async_operation_unexpected_status(apim, apimrequests_patches): """Test _poll_async_operation method with unexpected status code.""" mock_response = MagicMock(status_code=500) - with patch('apimrequests.requests.get', return_value=mock_response): - result = apim._poll_async_operation('http://example.com/operation/123') + with patch("apimrequests.requests.get", return_value=mock_response): + result = apim._poll_async_operation("http://example.com/operation/123") assert result == mock_response # Should return the error response - apimrequests_patches.print_error.assert_called_with('Unexpected status code during polling: 500') + apimrequests_patches.print_error.assert_called_with("Unexpected status code during polling: 500") @pytest.mark.unit def test_poll_async_operation_request_exception(apim, apimrequests_patches): """Test _poll_async_operation method with request exception.""" - with patch('apimrequests.requests.get', side_effect=requests.exceptions.RequestException('Connection error')): - result = apim._poll_async_operation('http://example.com/operation/123') + with patch("apimrequests.requests.get", side_effect=requests.exceptions.RequestException("Connection error")): + result = apim._poll_async_operation("http://example.com/operation/123") assert result is None - apimrequests_patches.print_error.assert_called_with('Error polling operation: Connection error') + apimrequests_patches.print_error.assert_called_with("Error polling operation: Connection error") @pytest.mark.unit @@ -598,13 +582,15 @@ def time_side_effect(): mock_response = MagicMock(status_code=202) - with patch('apimrequests.requests.get', return_value=mock_response), \ - 
patch('apimrequests.time.sleep'), \ - patch('apimrequests.time.time', side_effect=time_side_effect): - result = apim._poll_async_operation('http://example.com/operation/123', timeout=60) + with ( + patch("apimrequests.requests.get", return_value=mock_response), + patch("apimrequests.time.sleep"), + patch("apimrequests.time.time", side_effect=time_side_effect), + ): + result = apim._poll_async_operation("http://example.com/operation/123", timeout=60) assert result is None - apimrequests_patches.print_error.assert_called_with('Async operation timeout reached after 60 seconds') + apimrequests_patches.print_error.assert_called_with("Async operation timeout reached after 60 seconds") @pytest.mark.unit @@ -613,21 +599,18 @@ def test_single_post_async_success_with_location(apim, apimrequests_patches): # Mock initial 202 response with Location header initial_response = MagicMock() initial_response.status_code = 202 - initial_response.headers = {'Location': 'http://example.com/operation/123'} + initial_response.headers = {"Location": "http://example.com/operation/123"} # Mock final 200 response - final_response = create_mock_http_response( - status_code=200, - json_data={'result': 'completed'} - ) + final_response = create_mock_http_response(status_code=200, json_data={"result": "completed"}) apimrequests_patches.request.return_value = initial_response - with patch.object(apim, '_poll_async_operation', return_value=final_response) as mock_poll: - with patch.object(apim, '_print_response') as mock_print_response: - result = apim.singlePostAsync('/test', data={'test': 'data'}, msg='Async test') + with patch.object(apim, "_poll_async_operation", return_value=final_response) as mock_poll: + with patch.object(apim, "_print_response") as mock_print_response: + result = apim.singlePostAsync("/test", data={"test": "data"}, msg="Async test") - apimrequests_patches.print_message.assert_called_once_with('Async test', blank_above=True) + 
     apimrequests_patches.print_message.assert_called_once_with("Async test", blank_above=True)
     mock_poll.assert_called_once()
     mock_print_response.assert_called_once_with(final_response)
     assert result == '{\n "result": "completed"\n}'
@@ -641,10 +624,10 @@ def test_single_post_async_no_location_header(apim, apimrequests_patches):
     mock_response.headers = {}  # No Location header
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response') as mock_print_response:
-        result = apim.singlePostAsync('/test')
+    with patch.object(apim, "_print_response") as mock_print_response:
+        result = apim.singlePostAsync("/test")

-    apimrequests_patches.print_error.assert_called_once_with('No Location header found in 202 response')
+    apimrequests_patches.print_error.assert_called_once_with("No Location header found in 202 response")
     mock_print_response.assert_called_once_with(mock_response)
     assert result is None
@@ -652,14 +635,11 @@ def test_single_post_async_no_location_header(apim, apimrequests_patches):
 @pytest.mark.unit
 def test_single_post_async_non_async_response(apim, apimrequests_patches):
     """Test singlePostAsync method with non-async (immediate) response."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'immediate'}
-    )
+    mock_response = create_mock_http_response(status_code=200, json_data={"result": "immediate"})
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response') as mock_print_response:
-        result = apim.singlePostAsync('/test')
+    with patch.object(apim, "_print_response") as mock_print_response:
+        result = apim.singlePostAsync("/test")

     mock_print_response.assert_called_once_with(mock_response)
     assert result == '{\n "result": "immediate"\n}'
@@ -668,12 +648,12 @@ def test_single_post_async_non_async_response(apim, apimrequests_patches):
 @pytest.mark.unit
 def test_single_post_async_request_exception(apim, apimrequests_patches):
     """Test singlePostAsync method with request exception."""
-    apimrequests_patches.request.side_effect = requests.exceptions.RequestException('Connection error')
+    apimrequests_patches.request.side_effect = requests.exceptions.RequestException("Connection error")

-    result = apim.singlePostAsync('/test')
+    result = apim.singlePostAsync("/test")

     assert result is None
-    apimrequests_patches.print_error.assert_called_once_with('Error making request: Connection error')
+    apimrequests_patches.print_error.assert_called_once_with("Error making request: Connection error")
@@ -681,31 +661,28 @@ def test_single_post_async_failed_polling(apim, apimrequests_patches):
     """Test singlePostAsync method with failed async operation polling."""
     initial_response = MagicMock()
     initial_response.status_code = 202
-    initial_response.headers = {'Location': 'http://example.com/operation/123'}
+    initial_response.headers = {"Location": "http://example.com/operation/123"}
     apimrequests_patches.request.return_value = initial_response

-    with patch.object(apim, '_poll_async_operation', return_value=None) as mock_poll:
-        result = apim.singlePostAsync('/test')
+    with patch.object(apim, "_poll_async_operation", return_value=None) as mock_poll:
+        result = apim.singlePostAsync("/test")

     mock_poll.assert_called_once()
-    apimrequests_patches.print_error.assert_called_once_with('Async operation failed or timed out')
+    apimrequests_patches.print_error.assert_called_once_with("Async operation failed or timed out")
     assert result is None


 @pytest.mark.unit
 def test_single_post_async_path_without_leading_slash(apim, apimrequests_patches):
     """Test singlePostAsync method with PATH without leading slash."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    mock_response = create_mock_http_response(status_code=200, json_data={"result": "ok"})
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response'):
-        apim.singlePostAsync('test')
+    with patch.object(apim, "_print_response"):
+        apim.singlePostAsync("test")

     # Should call with the corrected URL
-    expected_url = DEFAULT_URL + '/test'
+    expected_url = DEFAULT_URL + "/test"
     apimrequests_patches.request.assert_called_once()
     args, _kwargs = apimrequests_patches.request.call_args
     assert args[1] == expected_url
@@ -714,61 +691,61 @@ def test_single_post_async_path_without_leading_slash(apim, apimrequests_patches
 @pytest.mark.unit
 def test_single_post_async_non_json_response(apim, apimrequests_patches):
     """Test singlePostAsync method with non-JSON response."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        headers={'Content-Type': 'text/plain'},
-        text='Plain text result'
-    )
+    mock_response = create_mock_http_response(status_code=200, headers={"Content-Type": "text/plain"}, text="Plain text result")
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response'):
-        result = apim.singlePostAsync('/test')
+    with patch.object(apim, "_print_response"):
+        result = apim.singlePostAsync("/test")

-    assert result == 'Plain text result'
+    assert result == "Plain text result"


 @pytest.mark.unit
 def test_print_response_code_2xx_non_200(apim, apimrequests_patches):
     """Test _print_response_code with 2xx status codes other than 200."""
+
     class DummyResponse:
         """Response stub for 2xx non-200 status formatting."""
+
         status_code = 201
-        reason = 'Created'
+        reason = "Created"

     apim._print_response_code(DummyResponse())

     # Verify print_val was called with colored output for success
     apimrequests_patches.print_val.assert_called_once()
     call_args = apimrequests_patches.print_val.call_args[0]
-    assert 'Response status' in call_args[0]
-    assert '201 - Created' in call_args[1]
+    assert "Response status" in call_args[0]
+    assert "201 - Created" in call_args[1]


 @pytest.mark.unit
 def test_print_response_code_3xx(apim, apimrequests_patches):
     """Test _print_response_code with 3xx redirect status codes."""
+
     class DummyResponse:
         """Response stub for 3xx status formatting."""
+
         status_code = 301
-        reason = 'Moved Permanently'
+        reason = "Moved Permanently"

     apim._print_response_code(DummyResponse())

     call_args = apimrequests_patches.print_val.call_args[0]
-    assert '301' in call_args[1]
+    assert "301" in call_args[1]


 @pytest.mark.unit
 def test_multi_request_session_exception_on_close(apim):
     """Test _multiRequest handles exception and ensures session is closed."""
-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = MagicMock()
-        mock_response = create_mock_http_response(json_data={'ok': True})
+        mock_response = create_mock_http_response(json_data={"ok": True})
         mock_session.request.return_value = mock_response
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
-            result = apim._multiRequest(HTTP_VERB.GET, '/test', 1)
+        with patch.object(apim, "_print_response_code"):
+            result = apim._multiRequest(HTTP_VERB.GET, "/test", 1)

     # Verify session was closed even after successful operation
     mock_session.close.assert_called_once()
@@ -778,34 +755,28 @@ def test_multi_request_session_exception_on_close(apim):
 @pytest.mark.unit
 def test_single_post_async_with_message(apim, apimrequests_patches):
     """Test singlePostAsync with message parameter."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    mock_response = create_mock_http_response(status_code=200, json_data={"result": "ok"})
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response'):
-        apim.singlePostAsync('/test', msg='Test async message')
+    with patch.object(apim, "_print_response"):
+        apim.singlePostAsync("/test", msg="Test async message")

-    apimrequests_patches.print_message.assert_called_once_with('Test async message', blank_above=True)
+    apimrequests_patches.print_message.assert_called_once_with("Test async message", blank_above=True)


 @pytest.mark.unit
 def test_single_post_async_with_headers(apim, apimrequests_patches):
     """Test singlePostAsync with custom headers."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    mock_response = create_mock_http_response(status_code=200, json_data={"result": "ok"})
     apimrequests_patches.request.return_value = mock_response

-    custom_headers = {'X-Custom': 'header-value'}
-    with patch.object(apim, '_print_response'):
-        apim.singlePostAsync('/test', headers=custom_headers)
+    custom_headers = {"X-Custom": "header-value"}
+    with patch.object(apim, "_print_response"):
+        apim.singlePostAsync("/test", headers=custom_headers)

     # Verify headers were merged
     call_kwargs = apimrequests_patches.request.call_args[1]
-    assert 'X-Custom' in call_kwargs['headers']
+    assert "X-Custom" in call_kwargs["headers"]
@@ -813,20 +784,16 @@ def test_single_post_async_non_json_final_response(apim, apimrequests_patches):
     """Test singlePostAsync with non-JSON response from polling."""
     initial_response = MagicMock()
     initial_response.status_code = 202
-    initial_response.headers = {'Location': 'http://example.com/operation/123'}
+    initial_response.headers = {"Location": "http://example.com/operation/123"}
     apimrequests_patches.request.return_value = initial_response

-    final_response = create_mock_http_response(
-        status_code=200,
-        headers={'Content-Type': 'text/plain'},
-        text='Plain text final result'
-    )
+    final_response = create_mock_http_response(status_code=200, headers={"Content-Type": "text/plain"}, text="Plain text final result")

-    with patch.object(apim, '_poll_async_operation', return_value=final_response):
-        with patch.object(apim, '_print_response') as mock_print_response:
-            result = apim.singlePostAsync('/test')
+    with patch.object(apim, "_poll_async_operation", return_value=final_response):
+        with patch.object(apim, "_print_response") as mock_print_response:
+            result = apim.singlePostAsync("/test")

-    assert result == 'Plain text final result'
+    assert result == "Plain text final result"
     mock_print_response.assert_called_once_with(final_response)
@@ -834,27 +801,24 @@ def test_single_post_async_non_json_final_response(apim, apimrequests_patches):
 def test_poll_async_operation_with_custom_headers(apim, apimrequests_patches):
     """Test _poll_async_operation with custom headers."""
     mock_response = create_mock_http_response(status_code=200)
-    custom_headers = {'X-Custom': 'value'}
+    custom_headers = {"X-Custom": "value"}

-    with patch('apimrequests.requests.get', return_value=mock_response) as mock_get:
-        result = apim._poll_async_operation('http://example.com/op', headers=custom_headers)
+    with patch("apimrequests.requests.get", return_value=mock_response) as mock_get:
+        result = apim._poll_async_operation("http://example.com/op", headers=custom_headers)

     assert result == mock_response
     # Verify custom headers were passed
     call_kwargs = mock_get.call_args[1]
-    assert call_kwargs['headers'] == custom_headers
+    assert call_kwargs["headers"] == custom_headers


 @pytest.mark.unit
 def test_request_no_message(apim, apimrequests_patches):
     """Test _request method when no message is provided."""
-    apimrequests_patches.request.return_value = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    apimrequests_patches.request.return_value = create_mock_http_response(status_code=200, json_data={"result": "ok"})

-    with patch.object(apim, '_print_response'):
-        apim._request(HTTP_VERB.GET, '/test')
+    with patch.object(apim, "_print_response"):
+        apim._request(HTTP_VERB.GET, "/test")

     # Verify print_message was not called when msg is None
     apimrequests_patches.print_message.assert_not_called()
@@ -863,13 +827,13 @@ def test_request_no_message(apim, apimrequests_patches):
 @pytest.mark.unit
 def test_multi_request_no_message(apim, apimrequests_patches):
     """Test _multiRequest method when no message is provided."""
-    response = create_mock_http_response(json_data={'result': 'ok'})
-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    response = create_mock_http_response(json_data={"result": "ok"})
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = create_mock_session_with_response(response)
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
-            apim._multiRequest(HTTP_VERB.GET, '/test', 1)
+        with patch.object(apim, "_print_response_code"):
+            apim._multiRequest(HTTP_VERB.GET, "/test", 1)

     # Verify print_message was not called when msg is None
     apimrequests_patches.print_message.assert_not_called()
@@ -878,14 +842,11 @@ def test_multi_request_no_message(apim, apimrequests_patches):
 @pytest.mark.unit
 def test_single_post_async_no_print_response(apim, apimrequests_patches):
     """Test singlePostAsync with printResponse=False."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    mock_response = create_mock_http_response(status_code=200, json_data={"result": "ok"})
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response') as mock_print_response:
-        result = apim.singlePostAsync('/test', printResponse=False)
+    with patch.object(apim, "_print_response") as mock_print_response:
+        result = apim.singlePostAsync("/test", printResponse=False)

     # When printResponse is False, _print_response should not be called
     mock_print_response.assert_not_called()
@@ -897,17 +858,14 @@ def test_single_post_async_202_with_location_no_print_response(apim, apimrequest
     """Test singlePostAsync with 202 response, location header, and printResponse=False."""
     initial_response = MagicMock()
     initial_response.status_code = 202
-    initial_response.headers = {'Location': 'http://example.com/operation/123'}
+    initial_response.headers = {"Location": "http://example.com/operation/123"}
     apimrequests_patches.request.return_value = initial_response

-    final_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'completed'}
-    )
+    final_response = create_mock_http_response(status_code=200, json_data={"result": "completed"})

-    with patch.object(apim, '_poll_async_operation', return_value=final_response) as mock_poll:
-        with patch.object(apim, '_print_response') as mock_print_response:
-            result = apim.singlePostAsync('/test', data={'test': 'data'}, printResponse=False)
+    with patch.object(apim, "_poll_async_operation", return_value=final_response) as mock_poll:
+        with patch.object(apim, "_print_response") as mock_print_response:
+            result = apim.singlePostAsync("/test", data={"test": "data"}, printResponse=False)

     mock_poll.assert_called_once()
     # When printResponse is False, _print_response should not be called for final response
@@ -923,10 +881,10 @@ def test_single_post_async_202_no_location_no_print_response(apim, apimrequests_
     mock_response.headers = {}  # No Location header
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response') as mock_print_response:
-        result = apim.singlePostAsync('/test', printResponse=False)
+    with patch.object(apim, "_print_response") as mock_print_response:
+        result = apim.singlePostAsync("/test", printResponse=False)

-    apimrequests_patches.print_error.assert_called_once_with('No Location header found in 202 response')
+    apimrequests_patches.print_error.assert_called_once_with("No Location header found in 202 response")
     # When printResponse is False, _print_response should not be called for initial 202 response
     mock_print_response.assert_not_called()
     assert result is None
@@ -935,13 +893,10 @@ def test_single_post_async_202_no_location_no_print_response(apim, apimrequests_
 @pytest.mark.unit
 def test_single_get_no_print_response(apim, apimrequests_patches):
     """Test singleGet with printResponse=False."""
-    apimrequests_patches.request.return_value = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    apimrequests_patches.request.return_value = create_mock_http_response(status_code=200, json_data={"result": "ok"})

-    with patch.object(apim, '_print_response') as mock_print_response:
-        result = apim.singleGet('/test', printResponse=False)
+    with patch.object(apim, "_print_response") as mock_print_response:
+        result = apim.singleGet("/test", printResponse=False)

     mock_print_response.assert_not_called()
     assert '{\n "result": "ok"\n}' in result
@@ -950,48 +905,45 @@ def test_single_get_no_print_response(apim, apimrequests_patches):
 @pytest.mark.unit
 def test_multi_get_no_print_response(apim):
     """Test multiGet with printResponse=False."""
-    response = create_mock_http_response(json_data={'result': 'ok'})
-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    response = create_mock_http_response(json_data={"result": "ok"})
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = create_mock_session_with_response(response)
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
-            result = apim.multiGet('/test', runs=1, printResponse=False)
+        with patch.object(apim, "_print_response_code"):
+            result = apim.multiGet("/test", runs=1, printResponse=False)

     assert len(result) == 1
-    assert result[0]['response'] == '{\n "result": "ok"\n}'
+    assert result[0]["response"] == '{\n "result": "ok"\n}'


 @pytest.mark.unit
 def test_single_post_async_no_custom_headers(apim, apimrequests_patches):
     """Test singlePostAsync without custom headers (None)."""
-    mock_response = create_mock_http_response(
-        status_code=200,
-        json_data={'result': 'ok'}
-    )
+    mock_response = create_mock_http_response(status_code=200, json_data={"result": "ok"})
     apimrequests_patches.request.return_value = mock_response

-    with patch.object(apim, '_print_response'):
-        result = apim.singlePostAsync('/test', headers=None)
+    with patch.object(apim, "_print_response"):
+        result = apim.singlePostAsync("/test", headers=None)

     assert result == '{\n "result": "ok"\n}'
     # Verify request was called with merged headers
     call_kwargs = apimrequests_patches.request.call_args[1]
-    assert 'headers' in call_kwargs
+    assert "headers" in call_kwargs


 @pytest.mark.unit
 def test_multi_request_session_exception_on_request(apim):
     """Test _multiRequest ensures session.close() is called even if request raises."""
-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = MagicMock()
         # Make request raise an exception after first call
-        mock_session.request.side_effect = requests.exceptions.RequestException('Network error')
+        mock_session.request.side_effect = requests.exceptions.RequestException("Network error")
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
+        with patch.object(apim, "_print_response_code"):
             with pytest.raises(requests.exceptions.RequestException):
-                apim._multiRequest(HTTP_VERB.GET, '/test', 1)
+                apim._multiRequest(HTTP_VERB.GET, "/test", 1)

     # Verify session was closed even after exception
     mock_session.close.assert_called_once()
@@ -1000,77 +952,79 @@ def test_multi_request_session_exception_on_request(apim):
 @pytest.mark.unit
 def test_multi_request_merges_custom_headers(apim):
     """Test _multiRequest merges passed headers with default headers."""
-    custom_headers = {'X-Custom-Header': 'custom-value', 'X-Request-Id': '123'}
+    custom_headers = {"X-Custom-Header": "custom-value", "X-Request-Id": "123"}

-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = MagicMock()
-        response = create_mock_http_response(json_data={'result': 'ok'})
+        response = create_mock_http_response(json_data={"result": "ok"})
         mock_session.request.return_value = response
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
-            apim._multiRequest(HTTP_VERB.GET, '/test', 1, headers=custom_headers, printResponse=False)
+        with patch.object(apim, "_print_response_code"):
+            apim._multiRequest(HTTP_VERB.GET, "/test", 1, headers=custom_headers, printResponse=False)

     # Verify headers.update was called with merged headers
     update_call_args = mock_session.headers.update.call_args
     merged_headers = update_call_args[0][0]

     # Check custom headers are included
-    assert merged_headers['X-Custom-Header'] == 'custom-value'
-    assert merged_headers['X-Request-Id'] == '123'
+    assert merged_headers["X-Custom-Header"] == "custom-value"
+    assert merged_headers["X-Request-Id"] == "123"

     # Check default headers are still there
-    assert 'Accept' in merged_headers
-    assert merged_headers['Accept'] == 'application/json'
+    assert "Accept" in merged_headers
+    assert merged_headers["Accept"] == "application/json"
     assert SUBSCRIPTION_KEY_PARAMETER_NAME in merged_headers


 @pytest.mark.unit
 def test_multi_get_merges_custom_headers(apim):
     """Test multiGet merges custom headers into requests."""
-    custom_headers = {'X-Custom-Header': 'custom-value'}
+    custom_headers = {"X-Custom-Header": "custom-value"}

-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = MagicMock()
-        response = create_mock_http_response(json_data={'result': 'ok'})
+        response = create_mock_http_response(json_data={"result": "ok"})
         mock_session.request.return_value = response
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
-            result = apim.multiGet('/test', runs=2, headers=custom_headers, printResponse=False)
+        with patch.object(apim, "_print_response_code"):
+            apim.multiGet("/test", runs=2, headers=custom_headers, printResponse=False)
+
+
 def test_single_request_merges_custom_headers(apim):
     """Test singleGet merges custom headers with default headers."""
-    custom_headers = {'X-Custom-Header': 'test-value'}
+    custom_headers = {"X-Custom-Header": "test-value"}

-    mock_response = create_mock_http_response(json_data={'result': 'ok'})
+    mock_response = create_mock_http_response(json_data={"result": "ok"})

-    with patch('apimrequests.requests.request') as mock_request:
+    with patch("apimrequests.requests.request") as mock_request:
         mock_request.return_value = mock_response

-        with patch.object(apim, '_print_response'):
-            apim.singleGet('/test', headers=custom_headers, printResponse=True)
+        with patch.object(apim, "_print_response"):
+            apim.singleGet("/test", headers=custom_headers, printResponse=True)

     # Verify merged headers were passed to request
     call_kwargs = mock_request.call_args[1]
-    merged_headers = call_kwargs['headers']
+    merged_headers = call_kwargs["headers"]

-    assert merged_headers['X-Custom-Header'] == 'test-value'
-    assert 'Accept' in merged_headers
+    assert merged_headers["X-Custom-Header"] == "test-value"
+    assert "Accept" in merged_headers
     assert SUBSCRIPTION_KEY_PARAMETER_NAME in merged_headers


 @pytest.mark.unit
 def test_multi_request_custom_headers_do_not_affect_other_runs(apim):
     """Test that custom headers persist across multiple runs in multiGet."""
-    custom_headers = {'X-Request-Id': 'same-id'}
+    custom_headers = {"X-Request-Id": "same-id"}

-    with patch('apimrequests.requests.Session') as mock_session_cls:
+    with patch("apimrequests.requests.Session") as mock_session_cls:
         mock_session = MagicMock()
-        response = create_mock_http_response(json_data={'result': 'ok'})
+        response = create_mock_http_response(json_data={"result": "ok"})
         mock_session.request.return_value = response
         mock_session_cls.return_value = mock_session

-        with patch.object(apim, '_print_response_code'):
-            result = apim.multiGet('/test', runs=3, headers=custom_headers, printResponse=False)
+        with patch.object(apim, "_print_response_code"):
+            result = apim.multiGet("/test", runs=3, headers=custom_headers, printResponse=False)

     # Verify headers.update was called once with merged headers
     assert mock_session.headers.update.call_count == 1
@@ -1078,7 +1032,7 @@ def test_multi_request_custom_headers_do_not_affect_other_runs(apim):
     # Verify headers contain custom header
     update_call_args = mock_session.headers.update.call_args
     merged_headers = update_call_args[0][0]
-    assert merged_headers['X-Request-Id'] == 'same-id'
+    assert merged_headers["X-Request-Id"] == "same-id"

     # Verify all 3 runs completed
     assert len(result) == 3
diff --git a/tests/python/test_charts.py b/tests/python/test_charts.py
index d27d42cb..010a61dc 100644
--- a/tests/python/test_charts.py
+++ b/tests/python/test_charts.py
@@ -2,13 +2,15 @@
 Unit tests for the Charts module.
 """

-from unittest.mock import patch, MagicMock
-import sys
-import os
 import json
-import pytest
-import pandas as pd
+import os
+import sys
+from unittest.mock import MagicMock, patch
+
 import charts
+import pandas as pd
+import pytest
+from apimtypes import HttpStatusCode
 from charts import BarChart

 # Add the shared/python directory to the Python path
@@ -187,7 +189,7 @@ def test_plot_barchart_data_processing(mock_dataframe, mock_plt, sample_api_resu
     assert first_row['Run'] == 1
     assert first_row['Response Time (ms)'] == 123.0  # 0.123 * 1000
     assert first_row['Backend Index'] == 1
-    assert first_row['Status Code'] == 200
+    assert first_row['Status Code'] == HttpStatusCode.OK


 @patch('charts.plt')
@@ -464,7 +466,7 @@ def test_average_line_calculation_normal_data(mock_pd, mock_plt, sample_api_resu
     run = entry['run']
     response_time = entry['response_time']
     status_code = entry['status_code']
-    if status_code == 200 and entry['response']:
+    if status_code == HttpStatusCode.OK and entry['response']:
         try:
             resp = json.loads(entry['response'])
             backend_index = resp.get('index', 99)
diff --git a/tests/python/test_infrastructures.py b/tests/python/test_infrastructures.py
index adcd1a17..7692e7b8 100644
--- a/tests/python/test_infrastructures.py
+++ b/tests/python/test_infrastructures.py
@@ -1011,19 +1011,19 @@ def test_infrastructure_with_all_custom_components(mock_utils, mock_policy_fragm
 def test_infrastructure_missing_required_params():
     """Test Infrastructure creation with missing required parameters."""
     with pytest.raises(TypeError):
-        infrastructures.Infrastructure()  # pylint: disable=no-value-for-parameter
+        infrastructures.Infrastructure()

     with pytest.raises(TypeError):
-        infrastructures.Infrastructure(infra=INFRASTRUCTURE.SIMPLE_APIM)  # pylint: disable=no-value-for-parameter
+        infrastructures.Infrastructure(infra=INFRASTRUCTURE.SIMPLE_APIM)


 @pytest.mark.unit
 def test_concrete_infrastructure_missing_params():
     """Test concrete infrastructure classes with missing parameters."""
     with pytest.raises(TypeError):
-        infrastructures.SimpleApimInfrastructure()  # pylint: disable=no-value-for-parameter
+        infrastructures.SimpleApimInfrastructure()

     with pytest.raises(TypeError):
-        infrastructures.SimpleApimInfrastructure(rg_location=TEST_LOCATION)  # pylint: disable=no-value-for-parameter
+        infrastructures.SimpleApimInfrastructure(rg_location=TEST_LOCATION)


 # ------------------------------
@@ -5500,8 +5500,8 @@ def test_cleanup_resources_parallel_thread_safe_consistent_signature(monkeypatch
     # Verify parameter types from annotations (using string representation for comparison)
     assert 'list[dict]' in str(sig.parameters['resources'].annotation)
-    assert sig.parameters['thread_prefix'].annotation == str
-    assert sig.parameters['thread_color'].annotation == str
+    assert sig.parameters['thread_prefix'].annotation is str
+    assert sig.parameters['thread_color'].annotation is str


 @pytest.mark.unit
diff --git a/tests/python/test_show_infrastructures.py b/tests/python/test_show_infrastructures.py
index c77d2123..4b1105f6 100644
--- a/tests/python/test_show_infrastructures.py
+++ b/tests/python/test_show_infrastructures.py
@@ -266,18 +266,18 @@ def test_display_infrastructures_table_formatting(monkeypatch):
     si.display_infrastructures(data, include_location=True)

     # Verify header row exists (contains #)
-    header_line = next((l for l in printed_lines if '#' in l and 'Infrastructure' in l), None)
+    header_line = next((line for line in printed_lines if '#' in line and 'Infrastructure' in line), None)
     assert header_line is not None
     assert 'Index' in header_line
     assert 'Resource Group' in header_line
     assert 'Location' in header_line

     # Verify separator row exists (all dashes)
-    separator_line = next((l for l in printed_lines if l and all(c in '- ' for c in l)), None)
+    separator_line = next((line for line in printed_lines if line and all(c in '- ' for c in line)), None)
     assert separator_line is not None

     # Verify data row exists
-    data_line = next((l for l in printed_lines if 'test-infra' in l), None)
+    data_line = next((line for line in printed_lines if 'test-infra' in line), None)
     assert data_line is not None
     assert 'test-rg-1' in data_line
     assert 'eastus' in data_line
diff --git a/tests/python/test_utils.py b/tests/python/test_utils.py
index 8752b514..2d531d9a 100644
--- a/tests/python/test_utils.py
+++ b/tests/python/test_utils.py
@@ -1904,10 +1904,10 @@ def test_infrastructure_notebook_helper_allow_update_false(monkeypatch, suppress
 def test_infrastructure_notebook_helper_missing_args():
     """Test InfrastructureNotebookHelper requires all arguments."""
     with pytest.raises(TypeError):
-        utils.InfrastructureNotebookHelper()  # pylint: disable=no-value-for-parameter
+        utils.InfrastructureNotebookHelper()

     with pytest.raises(TypeError):
-        utils.InfrastructureNotebookHelper('eastus')  # pylint: disable=no-value-for-parameter
+        utils.InfrastructureNotebookHelper('eastus')


 def test_does_infrastructure_exist_with_prompt_multiple_retries(monkeypatch, suppress_console):
diff --git a/uv.lock b/uv.lock
index f1412abf..f44dbc73 100644
--- a/uv.lock
+++ b/uv.lock
@@ -27,9 +27,9 @@ dependencies = [
 [package.dev-dependencies]
 dev = [
     { name = "coverage" },
-    { name = "pylint" },
     { name = "pytest" },
     { name = "pytest-cov" },
+    { name = "ruff" },
 ]

 [package.metadata]
@@ -46,9 +46,9 @@ requires-dist = [
 [package.metadata.requires-dev]
 dev = [
     { name = "coverage", specifier = ">=7.6.4" },
-    { name = "pylint", specifier = ">=4.0.0" },
     { name = "pytest", specifier = ">=9.0.0" },
     { name = "pytest-cov", specifier = ">=7.0.0" },
+    { name = "ruff", specifier = ">=0.9.0" },
 ]

 [[package]]
@@ -60,15 +60,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/81/29/5ecc3a15d5a33e31b26c11426c45c501e439cb865d0bff96315d86443b78/appnope-0.1.4-py2.py3-none-any.whl", hash = "sha256:502575ee11cd7a28c0205f379b525beefebab9d161b7c964670864014ed7213c", size = 4321, upload-time = "2024-02-06T09:43:09.663Z" },
 ]

-[[package]]
-name = "astroid"
-version = "4.0.3"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/a1/ca/c17d0f83016532a1ad87d1de96837164c99d47a3b6bbba28bd597c25b37a/astroid-4.0.3.tar.gz", hash = "sha256:08d1de40d251cc3dc4a7a12726721d475ac189e4e583d596ece7422bc176bda3", size = 406224, upload-time = "2026-01-03T22:14:26.096Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/ce/66/686ac4fc6ef48f5bacde625adac698f41d5316a9753c2b20bb0931c9d4e2/astroid-4.0.3-py3-none-any.whl", hash = "sha256:864a0a34af1bd70e1049ba1e61cee843a7252c826d97825fcee9b2fcbd9e1b14", size = 276443, upload-time = "2026-01-03T22:14:24.412Z" },
-]
-
 [[package]]
 name = "asttokens"
 version = "3.0.1"
@@ -398,15 +389,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/4e/8c/f3147f5c4b73e7550fe5f9352eaa956ae838d5c51eb58e7a25b9f3e2643b/decorator-5.2.1-py3-none-any.whl", hash = "sha256:d316bb415a2d9e2d2b3abcc4084c6502fc09240e292cd76a76afc106a1c8e04a", size = 9190, upload-time = "2025-02-24T04:41:32.565Z" },
 ]

-[[package]]
-name = "dill"
-version = "0.4.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/81/e1/56027a71e31b02ddc53c7d65b01e68edf64dea2932122fe7746a516f75d5/dill-0.4.1.tar.gz", hash = "sha256:423092df4182177d4d8ba8290c8a5b640c66ab35ec7da59ccfa00f6fa3eea5fa", size = 187315, upload-time = "2026-01-19T02:36:56.85Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/1e/77/dc8c558f7593132cf8fefec57c4f60c83b16941c574ac5f619abb3ae7933/dill-0.4.1-py3-none-any.whl", hash = "sha256:1e1ce33e978ae97fcfcff5638477032b801c46c7c65cf717f95fbc2248f79a9d", size = 120019, upload-time = "2026-01-19T02:36:55.663Z" },
-]
-
 [[package]]
 name = "executing"
 version = "2.2.1"
@@ -532,15 +514,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/d9/33/1f075bf72b0b747cb3288d011319aaf64083cf2efef8354174e3ed4540e2/ipython_pygments_lexers-1.1.1-py3-none-any.whl", hash = "sha256:a9462224a505ade19a605f71f8fa63c2048833ce50abc86768a0d81d876dc81c", size = 8074, upload-time = "2025-01-17T11:24:33.271Z" },
 ]

-[[package]]
-name = "isort"
-version = "7.0.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/63/53/4f3c058e3bace40282876f9b553343376ee687f3c35a525dc79dbd450f88/isort-7.0.0.tar.gz", hash = "sha256:5513527951aadb3ac4292a41a16cbc50dd1642432f5e8c20057d414bdafb4187", size = 805049, upload-time = "2025-10-11T13:30:59.107Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/7f/ed/e3705d6d02b4f7aea715a353c8ce193efd0b5db13e204df895d38734c244/isort-7.0.0-py3-none-any.whl", hash = "sha256:1bcabac8bc3c36c7fb7b98a76c8abb18e0f841a3ba81decac7691008592499c1", size = 94672, upload-time = "2025-10-11T13:30:57.665Z" },
-]
-
 [[package]]
 name = "jedi"
 version = "0.19.2"
@@ -720,15 +693,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/af/33/ee4519fa02ed11a94aef9559552f3b17bb863f2ecfe1a35dc7f548cde231/matplotlib_inline-0.2.1-py3-none-any.whl", hash = "sha256:d56ce5156ba6085e00a9d54fead6ed29a9c47e215cd1bba2e976ef39f5710a76", size = 9516, upload-time = "2025-10-23T09:00:20.675Z" },
 ]

-[[package]]
-name = "mccabe"
-version = "0.7.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/e7/ff/0ffefdcac38932a54d2b5eed4e0ba8a408f215002cd178ad1df0f2806ff8/mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325", size = 9658, upload-time = "2022-01-24T01:14:51.113Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/27/1a/1f68f9ba0c207934b35b86a8ca3aad8395a3d6dd7921c0686e23853ff5a9/mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e", size = 7350, upload-time = "2022-01-24T01:14:49.62Z" },
-]
-
 [[package]]
 name = "nest-asyncio"
 version = "1.6.0"
@@ -1053,24 +1017,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/61/ad/689f02752eeec26aed679477e80e632ef1b682313be70793d798c1d5fc8f/PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb", size = 22997, upload-time = "2024-11-28T03:43:27.893Z" },
 ]

-[[package]]
-name = "pylint"
-version = "4.0.4"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "astroid" },
-    { name = "colorama", marker = "sys_platform == 'win32'" },
-    { name = "dill" },
-    { name = "isort" },
-    { name = "mccabe" },
-    { name = "platformdirs" },
-    { name = "tomlkit" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/5a/d2/b081da1a8930d00e3fc06352a1d449aaf815d4982319fab5d8cdb2e9ab35/pylint-4.0.4.tar.gz", hash = "sha256:d9b71674e19b1c36d79265b5887bf8e55278cbe236c9e95d22dc82cf044fdbd2", size = 1571735, upload-time = "2025-11-30T13:29:04.315Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/a6/92/d40f5d937517cc489ad848fc4414ecccc7592e4686b9071e09e64f5e378e/pylint-4.0.4-py3-none-any.whl", hash = "sha256:63e06a37d5922555ee2c20963eb42559918c20bd2b21244e4ef426e7c43b92e0", size = 536425, upload-time = "2025-11-30T13:29:02.53Z" },
-]
-
 [[package]]
 name = "pyparsing"
 version = "3.3.2"
@@ -1189,6 +1135,31 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
 ]

+[[package]]
+name = "ruff"
+version = "0.15.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/06/04/eab13a954e763b0606f460443fcbf6bb5a0faf06890ea3754ff16523dce5/ruff-0.15.2.tar.gz", hash = "sha256:14b965afee0969e68bb871eba625343b8673375f457af4abe98553e8bbb98342", size = 4558148, upload-time = "2026-02-19T22:32:20.271Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/2f/70/3a4dc6d09b13cb3e695f28307e5d889b2e1a66b7af9c5e257e796695b0e6/ruff-0.15.2-py3-none-linux_armv6l.whl", hash = "sha256:120691a6fdae2f16d65435648160f5b81a9625288f75544dc40637436b5d3c0d", size = 10430565, upload-time = "2026-02-19T22:32:41.824Z" },
+    { url = "https://files.pythonhosted.org/packages/71/0b/bb8457b56185ece1305c666dc895832946d24055be90692381c31d57466d/ruff-0.15.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:a89056d831256099658b6bba4037ac6dd06f49d194199215befe2bb10457ea5e", size = 10820354, upload-time = "2026-02-19T22:32:07.366Z" },
+    { url = "https://files.pythonhosted.org/packages/2d/c1/e0532d7f9c9e0b14c46f61b14afd563298b8b83f337b6789ddd987e46121/ruff-0.15.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:e36dee3a64be0ebd23c86ffa3aa3fd3ac9a712ff295e192243f814a830b6bd87", size = 10170767, upload-time = "2026-02-19T22:32:13.188Z" },
+    { url = "https://files.pythonhosted.org/packages/47/e8/da1aa341d3af017a21c7a62fb5ec31d4e7ad0a93ab80e3a508316efbcb23/ruff-0.15.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9fb47b6d9764677f8c0a193c0943ce9a05d6763523f132325af8a858eadc2b9", size = 10529591, upload-time = "2026-02-19T22:32:02.547Z" },
+    { url = "https://files.pythonhosted.org/packages/93/74/184fbf38e9f3510231fbc5e437e808f0b48c42d1df9434b208821efcd8d6/ruff-0.15.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f376990f9d0d6442ea9014b19621d8f2aaf2b8e39fdbfc79220b7f0c596c9b80", size = 10260771, upload-time = "2026-02-19T22:32:36.938Z" },
+    { url = "https://files.pythonhosted.org/packages/05/ac/605c20b8e059a0bc4b42360414baa4892ff278cec1c91fff4be0dceedefd/ruff-0.15.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2dcc987551952d73cbf5c88d9fdee815618d497e4df86cd4c4824cc59d5dd75f", size = 11045791, upload-time = "2026-02-19T22:32:31.642Z" },
+    { url = "https://files.pythonhosted.org/packages/fd/52/db6e419908f45a894924d410ac77d64bdd98ff86901d833364251bd08e22/ruff-0.15.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:42a47fd785cbe8c01b9ff45031af875d101b040ad8f4de7bbb716487c74c9a77", size = 11879271, upload-time = "2026-02-19T22:32:29.305Z" },
+    { url = "https://files.pythonhosted.org/packages/3e/d8/7992b18f2008bdc9231d0f10b16df7dda964dbf639e2b8b4c1b4e91b83af/ruff-0.15.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cbe9f49354866e575b4c6943856989f966421870e85cd2ac94dccb0a9dcb2fea", size = 11303707, upload-time = "2026-02-19T22:32:22.492Z" },
+    { url = "https://files.pythonhosted.org/packages/d7/02/849b46184bcfdd4b64cde61752cc9a146c54759ed036edd11857e9b8443b/ruff-0.15.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7a672c82b5f9887576087d97be5ce439f04bbaf548ee987b92d3a7dede41d3a", size = 11149151, upload-time = "2026-02-19T22:32:44.234Z" },
+    { url = "https://files.pythonhosted.org/packages/70/04/f5284e388bab60d1d3b99614a5a9aeb03e0f333847e2429bebd2aaa1feec/ruff-0.15.2-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:72ecc64f46f7019e2bcc3cdc05d4a7da958b629a5ab7033195e11a438403d956", size = 11091132, upload-time = "2026-02-19T22:32:24.691Z" },
+    { url =
"https://files.pythonhosted.org/packages/fa/ae/88d844a21110e14d92cf73d57363fab59b727ebeabe78009b9ccb23500af/ruff-0.15.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:8dcf243b15b561c655c1ef2f2b0050e5d50db37fe90115507f6ff37d865dc8b4", size = 10504717, upload-time = "2026-02-19T22:32:26.75Z" }, + { url = "https://files.pythonhosted.org/packages/64/27/867076a6ada7f2b9c8292884ab44d08fd2ba71bd2b5364d4136f3cd537e1/ruff-0.15.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:dab6941c862c05739774677c6273166d2510d254dac0695c0e3f5efa1b5585de", size = 10263122, upload-time = "2026-02-19T22:32:10.036Z" }, + { url = "https://files.pythonhosted.org/packages/e7/ef/faf9321d550f8ebf0c6373696e70d1758e20ccdc3951ad7af00c0956be7c/ruff-0.15.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:1b9164f57fc36058e9a6806eb92af185b0697c9fe4c7c52caa431c6554521e5c", size = 10735295, upload-time = "2026-02-19T22:32:39.227Z" }, + { url = "https://files.pythonhosted.org/packages/2f/55/e8089fec62e050ba84d71b70e7834b97709ca9b7aba10c1a0b196e493f97/ruff-0.15.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:80d24fcae24d42659db7e335b9e1531697a7102c19185b8dc4a028b952865fd8", size = 11241641, upload-time = "2026-02-19T22:32:34.617Z" }, + { url = "https://files.pythonhosted.org/packages/23/01/1c30526460f4d23222d0fabd5888868262fd0e2b71a00570ca26483cd993/ruff-0.15.2-py3-none-win32.whl", hash = "sha256:fd5ff9e5f519a7e1bd99cbe8daa324010a74f5e2ebc97c6242c08f26f3714f6f", size = 10507885, upload-time = "2026-02-19T22:32:15.635Z" }, + { url = "https://files.pythonhosted.org/packages/5c/10/3d18e3bbdf8fc50bbb4ac3cc45970aa5a9753c5cb51bf9ed9a3cd8b79fa3/ruff-0.15.2-py3-none-win_amd64.whl", hash = "sha256:d20014e3dfa400f3ff84830dfb5755ece2de45ab62ecea4af6b7262d0fb4f7c5", size = 11623725, upload-time = "2026-02-19T22:32:04.947Z" }, + { url = "https://files.pythonhosted.org/packages/6d/78/097c0798b1dab9f8affe73da9642bb4500e098cb27fd8dc9724816ac747b/ruff-0.15.2-py3-none-win_arm64.whl", hash = 
"sha256:cabddc5822acdc8f7b5527b36ceac55cc51eec7b1946e60181de8fe83ca8876e", size = 10941649, upload-time = "2026-02-19T22:32:18.108Z" }, +] + [[package]] name = "six" version = "1.17.0" @@ -1212,15 +1183,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/f1/7b/ce1eafaf1a76852e2ec9b22edecf1daa58175c090266e9f6c64afcd81d91/stack_data-0.6.3-py3-none-any.whl", hash = "sha256:d5558e0c25a4cb0853cddad3d77da9891a08cb85dd9f9f91b9f8cd66e511e695", size = 24521, upload-time = "2023-09-30T13:58:03.53Z" }, ] -[[package]] -name = "tomlkit" -version = "0.14.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/c3/af/14b24e41977adb296d6bd1fb59402cf7d60ce364f90c890bd2ec65c43b5a/tomlkit-0.14.0.tar.gz", hash = "sha256:cf00efca415dbd57575befb1f6634c4f42d2d87dbba376128adb42c121b87064", size = 187167, upload-time = "2026-01-13T01:14:53.304Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/b5/11/87d6d29fb5d237229d67973a6c9e06e048f01cf4994dee194ab0ea841814/tomlkit-0.14.0-py3-none-any.whl", hash = "sha256:592064ed85b40fa213469f81ac584f67a4f2992509a7c3ea2d632208623a3680", size = 39310, upload-time = "2026-01-13T01:14:51.965Z" }, -] - [[package]] name = "tornado" version = "6.5.4"