diff --git a/docs/authoring/ai_checklist_field_labeling.md b/docs/authoring/ai_checklist_field_labeling.md
new file mode 100644
index 000000000..64a36c52e
--- /dev/null
+++ b/docs/authoring/ai_checklist_field_labeling.md
@@ -0,0 +1,23 @@
+---
+id: ai_checklist_field_labeling
+title: "AI checklist: field labeling"
+sidebar_label: "Checklist: field labeling"
+slug: ai_checklist_field_labeling
+---
+
+Use this checklist when AI is labeling PDF/DOCX fields.
+
+- [ ] Ensure labels follow AssemblyLine variable naming conventions.
+- [ ] Use clear nouns for parties and roles (`users`, `other_parties`, etc.).
+- [ ] Use contextual date names (not generic `day`, `month`, `year` fields).
+- [ ] Ensure repeated fields use consistent suffix/index patterns.
+- [ ] Confirm no reserved variable names are used.
+- [ ] Validate field labels against expected interview logic before weaving.
+- [ ] Re-check high-risk labels manually (party names, signatures, addresses, dates).
+
+Related guides (canonical details):
+
+- [Field labels to use in template files](doc_vars_reference.md)
+- [PDF templates](pdf_templates.md)
+- [DOCX templates](docx_templates.md)
+- [Docassemble variable naming in Python](../coding_style/python.md)
diff --git a/docs/authoring/ai_checklist_project_setup.md b/docs/authoring/ai_checklist_project_setup.md
new file mode 100644
index 000000000..8fbfaab9a
--- /dev/null
+++ b/docs/authoring/ai_checklist_project_setup.md
@@ -0,0 +1,23 @@
+---
+id: ai_checklist_project_setup
+title: "AI checklist: project setup and context"
+sidebar_label: "Checklist: project setup and context"
+slug: ai_checklist_project_setup
+---
+
+Use this checklist before asking an AI to generate or edit interview code.
+
+- [ ] Create package scaffold with `dacreate` and initialize Git.
+- [ ] Put source templates in `docassemble//data/templates/`.
+- [ ] Gather authoritative legal references for this specific form and jurisdiction.
+- [ ] Tell the AI the exact jurisdiction, court level, and filing context.
+- [ ] Tell the AI which documents are canonical and which are optional.
+- [ ] Confirm your package folders and filenames are consistent.
+- [ ] Decide whether to use local tools, REST, or MCP endpoints.
+
+Related guides (canonical details):
+
+- [Coding with AI assistance](authoring_with_ai.md)
+- [Use MCP and REST endpoints for AI-assisted coding](ai_mcp_and_rest.md)
+- [GitHub workflow](github.md)
+- [Introduction to project architecture](../get_started/al_project_architecture.md)
diff --git a/docs/authoring/ai_checklist_quality_publish.md b/docs/authoring/ai_checklist_quality_publish.md
new file mode 100644
index 000000000..743afae33
--- /dev/null
+++ b/docs/authoring/ai_checklist_quality_publish.md
@@ -0,0 +1,23 @@
+---
+id: ai_checklist_quality_publish
+title: "AI checklist: quality checks and publishing readiness"
+sidebar_label: "Checklist: quality and publishing"
+slug: ai_checklist_quality_publish
+---
+
+Use this checklist before publishing or handing off for legal/content review.
+
+- [ ] Run through the interview end-to-end with realistic sample facts.
+- [ ] Confirm review screens show human-readable values (not raw variable values).
+- [ ] Verify all attachment mappings produce expected PDF/DOCX output.
+- [ ] Check translations and translation-safe `choices` patterns.
+- [ ] Confirm reading level and plain-language standards.
+- [ ] Ensure metadata and publishing fields are complete and correct.
+- [ ] Capture open legal questions or assumptions for attorney review.
+
+Related guides (canonical details):
+
+- [Coding style for YAML translation](../coding_style/yaml_translation.md)
+- [Writing good questions](../style_guide/question_style_overview.md)
+- [Editing your interview](customizing_interview.md)
+- [Metadata for publishing generated YAML interviews](weaver_code_anatomy.md#interview-metadata-and-metadata-for-publishing-on-courtformsonline)
diff --git a/docs/authoring/ai_checklist_weaver_editing.md b/docs/authoring/ai_checklist_weaver_editing.md
new file mode 100644
index 000000000..dc444c7a4
--- /dev/null
+++ b/docs/authoring/ai_checklist_weaver_editing.md
@@ -0,0 +1,23 @@
+---
+id: ai_checklist_weaver_editing
+title: "AI checklist: Weaver generation and interview editing"
+sidebar_label: "Checklist: Weaver generation and editing"
+slug: ai_checklist_weaver_editing
+---
+
+Use this checklist after labels are ready and you are generating a draft interview.
+
+- [ ] Run Weaver generation (UI, REST, or MCP workflow) to produce first-pass YAML.
+- [ ] Confirm generated files are copied into the right package folders.
+- [ ] Review and simplify interview flow; merge or split screens as needed.
+- [ ] Replace weak/placeholder prompts with plain-language text.
+- [ ] Ensure object patterns are used correctly (`name_fields`, `address_fields`, etc.).
+- [ ] Remove unnecessary default-role or list-gathering screens.
+- [ ] Confirm conditions, required fields, and screen order match legal workflow.
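If you run the Weaver generation step over REST instead of the UI, your agent usually needs to wait for an async job before it can review the first-pass YAML. The sketch below is a hedged illustration only: the job-status endpoint comes from the MCP/REST guide in this section, but the server URL, the API key, and the `status` values checked are placeholder assumptions, not a documented response shape.

```python
import json
import time
import urllib.request

SERVER = "https://YOURSERVER"  # placeholder: your Docassemble server
API_KEY = "YOUR_API_KEY"      # placeholder: a Docassemble API key


def job_url(server, job_id):
    """Build the status URL for an async Weaver job."""
    return f"{server}/al/api/v1/weaver/jobs/{job_id}"


def wait_for_job(job_id, poll_seconds=5, max_tries=60):
    """Poll until the job reports a terminal status (field name and values assumed)."""
    for _ in range(max_tries):
        req = urllib.request.Request(
            job_url(SERVER, job_id),
            headers={"X-API-Key": API_KEY},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            job = json.loads(resp.read().decode("utf-8"))
        if job.get("status") in ("completed", "failed"):  # assumed status values
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"Weaver job {job_id} did not finish")
```

Check the server's `/al/api/v1/weaver/openapi.json` for the actual response schema before relying on any field names.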
+
+Related guides (canonical details):
+
+- ["Weaving" your form into a draft interview](weaver_overview.md)
+- [Editing your interview](customizing_interview.md)
+- [Writing your own review screen](writing_review_screen.md)
+- [Dynamic phrases based on prior answers](dynamic_phrasing_based_on_values.md)
diff --git a/docs/authoring/ai_mcp_and_rest.md b/docs/authoring/ai_mcp_and_rest.md
new file mode 100644
index 000000000..6e79276cc
--- /dev/null
+++ b/docs/authoring/ai_mcp_and_rest.md
@@ -0,0 +1,114 @@
+---
+id: ai_mcp_and_rest
+title: Use MCP and REST endpoints for AI-assisted coding
+sidebar_label: Use MCP and REST endpoints
+slug: ai_mcp_and_rest
+---
+
+You can use AI-assisted authoring tools without installing `docassemble.ALWeaver` or `docassemble.ALDashboard` in your local Python environment.
+
+Instead, call the APIs on a running Docassemble server:
+
+- ALWeaver REST API (`/al/api/v1/weaver...`)
+- ALDashboard REST API (`/al/api/v1/dashboard/...`)
+- ALDashboard MCP bridge (`/al/api/v1/mcp`)
+
+## Why use APIs instead of local package installs?
+
+- Keeps your AI workflow lightweight on local machines.
+- Lets your AI agent work against one shared, server-side environment.
+- Avoids local dependency setup for OCR, PDF tooling, and Docassemble internals.
+- Makes async jobs available for longer-running tasks.
+
+## Authentication
+
+These endpoints use Docassemble API auth (`api_verify()`), typically:
+
+- `X-API-Key: YOUR_API_KEY`, or
+- `Authorization: Bearer ...` (if configured on your server).
+
+## Option 1: MCP bridge (tool discovery + tool execution)
+
+The MCP bridge is exposed by ALDashboard:
+
+- `POST /al/api/v1/mcp` (JSON-RPC 2.0)
+- `GET /al/api/v1/mcp` (metadata)
+- `GET /al/api/v1/mcp/tools` (tool list)
+- `GET /al/api/v1/mcp/docs` (human docs)
+
+Supported methods:
+
+- `initialize`
+- `ping`
+- `tools/list`
+- `tools/call`
+
+Use this when your AI coding tool can speak MCP or JSON-RPC and you want dynamic tool discovery.
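If your agent scripts HTTP itself rather than speaking MCP natively, the JSON-RPC exchange can be driven from Python's standard library. This is a minimal sketch: the server URL and API key are placeholders, and only the `tools/list` and `tools/call` methods documented above are assumed to exist.

```python
import json
import urllib.request

SERVER = "https://YOURSERVER"  # placeholder: your Docassemble server
API_KEY = "YOUR_API_KEY"      # placeholder: a Docassemble API key


def jsonrpc_body(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request body for the MCP bridge."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}


def call_mcp(method, params, request_id=1):
    """POST one JSON-RPC call to the ALDashboard MCP bridge and parse the reply."""
    req = urllib.request.Request(
        f"{SERVER}/al/api/v1/mcp",
        data=json.dumps(jsonrpc_body(method, params, request_id)).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Typical sequence (needs a live server, so left commented out):
# tools = call_mcp("tools/list", {})
# result = call_mcp("tools/call", {"name": "...", "arguments": {}}, request_id=2)
```

The same two calls are shown as raw `curl` commands in the examples that follow.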
+
+### Example: list MCP tools
+
+```bash
+curl -X POST "https://YOURSERVER/al/api/v1/mcp" \
+  -H "Content-Type: application/json" \
+  -H "X-API-Key: YOUR_API_KEY" \
+  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
+```
+
+### Example: call a discovered tool
+
+```bash
+curl -X POST "https://YOURSERVER/al/api/v1/mcp" \
+  -H "Content-Type: application/json" \
+  -H "X-API-Key: YOUR_API_KEY" \
+  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"aldashboard.get_al_api_v1_dashboard_openapi_json","arguments":{}}}'
+```
+
+`tools/call` reuses the same request authentication context, so you do not need a second API key just for MCP.
+
+## Option 2: Direct REST calls
+
+Use REST directly when you want explicit control over endpoints and payloads.
+
+### ALWeaver API
+
+- `POST /al/api/v1/weaver`
+- `GET /al/api/v1/weaver/jobs/{job_id}`
+- `DELETE /al/api/v1/weaver/jobs/{job_id}`
+- `GET /al/api/v1/weaver/openapi.json`
+- `GET /al/api/v1/weaver/docs`
+
+### ALDashboard API
+
+Examples include:
+
+- `POST /al/api/v1/dashboard/docx/auto-label`
+- `POST /al/api/v1/dashboard/docx/relabel`
+- `POST /al/api/v1/dashboard/pdf/fields/detect`
+- `POST /al/api/v1/dashboard/review-screen/draft`
+- `GET /al/api/v1/dashboard/jobs/{job_id}`
+- `GET /al/api/v1/dashboard/openapi.json`
+- `GET /al/api/v1/dashboard/docs`
+
+## Async mode for long-running jobs
+
+Many endpoints support async processing:
+
+- include `mode=async` (or `async=true`)
+- poll `/jobs/{job_id}`
+- optionally download artifacts from `/jobs/{job_id}/download` (ALDashboard)
+
+To enable async workers, configure:
+
+```yaml
+celery modules:
+  - docassemble.ALWeaver.api_weaver_worker
+  - docassemble.ALDashboard.api_dashboard_worker
+```
+
+## Recommended workflow
+
+1. Use [Coding with AI assistance](authoring_with_ai.md) for overall flow.
+2. Use MCP `tools/list` to discover available server capabilities.
+3. Use REST/MCP calls for concrete tasks (labeling, draft generation, validation).
+4. Apply the focused checklists in this section to review outputs before publishing.
+
diff --git a/docs/authoring/authoring_with_ai.md b/docs/authoring/authoring_with_ai.md
index b47a6b638..d52cd26a5 100644
--- a/docs/authoring/authoring_with_ai.md
+++ b/docs/authoring/authoring_with_ai.md
@@ -1,7 +1,7 @@
 ---
 id: authoring_with_ai
-title: Authoring with AI assistance
-sidebar_label: Authoring with AI assistance
+title: Coding with AI assistance
+sidebar_label: Coding with AI assistance
 slug: authoring_with_ai
 ---
 
@@ -9,6 +9,14 @@ Here's an experimental workflow for using AI assistance to generate Docassemble
 while leveraging the Assembly Line framework's predictable, rules-based
 assistance at every stage where it can be helpful.
 
+For API-first workflows, see:
+
+- [Use MCP and REST endpoints for AI-assisted coding](ai_mcp_and_rest.md)
+- [AI checklist: project setup and context](ai_checklist_project_setup.md)
+- [AI checklist: field labeling](ai_checklist_field_labeling.md)
+- [AI checklist: Weaver generation and interview editing](ai_checklist_weaver_editing.md)
+- [AI checklist: quality checks and publishing](ai_checklist_quality_publish.md)
+
 You can follow these instructions with any AI coding assistant: one that runs
 in a local code editor is best in order to run the different pieces.
 You might use:
diff --git a/sidebars.js b/sidebars.js
index c85f6eb2f..54ac6bb1c 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -97,7 +97,18 @@ module.exports = {
         'authoring/customizing_interview',
         'authoring/writing_review_screen',
         'authoring/yaml_anatomy',
-        'authoring/authoring_with_ai',
+        {
+          label: 'Coding with AI assistance',
+          type: 'category',
+          items: [
+            'authoring/authoring_with_ai',
+            'authoring/ai_mcp_and_rest',
+            'authoring/ai_checklist_project_setup',
+            'authoring/ai_checklist_field_labeling',
+            'authoring/ai_checklist_weaver_editing',
+            'authoring/ai_checklist_quality_publish',
+          ],
+        },
       ],
     },
     {