Conversation
Inputs

Is the context object formally structured, or is it basically random and the LLM figures out meaning from it? Right now we only have two parts of the app which build context, and they're both quite naturally contextualised. But I'm not sure how well that works in other places. For example, you can be on the Workflow Diagram and have a job selected - should that populate the …

I guess I'm wondering if we should have a formal structure for the context object which is used across services. I'm making this up on the fly, but something like: …

The comments show the complexity of this stuff. We don't need to be perfect right now, but I do want to start off on the right foot. What we're trying to do here is establish a reasonably future-proof structure, in a way that helps the app know how to assemble context. Alternatively, we'd have to define different context structures for different pages (way less keen on that).

A few other comments now that I'm thinking about this.

Outputs

The outputs look good.
I think we should represent RAG results in a more structured way - one day I'd really like to report RAG sources to the user. So a top-level attachment might want more metadata, but we can add that as we need it. I'm thinking of …

I guess the other question here is: should attachments be "sticky" to a session and then referenced in the content? Because we presumably save all attachments in the chat history, so that'll get pretty bloaty. Then again, I can't believe an attachment will be used more than once? Maybe a link, but that doesn't matter. So the structure is probably correct.
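The "more structured RAG results" idea above could look something like the sketch below. This is purely illustrative: every field name (`metadata`, `source`, `score`) and the example values are assumptions, not part of the proposal.

```python
# Hypothetical shape for a structured RAG attachment, sketched from the
# discussion above. All field names and values are assumptions.
rag_attachment = {
    "type": "document",
    "content": "…retrieved passage…",
    "metadata": {
        "source": "https://example.com/source-doc",  # where the passage came from
        "score": 0.82,                               # retrieval relevance score
    },
}
```

Keeping source and score in a nested `metadata` object would let the UI report RAG sources later without changing the top-level attachment shape.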
Short Description
Suggested payloads for the Global Agent service. Note that without these changes, it would work with the existing formats used in `job_chat` and `workflow_chat`.

Input Payload
```jsonc
{
  "content": "string (REQUIRED)",         // User message
  "context": {                            // All contextual information
    "workflow_yaml": "string (optional)", // Current workflow YAML
    "job_code": {                         // Job code context (optional)
      "expression": "string",
      "page_name": "string",
      "adaptor": "string"
    },
    "errors": "string (optional)"         // Error context
  },
  "history": [                            // Chat history (optional)
    { "role": "user|assistant", "content": "string", "attachments": [] }
  ],
  "stream": false,
  "read_only": false,
  "api_key": "string (optional)"
}
```

Output Payload
```jsonc
{
  "response": "string",                   // Main text response
  "attachments": [                        // Output attachments
    {
      "type": "workflow_yaml|job_code|document|link",
      "content": "string"
    }
  ],
  "history": [                            // Updated conversation history
    { "role": "user|assistant", "content": "string", "attachments": [] }
  ],
  "usage": {                              // Token usage (4 fields)
    "input_tokens": 0,
    "output_tokens": 0,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  },
  "meta": {                               // Execution metadata
    "agents": ["router", "planner", "workflow_agent"], // Execution chain
    "router_confidence": 5,               // Optional details
    "planner_iterations": 2
  }
}
```

A more detailed breakdown of the changes, including motivations (if not provided in the issue).
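As a sanity check on the shapes above, here is a minimal client-side sketch in Python. The helper names (`build_request`, `attachments_by_type`, `total_tokens`) are made up for illustration and don't exist in the codebase; only the payload field names come from the proposal.

```python
def build_request(content, *, workflow_yaml=None, job_code=None, errors=None,
                  history=None, stream=False, read_only=False, api_key=None):
    """Assemble the proposed input payload; only `content` is required."""
    if not content:
        raise ValueError("content is required")
    context = {}
    if workflow_yaml is not None:
        context["workflow_yaml"] = workflow_yaml
    if job_code is not None:
        context["job_code"] = job_code  # {"expression", "page_name", "adaptor"}
    if errors is not None:
        context["errors"] = errors
    payload = {
        "content": content,
        "context": context,
        "history": history or [],
        "stream": stream,
        "read_only": read_only,
    }
    if api_key is not None:
        payload["api_key"] = api_key
    return payload


def attachments_by_type(response):
    """Group output attachments by their `type` discriminator."""
    grouped = {}
    for att in response.get("attachments", []):
        grouped.setdefault(att["type"], []).append(att["content"])
    return grouped


def total_tokens(response):
    """Sum the four usage counters (a convenience, not part of the spec)."""
    usage = response.get("usage", {})
    return sum(usage.get(k, 0) for k in (
        "input_tokens", "output_tokens",
        "cache_creation_input_tokens", "cache_read_input_tokens"))
```

Note that `build_request` only emits `context` keys that are actually set, which keeps per-page payloads small while every page still shares one structure.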
AI Usage
Please disclose how you've used AI in this work (it's cool, we just want to know!):
You can read more details in our Responsible AI Policy