Is your feature request related to a problem? Please describe.
The AP2 project primarily supports the gemini-2.5-flash model. This is evident from several Python agent definitions in the repo (such as RetryingLlmAgent and its usage in root_agent and the subagents), where `model="gemini-2.5-flash"` is hardcoded as the LLM in use. The code constructs the LLM client with genai.Client() and makes content requests through the Gemini API.
Describe the solution you'd like
I'd like for the AP2 protocol implementation reference to be LLM agnostic.
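As a minimal sketch of what this could look like, the model identifier could be resolved from configuration rather than hardcoded in each agent definition. The names below (AP2_MODEL, resolve_model) are illustrative suggestions, not existing identifiers from the repo:

```python
import os

# Hypothetical sketch: read the model name from an environment variable
# instead of hardcoding "gemini-2.5-flash" in each agent definition.
DEFAULT_MODEL = "gemini-2.5-flash"


def resolve_model(env_var: str = "AP2_MODEL") -> str:
    """Return the configured model name, falling back to the current default."""
    return os.environ.get(env_var, DEFAULT_MODEL)


# Agent constructors (e.g. RetryingLlmAgent) could then accept the model as a
# parameter, model=resolve_model(), so switching to another provider's model
# becomes a configuration change rather than a code change.
```

Pairing this with a provider-agnostic client abstraction (instead of constructing genai.Client() directly) would complete the decoupling.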
Describe alternatives you've considered
No response
Additional context
No response
Code of Conduct