Currently, the LLM call endpoint accepts caller-supplied parameters and forwards them as-is to OpenAI.
We should introduce Kaapi-abstracted LLM parameters that provide a unified interface across multiple LLM providers. Kaapi will be responsible for mapping these abstracted parameters to provider-specific parameters in the backend.
This will make switching between models and providers significantly simpler, since callers will no longer need to rewrite parameter names or values for each provider.
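The mapping layer described above could look roughly like the following sketch. All names here (the abstracted parameter names, the provider keys, and the `map_params` helper) are hypothetical illustrations, not Kaapi's actual API; the real design would need to cover each provider's full parameter surface.

```python
# Hypothetical sketch of Kaapi-abstracted parameter mapping.
# Parameter and provider names below are illustrative assumptions,
# not Kaapi's real interface.

# One table per provider: abstracted name -> provider-specific name.
ABSTRACT_TO_PROVIDER = {
    "openai": {
        "max_output_tokens": "max_tokens",
        "temperature": "temperature",
        "stop_sequences": "stop",
    },
    "anthropic": {
        "max_output_tokens": "max_tokens",
        "temperature": "temperature",
        "stop_sequences": "stop_sequences",
    },
}


def map_params(provider: str, params: dict) -> dict:
    """Translate abstracted parameters into the given provider's names.

    Rejects parameters the provider mapping does not cover, so callers
    get an explicit error instead of a silently dropped option.
    """
    if provider not in ABSTRACT_TO_PROVIDER:
        raise KeyError(f"Unknown provider: {provider}")
    mapping = ABSTRACT_TO_PROVIDER[provider]
    mapped = {}
    for key, value in params.items():
        if key not in mapping:
            raise ValueError(f"Unsupported parameter for {provider}: {key}")
        mapped[mapping[key]] = value
    return mapped
```

With this shape, a caller always writes `max_output_tokens` and `stop_sequences`, and the backend emits `max_tokens`/`stop` for OpenAI or `max_tokens`/`stop_sequences` for Anthropic without any client-side changes.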