Unified API: Add support for Kaapi Abstracted LLM Call #498
avirajsingh7 merged 10 commits into main from
Conversation
Walkthrough

Adds Kaapi-format LLM configs and a Kaapi→native transformation pipeline: new Kaapi models and discriminated completion configs, a mapper that converts Kaapi configs to native ("openai-native") params (with warnings), registry/provider naming updates, docs clarifications, and extensive test updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API as API/Validation
    participant Mapper as Mapper Service
    participant Registry as Provider Registry
    participant Provider as LLM Provider
    alt Kaapi config (provider="openai")
        Client->>API: POST /llm-call (KaapiCompletionConfig)
        API->>API: validate request (discriminator)
        API->>Mapper: transform_kaapi_config_to_native(KaapiCompletionConfig)
        Mapper->>Mapper: map_kaapi_to_openai_params(...) → params + warnings
        Mapper-->>API: NativeCompletionConfig + warnings
        API->>Registry: get_llm_provider("openai-native")
        Registry->>Registry: derive credential_provider ("openai")
        Registry-->>API: provider instance
        API->>Provider: execute(native params)
        Provider-->>API: response
        API-->>Client: response (includes metadata.warnings)
    else Native config (provider="openai-native")
        Client->>API: POST /llm-call (NativeCompletionConfig)
        API->>API: validate request (discriminator)
        API->>Registry: get_llm_provider("openai-native")
        Registry->>Registry: derive credential_provider ("openai")
        Registry-->>API: provider instance
        API->>Provider: execute(params as-is)
        Provider-->>API: response
        API-->>Client: response
    end
```
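The two branches in the diagram differ only in the shape of the completion config. As a rough illustration (field names beyond `provider` and `params` are assumptions, not the exact request schema):

```python
# Illustrative request bodies for the two config paths described above.
# Only "provider" (the discriminator) and "params" are taken from the review;
# the surrounding structure is a hypothetical sketch.
kaapi_request = {
    "config": {
        "provider": "openai",  # Kaapi abstraction; mapped to "openai-native" internally
        "params": {"model": "gpt-4o", "temperature": 0.7},
    }
}

native_request = {
    "config": {
        "provider": "openai-native",  # pass-through: params reach the provider as-is
        "params": {"model": "gpt-4o", "temperature": 0.7},
    }
}

# Validation picks the concrete config class from the "provider" field.
assert kaapi_request["config"]["provider"] == "openai"
assert native_request["config"]["provider"] == "openai-native"
```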
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Codecov Report: ❌ Patch coverage is
Actionable comments posted: 2
🧹 Nitpick comments (5)
backend/app/services/llm/providers/registry.py (2)
49-50: Suffix stripping approach may be fragile for future providers.

The current approach using `provider_type.replace("-native", "")` works for `"openai-native"` → `"openai"`, but could behave unexpectedly for:
- Providers with "native" in the base name
- Future providers with different naming patterns
Consider using an explicit mapping or a more robust extraction method.
```diff
+ # Explicit mapping for credential provider lookup
+ _credential_mapping = {
+     "openai-native": "openai",
+     # Future: "claude-native": "claude", "gemini-native": "gemini"
+ }
  # e.g., "openai-native" → "openai", "claude-native" → "claude"
- credential_provider = provider_type.replace("-native", "")
+ credential_provider = _credential_mapping.get(provider_type, provider_type.replace("-native", ""))
```
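The explicit-mapping suggestion can be sketched as a standalone helper (names are hypothetical; `str.removesuffix` is used in the fallback instead of `replace` so a "native" substring inside a base name is never touched):

```python
# Hypothetical credential-provider lookup table; the fallback strips a trailing
# "-native" suffix only, unlike replace(), which would rewrite any occurrence.
_CREDENTIAL_MAPPING: dict[str, str] = {
    "openai-native": "openai",
    # Future: "claude-native": "claude", "gemini-native": "gemini"
}


def credential_provider_for(provider_type: str) -> str:
    """Resolve the credential provider name for a given provider type."""
    return _CREDENTIAL_MAPPING.get(provider_type, provider_type.removesuffix("-native"))
```

For example, `credential_provider_for("some-native-model")` returns the string unchanged, whereas `replace("-native", "")` would mangle it to `"some-model"`.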
44-74: Add type hints for local variables per coding guidelines.

The function could benefit from type hints for the `credential_provider` and `credentials` variables.

```diff
 def get_llm_provider(
     session: Session, provider_type: str, project_id: int, organization_id: int
 ) -> BaseProvider:
     provider_class = LLMProvider.get(provider_type)

     # e.g., "openai-native" → "openai", "claude-native" → "claude"
-    credential_provider = provider_type.replace("-native", "")
+    credential_provider: str = provider_type.replace("-native", "")

-    credentials = get_provider_credential(
+    credentials: dict | None = get_provider_credential(
         session=session,
         provider=credential_provider,
         project_id=project_id,
         org_id=organization_id,
     )
```

backend/app/services/llm/mappers.py (2)
6-54: Add a more specific return type hint.

Per coding guidelines for Python 3.11+ projects, consider using a more specific return type annotation.

```diff
+from typing import Any
+
 from app.models.llm import KaapiLLMParams, KaapiCompletionConfig, NativeCompletionConfig

-def map_kaapi_to_openai_params(kaapi_params: KaapiLLMParams) -> dict:
+def map_kaapi_to_openai_params(kaapi_params: KaapiLLMParams) -> dict[str, Any]:
```
39-46: Consider extracting the magic number to a constant.

The default `max_num_results` value of `20` could be extracted to a named constant for clarity and easier maintenance.

```diff
+DEFAULT_FILE_SEARCH_MAX_RESULTS = 20
+
+
 def map_kaapi_to_openai_params(kaapi_params: KaapiLLMParams) -> dict:
     ...
     if kaapi_params.knowledge_base_ids:
         openai_params["tools"] = [
             {
                 "type": "file_search",
                 "vector_store_ids": kaapi_params.knowledge_base_ids,
-                "max_num_results": kaapi_params.max_num_results or 20,
+                "max_num_results": kaapi_params.max_num_results or DEFAULT_FILE_SEARCH_MAX_RESULTS,
             }
         ]
```
4-12: Remove unused import: `CompletionConfig`.

This import is not used after refactoring to use `NativeCompletionConfig` and `KaapiCompletionConfig` directly.

```diff
 from app.models.llm.request import (
     QueryParams,
     LLMCallConfig,
-    CompletionConfig,
     ConfigBlob,
     KaapiLLMParams,
     KaapiCompletionConfig,
     NativeCompletionConfig,
 )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- backend/app/api/docs/llm/llm_call.md (1 hunks)
- backend/app/models/llm/__init__.py (1 hunks)
- backend/app/models/llm/request.py (2 hunks)
- backend/app/services/llm/jobs.py (3 hunks)
- backend/app/services/llm/mappers.py (1 hunks)
- backend/app/services/llm/providers/registry.py (2 hunks)
- backend/app/tests/api/routes/configs/test_config.py (1 hunks)
- backend/app/tests/api/routes/configs/test_version.py (3 hunks)
- backend/app/tests/api/routes/test_llm.py (3 hunks)
- backend/app/tests/crud/config/test_config.py (2 hunks)
- backend/app/tests/crud/config/test_version.py (2 hunks)
- backend/app/tests/services/llm/providers/test_openai.py (4 hunks)
- backend/app/tests/services/llm/providers/test_registry.py (5 hunks)
- backend/app/tests/services/llm/test_jobs.py (14 hunks)
- backend/app/tests/services/llm/test_mappers.py (1 hunks)
- backend/app/tests/utils/test_data.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
- backend/app/tests/api/routes/configs/test_version.py
- backend/app/tests/services/llm/providers/test_openai.py
- backend/app/services/llm/providers/registry.py
- backend/app/tests/crud/config/test_version.py
- backend/app/tests/crud/config/test_config.py
- backend/app/tests/api/routes/test_llm.py
- backend/app/tests/api/routes/configs/test_config.py
- backend/app/tests/services/llm/test_jobs.py
- backend/app/services/llm/mappers.py
- backend/app/tests/utils/test_data.py
- backend/app/models/llm/__init__.py
- backend/app/models/llm/request.py
- backend/app/tests/services/llm/providers/test_registry.py
- backend/app/services/llm/jobs.py
- backend/app/tests/services/llm/test_mappers.py
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic services under backend/app/services/
Files:
- backend/app/services/llm/providers/registry.py
- backend/app/services/llm/mappers.py
- backend/app/services/llm/jobs.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Define SQLModel entities (database tables and domain objects) in backend/app/models/
Files:
- backend/app/models/llm/__init__.py
- backend/app/models/llm/request.py
🧬 Code graph analysis (13)
backend/app/tests/api/routes/configs/test_version.py (1)
- backend/app/models/llm/request.py (2): ConfigBlob (124-130), NativeCompletionConfig (86-100)

backend/app/tests/services/llm/providers/test_openai.py (1)
- backend/app/models/llm/request.py (1): NativeCompletionConfig (86-100)

backend/app/tests/crud/config/test_version.py (2)
- backend/app/models/config/version.py (1): ConfigVersionCreate (90-96)
- backend/app/models/llm/request.py (2): ConfigBlob (124-130), NativeCompletionConfig (86-100)

backend/app/tests/crud/config/test_config.py (2)
- backend/app/models/llm/request.py (1): NativeCompletionConfig (86-100)
- backend/app/tests/services/llm/providers/test_openai.py (1): provider (27-29)

backend/app/tests/api/routes/test_llm.py (1)
- backend/app/models/llm/request.py (6): KaapiLLMParams (8-42), KaapiCompletionConfig (103-114), NativeCompletionConfig (86-100), LLMCallRequest (192-226), LLMCallConfig (133-189), ConfigBlob (124-130)

backend/app/tests/services/llm/test_jobs.py (2)
- backend/app/models/llm/request.py (5): NativeCompletionConfig (86-100), KaapiLLMParams (8-42), KaapiCompletionConfig (103-114), ConfigBlob (124-130), LLMCallConfig (133-189)
- backend/app/tests/utils/test_data.py (1): create_test_config (239-302)

backend/app/services/llm/mappers.py (1)
- backend/app/models/llm/request.py (3): KaapiLLMParams (8-42), KaapiCompletionConfig (103-114), NativeCompletionConfig (86-100)

backend/app/tests/utils/test_data.py (3)
- backend/app/models/llm/request.py (3): KaapiLLMParams (8-42), KaapiCompletionConfig (103-114), NativeCompletionConfig (86-100)
- backend/app/models/config/config.py (1): Config (30-80)
- backend/app/tests/services/llm/providers/test_openai.py (1): provider (27-29)

backend/app/models/llm/__init__.py (1)
- backend/app/models/llm/request.py (3): KaapiLLMParams (8-42), KaapiCompletionConfig (103-114), NativeCompletionConfig (86-100)

backend/app/models/llm/request.py (2)
- backend/app/crud/rag/open_ai.py (1): default (31-32)
- backend/app/tests/services/llm/providers/test_openai.py (1): provider (27-29)

backend/app/tests/services/llm/providers/test_registry.py (2)
- backend/app/services/llm/providers/registry.py (1): LLMProvider (14-41)
- backend/app/services/llm/providers/openai.py (1): OpenAIProvider (21-110)

backend/app/services/llm/jobs.py (5)
- backend/app/models/llm/request.py (1): KaapiCompletionConfig (103-114)
- backend/app/utils.py (2): APIResponse (33-57), failure_response (46-57)
- backend/app/celery/utils.py (1): start_high_priority_job (18-43)
- backend/app/services/llm/providers/registry.py (1): get_llm_provider (44-74)
- backend/app/services/llm/mappers.py (1): transform_kaapi_config_to_native (57-74)

backend/app/tests/services/llm/test_mappers.py (2)
- backend/app/models/llm/request.py (3): KaapiLLMParams (8-42), KaapiCompletionConfig (103-114), NativeCompletionConfig (86-100)
- backend/app/services/llm/mappers.py (2): map_kaapi_to_openai_params (6-54), transform_kaapi_config_to_native (57-74)
🪛 Ruff (0.14.8)
backend/app/models/llm/request.py
119-119: Use X | Y for type annotations
Convert to X | Y
(UP007)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (35)
backend/app/tests/api/routes/configs/test_config.py (3)
22-22: LGTM! Provider updated to native naming convention.

The test correctly uses `"openai-native"` to align with the new `NativeCompletionConfig` type.
416-426: This test correctly validates the authentication requirement; the openai provider with dict params is valid.

The test at lines 416-426 expects a 401 authentication failure, which is correct. The plain dict `{"model": "gpt-4"}` will be automatically coerced to a `KaapiLLMParams` instance by Pydantic during deserialization; this is standard behavior for nested model fields. Additionally, the authentication check dependency executes before body validation, so the endpoint returns 401 before any parameter validation occurs. No validation failure will occur here.

Likely an incorrect or invalid review comment.
86-96: The test structure is correct and does not cause validation failures.

Line 91 uses `provider: "openai"` with dict params `{"model": "gpt-4"}`, which correctly maps to `KaapiCompletionConfig` with `KaapiLLMParams`. Pydantic v2 automatically coerces the dict to the model type, and the dict contains all required fields. Validation succeeds without issues.

Likely an incorrect or invalid review comment.
backend/app/tests/crud/config/test_version.py (1)
6-7: LGTM! Consistent migration to NativeCompletionConfig.

The fixture correctly uses `NativeCompletionConfig` with the `"openai-native"` provider, aligning with the new configuration model.

Also applies to: 19-20
backend/app/models/llm/__init__.py (1)
6-8: LGTM! Clean package-level exports.

The new configuration types are properly re-exported for convenient access from the `app.models.llm` package.

backend/app/tests/api/routes/configs/test_version.py (2)
13-14: LGTM! Consistent use of NativeCompletionConfig.

The updated test cases correctly use `NativeCompletionConfig` with the `"openai-native"` provider.

Also applies to: 32-32, 307-311

92-100: The unchanged "openai" provider reference with dict params is valid and passes validation. The discriminated union correctly routes `provider="openai"` to `KaapiCompletionConfig`, and Pydantic successfully deserializes the params dict to `KaapiLLMParams`. No changes are needed.

Likely an incorrect or invalid review comment.
backend/app/services/llm/jobs.py (3)
12-12: LGTM! Necessary imports for Kaapi configuration support.

The imports enable Kaapi-to-native configuration transformation.

Also applies to: 16-16

172-184: LGTM! Proper transformation logic with error handling.

The Kaapi configuration transformation is correctly placed before provider lookup and includes appropriate error handling. The `isinstance` check safely distinguishes between Kaapi and native configurations.

189-189: LGTM! Correct usage of the transformed completion config.

The transformed `completion_config` is properly used for both provider type lookup and execution.

Also applies to: 201-201
backend/app/tests/services/llm/providers/test_openai.py (1)
10-10: LGTM! Comprehensive test updates for the native provider.

All test cases and fixtures consistently use `NativeCompletionConfig` with the `"openai-native"` provider, and assertions properly validate the expected provider value.

Also applies to: 34-37, 62-62, 236-239
backend/app/tests/services/llm/providers/test_registry.py (1)
24-25: LGTM! Registry tests correctly updated for native provider naming.

All test cases properly use `"openai-native"` as the provider type, and the credential lookup correctly uses `"openai"` (without the `-native` suffix), aligning with the registry's credential resolution logic.

Also applies to: 49-61, 67-73, 90-90, 104-107
backend/app/tests/crud/config/test_config.py (1)
13-13: LGTM! Test fixture properly migrated to NativeCompletionConfig.

The `example_config_blob` fixture correctly uses `NativeCompletionConfig` with the `"openai-native"` provider.

Also applies to: 22-29
backend/app/api/docs/llm/llm_call.md (1)
24-30: LGTM!

The documentation updates accurately reflect the new discriminated union configuration model with clear guidance on:

- Provider type semantics (`"openai"` vs `"openai-native"`)
- Usage recommendations (stored configs for production, ad-hoc for testing)
- Reference to the schema for detailed parameter structures
backend/app/tests/api/routes/test_llm.py (2)
51-83: LGTM!

Good test coverage for the Kaapi config path. The test correctly uses `KaapiCompletionConfig` with `KaapiLLMParams` and verifies the 200 response.

122-162: LGTM!

Good validation tests for error cases:
- Missing config returns 422
- Invalid provider type returns 422
These ensure the discriminated union and Pydantic validation work correctly at the API boundary.
backend/app/tests/services/llm/test_jobs.py (3)
531-575: LGTM!

Comprehensive test for Kaapi config execution with a stored config. Correctly exercises:

- Config creation with `KaapiCompletionConfig`
- Stored config resolution via `id` + `version`
- Provider execution and job status updates

771-857: LGTM!

Excellent test coverage for config blob resolution:

- Verifies `KaapiCompletionConfig` resolution maintains the correct type and params
- Tests both native and Kaapi configs in parallel to ensure the discriminated union works correctly
- Proper `isinstance` checks validate the union discrimination
backend/app/services/llm/providers/registry.py (1)
14-25: LGTM!

Clear registry structure with:

- A well-named constant `OPENAI_NATIVE`
- Helpful comments for future provider additions
- A consistent mapping pattern
backend/app/services/llm/mappers.py (1)
57-74: LGTM!

Clean transformation function with:

- Clear provider dispatch logic
- Proper error handling for unsupported providers
- Correct mapping from `"openai"` to the `"openai-native"` provider type

backend/app/tests/utils/test_data.py (4)
24-24: LGTM: Imports align with the new discriminated union config system.

The import of the Kaapi-specific models enables test data generation for both Kaapi and native configuration formats.

239-259: LGTM: Backward-compatible API extension with clear documentation.

The `use_kaapi_schema` parameter with default `False` maintains backward compatibility while enabling Kaapi-format test configs. The enhanced docstring clearly documents the new parameter's purpose.

268-290: LGTM: Correct branching logic for Kaapi vs native config formats.

The implementation correctly creates:

- Kaapi path: `KaapiCompletionConfig` with `provider="openai"` and structured `KaapiLLMParams`
- Native path: `NativeCompletionConfig` with `provider="openai-native"` and dictionary params

Both branches align with the discriminated union schema defined in `backend/app/models/llm/request.py`.

317-327: LGTM: Consistent migration to the native config schema.

The version creation now uses `NativeCompletionConfig` with `provider="openai-native"`, aligning with the updated configuration system and maintaining consistency with `create_test_config`.

backend/app/models/llm/request.py (7)
1-6: LGTM: Imports support discriminated union and validation patterns.

The imports correctly enable discriminated union configuration (`Annotated`, `Union`, `Discriminator`) and validation logic (`model_validator`, `HttpUrl`) for the new Kaapi abstraction layer.

20-42: LGTM: Remaining fields are correctly typed with appropriate validation.

The optional fields use proper union types (`Type | None`) with sensible validation constraints:

- `temperature`: constrained to [0.0, 2.0]
- `max_num_results`: constrained to ≥ 1
61-68: LGTM: Proper validation ensures mutual exclusivity.

The `validate_conversation_logic` validator correctly enforces that `id` and `auto_create=True` cannot be specified together, with a clear error message guiding users.
86-100: LGTM: Native config correctly implements the pass-through schema.

`NativeCompletionConfig` properly uses:

- `Literal["openai-native"]` for type-safe discrimination
- `dict[str, Any]` for flexible provider-specific parameters
- A required `params` field that ensures API parameters are always provided

103-114: LGTM: Kaapi config correctly implements the abstraction schema.

`KaapiCompletionConfig` properly uses:

- `Literal["openai"]` for type-safe discrimination
- `KaapiLLMParams` for structured, validated parameters
- Required fields that ensure the configuration is always complete
117-121: LGTM: Discriminated union correctly implements provider-based type discrimination.

The `Annotated[Union[...], Field(discriminator="provider")]` pattern is correct Pydantic syntax for discriminated unions. The static analysis hint suggesting `X | Y` is not a blocker either: in Pydantic v2 the PEP 604 form (`NativeCompletionConfig | KaapiCompletionConfig`) also supports field-based discrimination inside `Annotated`, so the union syntax can be modernized without losing the discriminator.
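For illustration, the same provider-based discrimination can be sketched without pydantic as a plain dispatch table (a simplified stand-in for the real models, not the project's code):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class NativeCompletionConfig:
    provider: str  # always "openai-native" in this sketch
    params: dict[str, Any]


@dataclass
class KaapiCompletionConfig:
    provider: str  # always "openai" in this sketch
    params: dict[str, Any]


# The discriminator value selects the concrete config class,
# which is what Field(discriminator="provider") does declaratively.
_CONFIG_TYPES = {
    "openai-native": NativeCompletionConfig,
    "openai": KaapiCompletionConfig,
}


def parse_completion_config(raw: dict[str, Any]):
    """Pick the concrete config class from the 'provider' discriminator field."""
    try:
        cls = _CONFIG_TYPES[raw["provider"]]
    except KeyError:
        raise ValueError(f"Unsupported provider: {raw.get('provider')}") from None
    return cls(provider=raw["provider"], params=raw.get("params", {}))
```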
162-184: LGTM: Comprehensive validation ensures config source integrity.

The `validate_config_logic` validator properly enforces:

- Mutual exclusivity between a stored config (`id` + `version`) and an ad-hoc config (`blob`)
- A completeness requirement for the stored config path (both `id` and `version` must be present)
- At least one config source must be provided

The validation logic is clear and handles all edge cases correctly.
backend/app/tests/services/llm/test_mappers.py (3)
1-12: LGTM: Clear documentation and appropriate imports.

The module docstring clearly describes the test scope, and imports are minimal and focused on the functionality under test.

15-153: LGTM: Comprehensive test coverage for parameter mapping.

The test class provides excellent coverage including:

- Basic mappings: individual fields (model, instructions, temperature)
- Edge cases: `temperature=0.0` to ensure proper handling of falsy values
- Complex mappings: `reasoning` → `{"effort": ...}`, `knowledge_base_ids` → `tools` structure
- Default values: `max_num_results` fallback to 20
- Validation: temperature + reasoning conflict detection
- Minimal scenarios: only required fields

Test assertions are specific and verify the exact structure of mapped parameters.
155-262: LGTM: Thorough test coverage for config transformation.

The test class provides comprehensive coverage including:

- Basic transformation: Kaapi → Native with provider mapping
- Complete parameters: all fields, including the nested `tools` structure
- Reasoning transformation: proper structure conversion
- Conflict validation: temperature + reasoning mutual exclusivity
- Error handling: unsupported provider detection
- Structure preservation: validates nested parameter integrity

The use of `MagicMock` in lines 234-238 to test unsupported provider error handling is acceptable, though in practice `KaapiCompletionConfig` only accepts `Literal["openai"]` as provider.
Actionable comments posted: 0
♻️ Duplicate comments (1)
backend/app/models/llm/request.py (1)
16-18: Fix the type/default mismatch for the model field.

The field is typed as `str` (required) but has `default=None`, creating a type contradiction. The mapper implementation at line 33 of `backend/app/services/llm/mappers.py` checks `if kaapi_params.model`, confirming this field should be optional.

Apply this diff:

```diff
-    model: str = Field(
+    model: str | None = Field(
         default=None,
         description="Model identifier to use for completion (e.g., 'gpt-4o', 'gpt-5')",
     )
```
🧹 Nitpick comments (1)
backend/app/models/llm/request.py (1)
116-120: Consider modernizing to PEP 604 union syntax.

Since this is a Python 3.11+ project, you can use the more concise `X | Y` syntax instead of `Union[X, Y]`.

Apply this diff:

```diff
 CompletionConfig = Annotated[
-    Union[NativeCompletionConfig, KaapiCompletionConfig],
+    NativeCompletionConfig | KaapiCompletionConfig,
     Field(discriminator="provider"),
 ]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- backend/app/models/llm/request.py (2 hunks)
- backend/app/services/llm/mappers.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
- backend/app/services/llm/mappers.py
- backend/app/models/llm/request.py
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic services under backend/app/services/
Files:
backend/app/services/llm/mappers.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Define SQLModel entities (database tables and domain objects) in backend/app/models/
Files:
backend/app/models/llm/request.py
🧬 Code graph analysis (2)
backend/app/services/llm/mappers.py (1)
backend/app/models/llm/request.py (3)
KaapiLLMParams (8-41), KaapiCompletionConfig (102-113), NativeCompletionConfig (85-99)
backend/app/models/llm/request.py (2)
backend/app/crud/rag/open_ai.py (1)
default (31-32)

backend/app/tests/services/llm/providers/test_openai.py (1)

provider (27-29)
🪛 Ruff (0.14.8)
backend/app/models/llm/request.py
118-118: Use X | Y for type annotations
Convert to X | Y
(UP007)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (6)
backend/app/services/llm/mappers.py (3)
1-4: LGTM!

The module docstring and imports are clean and appropriate.

6-54: LGTM!

The mapper function correctly transforms Kaapi parameters to OpenAI format. The validation logic properly uses explicit `is not None` checks, and the parameter mappings align with OpenAI's API structure.

57-74: LGTM!

The transformation function correctly maps Kaapi configs to native provider configs. The provider identifier transformation from "openai" to "openai-native" aligns with the PR's design, and the error handling for unsupported providers is appropriate.
backend/app/models/llm/request.py (3)
1-6: LGTM!

The updated imports properly support the discriminated union pattern and new model definitions.

85-99: LGTM!

The `NativeCompletionConfig` model is well-structured for passthrough provider configurations. The use of `Literal["openai-native"]` ensures type safety while maintaining flexibility with the `dict[str, Any]` params field.

102-113: LGTM!

The `KaapiCompletionConfig` model provides a clean, type-safe abstraction for Kaapi-standardized parameters. The design is extensible for future provider support.
kartpop
left a comment
good work overall!
have a few comments - once resolved, can approve
Actionable comments posted: 0
🧹 Nitpick comments (3)
backend/app/models/llm/request.py (2)
5-5: Unused import: `Discriminator`

The `Discriminator` import from pydantic is not used in this file. The discriminated union on line 119 uses `Field(discriminator="provider")` instead.

🔎 Apply this diff to remove the unused import:

```diff
-from pydantic import Discriminator, model_validator, HttpUrl
+from pydantic import model_validator, HttpUrl
```
117-120: Use modern union syntax for consistency.

Per static analysis (Ruff UP007), prefer the `|` operator over `Union[...]` for type annotations in Python 3.11+.

🔎 Apply this diff:

```diff
 # Discriminated union for completion configs based on provider field
 CompletionConfig = Annotated[
-    Union[NativeCompletionConfig, KaapiCompletionConfig],
+    NativeCompletionConfig | KaapiCompletionConfig,
     Field(discriminator="provider"),
 ]
```

backend/app/services/llm/mappers.py (1)
32-34: Simplify string concatenation on the model parameter.

The `litellm.supports_reasoning()` function takes a model string with a provider prefix (e.g., `"openai/model-name"`). The current f-string wrapping is redundant since `kaapi_params.model` is already a string.

```diff
 support_reasoning = litellm.supports_reasoning(
-    model="openai/" + f"{kaapi_params.model}"
+    model=f"openai/{kaapi_params.model}"
 )
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- backend/app/models/llm/request.py (2 hunks)
- backend/app/services/llm/jobs.py (3 hunks)
- backend/app/services/llm/mappers.py (1 hunks)
- backend/app/tests/services/llm/test_jobs.py (14 hunks)
- backend/app/tests/services/llm/test_mappers.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic in services located in `backend/app/services/`
Files:
- backend/app/services/llm/jobs.py
- backend/app/services/llm/mappers.py
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Always add type hints to all function parameters and return values in Python code
Prefix all log messages with the function name in square brackets: `logger.info(f"[function_name] Message {mask_string(sensitive_value)}")`
Use Python 3.11+ with type hints throughout the codebase
Files:
- backend/app/services/llm/jobs.py
- backend/app/models/llm/request.py
- backend/app/services/llm/mappers.py
- backend/app/tests/services/llm/test_jobs.py
- backend/app/tests/services/llm/test_mappers.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use `sa_column_kwargs["comment"]` to describe database columns, especially for non-obvious purposes, status/type fields, JSON/metadata columns, and foreign keys
Files:
backend/app/models/llm/request.py
backend/app/tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use the factory pattern for test fixtures in `backend/app/tests/`
Files:
- backend/app/tests/services/llm/test_jobs.py
- backend/app/tests/services/llm/test_mappers.py
🧠 Learnings (1)
📚 Learning: 2025-12-17T10:16:16.173Z
Learnt from: nishika26
Repo: ProjectTech4DevAI/kaapi-backend PR: 502
File: backend/app/models/collection.py:29-32
Timestamp: 2025-12-17T10:16:16.173Z
Learning: In backend/app/models/collection.py, treat provider as the LLM provider name (e.g., 'openai') and llm_service_name as the specific service from that provider. These fields serve different purposes and should remain non-redundant. Document their meanings, add clear type hints (e.g., provider: str, llm_service_name: str), and consider a small unit test or validation to ensure they are distinct and used appropriately, preventing accidental aliasing or duplication across the model or serializers.
Applied to files:
backend/app/models/llm/request.py
🧬 Code graph analysis (3)
backend/app/services/llm/jobs.py (3)
backend/app/models/llm/request.py (1)
KaapiCompletionConfig (102-113)

backend/app/services/llm/providers/registry.py (1)

get_llm_provider (44-74)

backend/app/services/llm/mappers.py (1)

transform_kaapi_config_to_native (74-94)
backend/app/services/llm/mappers.py (1)
backend/app/models/llm/request.py (3)
KaapiLLMParams (8-41), KaapiCompletionConfig (102-113), NativeCompletionConfig (85-99)
backend/app/tests/services/llm/test_mappers.py (2)
backend/app/models/llm/request.py (3)
KaapiLLMParams (8-41), KaapiCompletionConfig (102-113), NativeCompletionConfig (85-99)

backend/app/services/llm/mappers.py (2)

map_kaapi_to_openai_params (7-71), transform_kaapi_config_to_native (74-94)
🪛 Ruff (0.14.8)
backend/app/models/llm/request.py
118-118: Use X | Y for type annotations
Convert to X | Y
(UP007)
🔇 Additional comments (19)
backend/app/models/llm/request.py (3)
8-41: LGTM! Well-structured Kaapi parameter model.

The `KaapiLLMParams` model is well-designed with proper type hints, field descriptions, and validation constraints (`ge`, `le`). The `model` field is now correctly required (no default), addressing the previous review concern.

85-99: LGTM! Native pass-through config is well-defined.

`NativeCompletionConfig` correctly uses a literal type for the provider and allows arbitrary params via `dict[str, Any]` for provider-specific pass-through.

102-113: LGTM! Kaapi abstraction config is well-structured.

`KaapiCompletionConfig` properly uses a literal provider type for discrimination and typed `KaapiLLMParams` for the abstracted parameters.

backend/app/services/llm/jobs.py (2)
172-187: LGTM! Kaapi config transformation is well-integrated.

The transformation logic correctly:

- Checks whether the config is a `KaapiCompletionConfig` using `isinstance`
- Transforms it to native format before provider resolution
- Propagates warnings to request metadata
- Handles transformation errors gracefully with a proper failure response

The use of `setdefault("warnings", []).extend(warnings)` is a clean way to merge warnings with existing metadata.

189-207: LGTM! Provider resolution and execution correctly use the transformed config.

The flow now correctly:

- Resolves the provider using `completion_config.provider` (always the native type after transformation)
- Passes the transformed `completion_config` to the provider's `execute` method

This ensures the provider receives native-format parameters regardless of whether the original config was Kaapi or native.
backend/app/tests/services/llm/test_jobs.py (5)
531-575: LGTM! Comprehensive test for Kaapi config execution.

The test properly validates:
- KaapiCompletionConfig with KaapiLLMParams construction
- Stored config resolution
- Provider retrieval and execution
- Success status verification
624-672: LGTM! Good test for warning propagation.

This test verifies that transformation warnings (e.g., temperature suppressed on reasoning models) are correctly passed through in the response metadata. The assertions are specific and validate the warning content.

674-723: LGTM! Good test for metadata merging.

This test validates that warnings are merged with existing request metadata rather than overwriting it, ensuring both `tracking_id` and `warnings` are present in the final response.

871-904: LGTM! Good test for Kaapi config blob resolution.

This test validates that stored Kaapi configs are correctly resolved and maintain their structure (`KaapiCompletionConfig` type, proper provider, and params).

906-958: The `use_kaapi_schema` parameter exists in `create_test_config` and is properly implemented. No action needed; the test code is correct.

Likely an incorrect or invalid review comment.
backend/app/services/llm/mappers.py (3)
36-54: LGTM! Correct handling of reasoning vs temperature mutual exclusivity. The logic properly:
- Checks model capability via `litellm.supports_reasoning()`
- For reasoning-capable models: sets the reasoning param and warns if temperature is provided
- For non-reasoning models: sets temperature and warns if reasoning is provided

This addresses the previous review feedback to be lenient rather than raising errors.
62-69: LGTM! Knowledge base mapping is well-structured. The tools array structure correctly maps to OpenAI's file_search format with:
- `type: "file_search"`
- `vector_store_ids` from `knowledge_base_ids`
- `max_num_results` with a sensible default of 20
74-94: LGTM! Transform function correctly routes to the provider-specific mapper. The function:
- Dispatches to the appropriate mapper based on provider
- Returns a properly constructed `NativeCompletionConfig` with the `"openai-native"` provider
- Raises `ValueError` for unsupported providers (appropriate for future extensibility)

backend/app/tests/services/llm/test_mappers.py (6)
15-26: LGTM! Good basic test coverage. The test validates the simplest case, model-only mapping, establishing a baseline for more complex scenarios.
53-63: LGTM! Important edge case test for temperature=0.0. This test ensures the `is not None` check in the mapper correctly handles `temperature=0.0` (which would fail a truthiness check). This addresses the bug identified in a previous review.
128-141: LGTM! Good test for the reasoning suppression warning. Validates that non-reasoning models (gpt-4) correctly suppress the reasoning parameter and generate an appropriate warning message.
160-174: LGTM! Critical test for reasoning model behavior without an explicit reasoning param. This test validates that temperature is suppressed for reasoning-capable models (o1) even when the user doesn't explicitly set reasoning. This ensures the API call won't fail due to incompatible parameters.
282-296: LGTM! Good defensive test for unsupported providers. Using MagicMock to bypass validation and test the error path is a reasonable approach for testing defensive code that handles future provider additions.
200-219: LGTM! Comprehensive transformation test. Validates the complete transformation flow from `KaapiCompletionConfig` to `NativeCompletionConfig` with proper provider type conversion (`"openai"` → `"openai-native"`).
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/app/api/docs/llm/llm_call.md (1)
48-50: Clarify the scope of automatic warnings. The new note about automatic warnings is helpful, but could specify: (1) whether warnings apply only to Kaapi mode or to both modes, and (2) what the metadata.warnings structure looks like, so users know what to expect in responses.
🔎 Example enhancement:
### Note
- `metadata.warnings` list is automatically added in responses when using Kaapi configs (`"openai"` provider) if any parameters are suppressed or adjusted (e.g., temperature on reasoning models)
- Each warning contains the parameter name and reason for suppression
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/app/api/docs/llm/llm_call.md (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
backend/app/api/docs/**/*.md
📄 CodeRabbit inference engine (CLAUDE.md)
Store endpoint documentation files in
backend/app/api/docs/<domain>/<action>.md
Files:
backend/app/api/docs/llm/llm_call.md
🔇 Additional comments (1)
backend/app/api/docs/llm/llm_call.md (1)
25-35: Excellent documentation of the discriminated completion config and Kaapi abstraction. The Mode 2 configuration section now clearly distinguishes between Kaapi ("openai") and native ("openai-native") provider types, with appropriate guidance on parameter handling and production best practices. The explanation of parameter suppression and automatic warnings (line 32) is particularly helpful for users.
…ion to native provider format
… enhance configuration handling for OpenAI provider
…neous setting of 'temperature' and 'reasoning' parameters
…mapping logic to handle reasoning and temperature conflicts with warnings
…ers and warning handling for unsupported settings
a43a643 to
0a21c4e
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/services/llm/jobs.py (1)
223-227: Critical: Transformed config is not used for provider execution. The `decorated_execute` call at line 224 passes `config_blob.completion` instead of the transformed `completion_config`. This means that when a `KaapiCompletionConfig` is provided, the transformation to native config happens (line 178), but the original untransformed Kaapi config is still passed to the provider, which will fail since the provider expects native parameters.

🔎 Apply this diff to use the transformed config:

```diff
 response, error = decorated_execute(
-    completion_config=config_blob.completion,
+    completion_config=completion_config,
     query=request.query,
     include_provider_raw_response=request.include_provider_raw_response,
 )
```
♻️ Duplicate comments (1)
backend/app/api/docs/llm/llm_call.md (1)
23-32: Documentation improved but schema reference still unresolved. The documentation updates clarify the provider types and Kaapi behavior well. However, line 32 still references "the API schema or examples below" without providing actual schema details or examples in this file.
This was flagged in a previous review. Consider either:
- Adding inline schema examples for both provider types
- Linking to an actual schema file if one exists
- Removing the reference if the OpenAPI spec is the intended source
🧹 Nitpick comments (5)
backend/app/services/llm/jobs.py (1)
174-189: Transformation logic is correct with proper error handling. The Kaapi-to-native transformation correctly:
- Checks for a `KaapiCompletionConfig` instance
- Transforms to native config and captures warnings
- Initializes the metadata dict if needed
- Extends the warnings list appropriately
- Handles exceptions with a failure callback

One minor suggestion: the generic `except Exception` could catch unexpected errors. Consider logging more details. As per coding guidelines, the log message should be prefixed with the function name.

🔎 Optional: Add logging for transformation errors

```diff
 except Exception as e:
+    logger.error(
+        f"[execute_job] Error transforming Kaapi config: {str(e)} | job_id={job_id}",
+        exc_info=True,
+    )
     callback_response = APIResponse.failure_response(
         error=f"Error processing configuration: {str(e)}",
         metadata=request.request_metadata,
     )
     return handle_job_error(job_id, request.callback_url, callback_response)
```

backend/app/tests/services/llm/test_jobs.py (1)
624-672: Consider making the warning assertions less brittle. The assertions at lines 670-672 check for specific substrings in warnings (`"temperature"`, `"suppressed"`). While this tests the integration correctly, it tightly couples the test to the mapper's warning message format.

Consider either:
- Using a more flexible assertion (e.g., just checking that warnings exist for reasoning models)
- Documenting that these tests intentionally verify the exact warning format

```diff
 # Verify the result includes warnings in metadata
 assert result["success"]
 assert "metadata" in result
 assert "warnings" in result["metadata"]
-assert len(result["metadata"]["warnings"]) == 1
-assert "temperature" in result["metadata"]["warnings"][0].lower()
-assert "suppressed" in result["metadata"]["warnings"][0]
+assert len(result["metadata"]["warnings"]) >= 1
+# Warning should indicate temperature was affected for reasoning model
+warning_text = " ".join(result["metadata"]["warnings"]).lower()
+assert "temperature" in warning_text
```

backend/app/tests/api/routes/test_llm.py (1)
4-12: Remove the unused `CompletionConfig` import. `CompletionConfig`, imported on line 7, is not used anywhere in the file. Tests use `KaapiCompletionConfig` and `NativeCompletionConfig` directly, so this import can be safely removed.

backend/app/services/llm/mappers.py (1)
7-71: Good implementation of lenient parameter handling. The function correctly implements the feedback from the past review by suppressing conflicting parameters with warnings rather than raising errors. The warnings are returned and propagated to the response metadata, allowing the LLM call to proceed with reasonable defaults.
Optional: Use explicit None checks for consistency
Lines 56 and 59 use truthiness checks (`if kaapi_params.model:` and `if kaapi_params.instructions:`), which could technically fail for empty strings. For consistency with the explicit `is not None` checks used for optional parameters like temperature and reasoning, consider:

```diff
-if kaapi_params.model:
+if kaapi_params.model is not None:
     openai_params["model"] = kaapi_params.model
-if kaapi_params.instructions:
+if kaapi_params.instructions is not None:
     openai_params["instructions"] = kaapi_params.instructions
```

This makes the code more defensive and consistent with the rest of the function, though the current implementation is functionally correct since `model` is a required field.

backend/app/models/llm/request.py (1)
117-120: Consider modern union syntax for Python 3.11+. The discriminated union is correctly implemented. However, since the coding guidelines specify Python 3.11+ and Ruff's static analysis suggests it, consider using the modern `X | Y` syntax instead of `Union[X, Y]`:

```diff
 CompletionConfig = Annotated[
-    Union[NativeCompletionConfig, KaapiCompletionConfig],
+    NativeCompletionConfig | KaapiCompletionConfig,
     Field(discriminator="provider"),
 ]
```

This aligns with modern Python type hint conventions and matches the style used elsewhere in the file (e.g., `str | None`). As per static analysis hints (Ruff UP007).
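The dispatch idea behind `Field(discriminator="provider")` can be shown without Pydantic: the discriminator value selects which concrete type a payload is parsed into. This dependency-free sketch uses dataclasses (no runtime validation, unlike the production Pydantic models):

```python
from dataclasses import dataclass
from typing import Literal


@dataclass
class NativeConfig:
    provider: Literal["openai-native"]
    params: dict


@dataclass
class KaapiConfig:
    provider: Literal["openai"]
    model: str


# The discriminator field picks the concrete type, which is what
# Pydantic's Field(discriminator="provider") does at validation time.
CONFIGS = {"openai-native": NativeConfig, "openai": KaapiConfig}


def parse_config(payload: dict):
    cls = CONFIGS[payload["provider"]]
    return cls(**payload)
```

An unknown `provider` value raises `KeyError` here; Pydantic would instead produce a structured validation error (the 422 the tests check for).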
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- backend/app/api/docs/llm/llm_call.md (2 hunks)
- backend/app/models/llm/__init__.py (1 hunks)
- backend/app/models/llm/request.py (2 hunks)
- backend/app/services/llm/jobs.py (2 hunks)
- backend/app/services/llm/mappers.py (1 hunks)
- backend/app/services/llm/providers/registry.py (2 hunks)
- backend/app/tests/api/routes/configs/test_config.py (1 hunks)
- backend/app/tests/api/routes/configs/test_version.py (3 hunks)
- backend/app/tests/api/routes/test_llm.py (3 hunks)
- backend/app/tests/crud/config/test_config.py (2 hunks)
- backend/app/tests/crud/config/test_version.py (2 hunks)
- backend/app/tests/services/llm/providers/test_openai.py (4 hunks)
- backend/app/tests/services/llm/providers/test_registry.py (5 hunks)
- backend/app/tests/services/llm/test_jobs.py (14 hunks)
- backend/app/tests/services/llm/test_mappers.py (1 hunks)
- backend/app/tests/utils/test_data.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- backend/app/tests/services/llm/providers/test_registry.py
- backend/app/tests/crud/config/test_version.py
- backend/app/tests/crud/config/test_config.py
- backend/app/tests/api/routes/configs/test_config.py
- backend/app/tests/api/routes/configs/test_version.py
- backend/app/services/llm/providers/registry.py
- backend/app/models/llm/__init__.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Always add type hints to all function parameters and return values in Python code
Prefix all log messages with the function name in square brackets: `logger.info(f"[function_name] Message {mask_string(sensitive_value)}")`
Use Python 3.11+ with type hints throughout the codebase
Files:
- backend/app/tests/services/llm/test_jobs.py
- backend/app/tests/utils/test_data.py
- backend/app/services/llm/mappers.py
- backend/app/tests/services/llm/test_mappers.py
- backend/app/services/llm/jobs.py
- backend/app/tests/services/llm/providers/test_openai.py
- backend/app/tests/api/routes/test_llm.py
- backend/app/models/llm/request.py
backend/app/tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use factory pattern for test fixtures in
backend/app/tests/
Files:
- backend/app/tests/services/llm/test_jobs.py
- backend/app/tests/utils/test_data.py
- backend/app/tests/services/llm/test_mappers.py
- backend/app/tests/services/llm/providers/test_openai.py
- backend/app/tests/api/routes/test_llm.py
backend/app/api/docs/**/*.md
📄 CodeRabbit inference engine (CLAUDE.md)
Store endpoint documentation files in
backend/app/api/docs/<domain>/<action>.md
Files:
backend/app/api/docs/llm/llm_call.md
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic in services located in
backend/app/services/
Files:
- backend/app/services/llm/mappers.py
- backend/app/services/llm/jobs.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use `sa_column_kwargs["comment"]` to describe database columns, especially for non-obvious purposes, status/type fields, JSON/metadata columns, and foreign keys
Files:
backend/app/models/llm/request.py
🧠 Learnings (1)
📚 Learning: 2025-12-17T10:16:16.173Z
Learnt from: nishika26
Repo: ProjectTech4DevAI/kaapi-backend PR: 502
File: backend/app/models/collection.py:29-32
Timestamp: 2025-12-17T10:16:16.173Z
Learning: In backend/app/models/collection.py, treat provider as the LLM provider name (e.g., 'openai') and llm_service_name as the specific service from that provider. These fields serve different purposes and should remain non-redundant. Document their meanings, add clear type hints (e.g., provider: str, llm_service_name: str), and consider a small unit test or validation to ensure they are distinct and used appropriately, preventing accidental aliasing or duplication across the model or serializers.
Applied to files:
backend/app/models/llm/request.py
🧬 Code graph analysis (7)
backend/app/tests/services/llm/test_jobs.py (2)
backend/app/models/llm/request.py (5)
- NativeCompletionConfig (85-99)
- QueryParams (71-82)
- KaapiLLMParams (8-41)
- KaapiCompletionConfig (102-113)
- ConfigBlob (123-129)

backend/app/tests/utils/test_data.py (1)
- create_test_config (239-302)
backend/app/tests/utils/test_data.py (1)
backend/app/models/llm/request.py (4)
- KaapiLLMParams (8-41)
- KaapiCompletionConfig (102-113)
- NativeCompletionConfig (85-99)
- ConfigBlob (123-129)
backend/app/services/llm/mappers.py (1)
backend/app/models/llm/request.py (3)
- KaapiLLMParams (8-41)
- KaapiCompletionConfig (102-113)
- NativeCompletionConfig (85-99)
backend/app/tests/services/llm/test_mappers.py (2)
backend/app/models/llm/request.py (3)
- KaapiLLMParams (8-41)
- KaapiCompletionConfig (102-113)
- NativeCompletionConfig (85-99)

backend/app/services/llm/mappers.py (2)
- map_kaapi_to_openai_params (7-71)
- transform_kaapi_config_to_native (74-94)
backend/app/services/llm/jobs.py (3)
backend/app/models/llm/request.py (1)
- KaapiCompletionConfig (102-113)

backend/app/services/llm/providers/registry.py (1)
- get_llm_provider (44-74)

backend/app/services/llm/mappers.py (1)
- transform_kaapi_config_to_native (74-94)
backend/app/tests/services/llm/providers/test_openai.py (1)
backend/app/models/llm/request.py (1)
- NativeCompletionConfig (85-99)
backend/app/models/llm/request.py (2)
backend/app/crud/rag/open_ai.py (1)
- default (31-32)

backend/app/tests/services/llm/providers/test_openai.py (1)
- provider (27-29)
🪛 Ruff (0.14.8)
backend/app/models/llm/request.py
118-118: Use X | Y for type annotations
Convert to X | Y
(UP007)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (25)
backend/app/tests/services/llm/providers/test_openai.py (4)
9-12: LGTM! Import updated correctly. The import change from `CompletionConfig` to `NativeCompletionConfig` aligns with the new discriminated union model structure.

31-37: LGTM! Fixture properly updated for the native provider. The `completion_config` fixture correctly uses `NativeCompletionConfig` with the `"openai-native"` provider, which is consistent with the new provider naming convention.

62-62: LGTM! Assertion updated to match the new provider naming. The provider assertion correctly expects `"openai-native"` instead of the old `"openai"` value.

236-239: LGTM! Inline config instantiation updated consistently. The test correctly uses `NativeCompletionConfig` with `provider="openai-native"` for the edge case test.

backend/app/tests/api/routes/test_llm.py (4)
15-48: LGTM! Test updated with native config and callback_url. The test correctly uses `NativeCompletionConfig` with the `"openai-native"` provider and includes the `callback_url` field.

51-83: LGTM! Good coverage for the Kaapi abstracted config path. This test validates the Kaapi configuration flow with `KaapiCompletionConfig` and `KaapiLLMParams`, ensuring the abstraction layer works correctly.

86-119: LGTM! Native config pass-through test provides good coverage. Tests the native OpenAI config with additional parameters (`max_tokens`, `top_p`), validating pass-through mode.

122-162: LGTM! Validation error tests are valuable additions. These tests ensure proper 422 responses for a missing config and invalid provider values, validating that the discriminated union validation works as expected.
backend/app/services/llm/jobs.py (1)
13-18: LGTM! Required imports added for the Kaapi transformation. The imports for `KaapiCompletionConfig` and `transform_kaapi_config_to_native` are necessary for the new transformation logic.
46-48: LGTM! Warnings behavior is clearly documented.The note about automatic warnings in Kaapi mode is helpful for users to understand the parameter suppression behavior.
backend/app/tests/services/llm/test_jobs.py (7)
14-24: LGTM! Imports updated correctly for new config types.The imports include all necessary types for testing both native and Kaapi configuration flows.
14-24: LGTM! Imports updated correctly for the new config types. The imports include all necessary types for testing both native and Kaapi configuration flows.

39-51: LGTM! Fixture updated for native config. The `llm_call_request` fixture correctly uses `NativeCompletionConfig` with the new provider naming.

224-238: LGTM! Request data fixture updated consistently. The `request_data` fixture properly uses the `"openai-native"` provider for the native config path.

531-575: LGTM! Good test for the Kaapi config execution path. This test validates the end-to-end flow with a stored Kaapi configuration, including transformation and execution.

674-723: LGTM! Tests metadata merging correctly. This test validates that warnings are properly merged with existing request metadata without overwriting the `tracking_id`.

871-904: LGTM! Kaapi config blob resolution test is comprehensive. The test validates that Kaapi configs are correctly resolved from storage and that the `KaapiLLMParams` fields are accessible.

906-957: LGTM! Excellent test for discriminated union resolution. This test validates that both native and Kaapi configs can coexist and be correctly resolved, which is essential for the discriminated union implementation.
239-302: LGTM! Well-structured test data factory enhancement.The addition of the
use_kaapi_schemaparameter provides clean test coverage for both Kaapi and native configuration paths. The docstring is comprehensive, type hints are complete, and the branching logic correctly creates appropriate test fixtures for each schema variant.
305-339: LGTM! Consistent migration to NativeCompletionConfig.The function correctly uses
NativeCompletionConfigwithprovider="openai-native", aligning with the new configuration model structure introduced in this PR.backend/app/services/llm/mappers.py (1)
74-94: LGTM! Clean transformation wrapper.The function correctly delegates to the parameter mapper and wraps the result in a
NativeCompletionConfigwith the appropriate provider name. The error handling for unsupported providers is appropriate.backend/app/models/llm/request.py (3)
8-41: LGTM! Well-documented Kaapi parameter abstraction.The
KaapiLLMParamsclass provides a clean, provider-agnostic interface with clear field descriptions. All fields are properly typed, withmodelcorrectly marked as required and other parameters appropriately optional with sensible constraints (e.g., temperature bounds, max_num_results ≥ 1).
85-99: LGTM! Clear native pass-through configuration.The
NativeCompletionConfigclass is well-documented and appropriately uses a flexibledict[str, Any]for provider-specific parameters, enabling pass-through of any provider's native API format without transformation.
102-113: LGTM! Clean Kaapi abstraction layer.The
KaapiCompletionConfigclass provides a well-structured abstraction for provider-agnostic LLM configuration. The docstring appropriately mentions future support for additional providers while the current implementation focuses on OpenAI.backend/app/tests/services/llm/test_mappers.py (2)
15-198: Excellent test coverage for parameter mapping! The test suite comprehensively covers all mapping scenarios:
- ✅ Basic parameter mappings (model, instructions, temperature)
- ✅ Edge case handling (temperature=0.0)
- ✅ Reasoning vs temperature mutual exclusivity for both reasoning-capable and non-reasoning models
- ✅ Knowledge base mapping with default and custom max_num_results
- ✅ Warning generation for suppressed parameters
- ✅ Complete parameter combinations
- ✅ Minimal parameter sets
The tests are well-structured, clearly documented, and provide strong confidence in the mapper's behavior.
200-316: Comprehensive transformation tests with good error handling coverage. The test suite thoroughly validates the config transformation logic:
- ✅ Basic OpenAI config transformation to native format
- ✅ All Kaapi parameters correctly mapped
- ✅ Reasoning parameter handling for reasoning-capable models
- ✅ Intelligent suppression of conflicting parameters (temperature + reasoning)
- ✅ Error handling for unsupported providers using appropriate mocking
- ✅ Nested structure preservation for tools/vector_store_ids
The use of `MagicMock` to test unsupported provider error handling is a clean approach that avoids modifying production models.
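The MagicMock technique described above can be sketched like this (the `transform` function and its error message are illustrative stand-ins, not the real mapper):

```python
from unittest.mock import MagicMock


def transform(config) -> dict:
    """Illustrative stand-in: only 'openai' is supported; anything
    else falls through to the defensive error path."""
    if config.provider != "openai":
        raise ValueError(f"Unsupported provider: {config.provider}")
    return {"provider": "openai-native"}


# A real config model would reject provider="anthropic" at validation
# time, so a MagicMock is used to reach the defensive branch directly.
fake_config = MagicMock()
fake_config.provider = "anthropic"
try:
    transform(fake_config)
    raised = False
    message = ""
except ValueError as e:
    raised = True
    message = str(e)
```

Because the mock bypasses model validation, the test exercises the `ValueError` branch that would otherwise be unreachable with valid inputs.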
…fig_blob.completion
Actionable comments posted: 1
🧹 Nitpick comments (2)
backend/app/services/llm/jobs.py (2)
184-189: Consider logging the specific exception for better diagnostics. The broad exception handler catches all errors during transformation but doesn't log the specific exception details. This can make debugging configuration issues more difficult in production.
🔎 Apply this diff to log the exception:
```diff
 except Exception as e:
+    logger.error(
+        f"[execute_job] Error processing configuration: {str(e)} | job_id={job_id}",
+        exc_info=True
+    )
     callback_response = APIResponse.failure_response(
         error=f"Error processing configuration: {str(e)}",
         metadata=request.request_metadata,
     )
     return handle_job_error(job_id, request.callback_url, callback_response)
```
174-180: Consider adding a type annotation for clarity. The variable `completion_config` is reassigned from a union type to `NativeCompletionConfig` after transformation. Adding a type annotation would improve code clarity and help type checkers, especially per the coding guidelines requiring type hints throughout.

🔎 View suggested approach:

```diff
 try:
     # Transform Kaapi config to native config if needed (before getting provider)
     completion_config = config_blob.completion
     if isinstance(completion_config, KaapiCompletionConfig):
-        completion_config, warnings = transform_kaapi_config_to_native(
+        native_config, warnings = transform_kaapi_config_to_native(
             completion_config
         )
+        completion_config = native_config  # type: NativeCompletionConfig
         if request.request_metadata is None:
             request.request_metadata = {}
         request.request_metadata.setdefault("warnings", []).extend(warnings)
```

Alternatively, add a comment indicating the type after transformation:

```python
completion_config, warnings = transform_kaapi_config_to_native(
    completion_config
)  # completion_config is now NativeCompletionConfig
```

Based on coding guidelines, ...
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/app/services/llm/jobs.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic in services located in
backend/app/services/
Files:
backend/app/services/llm/jobs.py
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Always add type hints to all function parameters and return values in Python code
Prefix all log messages with the function name in square brackets: `logger.info(f"[function_name] Message {mask_string(sensitive_value)}")`
Use Python 3.11+ with type hints throughout the codebase
Files:
backend/app/services/llm/jobs.py
🧬 Code graph analysis (1)
backend/app/services/llm/jobs.py (5)
backend/app/models/llm/request.py (1)
- KaapiCompletionConfig (102-113)

backend/app/utils.py (2)
- APIResponse (33-57)
- failure_response (46-57)

backend/app/core/langfuse/langfuse.py (1)
- observe_llm_execution (114-210)

backend/app/services/llm/providers/registry.py (1)
- get_llm_provider (44-74)

backend/app/services/llm/mappers.py (1)
- transform_kaapi_config_to_native (74-94)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (3)
backend/app/services/llm/jobs.py (3)
13-13: LGTM: Imports align with the new Kaapi transformation logic. The imports are used correctly in the transformation block and are necessary for the Kaapi-to-native config mapping functionality.
Also applies to: 18-18
191-203: LGTM: Provider resolution correctly uses the transformed native config. The provider lookup now uses `completion_config.provider`, which contains the native provider type (e.g., "openai-native") after transformation. The ValueError handling appropriately catches provider resolution failures.
223-227: LGTM: Provider execution correctly uses the transformed native config. The LLM provider execution now receives the transformed `completion_config` with native parameters, ensuring consistent usage of the mapped configuration throughout the execution flow.
…d of CompletionConfig
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/core/langfuse/langfuse.py (1)
1-1: Critical: Fix Black formatting before merge. The pipeline reports a formatting failure that must be resolved.
Run Black formatter to fix:
```bash
#!/bin/bash
# Format the file with Black
black backend/app/core/langfuse/langfuse.py
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/app/core/langfuse/langfuse.py (2 hunks)
- backend/app/services/llm/providers/base.py (3 hunks)
- backend/app/services/llm/providers/openai.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic in services located in
backend/app/services/
Files:
- backend/app/services/llm/providers/base.py
- backend/app/services/llm/providers/openai.py
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Always add type hints to all function parameters and return values in Python code
Prefix all log messages with the function name in square brackets: `logger.info(f"[function_name] Message {mask_string(sensitive_value)}")`
Use Python 3.11+ with type hints throughout the codebase
Files:
- backend/app/services/llm/providers/base.py
- backend/app/services/llm/providers/openai.py
- backend/app/core/langfuse/langfuse.py
🧠 Learnings (1)
📚 Learning: 2025-12-17T10:16:16.173Z
Learnt from: nishika26
Repo: ProjectTech4DevAI/kaapi-backend PR: 502
File: backend/app/models/collection.py:29-32
Timestamp: 2025-12-17T10:16:16.173Z
Learning: In backend/app/models/collection.py, the `provider` field indicates the LLM provider name (e.g., "openai"), while `llm_service_name` specifies which particular service from that provider is being used. These fields serve different purposes and are not redundant.
Applied to files:
backend/app/services/llm/providers/openai.py
🧬 Code graph analysis (3)
backend/app/services/llm/providers/base.py (3)
backend/app/models/llm/request.py (2)
- NativeCompletionConfig (85-99)
- QueryParams (71-82)

backend/app/models/llm/response.py (1)
- LLMCallResponse (42-52)

backend/app/tests/services/llm/providers/test_openai.py (1)
- completion_config (32-37)
backend/app/services/llm/providers/openai.py (2)
backend/app/models/llm/request.py (1)
- NativeCompletionConfig (85-99)

backend/app/tests/services/llm/providers/test_openai.py (1)
- completion_config (32-37)
backend/app/core/langfuse/langfuse.py (3)
backend/app/models/llm/request.py (2)
- NativeCompletionConfig (85-99)
- QueryParams (71-82)

backend/app/models/llm/response.py (1)
- LLMCallResponse (42-52)

backend/app/tests/services/llm/providers/test_openai.py (1)
- completion_config (32-37)
🪛 GitHub Actions: AI Platform CI
backend/app/core/langfuse/langfuse.py
[error] 1-1: Black formatting failed. Reformatted 1 file: backend/app/core/langfuse/langfuse.py. Please commit the changes.
🔇 Additional comments (3)
backend/app/services/llm/providers/openai.py (1)
8-8: LGTM! Type migration to NativeCompletionConfig is clean. The import and signature updates correctly align with the new native configuration model. The provider field will now contain `"openai-native"` as defined in the `NativeCompletionConfig` schema, which appropriately distinguishes native from Kaapi-format configurations.

Also applies to: 33-33
backend/app/core/langfuse/langfuse.py (1)
9-9: LGTM! Type migration maintains observability wrapper integrity. The update to `NativeCompletionConfig` correctly aligns the wrapper signature with the updated provider interface while preserving all runtime behavior.

Also applies to: 133-133
backend/app/services/llm/providers/base.py (1)
10-10: LGTM! Abstract interface correctly updated for native configuration. The signature change to `NativeCompletionConfig` and the clarified docstring correctly reflect the pass-through behavior of native configurations. This aligns perfectly with the concrete implementation in `OpenAIProvider`.

Also applies to: 37-37, 46-46
* Enhancing swagger and redocs (#484) * rearranging endpoints for swagger and redocs * Claude: Next steps (#500) * claude updates * updated migration command * making it concise * add step to add typehint * Add Langfuse observability to Unified API (#457) * Add Langfuse observability to LLM execution methods * Enhance observability decorator to validate Langfuse credentials before execution * remove trace metadata * precommit * remove creds check * Unified API: Add support for Kaapi Abstracted LLM Call (#498) * Add Kaapi LLM parameters and completion config; implement transformation to native provider format * Refine LLM API documentation and improve code formatting for clarity; enhance configuration handling for OpenAI provider * add/fix tests * Fix validation logic in map_kaapi_to_openai_params to prevent simultaneous setting of 'temperature' and 'reasoning' parameters * Remove default value for 'model' in KaapiLLMParams to enforce explicit assignment * Refactor KaapiLLMParams to enforce explicit reasoning levels; update mapping logic to handle reasoning and temperature conflicts with warnings * Enhance LLM API documentation to clarify ad-hoc configuration parameters and warning handling for unsupported settings * Refactor execute_job to use completion_config directly instead of config_blob.completion * Refactor LLM provider interfaces to use NativeCompletionConfig instead of CompletionConfig * precommit * Evaluation: Uploading dataset concurrently (#461) * fix: add threadpool based concurrency to speeden up langfuse dataset upload * chore: fix precommit linting issues * fix: cleanup and deleted CELERY.md * chore: formatting --------- Co-authored-by: Akhilesh Negi <akhileshnegi.an3@gmail.com> * adding provider input (#502) * Documentation : repo enhancement (#496) * Documentation : repo MDs enhancement and adding enhancement template * Kaapi v1.0: Permissions Review and Authorization Cleanup (#501) * Refactor dependencies and enhance AuthContext for non-optional 
organization and project attributes * Refactor permission checks to require SUPERUSER role across multiple routes * fix session * Refactor routes to enhance AuthContext usage and enforce project permissions * Refactor dependency imports and remove unused parameters across multiple files * Refactor user model by removing UserOrganization and UserProjectOrg classes; update tests to use AuthContext for user-related operations * precommit * require project in llm call * fix: update project attribute reference in CRUD operations --------- Co-authored-by: Nishika Yadav <89646695+nishika26@users.noreply.github.com> * refactor: remove API key encryption and decryption functions from security module and tests (#507) API Key: remove API key encryption and decryption functions * added depends as import --------- Co-authored-by: Nishika Yadav <89646695+nishika26@users.noreply.github.com> Co-authored-by: Akhilesh Negi <akhileshnegi.an3@gmail.com> Co-authored-by: Aviraj Gour <100823015+avirajsingh7@users.noreply.github.com>
Summary
Introduce Kaapi LLM parameters and enhance the configuration handling for the OpenAI provider. This change improves the API documentation and ensures that the system can transform Kaapi configurations into the native provider format, addressing the need for better integration and clarity in configuration management.
Checklist
Before submitting a pull request, please ensure that you mark these tasks.
`fastapi run --reload app/main.py` or `docker compose up` in the repository root and test.

Notes
Please add here if any other information is required for the reviewer.
Summary by CodeRabbit
Documentation
New Features
Bug Fixes / Validation
Tests