
TTS/STT: Speech-To-Text Using Gemini in Unified API #550

Merged
AkhileshNegi merged 44 commits into main from feature/unified-api-stt-new
Feb 14, 2026
Conversation

Collaborator

@Prajna1999 Prajna1999 commented Jan 21, 2026

Summary

Target issue is #515 and #556
Extend config management and Unified API endpoints for audio (STT and ASR) use cases.

Checklist

Before submitting a pull request, please ensure you complete these tasks.

  • Run fastapi run --reload app/main.py or docker compose up in the repository root and test.
  • If you've fixed a bug or added code, ensure it is tested and has test cases.

Notes

  1. Configuration Changes

A revised configuration schema with a config version could look something like this. Based on config_blob.completion.type: "text" | "tts" | "stt", three different completion objects would be passed.
Modifications to the schemas are called out in the examples below.

// Text configuration
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "name": "OpenAI Test Configuration",
  "description": "Lorem ipsum dolor sit amet",
  "version": 1,
  "config_blob": {
    "completion": {
      "provider": "openai" | "google" | "anthropic",
      "type": "text",                          // new
      "params": {
        "model": "gpt-4o",
        "instructions": "You are a helpful assistant",
        "knowledge_base_ids": ["vs_123", "vs_456"],
        "reasoning": "low" | "medium" | "high",
        "temperature": 0.7,
        "max_num_results": 20,
        "provider_specific": {
          "openai": {},
          "google": {}
        }
      }
    },
    // for future extensions
    "classifier": {},
    "input_guardrails": {},
    "output_guardrails": {}
  }
}

// STT configuration
{
  "id": "550e8400-e29b-41d4-a716-446655440001",
  "name": "Gemini Configuration",
  "description": "Lorem ipsum dolor sit amet",
  "version": 2,
  "config_blob": {
    "completion": {
      "provider": "google",
      "type": "stt",                           // new
      "params": {
        "model": "gemini-2.5-pro",
        "instruction": "Transcribe the audio verbatim",
        "input_language": "hi",                // new
        "output_language": "en",               // new: translation and transcription in a single step
        "response_format": "text" | "json",    // new
        "temperature": 0.7,
        "provider_specific": {
          "openai": {},
          "google": {}
        }
      }
    },
    "classifier": {},
    "input_guardrails": {},
    "output_guardrails": {}
  }
}

// TTS configuration
{
  "id": "550e8400-e29b-41d4-a716-446655440002",
  "version": "1.0.0",
  "config_blob": {
    "completion": {
      "provider": "google",
      "type": "tts",
      "params": {
        "model": "gemini-2.5-pro-tts",
        "voice": "alloy",                      // new: supported provider accent
        "language": "en-US",                   // new
        "response_format": "mp3" | "wav" | "ogg", // new: ogg works better on Android devices
        "speed": 1.0,                          // new
        "provider_specific": {
          "openai": {},
          "gemini": {
            "director_notes": "Speak with a professional, calm tone. Pause for 1 second between sentences.",
            "response_modalities": ["AUDIO"]   // example metadata for tracing
          }
        }
      }
    },
    "classifier": {},
    "input_guardrails": {},
    "output_guardrails": {}
  }
}

  2. Extended the llm/call endpoint for the Google Gemini provider. It accepts a public link or a base64-encoded string in the input field for audio files.
  3. Added a combined llm_call table storing essential metadata for the LLM call request and response.
  4. Experimental automatic speech recognition using an auto flag in the STT endpoint.

Test cases for mappers have been suppressed because the usual behaviour created inconsistency between provider.type=google/openai and provider.type=google-native/openai-native.

Also, most of the changed files were auto-generated while fixing formatting; only a few contain real changes.
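For the extended llm/call endpoint, a request with base64 audio might be built like this. This is a sketch based on the PR description only: the exact field names inside query.input (type, data, format) are assumptions, not the confirmed request schema.

```python
# Hypothetical request body for the extended llm/call endpoint.
# Audio can be supplied as a public URL or a base64-encoded string;
# the inner field names here are illustrative assumptions.
import base64


def build_stt_payload(audio_path: str) -> dict:
    with open(audio_path, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "query": {
            "input": {"type": "audio", "data": audio_b64, "format": "wav"},
        },
        "config": {
            "blob": {
                "completion": {
                    "provider": "google",
                    "type": "stt",
                    "params": {"model": "gemini-2.5-pro"},
                }
            }
        },
    }
```

Base64 keeps the payload self-contained at the cost of roughly a 33% size overhead, which is why a public link is the better choice for large audio files.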

Summary by CodeRabbit

  • New Features

    • LLM call tracking system to record and retrieve interaction history
    • Google Gemini speech-to-text support
    • Partial configuration updates allowing selective field modifications
    • Enhanced multimodal input handling for text and audio
  • Improvements

    • Enforced configuration type immutability across versions
    • Enhanced provider response tracking with usage metrics
    • Improved configuration inheritance for version creation
    • Better error handling and validation for configurations

@Prajna1999 Prajna1999 self-assigned this Jan 21, 2026
@coderabbitai

coderabbitai bot commented Jan 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

This PR introduces LLM call tracking and partial config version updates. It adds a new CRUD layer for persisting LLM call records, implements Google Gemini provider support, refactors config parameters into explicit models (TextLLMParams, STTLLMParams, TTSLLMParams), and enables partial updates to configuration versions. The changes include input resolution for multimodal content, provider registry improvements, and database migration for the new llm_call table.
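The input resolution mentioned above (resolving text or base64 audio into something a provider can consume) can be sketched as follows. The function names echo the PR's input_resolver module, but the bodies are a simplified illustration, not the actual implementation:

```python
# Sketch: resolve a multimodal QueryInput. Text passes through unchanged;
# base64 audio is decoded to a temporary file whose path the provider can
# upload. Names mirror the PR's input_resolver but bodies are illustrative.
import base64
import os
import tempfile


def resolve_input(query_input: dict) -> str:
    if query_input["type"] == "text":
        return query_input["text"]
    if query_input["type"] == "audio":
        data = base64.b64decode(query_input["data"])
        ext = query_input.get("format", "wav")
        fd, path = tempfile.mkstemp(suffix=f".{ext}")
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        return path  # caller must clean this up in a finally block
    raise ValueError(f"Unsupported input type: {query_input['type']!r}")


def cleanup_temp_file(path: str) -> None:
    # Only remove paths that actually exist; text inputs never create files.
    if os.path.isfile(path):
        os.unlink(path)
```

Note the cleanup contract: only audio inputs create a temp file, which is exactly why the review below flags a finally block that calls cleanup on text inputs too.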

Changes

Cohort / File(s) Summary
Config Version Updates
backend/app/api/routes/config/version.py, backend/app/crud/config/version.py, backend/app/models/config/version.py
Introduced ConfigVersionUpdatePartial model for partial updates; added create_from_partial_or_raise method with deep-merge, validation of immutable fields (type), and type preservation across versions; extended ConfigVersionPublic with version, inserted_at, updated_at fields.
LLM CRUD Layer
backend/app/crud/llm.py
New CRUD module with functions: create_llm_call, update_llm_call_response, get_llm_call_by_id, get_llm_calls_by_job_id. Handles input serialization, config resolution, and persistence with comprehensive logging.
LLM Models - Request/Input
backend/app/models/llm/request.py
Replaced monolithic KaapiLLMParams with explicit TextLLMParams, STTLLMParams, TTSLLMParams; added multimodal inputs: TextContent, AudioContent, TextInput, AudioInput, discriminated QueryInput; introduced ConfigBlob, LlmCall persistence model; extended NativeCompletionConfig and KaapiCompletionConfig with type field; added ConversationConfig with validation.
LLM Models - Response/Output
backend/app/models/llm/response.py
Added reasoning_tokens to Usage; introduced TextOutput and AudioOutput output models; replaced the LLMOutput class with a discriminated union of TextOutput and AudioOutput.
LLM Models - Exports
backend/app/models/__init__.py, backend/app/models/llm/__init__.py, backend/app/models/config/__init__.py
Expanded public API to export ConfigVersionUpdatePartial, LlmCall, and multimodal content/output models (TextContent, AudioContent, TextOutput, AudioOutput).
Google AI Provider
backend/app/services/llm/providers/gai.py
New GoogleAIProvider class implementing STT support via Google Gemini API; includes client initialization, input validation, language instruction building, file upload, and error handling.
OpenAI Provider Refactor
backend/app/services/llm/providers/oai.py
Moved from openai.py to oai.py; added create_client static method; updated execute signature to accept resolved_input; switched output to TextOutput(TextContent) pattern.
Provider Base & Registry
backend/app/services/llm/providers/base.py, backend/app/services/llm/providers/registry.py, backend/app/services/llm/providers/__init__.py
Updated BaseProvider interface with create_client static method and resolved_input parameter; refactored registry with get_provider_class method and supported_providers lookup; unified provider instantiation; added GoogleAIProvider support.
Mapper Functions
backend/app/services/llm/mappers.py
Changed map_kaapi_to_openai_params to accept dict; added new map_kaapi_to_google_params for Google support; extended transform_kaapi_config_to_native to handle google provider with type field; added warnings for unsupported params (knowledge_base_ids, reasoning).
Input Resolution
backend/app/services/llm/input_resolver.py
New utility module for resolving QueryInput (text/audio) to usable content or file path; handles base64 audio decoding, temporary file management, and error propagation.
Job Execution Service
backend/app/services/llm/jobs.py
Enhanced execute_job to create and update LLM call records; integrated input resolution with cleanup; captures provider response, usage metrics; improved logging and observability for Langfuse integration.
Langfuse Integration
backend/app/core/langfuse/langfuse.py
Added extract_output_value helper to handle TextOutput/AudioOutput discriminated outputs for logging; updated observation calls to use extracted values.
Evaluation Service
backend/app/crud/evaluations/core.py, backend/app/services/evaluations/evaluation.py
Updated resolve_model_from_config to read model from dict; added validation/selection of param model (TextLLMParams, STTLLMParams, TTSLLMParams) based on config.completion.type.
Database Migration
backend/app/alembic/versions/045_add_llm_call_table.py
New schema: llm_call table with id, job_id, project_id, organization_id, input/output types, provider, model, content/usage JSONB, conversation tracking, config reference, timestamps, soft-delete; foreign keys to job/project/organization with CASCADE; partial indexes on conversation_id and job_id.
Tests - Config & Version
backend/app/tests/api/routes/configs/test_config.py, backend/app/tests/api/routes/configs/test_version.py, backend/app/tests/crud/config/test_config.py, backend/app/tests/crud/config/test_version.py
Updated test configs to include type field in NativeCompletionConfig; added comprehensive tests for partial version updates, type-change restrictions (text↔stt↔tts), and inheritance of immutable fields.
Tests - LLM CRUD
backend/app/tests/crud/test_llm.py
New test module covering create/retrieve/update LLM call CRUD operations; tests stored and ad-hoc config handling, conversation metadata, and error paths.
Tests - Providers
backend/app/tests/services/llm/providers/test_openai.py, backend/app/tests/services/llm/providers/test_gai.py, backend/app/tests/services/llm/providers/test_registry.py
Updated OpenAI tests for new execute signature with resolved_input; added comprehensive GoogleAIProvider STT tests with language detection, translation, custom instructions; updated registry import paths.
Tests - Services
backend/app/tests/services/llm/test_jobs.py, backend/app/tests/services/llm/test_mappers.py, backend/app/tests/services/llm/test_input_resolver.py
Updated jobs tests for new config structures and LLM call tracking; extended mapper tests with Google param mapping and warning validation; added input resolver tests for text/audio handling and cleanup.
Tests - Utilities
backend/app/tests/utils/test_data.py, backend/app/tests/api/routes/test_evaluation.py, backend/app/tests/services/llm/test_input_resolver.py, backend/app/tests/crud/test_credentials.py
Updated test data helpers to fetch prior versions and maintain type/provider consistency; replaced KaapiLLMParams with TextLLMParams in evaluation tests; added input resolver tests; minor formatting updates.
Dependencies
backend/pyproject.toml
Added runtime dependency: google-genai>=1.59.0.
Seed Data
backend/app/tests/seed_data/seed_data.py
Added clarifying comment on cascade deletion behavior (no functional change).
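The partial-update flow described for create_from_partial_or_raise (deep-merge with an immutable type field) can be sketched like this. This is a simplified illustration under assumed semantics, not the PR's actual CRUD code:

```python
# Sketch of partial config version updates: nested dicts merge recursively,
# scalars overwrite, and completion "type" is immutable across versions.
# Simplified illustration of create_from_partial_or_raise, not the real code.
from copy import deepcopy


def deep_merge(base: dict, patch: dict) -> dict:
    merged = deepcopy(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


def create_next_version(current_blob: dict, partial_blob: dict) -> dict:
    """Merge a partial blob onto the latest version, preserving type."""
    old_type = current_blob.get("completion", {}).get("type")
    new_type = partial_blob.get("completion", {}).get("type")
    if new_type is not None and new_type != old_type:
        raise ValueError(
            f"completion.type is immutable ({old_type!r} -> {new_type!r})"
        )
    return deep_merge(current_blob, partial_blob)
```

A deep merge (rather than a shallow dict update) is what lets a caller change just params.temperature without resending the whole completion object.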

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • Unified API: Add support for Kaapi Abstracted LLM Call #498: Modifies LLM config and provider conversion surface (models.llm.request, services/llm/mappers, provider registry), adding Kaapi→native mapping and native provider handling with overlapping scope.
  • Evaluation: Use Config Management #477: Updates evaluation config resolution and resolve_model_from_config workflows, sharing evaluation CRUD and config version handling patterns.
  • Evaluation #405: Adds evaluation feature surface (models, CRUD, batch processing, API routes), directly related evaluation infrastructure.

Suggested reviewers

  • kartpop

Poem

🐰 Oh, what a grand refactor of llm calls,
With Google, partial updates, and provider halls,
New models bloom—TextOutput, AudioInput true,
Type safety and deep merges, version tracking too!
From CRUD to mappers, the stack's renewed—
A rabbit's delight in code so shrewd! 🎉

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly summarizes the main change: adding Speech-To-Text using Google Gemini to a unified API, which aligns with the PR's core objective of extending LLM capabilities for STT/TTS.
Docstring Coverage ✅ Passed Docstring coverage is 82.95% which is sufficient. The required threshold is 80.00%.
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.


@Prajna1999 Prajna1999 changed the title from "STT: Gemini STT Integration" to "TTS/STT: speech-to-text using gemini in unified API" Jan 21, 2026
@Prajna1999 Prajna1999 moved this to In Progress in Kaapi-dev Jan 21, 2026
@Prajna1999 Prajna1999 added the enhancement New feature or request label Jan 21, 2026
@Prajna1999 Prajna1999 marked this pull request as ready for review January 26, 2026 18:25

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
backend/app/celery/beat.py (1)

12-21: Add return type and log prefix in start_beat.
Missing return annotation and required log prefix.

✅ Suggested fix
-def start_beat(loglevel: str = "info"):
+def start_beat(loglevel: str = "info") -> None:
@@
-    logger.info(f"Starting Celery beat scheduler")
+    logger.info("[start_beat] Starting Celery beat scheduler")
backend/app/tests/services/doctransformer/test_job/utils.py (1)

8-99: Add missing type hints for helper factories and callables.
Strict typing will flag these functions (and inner callables) without explicit annotations.

✅ Suggested fix
-from pathlib import Path
+from pathlib import Path
+from typing import Any, Callable, NoReturn
 from urllib.parse import urlparse
@@
-def create_failing_convert_document(fail_count: int = 1):
+def create_failing_convert_document(fail_count: int = 1) -> Callable[..., Path]:
@@
-    def failing_convert_document(*args, **kwargs):
+    def failing_convert_document(*args: Any, **kwargs: Any) -> Path:
@@
-def create_persistent_failing_convert_document(
-    error_message: str = "Persistent error",
-):
+def create_persistent_failing_convert_document(
+    error_message: str = "Persistent error",
+) -> Callable[..., NoReturn]:
@@
-    def persistent_failing_convert_document(*args, **kwargs):
+    def persistent_failing_convert_document(*args: Any, **kwargs: Any) -> NoReturn:
         raise Exception(error_message)
backend/app/celery/utils.py (1)

19-111: Add **kwargs type hints and prefix log messages with function names.
This aligns with strict typing and logging guidelines.

✅ Suggested fix
 def start_high_priority_job(
-    function_path: str, project_id: int, job_id: str, trace_id: str = "N/A", **kwargs
+    function_path: str,
+    project_id: int,
+    job_id: str,
+    trace_id: str = "N/A",
+    **kwargs: Any,
 ) -> str:
@@
-    logger.info(f"Started high priority job {job_id} with Celery task {task.id}")
+    logger.info(
+        f"[start_high_priority_job] Started high priority job {job_id} with Celery task {task.id}"
+    )
@@
 def start_low_priority_job(
-    function_path: str, project_id: int, job_id: str, trace_id: str = "N/A", **kwargs
+    function_path: str,
+    project_id: int,
+    job_id: str,
+    trace_id: str = "N/A",
+    **kwargs: Any,
 ) -> str:
@@
-    logger.info(f"Started low priority job {job_id} with Celery task {task.id}")
+    logger.info(
+        f"[start_low_priority_job] Started low priority job {job_id} with Celery task {task.id}"
+    )
@@
-        logger.info(f"Revoked task {task_id}")
+        logger.info(f"[revoke_task] Revoked task {task_id}")
         return True
     except Exception as e:
-        logger.error(f"Failed to revoke task {task_id}: {e}")
+        logger.error(f"[revoke_task] Failed to revoke task {task_id}: {e}")
         return False
backend/app/celery/worker.py (1)

14-41: Fix typing for concurrency/return and add log prefixes.
Ensures strict typing and consistent log formatting.

✅ Suggested fix
 def start_worker(
     queues: str = "default,high_priority,low_priority,cron",
-    concurrency: int = None,
+    concurrency: int | None = None,
     loglevel: str = "info",
-):
+) -> None:
@@
-    logger.info(f"Starting Celery worker with {concurrency} processes")
-    logger.info(f"Consuming queues: {queues}")
+    logger.info(f"[start_worker] Starting Celery worker with {concurrency} processes")
+    logger.info(f"[start_worker] Consuming queues: {queues}")
backend/app/services/llm/mappers.py (1)

7-76: Handle potential None model in litellm.supports_reasoning call.

On Line 39, if model is None (from kaapi_params.get("model")), the call litellm.supports_reasoning(model=f"openai/{model}") will pass "openai/None" which may cause unexpected behavior or errors from litellm.

🐛 Proposed fix
     model = kaapi_params.get("model")
     reasoning = kaapi_params.get("reasoning")
     temperature = kaapi_params.get("temperature")
     instructions = kaapi_params.get("instructions")
     knowledge_base_ids = kaapi_params.get("knowledge_base_ids")
     max_num_results = kaapi_params.get("max_num_results")

-    support_reasoning = litellm.supports_reasoning(model=f"openai/{model}")
+    support_reasoning = (
+        litellm.supports_reasoning(model=f"openai/{model}") if model else False
+    )
🤖 Fix all issues with AI agents
In `@backend/app/models/llm/request.py`:
- Around line 4-11: Remove the duplicate import of Field and SQLModel:
consolidate the two sqlmodel import lines into one (e.g., replace the two
occurrences of "from sqlmodel import Field, SQLModel" with a single line that
also includes Index and text if needed: "from sqlmodel import Field, SQLModel,
Index, text"), leaving other imports (pydantic, datetime, sqlalchemy, JSONB,
app.core.util) unchanged; ensure only one import statement provides Field and
SQLModel to fix the Ruff F811 duplicate-import error.
- Around line 313-479: The updated_at field on the LlmCall model currently uses
default_factory=now so it only sets at creation; make it auto-update on
modifications by adding an SQLAlchemy onupdate to the Column (e.g., sa_column or
sa_column_kwargs for updated_at with onupdate=now) or, if you prefer
application-level handling, ensure the update_llm_call_response CRUD function
(or any updater) sets updated_at = now() before committing. Update the
LlmCall.updated_at definition accordingly and/or modify update_llm_call_response
to assign now() on each update so updated_at reflects the last modification.

In `@backend/app/services/llm/input_resolver.py`:
- Around line 86-111: The resolve_audio_url function currently fetches arbitrary
URLs; before calling requests.get in resolve_audio_url, validate the input URL
by reusing validate_callback_url(url) (or at minimum enforce scheme == "https"
and use _is_private_ip to reject private/link-local IPs) and return an error
string on validation failure; also call requests.get with allow_redirects=False
to disable redirects and keep the existing timeout; keep existing temp file
write logic (references: resolve_audio_url, validate_callback_url,
_is_private_ip, get_file_extension).

In `@backend/app/services/llm/jobs.py`:
- Line 212: Replace the print call printing completion_config with a logger call
using the module logger (e.g., logger.debug or logger.info) instead of print;
log the same message text but prefixed with the current function name and
include the completion_config variable for context (in
backend/app/services/llm/jobs.py, at the location where completion_config is
printed), ensuring you use the existing module logger (or create one via
logging.getLogger(__name__)) and the appropriate log level rather than print.
- Around line 160-178: Remove the temporary debug block in execute_job that
performs an inline import of select and queries recent jobs when
session.get(Job, job_id) returns None; delete the extra session.exec(...) query,
the inline "from sqlmodel import select" import, and the verbose logger.error
that prints recent jobs, leaving only the initial logger.info that attempts to
fetch the job and the existing logger.error (or add a concise logger.error) for
the missing Job; ensure any needed diagnostics are moved to a dedicated utility
function (e.g., a new diagnostic helper) rather than inline in execute_job.
- Around line 299-302: The cleanup currently compares resolved_input (str) to
request.query.input (QueryInput) which is always true; change the finally block
to only call cleanup_temp_file(resolved_input) when the original input is an
audio-type input that created a temp file — e.g., check
isinstance(request.query.input, (AudioBase64Input, AudioUrlInput)) and
resolved_input is truthy before calling cleanup_temp_file; leave TextInput alone
so we don't attempt to treat text as a temp file.

In `@backend/app/services/llm/PLAN.md`:
- Around line 222-225: The example for the Field definition for conversation_id
has a missing comma after default=None causing a syntax error; update the
conversation_id Field invocation (the conversation_id variable and its use of
Field) to include the missing comma between default=None and
description="Identifier linking this response to its conversation thread" so the
Field call is properly separated and the snippet parses correctly.
- Around line 113-115: The log message in the example uses the wrong provider
name: update the logger.info call that currently says "[OpenAIProvider.execute]
Successfully generated response: {response.response_id}" to reference the
correct provider and method (e.g., "[GoogleAIProvider.execute] Successfully
generated response: {response.response_id}") so the log reflects
GoogleAIProvider.execute; locate the logger.info in the GoogleAIProvider.execute
example and change only the provider name in the message.

In `@backend/app/services/llm/providers/gai.py`:
- Around line 75-78: The lang_instruction assignment in the block that checks
input_language uses an unnecessary f-string prefix; update the two assignments
so they are plain strings (remove the leading 'f') for the branches where you
set lang_instruction (the conditional using input_language and the variable
lang_instruction).
- Around line 38-43: The _parse_input implementation only handles
completion_type "stt" and lacks type hints, causing implicit None returns;
update the method signature to include type hints (e.g., def _parse_input(self,
query_input: Any, completion_type: str, provider: str) -> str) and implement
explicit handling for non-"stt" completion types: validate and coerce/return a
string for other expected types (e.g., "chat" or "text"/"completion") or raise a
clear ValueError when the input shape is invalid; ensure every control path
returns a str and import any typing symbols used.
- Around line 32-36: OpenAIProvider.create_client currently returns an error
string when credentials are missing, causing a wrong type to be passed to the
constructor; change it to raise an exception instead to match
GoogleAIProvider.create_client's behavior (which raises ValueError). Update
OpenAIProvider.create_client to check for the required credential key (e.g.,
"api_key" or provider-specific name) and raise a ValueError with a clear message
when missing so the registry's exception handler receives an exception rather
than a string.
- Around line 55-125: The execute method in GoogleAIProvider only handles
completion_type == "stt" and falls through silently for other types; update
execute to explicitly handle unsupported completion types (e.g., "text" and
"tts") by returning a clear error (or implementing their flows) when
completion_type is not "stt". Locate the block using completion_type,
completion_config, and the STT flow where gemini_file/upload and
client.models.generate_content are used, and add an else branch (or early guard)
that returns (None, "<descriptive error>") or raises a descriptive exception
indicating unsupported completion_type so callers no longer get an implicit
(None, None).

In `@backend/app/services/llm/providers/oai.py`:
- Around line 32-36: The create_client staticmethod in OpenAIProvider
(create_client) currently returns an error string and uses an unnecessary
f-string; change it to mirror GoogleAIProvider.create_client by raising a
ValueError when "api_key" is missing (so callers receive an exception instead of
a string), and replace the f-string with a plain string literal; ensure the
method otherwise returns the OpenAI(...) client instance to keep the return type
consistent.

In `@backend/app/services/llm/providers/registry.py`:
- Around line 92-135: Remove the ad-hoc "__main__" test block from registry.py;
the block contains hardcoded paths and an outdated call signature. Extract the
logic that uses LLMProvider.get_provider_class, ProviderClass.create_client,
NativeCompletionConfig, QueryParams and instance.execute into a proper test
under backend/app/tests/services/llm/providers/test_gai.py (or delete it),
update the execute invocation to include the resolved_input parameter to match
the current signature, and ensure any credential/env handling is mocked rather
than reading real env vars or local file paths.
- Around line 66-70: There is a duplicated assignment of credential_provider
from provider_type using replace("-native", ""); remove the redundant line so
credential_provider is assigned only once (keep the first or the clearer
occurrence) and ensure any surrounding comments remain correct; locate the
duplicate by searching for the variable name credential_provider and the
expression provider_type.replace("-native", "") in registry.py and delete the
extra assignment.
- Around line 13-26: Remove the testing artifacts imported into the module:
delete the temporary import "from google.genai.types import
GenerateContentConfig", the block importing NativeCompletionConfig,
LLMCallResponse, QueryParams, LLMOutput, LLMResponse, Usage from app.models.llm
(if they are unused here), and the call to load_dotenv(); ensure any genuinely
required symbols for functions/classes in this file (e.g., registry-related
classes or functions) remain imported from their proper modules and move
environment loading to application startup code rather than leaving
load_dotenv() in this module.

In `@backend/app/tests/crud/test_llm.py`:
- Around line 269-271: The test passes an integer literal 99999 as project_id to
get_llm_call_by_id but project_id is a UUID; change the test to pass a UUID that
will not match the created LLM call (e.g., generate a new UUID via uuid.uuid4()
or use a different fixture UUID) instead of 99999 so the call uses the correct
type; update the assertion code around get_llm_call_by_id(db, created.id,
project_id=...) and ensure imports/fixtures provide a UUID value.

In `@backend/app/tests/services/llm/test_mappers.py`:
- Around line 1-317: The entire test suite in
backend/app/tests/services/llm/test_mappers.py is commented out; restore test
visibility by either re-enabling the tests or explicitly skipping them with a
reason and tracking link. Undo the block comment (or reintroduce the classes
TestMapKaapiToOpenAIParams and TestTransformKaapiConfigToNative) and run/fix
failing assertions against map_kaapi_to_openai_params and
transform_kaapi_config_to_native if they break due to the new type system, or if
you must temporarily disable, add pytest.mark.skip(reason="TODO: update tests
for new type system, see ISSUE-XXXXX") above each Test* class and add a TODO
comment referencing the issue; ensure the skip preserves the original test names
and imports so future fixes can target the exact failing assertions.

In `@backend/app/tests/utils/test_data.py`:
- Around line 339-373: latest_version may lack a "type" in its config_blob so
config_type can be None and later fail Literal validation; set a sensible
default (e.g., "text") when extracting it from completion_config and use that
default when constructing ConfigBlob instances for both NativeCompletionConfig
and KaapiCompletionConfig (update the assignment of config_type taken from
completion_config.get("type") to fallback to "text" and ensure
NativeCompletionConfig and KaapiCompletionConfig are always passed a non-None
type).
🧹 Nitpick comments (27)
backend/app/tests/scripts/test_backend_pre_start.py (1)

5-24: Add return type hint to the test function.

Per coding guidelines, all functions should have type hints for parameters and return values.

Suggested fix
-def test_init_success():
+def test_init_success() -> None:
backend/app/tests/scripts/test_test_pre_start.py (1)

5-24: Add return type hint to the test function.

Per coding guidelines, all functions should have type hints for parameters and return values.

Suggested fix
-def test_init_success():
+def test_init_success() -> None:
backend/app/cli/bench/commands.py (1)

169-175: Use Callable from typing instead of lowercase callable.

The callable on line 174 is a built-in function, not a type annotation. For proper static type checking, use Callable from the typing module with the appropriate signature.

Suggested fix
-from typing import List, Protocol
+from typing import Callable, List, Protocol
 def send_benchmark_request(
     prompt: str,
     i: int,
     total: int,
     endpoint: str,
-    build_payload: callable,
+    build_payload: Callable[[str], dict],
 ) -> BenchItem:

As per coding guidelines, type hints should be used throughout the codebase.

backend/app/tests/services/llm/providers/test_registry.py (1)

23-26: Consider adding test coverage for GoogleAIProvider in registry.

The registry tests only verify openai-native provider. With the addition of GoogleAIProvider, consider adding a test to verify google-native is also registered correctly in LLMProvider._registry.

backend/app/models/config/version.py (1)

99-116: Consider adding validation for empty config_blob.

ConfigVersionBase includes a validate_blob_not_empty validator (line 32-36), but ConfigVersionCreatePartial doesn't inherit from it or define its own validator. If passing an empty config_blob in a partial update is invalid, consider adding validation:

from pydantic import field_validator

`@field_validator`("config_blob")
def validate_blob_not_empty(cls, value):
    if not value:
        raise ValueError("config_blob cannot be empty")
    return value

If empty is intentionally allowed (e.g., the CRUD layer handles merging with existing config), this can be ignored.

backend/app/services/llm/__init__.py (1)

1-6: Consider consolidating imports from the same module.

Both import statements pull from app.services.llm.providers. They could be combined into a single import for cleaner organization:

♻️ Suggested refactor
-from app.services.llm.providers import BaseProvider, OpenAIProvider, GoogleAIProvider
-from app.services.llm.providers import (
-    LLMProvider,
-    get_llm_provider,
-)
+from app.services.llm.providers import (
+    BaseProvider,
+    GoogleAIProvider,
+    LLMProvider,
+    OpenAIProvider,
+    get_llm_provider,
+)
backend/app/tests/api/routes/test_llm.py (1)

145-167: Consider adding the type field to the invalid provider test payload.

The test_llm_call_invalid_provider test payload is missing the type field in the completion config. While this may still trigger a 422 due to the invalid provider, it could mask validation order issues. Consider adding "type": "text" to ensure the test specifically validates provider validation.

Suggested fix
     payload = {
         "query": {"input": "Test query"},
         "config": {
             "blob": {
                 "completion": {
                     "provider": "invalid-provider",
+                    "type": "text",
                     "params": {"model": "gpt-4"},
                 }
             }
         },
     }
backend/app/api/routes/config/version.py (4)

25-45: Add return type hint to create_version function.

Per coding guidelines, all function parameters and return values should have type hints. The function returns APIResponse[ConfigVersionPublic].

Suggested fix
 def create_version(
     config_id: UUID,
     version_create: ConfigVersionCreatePartial,
     current_user: AuthContextDep,
     session: SessionDep,
-):
+) -> APIResponse[ConfigVersionPublic]:

55-75: Add return type hint to list_versions function.

Per coding guidelines, all functions should have return type hints.

Suggested fix
 def list_versions(
     config_id: UUID,
     current_user: AuthContextDep,
     session: SessionDep,
     skip: int = Query(0, ge=0, description="Number of records to skip"),
     limit: int = Query(100, ge=1, le=100, description="Maximum records to return"),
-):
+) -> APIResponse[list[ConfigVersionItems]]:

85-102: Add return type hint to get_version function.

Per coding guidelines, all functions should have return type hints.

Suggested fix
 def get_version(
     config_id: UUID,
     current_user: AuthContextDep,
     session: SessionDep,
     version_number: int = Path(
         ..., ge=1, description="The version number of the config"
     ),
-):
+) -> APIResponse[ConfigVersionPublic]:

112-130: Add return type hint to delete_version function.

Per coding guidelines, all functions should have return type hints.

Suggested fix
 def delete_version(
     config_id: UUID,
     current_user: AuthContextDep,
     session: SessionDep,
     version_number: int = Path(
         ..., ge=1, description="The version number of the config"
     ),
-):
+) -> APIResponse[Message]:
backend/app/tests/crud/test_llm.py (2)

26-39: Consider using factory pattern for test fixtures.

Per coding guidelines, test fixtures in backend/app/tests/ should use the factory pattern. These fixtures query seed data directly rather than using factories. Consider creating factory functions for test projects and organizations if they don't already exist.
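As a rough sketch of what such a factory could look like (all names hypothetical, with a plain dataclass standing in for the real Project model): each call builds fresh, isolated data, so tests never share mutable seed rows.

```python
import uuid
from dataclasses import dataclass, field

# hypothetical stand-in for the real Project model
@dataclass
class Project:
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    name: str = "Test Project"

def create_test_project(**overrides) -> Project:
    """Factory: each call builds a fresh, isolated Project for one test."""
    return Project(**overrides)

p1 = create_test_project()
p2 = create_test_project(name="STT Project")
# p1 and p2 are independent objects: no shared seed data between tests
```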


42-46: Add return type hint to test_job fixture.

Per coding guidelines, all functions should have type hints.

Suggested fix
 @pytest.fixture
-def test_job(db: Session):
+def test_job(db: Session) -> Job:
     """Create a test job for LLM call tests."""
     crud = JobCrud(db)
     return crud.create(job_type=JobType.LLM_API, trace_id="test-llm-trace")

Note: You'll need to import Job from the appropriate models module.

backend/app/tests/services/llm/test_jobs.py (1)

19-21: Remove commented-out import instead of leaving it.

The KaapiLLMParams import is commented out. If it's no longer needed, it should be removed entirely rather than left as a comment.

Suggested fix
 from app.models.llm import (
     LLMCallRequest,
     NativeCompletionConfig,
     QueryParams,
     LLMCallResponse,
     LLMResponse,
     LLMOutput,
     Usage,
-    # KaapiLLMParams,
     KaapiCompletionConfig,
 )
backend/app/services/llm/input_resolver.py (1)

88-96: Consider adding a streaming option for large audio files.

The current implementation loads the entire response into memory with response.content. For large audio files, this could cause memory issues. Consider using stream=True and writing chunks to the temp file.

Suggested streaming implementation
 def resolve_audio_url(url: str) -> tuple[str, str | None]:
     """Fetch audio from URL and write to temp file. Returns (file_path, error)."""
     try:
-        response = requests.get(url, timeout=60)
+        response = requests.get(url, timeout=60, stream=True)
         response.raise_for_status()
     except requests.Timeout:
         return "", f"Timeout fetching audio from URL: {url}"
     except requests.HTTPError as e:
         return "", f"HTTP error fetching audio: {e.response.status_code}"
     except Exception as e:
         return "", f"Failed to fetch audio from URL: {str(e)}"

     content_type = response.headers.get("content-type", "audio/wav")
     ext = get_file_extension(content_type.split(";")[0].strip())

     try:
         with tempfile.NamedTemporaryFile(
             suffix=ext, delete=False, prefix="audio_"
         ) as tmp:
-            tmp.write(response.content)
+            for chunk in response.iter_content(chunk_size=8192):
+                if chunk:
+                    tmp.write(chunk)
             temp_path = tmp.name

         logger.info(f"[resolve_audio_url] Wrote audio to temp file: {temp_path}")
         return temp_path, None
     except Exception as e:
         return "", f"Failed to write fetched audio to temp file: {str(e)}"
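Independent of requests, the chunked-write pattern can be sketched with an in-memory stream standing in for response.iter_content (helper names hypothetical); memory use stays bounded by the chunk size rather than the file size.

```python
import io
import os
import tempfile

def write_chunks(chunks, suffix: str = ".wav") -> str:
    """Write an iterable of byte chunks to a temp file; return its path."""
    with tempfile.NamedTemporaryFile(
        suffix=suffix, delete=False, prefix="audio_"
    ) as tmp:
        for chunk in chunks:
            if chunk:  # skip keep-alive empty chunks, as iter_content may yield them
                tmp.write(chunk)
        return tmp.name

def iter_chunks(stream, size: int = 8192):
    # stand-in for response.iter_content(chunk_size=8192)
    while True:
        chunk = stream.read(size)
        if not chunk:
            return
        yield chunk

source = io.BytesIO(b"RIFF" + b"\x00" * 100)  # 104 fake audio bytes
path = write_chunks(iter_chunks(source))
size = os.path.getsize(path)
os.remove(path)
```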
backend/app/tests/api/routes/configs/test_version.py (3)

565-570: Consider adding error message validation for consistency.

The test_create_version_cannot_change_type_from_text_to_stt test validates the error message content, but this test only checks the status code. Adding similar assertions would improve test coverage consistency.

     response = client.post(
         f"{settings.API_V1_STR}/configs/{config.id}/versions",
         headers={"X-API-KEY": user_api_key.key},
         json=version_data,
     )
     assert response.status_code == 400
+    error_detail = response.json().get("error", "")
+    assert "cannot change config type" in error_detail.lower()

734-738: Unused db parameter flagged by static analysis.

The db parameter is not used in this test function. If it's required for test fixture setup/teardown, consider adding a comment explaining its purpose. Otherwise, it can be removed.

 def test_create_config_with_kaapi_provider_success(
-    db: Session,
     client: TestClient,
     user_api_key: TestAuthContext,
 ) -> None:

477-478: Consider moving repeated imports to module level.

KaapiCompletionConfig is imported locally in multiple test functions. Moving it to the module-level imports (alongside NativeCompletionConfig at line 14) would reduce duplication.

backend/app/tests/utils/test_data.py (1)

321-337: Move imports to module level.

select and and_ from sqlmodel, and ConfigVersion are imported inside the function, but ConfigVersion is already imported at line 19. Consider consolidating these imports at the module level.

+from sqlmodel import Session, select, and_
 # ... at module level
 
 def create_test_version(...):
     if config_blob is None:
-        # Fetch the latest version to maintain type consistency
-        from sqlmodel import select, and_
-        from app.models import ConfigVersion
-
         stmt = (
             select(ConfigVersion)
backend/app/services/llm/providers/registry.py (1)

1-2: Remove unnecessary imports for production code.

os and dotenv are only used in the __main__ testing block. If the testing code is removed (as suggested below), these imports should also be removed.

backend/app/services/llm/providers/gai.py (1)

1-2: Remove unused import.

os is imported but never used in this file.

 import logging
-import os
backend/app/crud/config/version.py (2)

170-190: Shallow copy may cause unintended mutation of nested structures.

base.copy() on Line 178 creates a shallow copy. If base contains nested dicts that are not in updates, those nested dicts will be shared references. While the recursive merge handles overlapping keys correctly, any external mutation of the returned result could affect the original base dict.

Consider using copy.deepcopy() if the base dict may be reused or if nested structures need isolation.

♻️ Suggested fix using deepcopy
+import copy
+
 def _deep_merge(
     self, base: dict[str, Any], updates: dict[str, Any]
 ) -> dict[str, Any]:
     """
     Deep merge two dictionaries.
     Values from 'updates' override values in 'base'.
     Nested dicts are merged recursively.
     """
-    result = base.copy()
+    result = copy.deepcopy(base)

     for key, value in updates.items():
         if (
             key in result
             and isinstance(result[key], dict)
             and isinstance(value, dict)
         ):
             result[key] = self._deep_merge(result[key], value)
         else:
             result[key] = value

     return result
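The aliasing risk can be demonstrated in isolation. This standalone sketch mirrors the merge logic above: with deepcopy, mutating the merged result leaves the original base untouched, whereas a shallow base.copy() would share the nested dicts.

```python
import copy
from typing import Any

def deep_merge(base: dict[str, Any], updates: dict[str, Any]) -> dict[str, Any]:
    """Recursive merge; deepcopy isolates nested dicts from the caller's base."""
    result = copy.deepcopy(base)
    for key, value in updates.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

base = {"completion": {"params": {"model": "gpt-4o", "temperature": 0.7}}}
merged = deep_merge(base, {"completion": {"params": {"temperature": 0.2}}})

# With a shallow base.copy(), this mutation would leak back into `base`;
# with deepcopy, `base` is unaffected.
merged["completion"]["params"]["model"] = "mutated"
```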

291-341: Consider reusing _get_latest_version to reduce code duplication.

The query logic in Lines 300-311 duplicates the query in _get_latest_version (Lines 157-168). This creates maintenance burden and potential for divergence.

♻️ Proposed refactor to reuse existing helper
 def _validate_config_type_unchanged(
     self, version_create: ConfigVersionCreate
 ) -> None:
     """
     Validate that the config type (text/stt/tts) in the new version matches
     the type from the latest existing version.
     Raises HTTPException if types don't match.
     """
-    # Get the latest version
-    stmt = (
-        select(ConfigVersion)
-        .where(
-            and_(
-                ConfigVersion.config_id == self.config_id,
-                ConfigVersion.deleted_at.is_(None),
-            )
-        )
-        .order_by(ConfigVersion.version.desc())
-        .limit(1)
-    )
-    latest_version = self.session.exec(stmt).first()
+    latest_version = self._get_latest_version()

     # If this is the first version, no validation needed
     if latest_version is None:
         return
backend/app/crud/llm.py (3)

80-105: Potential AttributeError when accessing params.model.

Lines 102-105 use hasattr to check for a model attribute, but assume completion_config.params exists and is accessible. If params is a dict (which it can be, given NativeCompletionConfig.params: dict[str, Any]), hasattr(completion_config.params, "model") returns False even when "model" is a key, so the code always falls through to .get("model", ""). Mixing attribute access with dict access creates confusion.

♻️ Suggested simplification for clarity
-    model = (
-        completion_config.params.model
-        if hasattr(completion_config.params, "model")
-        else completion_config.params.get("model", "")
-    )
+    # params is always a dict for both Native and Kaapi configs
+    model = completion_config.params.get("model", "")

99-100: Type ignore comment suggests potential type mismatch.

The # type: ignore[assignment] comment on Line 100 indicates the provider type doesn't match the expected Literal["openai", "google", "anthropic"]. The comment mentions provider is "guaranteed to be normalized" but this isn't enforced at the type level. Consider adding a runtime assertion or documenting the contract more explicitly.


200-229: LGTM!

The return type annotation on get_llm_calls_by_job_id is already present, satisfying the coding guidelines. Query functions are well-typed and properly filter out soft-deleted records.

backend/app/models/llm/request.py (1)

63-63: Consider using X | Y union syntax for consistency.

Static analysis (UP007) suggests using X | Y instead of Union[...] for type annotations to align with modern Python 3.11+ syntax. This is optional but improves consistency with other type hints in the codebase.

♻️ Proposed fix
-KaapiLLMParams = Union[TextLLMParams, STTLLMParams, TTSLLMParams]
+KaapiLLMParams = TextLLMParams | STTLLMParams | TTSLLMParams

 # Discriminated union for query input types
-QueryInput = Annotated[
-    Union[TextInput, AudioBase64Input, AudioUrlInput],
-    Field(discriminator="type"),
-]
+QueryInput = Annotated[
+    TextInput | AudioBase64Input | AudioUrlInput,
+    Field(discriminator="type"),
+]

Also applies to: 87-90

Comment on lines 4 to +11
from sqlmodel import Field, SQLModel
from pydantic import Discriminator, model_validator, HttpUrl
from datetime import datetime
from app.core.util import now

import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import JSONB
from sqlmodel import Field, SQLModel, Index, text

⚠️ Potential issue | 🟡 Minor

Remove duplicate imports flagged by static analysis.

Lines 4 and 11 both import Field and SQLModel from sqlmodel. This is flagged by Ruff (F811).

🐛 Proposed fix
 from uuid import UUID, uuid4
-from sqlmodel import Field, SQLModel
 from pydantic import Discriminator, model_validator, HttpUrl
 from datetime import datetime
 from app.core.util import now

 import sqlalchemy as sa
 from sqlalchemy.dialects.postgresql import JSONB
 from sqlmodel import Field, SQLModel, Index, text
🧰 Tools
🪛 Ruff (0.14.14)

11-11: Redefinition of unused Field from line 4: Field redefined here

Remove definition: Field

(F811)


11-11: Redefinition of unused SQLModel from line 4: SQLModel redefined here

Remove definition: SQLModel

(F811)

🤖 Prompt for AI Agents
In `@backend/app/models/llm/request.py` around lines 4 - 11, Remove the duplicate
import of Field and SQLModel: consolidate the two sqlmodel import lines into one
(e.g., replace the two occurrences of "from sqlmodel import Field, SQLModel"
with a single line that also includes Index and text if needed: "from sqlmodel
import Field, SQLModel, Index, text"), leaving other imports (pydantic,
datetime, sqlalchemy, JSONB, app.core.util) unchanged; ensure only one import
statement provides Field and SQLModel to fix the Ruff F811 duplicate-import
error.

Comment on lines +160 to +178

# Debug: Try to fetch the job first
logger.info(f"[execute_job] Attempting to fetch job | job_id={job_id}")
job = session.get(Job, job_id)
if not job:
# Log all jobs to see what's in the database
from sqlmodel import select

all_jobs = session.exec(
select(Job).order_by(Job.created_at.desc()).limit(5)
).all()
logger.error(
f"[execute_job] Job not found! | job_id={job_id} | "
f"Recent jobs in DB: {[(j.id, j.status) for j in all_jobs]}"
)
else:
logger.info(
f"[execute_job] Found job | job_id={job_id}, status={job.status}"
)

⚠️ Potential issue | 🟡 Minor

Remove debug code before merging.

This block appears to be debugging code that queries all recent jobs when the expected job isn't found. It adds unnecessary database queries in production and includes an inline import. Consider removing this or converting it to a proper diagnostic utility if this scenario needs monitoring.

🔧 Suggested removal
             job_crud = JobCrud(session=session)
-
-            # Debug: Try to fetch the job first
-            logger.info(f"[execute_job] Attempting to fetch job | job_id={job_id}")
-            job = session.get(Job, job_id)
-            if not job:
-                # Log all jobs to see what's in the database
-                from sqlmodel import select
-
-                all_jobs = session.exec(
-                    select(Job).order_by(Job.created_at.desc()).limit(5)
-                ).all()
-                logger.error(
-                    f"[execute_job] Job not found! | job_id={job_id} | "
-                    f"Recent jobs in DB: {[(j.id, j.status) for j in all_jobs]}"
-                )
-            else:
-                logger.info(
-                    f"[execute_job] Found job | job_id={job_id}, status={job.status}"
-                )
-
             job_crud.update(
🤖 Prompt for AI Agents
In `@backend/app/services/llm/jobs.py` around lines 160 - 178, Remove the
temporary debug block in execute_job that performs an inline import of select
and queries recent jobs when session.get(Job, job_id) returns None; delete the
extra session.exec(...) query, the inline "from sqlmodel import select" import,
and the verbose logger.error that prints recent jobs, leaving only the initial
logger.info that attempts to fetch the job and the existing logger.error (or add
a concise logger.error) for the missing Job; ensure any needed diagnostics are
moved to a dedicated utility function (e.g., a new diagnostic helper) rather
than inline in execute_job.

completion_config, warnings = transform_kaapi_config_to_native(
completion_config
)
print(f"The completion_config transformed is {completion_config}")

⚠️ Potential issue | 🟡 Minor

Replace print with logger call.

Debug output should use the logger for consistency and proper log level control. As per coding guidelines, log messages should be prefixed with the function name.

-                    print(f"The completion_config transformed is {completion_config}")
+                    logger.debug(f"[execute_job] Transformed completion_config: {completion_config}")
🤖 Prompt for AI Agents
In `@backend/app/services/llm/jobs.py` at line 212, Replace the print call
printing completion_config with a logger call using the module logger (e.g.,
logger.debug or logger.info) instead of print; log the same message text but
prefixed with the current function name and include the completion_config
variable for context (in backend/app/services/llm/jobs.py, at the location where
completion_config is printed), ensuring you use the existing module logger (or
create one via logging.getLogger(__name__)) and the appropriate log level rather
than print.

Comment on lines +66 to 70
# e.g "openai-native" -> "openai", "claude-native" -> "claude"
credential_provider = provider_type.replace("-native", "")

# e.g., "openai-native" → "openai", "claude-native" → "claude"
credential_provider = provider_type.replace("-native", "")

⚠️ Potential issue | 🟡 Minor

Remove duplicate variable assignment.

credential_provider is assigned twice with identical logic. Remove the duplicate.

     provider_class = LLMProvider.get_provider_class(provider_type)

     # e.g "openai-native" -> "openai", "claude-native" -> "claude"
     credential_provider = provider_type.replace("-native", "")

-    # e.g., "openai-native" → "openai", "claude-native" → "claude"
-    credential_provider = provider_type.replace("-native", "")
-
     credentials = get_provider_credential(
🤖 Prompt for AI Agents
In `@backend/app/services/llm/providers/registry.py` around lines 66 - 70, There
is a duplicated assignment of credential_provider from provider_type using
replace("-native", ""); remove the redundant line so credential_provider is
assigned only once (keep the first or the clearer occurrence) and ensure any
surrounding comments remain correct; locate the duplicate by searching for the
variable name credential_provider and the expression
provider_type.replace("-native", "") in registry.py and delete the extra
assignment.

Comment on lines +92 to +135
# ad hoc testing code
if __name__ == "__main__":
# 1. Simulate environment/credentials
GEMINI_KEY = os.getenv("GEMINI_API_KEY")
if not GEMINI_KEY:
print("Set GEMINI_API_KEY environment variable first.")
exit(1)

# This dictionary mimics what get_provider_credential would return from the DB
mock_credentials = {"api_key": GEMINI_KEY}

# 2. Idiomatic Initialization via Registry
provider_type = "google-native"
# provider_type=LLMProvider.get_provider_class(provider_type="GOOGLE-NATIVE")

print(f"Initializing provider: {provider_type}...")

# This block mimics the core logic of your get_llm_provider function
ProviderClass = LLMProvider.get_provider_class(provider_type)
client = ProviderClass.create_client(credentials=mock_credentials)
instance = ProviderClass(client=client)

# 3. Setup Config and Query
test_config = NativeCompletionConfig(
provider="google-native",
type="stt",
params={
"model": "gemini-2.5-pro",
"instructions": "Please transcribe this audio accurately.",
},
)

test_query = QueryParams(
input="/Users/prajna/Desktop/personal/projects/software/Syspin_Hackathon_api_server/wav_files/1253534463206645.wav" # Ensure this file exists in your directory
)

# 4. Execution
print("Executing STT...")
result, error = instance.execute(completion_config=test_config, query=test_query)

return provider_class(client=client)
if error:
print(f"Error: {error}")
else:
print(f"Result: {result}")

⚠️ Potential issue | 🟡 Minor

Remove ad hoc testing code from production module.

This __main__ block contains:

  1. Hardcoded local file paths (line 125) that won't work for other developers
  2. Outdated execute() call signature missing the resolved_input parameter (line 130)
  3. Testing logic that belongs in a dedicated test file

Consider moving this to backend/app/tests/services/llm/providers/test_gai.py or removing it entirely.

🤖 Prompt for AI Agents
In `@backend/app/services/llm/providers/registry.py` around lines 92 - 135, Remove
the ad-hoc "__main__" test block from registry.py; the block contains hardcoded
paths and an outdated call signature. Extract the logic that uses
LLMProvider.get_provider_class, ProviderClass.create_client,
NativeCompletionConfig, QueryParams and instance.execute into a proper test
under backend/app/tests/services/llm/providers/test_gai.py (or delete it),
update the execute invocation to include the resolved_input parameter to match
the current signature, and ensure any credential/env handling is mocked rather
than reading real env vars or local file paths.

Comment on lines +269 to +271
# Should not find with wrong project
fetched_wrong = get_llm_call_by_id(db, created.id, project_id=99999)
assert fetched_wrong is None

⚠️ Potential issue | 🟡 Minor

Type mismatch: project_id should be UUID, not int.

The test uses project_id=99999 (an integer), but based on other usages in the file (e.g., test_project.id), project_id appears to be a UUID type. This could cause a type error or unexpected behavior.

Suggested fix
     # Should not find with wrong project
-    fetched_wrong = get_llm_call_by_id(db, created.id, project_id=99999)
+    fetched_wrong = get_llm_call_by_id(db, created.id, project_id=uuid4())
     assert fetched_wrong is None
🤖 Prompt for AI Agents
In `@backend/app/tests/crud/test_llm.py` around lines 269 - 271, The test passes
an integer literal 99999 as project_id to get_llm_call_by_id but project_id is a
UUID; change the test to pass a UUID that will not match the created LLM call
(e.g., generate a new UUID via uuid.uuid4() or use a different fixture UUID)
instead of 99999 so the call uses the correct type; update the assertion code
around get_llm_call_by_id(db, created.id, project_id=...) and ensure
imports/fixtures provide a UUID value.

Comment on lines +339 to +373
if latest_version:
# Extract the type and provider from the latest version
completion_config = latest_version.config_blob.get("completion", {})
config_type = completion_config.get("type")
provider = completion_config.get("provider", "openai-native")

# Create a new config_blob maintaining the same type and provider
if provider in ["openai-native", "google-native"]:
config_blob = ConfigBlob(
completion=NativeCompletionConfig(
provider=provider,
type=config_type,
params={
"model": completion_config.get("params", {}).get(
"model", "gpt-4"
),
"temperature": 0.8,
"max_tokens": 1500,
},
)
)
else:
# For Kaapi providers (openai, google)
config_blob = ConfigBlob(
completion=KaapiCompletionConfig(
provider=provider,
type=config_type,
params={
"model": completion_config.get("params", {}).get(
"model", "gpt-4"
),
"temperature": 0.8,
},
)
)

⚠️ Potential issue | 🟡 Minor

Missing default for config_type could cause validation failure.

If latest_version.config_blob doesn't contain a type field (e.g., legacy data), config_type will be None, which would fail the Literal["text", "stt", "tts"] validation when passed to NativeCompletionConfig or KaapiCompletionConfig.

🔧 Suggested fix
         if latest_version:
             # Extract the type and provider from the latest version
             completion_config = latest_version.config_blob.get("completion", {})
-            config_type = completion_config.get("type")
+            config_type = completion_config.get("type", "text")  # Default to "text" for legacy data
             provider = completion_config.get("provider", "openai-native")
🤖 Prompt for AI Agents
In `@backend/app/tests/utils/test_data.py` around lines 339 - 373, latest_version
may lack a "type" in its config_blob so config_type can be None and later fail
Literal validation; set a sensible default (e.g., "text") when extracting it
from completion_config and use that default when constructing ConfigBlob
instances for both NativeCompletionConfig and KaapiCompletionConfig (update the
assignment of config_type taken from completion_config.get("type") to fallback
to "text" and ensure NativeCompletionConfig and KaapiCompletionConfig are always
passed a non-None type).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@backend/app/models/llm/request.py`:
- Around line 395-402: The LlmCall.provider Literal includes "anthropic" but
there is no anthropic provider implementation in the provider registry (only
"openai" and "google"), so either remove "anthropic" from the Literal on the
provider Field in the LlmCall model or prevent invalid values from being
persisted: update the provider Field definition in LlmCall.provider to remove
"anthropic" (so Literal["openai","google"]) OR add an explicit Pydantic
validator on LlmCall.provider that checks the value against the actual provider
registry (the same registry lookup used at runtime) and raises a ValidationError
if "anthropic" (or any unsupported provider) is supplied; if you plan to
implement anthropic, instead implement and register the anthropic provider in
the provider registry so lookups succeed.

In `@backend/app/tests/seed_data/seed_data.py`:
- Around line 21-22: Remove the unused imports Config and ConfigVersion from
the top of this module to clean up imports. Locate the import list in
seed_data.py where Config and ConfigVersion are imported alongside the other
models and delete those two symbols (or defer importing them until you add
create_config/create_config_version seed functions), leaving functions like
clear_database and the existing seed_... helpers unchanged.
🧹 Nitpick comments (1)
backend/app/services/llm/providers/oai.py (1)

7-7: Consider grouping typing imports with standard library imports.

The typing import should be placed before third-party imports (like openai) per PEP 8 import ordering conventions.

+from typing import Any
+
 import logging
 
 import openai
 from openai import OpenAI
 from openai.types.responses.response import Response
 
-from typing import Any
 from app.models.llm import (


Revision ID: 042
Revises: 041
Revision ID: 043
Collaborator

Still changing the existing migration

Collaborator Author

Thanks for pointing that out; I missed updating the docstring. But the Alembic revision ID correctly updates to 42.

ConfigVersion,
ConfigVersionBase,
ConfigVersionCreate,
ConfigVersionUpdatePartial,
Collaborator

Is there a ConfigVersionUpdate also? How is it different than ConfigVersionUpdatePartial?

Collaborator Author

@Prajna1999 Prajna1999 Feb 5, 2026

There is no ConfigVersionUpdate model. We use the -Partial suffix because the user cannot change the config type (text → stt → tts), so every config version update is technically a partial update; i.e., you can update every parameter except the type field.

from app.crud.config import ConfigCrud, ConfigVersionCrud
from app.models import (
ConfigVersionCreate,
ConfigVersionUpdatePartial,
Collaborator

Didn't get why we use partial in the name.

Collaborator Author

All updates are partial updates since the type field is immutable.
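As a standalone sketch of that rule (dict-based here for illustration; the real models are Pydantic):

```python
from typing import Any


def apply_partial_update(
    current: dict[str, Any], patch: dict[str, Any]
) -> dict[str, Any]:
    # Every field may change except "type": a config cannot move
    # between text, stt, and tts after creation.
    if "type" in patch and patch["type"] != current.get("type"):
        raise ValueError("config type is immutable")
    return {**current, **patch}


cfg = {"type": "text", "params": {"model": "gpt-4o", "temperature": 0.7}}
updated = apply_partial_update(cfg, {"params": {"model": "gpt-4o-mini"}})
print(updated["params"]["model"])  # gpt-4o-mini
```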

description="Can take multiple response_format like text, json, verbose_json.",
)
temperature: float | None = Field(
default=0.2,
Collaborator

Feels like you are using some other formatter.

Comment on lines +50 to +51
ge=0.0,
le=2.0,
Collaborator

should this be 2 or 1?

Collaborator Author

Both OpenAI and Gemini models support values from 0 to 2.

if output_language and output_language != input_language:
lang_instruction += f" and translate to {output_language} in the native script of {output_language}"

forced_trascription_text = "Only return transcribed text and no other text."
Collaborator

Typo in forced_trascription_text (should be forced_transcription_text).

llm_response.provider_raw_response = response.model_dump()

logger.info(
f"[GoogleAIProvider._execute_stt] Successfully generated STT response: {response.response_id}"
Collaborator

Are we calling it GoogleAI or Gemini? We should have a consistent name.

Collaborator Author

It's GoogleAIProvider since they also have non-Gemini models like Chirp, Veo, etc. as part of the stack.

from openai import OpenAI
from openai.types.responses.response import Response

from typing import Any
Collaborator

Any reason we are changing names from open_ai to oai? And what's with gai?

Collaborator Author

There is a circular dependency issue: the provider SDKs are named google, openai, sarvam, etc., so if our module is also named openai.py, importing the openai library from it resolves back to our own module and throws an error. Hence the short form.
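The shadowing problem can be reproduced in isolation; below, a throwaway directory plays the role of the providers package (paths and names are illustrative):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Create a directory containing a module named "openai.py", mimicking a
# provider module that shares its name with the SDK.
tmp = Path(tempfile.mkdtemp())
(tmp / "openai.py").write_text("SHADOWED = True\n")

sys.path.insert(0, str(tmp))
sys.modules.pop("openai", None)  # drop any previously imported SDK module
openai = importlib.import_module("openai")

# The local file wins the import: any "import openai" executed inside it
# would resolve to itself instead of the real library.
print(getattr(openai, "SHADOWED", False))  # True
```

Renaming the module (oai.py, gai.py) sidesteps the collision without touching sys.path.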

# Update job status to PROCESSING
job_crud = JobCrud(session=session)

# Debug: Try to fetch the job first
Collaborator

we can remove these comments

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
backend/app/tests/services/llm/test_jobs.py (1)

11-20: 🛠️ Refactor suggestion | 🟠 Major

Add return type hints to fixtures and refactor to use factory pattern.

These fixtures lack return type annotations and use inline object construction instead of factories. Move fixture creation logic to factory classes and add explicit return types to comply with coding guidelines:

  • **/*.py: Always add type hints to all function parameters and return values
  • backend/app/tests/**/*.py: Use factory pattern for test fixtures

Affected fixtures: llm_call_request (line 37), job_for_execution (line 218), mock_llm_response (line 243), job_env (line 257).

backend/app/tests/services/llm/providers/test_openai.py (1)

16-40: ⚠️ Potential issue | 🟡 Minor

Add type hints to fixture methods to align with codebase conventions.

Fixture methods in this test class lack return type hints and parameter type hints. The established pattern in backend/app/tests/conftest.py shows typed fixtures (e.g., def db() -> Generator[Session, None, None]:). Update mock_client(), provider(), completion_config(), and query_params() to include proper return type annotations per the **/*.py guideline: Always add type hints to all function parameters and return values in Python code.

backend/app/services/llm/providers/oai.py (1)

39-82: ⚠️ Potential issue | 🟡 Minor

Add type validation to ensure only text responses are processed.

The execute method always wraps output as TextOutput without validating that completion_config.type is compatible with the Responses API. Although the Responses API does not support audio/TTS (it only returns text responses), the lack of validation creates a brittle contract. If a caller passes type="tts" with Responses API parameters, the code will fail silently or produce incorrect behavior.

Consider adding an explicit check at the start of execute to reject non-text completion types, similar to how the GoogleAI provider validates and rejects unsupported types (e.g., rejecting completion_type != "stt" for that provider).
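A sketch of the suggested guard (the function name and message are hypothetical; the real check would sit at the top of execute):

```python
def ensure_text_completion(completion_type: str) -> None:
    # The Responses API only returns text, so reject "stt"/"tts" up front
    # instead of silently wrapping non-text requests as TextOutput.
    if completion_type != "text":
        raise ValueError(
            f"OpenAI Responses provider only supports type='text', "
            f"got {completion_type!r}"
        )


ensure_text_completion("text")  # accepted
try:
    ensure_text_completion("tts")
except ValueError:
    print("rejected")  # rejected
```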

🤖 Fix all issues with AI agents
In `@backend/app/models/llm/request.py`:
- Around line 180-203: The validate_params model validator lacks a return type
annotation. Update the signature of validate_params (the
@model_validator(mode="after") method) to declare that it returns Self (i.e.,
def validate_params(self) -> Self), and add the matching import for Self (from
typing or typing_extensions, consistent with the project) so this Pydantic v2
validator uses the Self return type like the other validators in the codebase.

Comment on lines +180 to +203
provider: Literal["openai", "google"] = Field(
..., description="LLM provider (openai)"
)

type: Literal["text", "stt", "tts"] = Field(
..., description="Completion config type. Params schema varies by type"
)
params: dict[str, Any] = Field(
...,
description="Kaapi-standardized parameters mapped to provider-specific API",
)

# validate all these 3 config types
@model_validator(mode="after")
def validate_params(self):
param_models = {
"text": TextLLMParams,
"stt": STTLLMParams,
"tts": TTSLLMParams,
}
model_class = param_models[self.type]
validated = model_class.model_validate(self.params)
self.params = validated.model_dump(exclude_none=True)
return self

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the file
find . -name "request.py" -path "*/models/llm/*" | head -20

Repository: ProjectTech4DevAI/kaapi-backend

Length of output: 110


🏁 Script executed:

# Check the structure of the file to understand the class definition
head -n 50 backend/app/models/llm/request.py

Repository: ProjectTech4DevAI/kaapi-backend

Length of output: 1460


🏁 Script executed:

# Get the specific section mentioned in the review
sed -n '1,250p' backend/app/models/llm/request.py | head -n 250

Repository: ProjectTech4DevAI/kaapi-backend

Length of output: 7638


🏁 Script executed:

# Check for other `@model_validator` uses in the codebase to see the pattern
rg "@model_validator" -A 5 --type py

Repository: ProjectTech4DevAI/kaapi-backend

Length of output: 3243


🏁 Script executed:

# Check if Self is imported anywhere in the codebase
rg "from typing import.*Self" --type py
rg "from typing_extensions import.*Self" --type py

Repository: ProjectTech4DevAI/kaapi-backend

Length of output: 136


🌐 Web query:

Pydantic v2 @model_validator return type annotation best practice Python 3.11

💡 Result:

For Pydantic v2 model validators, the best-practice return type annotation depends on the validator mode:

mode="after" (instance method)

  • Must return self.
  • Annotate the return type as Self (Python 3.11+: from typing import Self). Pydantic explicitly recommends Self (or the backport) for type checking. [1]
  • Example:
from typing import Self
from pydantic import BaseModel, model_validator

class User(BaseModel):
    a: int
    b: int

    @model_validator(mode="after")
    def check(self) -> Self:
        if self.a > self.b:
            raise ValueError("a must be <= b")
        return self

If you forget return self (or return something else), Pydantic warns this can cause unexpected behavior. [1]

mode="before" (classmethod)

  • Takes raw input (often dict[str, Any], but not always), so annotate the input as Any and typically return Any. [1]
  • Example:
from typing import Any
from pydantic import BaseModel, model_validator

class M(BaseModel):
    x: int

    @model_validator(mode="before")
    @classmethod
    def normalize(cls, data: Any) -> Any:
        # mutate/normalize raw input
        return data

If you don’t want Self

Pydantic notes you can also annotate explicitly with the model type (often as a forward reference), e.g. -> 'User', but Self is the recommended type-checking-friendly option. [1]

Sources: Pydantic “Model validators” docs (return self, recommend Self, and behavior when not returning self). [1]


Add a return type annotation to validate_params.

For Pydantic v2 model validators with mode="after", use Self instead of the concrete type name. This is the recommended approach per Pydantic documentation and is already used elsewhere in the codebase.

🔧 Suggested fix
+from typing_extensions import Self
+
     @model_validator(mode="after")
-    def validate_params(self):
+    def validate_params(self) -> Self:

As per coding guidelines: **/*.py: Always add type hints to all function parameters and return values in Python code.

🤖 Prompt for AI Agents
In `backend/app/models/llm/request.py` around lines 180-203: the
validate_params model validator lacks a return type annotation. Update the
signature of validate_params (the @model_validator(mode="after") method) to
declare that it returns Self (i.e., def validate_params(self) -> Self), and
add the matching import for Self (from typing or typing_extensions, consistent
with the project) so this Pydantic v2 validator uses the Self return type like
the other validators in the codebase.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
backend/app/tests/api/routes/test_llm.py (1)

191-207: ⚠️ Potential issue | 🔴 Critical

Missing required type field in NativeCompletionConfig — test will fail with ValidationError.

NativeCompletionConfig.type is required (Literal["text", "stt", "tts"] with no default). The other tests in this file were updated to include type="text", but this fixture and the one in test_llm_call_guardrails_bypassed_still_succeeds (Line 250) were missed.

Proposed fix
                     completion=NativeCompletionConfig(
                         provider="openai-native",
+                        type="text",
                         params={
                             "model": "gpt-4o",
                             "temperature": 0.7,
                         },
                     )

Apply the same fix at Line 250–256 for test_llm_call_guardrails_bypassed_still_succeeds.

backend/app/tests/services/llm/test_jobs.py (2)

243-255: ⚠️ Potential issue | 🟠 Major

mock_llm_response now uses TextOutput but guardrails code in jobs.py still accesses .output.text.

The fixture correctly updated to TextOutput(content=TextContent(value="Test response")), but this creates an incompatibility:

  1. Production code (jobs.py Line 348) accesses response.response.output.text, but TextOutput has no .text field; the text lives at .content.value.
  2. Test assertions (e.g., Line 820): checks result["data"]["response"]["output"]["text"] — serialized TextOutput produces {"type": "text", "content": {...}}, not {"text": "..."}.

The output guardrails path in jobs.py needs to be updated to use the new TextOutput structure.

#!/bin/bash
# Verify how TextOutput is accessed in jobs.py guardrails section
rg -n "output\.text" backend/app/services/llm/jobs.py
echo "---"
# Check TextOutput definition
ast-grep --pattern 'class TextOutput($_) { $$$ }'

762-782: ⚠️ Potential issue | 🔴 Critical

Guardrails test configs missing required type field — will fail validation.

Multiple guardrails tests construct NativeCompletionConfig via raw dicts without the now-required type field (e.g., Lines 766–769, 806–811, 845–849, 873–877, 907–911). Since NativeCompletionConfig.type is Literal["text", "stt", "tts"] with no default, LLMCallRequest(**request_data) will raise ValidationError.

Add "type": "text" to each completion config dict in these test cases.

🧹 Nitpick comments (2)
backend/app/services/llm/jobs.py (1)

255-256: Remove empty else: pass block.

This is dead code that adds no value.

Proposed fix
                 if isinstance(completion_config, KaapiCompletionConfig):
                     completion_config, warnings = transform_kaapi_config_to_native(
                         completion_config
                     )
-
                     if request.request_metadata is None:
                         request.request_metadata = {}
                     request.request_metadata.setdefault("warnings", []).extend(warnings)
-                else:
-                    pass
backend/app/tests/services/llm/test_jobs.py (1)

18-22: Commented-out import KaapiLLMParams should be removed.

     TextOutput,
     TextContent,
-    # KaapiLLMParams,
     KaapiCompletionConfig,
 )

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/app/tests/api/routes/test_llm.py (1)

145-167: ⚠️ Potential issue | 🟡 Minor

Missing type field in invalid-provider test payload — verify this still triggers a 422.

This payload omits the now-mandatory type field. It still expects a 422, which will happen, but the validation error will be about the missing type (or discriminator failure) rather than the invalid provider. If the intent is to test provider validation specifically, add "type": "text" so the error is about the provider.

🔧 Suggested fix
     payload = {
         "query": {"input": "Test query"},
         "config": {
             "blob": {
                 "completion": {
                     "provider": "invalid-provider",
+                    "type": "text",
                     "params": {"model": "gpt-4"},
                 }
             }
         },
     }
backend/app/services/llm/jobs.py (1)

375-378: ⚠️ Potential issue | 🔴 Critical

Bug: response.response.output.text will raise AttributeError — should be .content.value.

TextOutput has no .text attribute (it has .content.value as used correctly on lines 350 and 363). This code path (output guardrail failure, non-bypass, non-rephrase) will crash at runtime.

🐛 Proposed fix
                 else:
-                    response.response.output.text = safe_output["error"]
+                    response.response.output.content.value = safe_output["error"]
🧹 Nitpick comments (4)
backend/app/tests/services/llm/test_jobs.py (1)

18-22: Remove commented-out import.

The # KaapiLLMParams comment on line 20 is dead code. Remove it to keep imports clean.

🧹 Suggested cleanup
     TextOutput,
     TextContent,
-    # KaapiLLMParams,
     KaapiCompletionConfig,
 )
backend/app/services/llm/jobs.py (3)

144-145: Variable job_id shadows the parameter with a different type.

Line 134 declares job_id: str but line 145 re-annotates and reassigns it as job_id: UUID. While functional, this is confusing. Consider using a distinct name like job_uuid or just drop the redundant type annotation on line 145.

♻️ Option: drop inline annotation
-    job_id: UUID = UUID(job_id)
+    job_uuid = UUID(job_id)

Then use job_uuid throughout the function body.


255-258: Empty else: pass is unnecessary.

🧹 Remove empty else
                     request.request_metadata.setdefault("warnings", []).extend(warnings)
-                else:
-                    pass
             except Exception as e:

130-137: Add type hint for task_instance parameter.

Per coding guidelines, all function parameters should have type hints. task_instance on line 136 lacks a type annotation.

♻️ Suggested fix
 def execute_job(
     request_data: dict,
     project_id: int,
     organization_id: int,
     job_id: str,
     task_id: str,
-    task_instance,
+    task_instance: object | None,
 ) -> dict:

As per coding guidelines, "Always add type hints to all function parameters and return values in Python code".

def create_version(
config_id: UUID,
version_create: ConfigVersionCreate,
version_create: ConfigVersionUpdate,
Collaborator

Why are we replacing ConfigVersionCreate with ConfigVersionUpdate, and what happened to ConfigVersionCreate?

Collaborator Author

@Prajna1999 Prajna1999 Feb 11, 2026

Both models have the same config version update logic, so separate models for creation and update are redundant: version 1 is created during config creation (inside ConfigCrud.create_or_raise), and subsequent versions are created inside ConfigVersionCrud. Hence the naming convention. I did not remove the redundant ConfigVersionCrud.create_or_raise, in case there is a future requirement to store a config without creating an associated version 1, attaching a version blob later rather than during the config creation POST request.

Collaborator Author

A better name for the router function could be create_version_update, denoting that it's a version update we are POST-ing.

Collaborator Author

@Prajna1999 Prajna1999 Feb 12, 2026

Updated to use ConfigVersionUpdate for creation/update of new versions. We can reconcile the nomenclature in a separate optimization PR.

session=session, project_id=current_user.project_.id, config_id=config_id
)
version = version_crud.create_or_raise(version_create=version_create)
version = version_crud.create_from_partial_or_raise(version_create=version_create)
Collaborator

Can we remove the word partial from here too?

Collaborator Author

Removed partial and kept create_or_raise.


Labels

enhancement New feature or request ready-for-review

Projects

Status: Closed

Development

Successfully merging this pull request may close these issues.

4 participants