
Evaluation: Use Config Management #477

Merged
AkhileshNegi merged 29 commits into main from evals/config_addition on Jan 28, 2026

Conversation

@avirajsingh7
Collaborator

@avirajsingh7 avirajsingh7 commented Dec 10, 2025

Summary

Target issues are #557 & #547

This change refactors the evaluation run process to utilize a stored configuration instead of a configuration dictionary. It introduces fields for config_id, config_version, and model in the evaluation run table, streamlining the evaluation process and improving data integrity.
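For illustration, the request payload change might look like the following. This is a hypothetical sketch based on the summary; field names other than config_id and config_version are assumptions, not the actual schema.

```python
# Hypothetical sketch of the payload change; only config_id and
# config_version are confirmed field names from this PR.

# Before: inline config dict embedded in the request
old_payload = {
    "dataset_id": "a1b2c3",
    "config": {"model": "gpt-4o", "instructions": "Answer concisely."},
}

# After: reference a stored config by ID and version
new_payload = {
    "dataset_id": "a1b2c3",
    "config_id": "9f8e7d",   # UUID of the stored config
    "config_version": 1,     # integer version, >= 1
}
```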

Checklist

Before submitting a pull request, please ensure that you mark these tasks.

  • Ran fastapi run --reload app/main.py or docker compose up in the repository root and tested.
  • If you've fixed a bug or added code, ensure it is tested and has test cases.

Summary by CodeRabbit

  • Refactor
    • Evaluation runs now reference stored configs by ID and version; model names are resolved at runtime from those stored configs.
  • New Features
    • Runtime resolution of model name from stored evaluation config.
    • Default embedding model standardized to "text-embedding-3-large".
  • Behavior
    • Dataset upload/list endpoints now operate at /evaluations/datasets.
  • Chores
    • DB migration adds config_id and config_version to evaluation runs.
  • API
    • Evaluation creation accepts config_id and config_version instead of inline config.
  • Tests
    • Tests updated to use the stored-config workflow and new payloads.


@coderabbitai

coderabbitai bot commented Dec 10, 2025

📝 Walkthrough

Walkthrough

Replaces embedded evaluation config dicts with stored references (config_id: UUID, config_version: int), adds resolver functions to fetch/resolve stored config blobs and derive model names, hard-codes embedding model to text-embedding-3-large, updates DB migration and tests, and re-exports the new resolver functions.

Changes

Cohort / File(s) Summary
Data Models
backend/app/models/evaluation.py
Replaced embedded config: dict with config_id: UUID and config_version: int on EvaluationRun and EvaluationRunPublic (nullable, version ≥ 1).
Evaluation CRUD Core
backend/app/crud/evaluations/core.py
create_evaluation_run now accepts/stores config_id & config_version; added resolve_evaluation_config(session, config_id, config_version, project_id) and resolve_model_from_config(session, eval_run) using ConfigVersionCrud + resolve_config_blob; added UUID/LLM imports, logging and docstring updates.
Embedding CRUD
backend/app/crud/evaluations/embeddings.py
Added EMBEDDING_MODEL = "text-embedding-3-large" constant; embedding flows use this constant instead of reading model from eval run config.
Processing CRUD
backend/app/crud/evaluations/processing.py
Replaced direct eval_run.config access with resolve_model_from_config(session, eval_run) and pass resolved model into Langfuse dataset run creation.
Batch CRUD
backend/app/crud/evaluations/batch.py
Switched build_evaluation_jsonl / start_evaluation_batch to accept KaapiLLMParams; JSONL builder extracts explicit fields, skips items without question, and stores batch config via model_dump(exclude_none=True).
Service: Evaluations
backend/app/services/evaluations/evaluation.py
start_evaluation signature changed to accept config_id & config_version; resolves stored config via ConfigVersionCrud/resolve_config_blob, enforces provider == OPENAI, uses resolved completion params for batch start; removed build_evaluation_config.
LLM Provider Registry
backend/app/services/llm/providers/registry.py
Added LLMProvider.OPENAI = "openai" and registered it in the provider mapping.
Module Exports
backend/app/crud/evaluations/__init__.py, backend/app/services/evaluations/__init__.py
Re-exported resolve_evaluation_config and resolve_model_from_config; removed build_evaluation_config re-export from services evaluations init.
API Routes
backend/app/api/routes/evaluations/evaluation.py, backend/app/api/routes/evaluations/dataset.py
evaluate() endpoint payload now requires config_id (UUID) and config_version (int) instead of inline config; some dataset route decorators changed from "/" to "".
DB Migration
backend/app/alembic/versions/041_add_config_in_evals_run_table.py
Migration adds config_id (UUID FK) and config_version (Integer) to evaluation_run, drops old config JSONB; downgrade restores previous schema.
Tests
backend/app/tests/...
Tests updated to use create_test_config and reference config_id/config_version; replaced inline configs with persisted config references and KaapiLLMParams.

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client/API
  participant API as Evaluations API
  participant Service as Evaluations Service
  participant DB as Config DB
  participant Resolver as Config Blob Resolver
  participant Batch as Batch Orchestrator

  Client->>API: POST /evaluations (dataset_id, config_id, config_version, ...)
  API->>Service: start_evaluation(dataset_id, config_id, config_version, ...)
  Service->>DB: ConfigVersionCrud.get_by_id(config_id, config_version)
  DB-->>Service: ConfigVersion record (with blob)
  Service->>Resolver: resolve_config_blob(config_version.blob)
  Resolver-->>Service: LLMCallConfig (completion.params, model)
  Service->>Service: validate provider == OPENAI
  Service->>Batch: start_evaluation_batch(resolved_completion_params, model)
  Batch-->>Service: batch run created / status
  Service-->>API: 201 Created (EvaluationRun with config_id/config_version)
  API-->>Client: 201 Created (response)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • avirajsingh7
  • Prajna1999

Poem

🐇 I hopped from dicts to IDs with glee,
I fetched the blobs to find the model key.
One embedding tune, one migration moon,
Tests leapt forward — I munched a carrot soon. 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 58.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title "Evaluation: Use Config Management" clearly and concisely summarizes the main change: refactoring evaluation runs to use stored configuration management instead of inline config dictionaries.



@avirajsingh7 avirajsingh7 linked an issue Dec 10, 2025 that may be closed by this pull request
@avirajsingh7 avirajsingh7 self-assigned this Dec 10, 2025
@avirajsingh7 avirajsingh7 added enhancement New feature or request ready-for-review labels Dec 10, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
backend/app/crud/evaluations/embeddings.py (1)

366-367: Misleading comment - update to reflect actual behavior.

The comment says "Get embedding model from config" but the code hardcodes the value. Update the comment to accurately describe the implementation.

-        # Get embedding model from config (default: text-embedding-3-large)
-        embedding_model = "text-embedding-3-large"
+        # Use fixed embedding model (text-embedding-3-large)
+        embedding_model = "text-embedding-3-large"
backend/app/tests/api/routes/test_evaluation.py (1)

524-545: Consider renaming function to match its new purpose.

The function test_start_batch_evaluation_missing_model was repurposed to test invalid config_id scenarios. The docstring was updated but the function name still references "missing_model". Consider renaming for clarity.

-    def test_start_batch_evaluation_missing_model(self, client, user_api_key_header):
-        """Test batch evaluation fails with invalid config_id."""
+    def test_start_batch_evaluation_invalid_config_id(self, client, user_api_key_header):
+        """Test batch evaluation fails with invalid config_id."""
backend/app/api/routes/evaluation.py (1)

499-510: Consider validating that model is present in config params.

The model is extracted with .get("model") which returns None if not present. Since model is critical for cost tracking (used in create_langfuse_dataset_run), consider validating its presence and returning an error if missing.

     # Extract model from config for storage
     model = config.completion.params.get("model")
+    if not model:
+        raise HTTPException(
+            status_code=400,
+            detail="Config must specify a 'model' in completion params for evaluation",
+        )

     # Create EvaluationRun record with config references
backend/app/crud/evaluations/core.py (1)

15-69: Config-based create_evaluation_run refactor is correctly implemented; consider logging model for improved traceability.

The refactor from inline config dict to config_id: UUID and config_version: int is properly implemented throughout:

  • The sole call site in backend/app/api/routes/evaluation.py:503 correctly passes all new parameters with the right types (config_id as UUID, config_version as int, model extracted from config).
  • The EvaluationRun model in backend/app/models/evaluation.py correctly defines all three fields with appropriate types and descriptions.
  • All type hints align with Python 3.11+ guidelines.

One suggested improvement for debugging:

Include model in the creation log for better traceability when correlating evaluation runs with model versions:

logger.info(
    f"Created EvaluationRun record: id={eval_run.id}, run_name={run_name}, "
-   f"config_id={config_id}, config_version={config_version}"
+   f"config_id={config_id}, config_version={config_version}, model={model}"
)

Since the model is already extracted at the call site and passed to the function, including it in the log will provide fuller context for operational debugging without any additional cost.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 30ef268 and d5f9d4d.

📒 Files selected for processing (7)
  • backend/app/alembic/versions/7b48f23ebfdd_add_config_id_and_version_in_evals_run_.py (1 hunks)
  • backend/app/api/routes/evaluation.py (5 hunks)
  • backend/app/crud/evaluations/core.py (5 hunks)
  • backend/app/crud/evaluations/embeddings.py (1 hunks)
  • backend/app/crud/evaluations/processing.py (1 hunks)
  • backend/app/models/evaluation.py (3 hunks)
  • backend/app/tests/api/routes/test_evaluation.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use type hints in Python code (Python 3.11+ project)

Files:

  • backend/app/api/routes/evaluation.py
  • backend/app/models/evaluation.py
  • backend/app/crud/evaluations/embeddings.py
  • backend/app/tests/api/routes/test_evaluation.py
  • backend/app/crud/evaluations/processing.py
  • backend/app/crud/evaluations/core.py
  • backend/app/alembic/versions/7b48f23ebfdd_add_config_id_and_version_in_evals_run_.py
backend/app/api/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Expose FastAPI REST endpoints under backend/app/api/ organized by domain

Files:

  • backend/app/api/routes/evaluation.py
backend/app/models/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Define SQLModel entities (database tables and domain objects) in backend/app/models/

Files:

  • backend/app/models/evaluation.py
backend/app/crud/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Implement database access operations in backend/app/crud/

Files:

  • backend/app/crud/evaluations/embeddings.py
  • backend/app/crud/evaluations/processing.py
  • backend/app/crud/evaluations/core.py
🧬 Code graph analysis (2)
backend/app/tests/api/routes/test_evaluation.py (2)
backend/app/crud/evaluations/batch.py (1)
  • build_evaluation_jsonl (62-115)
backend/app/models/evaluation.py (2)
  • EvaluationDataset (74-130)
  • EvaluationRun (133-248)
backend/app/crud/evaluations/processing.py (1)
backend/app/crud/evaluations/langfuse.py (1)
  • create_langfuse_dataset_run (20-163)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (3)
backend/app/crud/evaluations/processing.py (1)

257-264: LGTM! Clean refactor to use stored model field.

The change correctly retrieves the model from eval_run.model instead of extracting it from config. This aligns with the new data model where the model is snapshotted at evaluation creation time.

backend/app/models/evaluation.py (1)

148-157: LGTM! Well-structured config reference fields.

The new config_id and config_version fields properly establish the relationship to stored configs with appropriate constraints (ge=1 for version). The nullable design allows backward compatibility with existing data.

backend/app/api/routes/evaluation.py (1)

478-495: LGTM! Robust config resolution with provider validation.

The config resolution flow properly validates that the stored config exists and uses the OPENAI provider. Error handling returns appropriate HTTP 400 responses with descriptive messages.

@codecov

codecov bot commented Dec 10, 2025

Codecov Report

❌ Patch coverage is 85.07463% with 10 lines in your changes missing coverage. Please review.

Files with missing lines:
  • backend/app/services/evaluations/evaluation.py: 25.00% patch coverage, 6 lines missing ⚠️
  • backend/app/crud/evaluations/core.py: 87.50% patch coverage, 2 lines missing ⚠️
  • backend/app/crud/evaluations/batch.py: 83.33% patch coverage, 1 line missing ⚠️
  • backend/app/crud/evaluations/embeddings.py: 50.00% patch coverage, 1 line missing ⚠️



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
backend/app/models/evaluation.py (1)

148-158: Align EvaluationRun type hints with nullable DB columns for config fields

config_id and config_version are nullable in the schema but annotated as non-optional types. This can mislead callers and type checkers into assuming they’re always present, even for legacy runs or transitional data.

Consider updating the annotations to reflect nullability:

-    config_id: UUID = SQLField(
+    config_id: UUID | None = SQLField(
         foreign_key="config.id",
         nullable=True,
         description="Reference to the stored config used for this evaluation",
     )
-    config_version: int = SQLField(
+    config_version: int | None = SQLField(
         nullable=True,
         ge=1,
         description="Version of the config used for this evaluation",
     )

This keeps the schema the same while making runtime and type expectations clearer.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d5f9d4d and eda7762.

📒 Files selected for processing (1)
  • backend/app/models/evaluation.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use type hints in Python code (Python 3.11+ project)

Files:

  • backend/app/models/evaluation.py
backend/app/models/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Define SQLModel entities (database tables and domain objects) in backend/app/models/

Files:

  • backend/app/models/evaluation.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (1)
backend/app/models/evaluation.py (1)

271-273: Public model nullability now matches the schema

Making config_id, config_version, and model nullable in EvaluationRunPublic correctly reflects the DB fields and avoids validation issues for existing rows. This resolves the earlier mismatch between the table and the public model.

Collaborator

@Prajna1999 Prajna1999 left a comment


lgtm

@avirajsingh7
Collaborator Author

Hold merge until the frontend is ready.

@Prajna1999
Collaborator

Good to go. Can be merged.

@ProjectTech4DevAI ProjectTech4DevAI deleted a comment from coderabbitai bot Jan 26, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/services/evaluations/evaluation.py`:
- Around line 94-111: The current flow assumes resolve_evaluation_config
returned a fully populated object and dereferences config.completion.provider,
which can be None/missing and cause AttributeError; update the handling after
resolve_evaluation_config (the block that checks error and then inspects
config.completion.provider) to first verify config is not None and that config
has a completion attribute with the expected fields (e.g., config.completion and
config.completion.provider and config.completion.params) before accessing
provider; if any are missing, raise an HTTPException with a 400 (or 422 as
appropriate) and a clear detail string like "Incomplete evaluation config:
missing completion/provider/params" so unresolved/partial configs return
controlled 4xx errors instead of a 500.
♻️ Duplicate comments (1)
backend/app/crud/evaluations/core.py (1)

372-392: Validate completion params/model before returning.

config.completion.params.model can be missing or None, which violates the declared str return type and can break downstream calls. This mirrors a previously flagged issue.

🔧 Proposed fix
-    if not eval_run.config_id or not eval_run.config_version:
+    if eval_run.config_id is None or eval_run.config_version is None:
         raise ValueError(
             f"Evaluation run {eval_run.id} has no config reference "
             f"(config_id={eval_run.config_id}, config_version={eval_run.config_version})"
         )
@@
-    model = config.completion.params.model
-    return model
+    if not config.completion or not config.completion.params:
+        raise ValueError(
+            f"Config for evaluation {eval_run.id} is missing completion params"
+        )
+
+    model = config.completion.params.model
+    if not model:
+        raise ValueError(
+            f"Config for evaluation {eval_run.id} does not specify a model "
+            f"(config_id={eval_run.config_id}, version={eval_run.config_version})"
+        )
+    return model
🧹 Nitpick comments (1)
backend/app/tests/api/routes/test_evaluation.py (1)

529-544: Prefer deriving config_version from the created config.

Hardcoding config_version=1 (e.g., Line 543, Line 767, Line 807, Line 868, Line 906) can drift if create_test_config changes its default versioning. Consider using the version exposed by the created config object to keep tests resilient.

Also applies to: 759-768, 799-808, 862-868, 899-906

Comment on lines +94 to +111
    # Step 2: Resolve config from stored config management
    config, error = resolve_evaluation_config(
        session=session,
        config=config,
        assistant_id=assistant_id,
        config_id=config_id,
        config_version=config_version,
        project_id=project_id,
    )
    if error:
        raise HTTPException(
            status_code=400,
            detail=f"Failed to resolve config from stored config: {error}",
        )
    elif config.completion.provider != LLMProvider.OPENAI:
        raise HTTPException(
            status_code=422,
            detail="Only 'openai' provider is supported for evaluation configs",
        )



⚠️ Potential issue | 🟠 Major

Guard against unresolved/partial config before dereferencing.

resolve_evaluation_config can return config=None or a blob missing completion/params. The current flow dereferences those fields, which can crash with AttributeError and return a 500 instead of a controlled 4xx.

🔧 Proposed fix
-    config, error = resolve_evaluation_config(
+    config, error = resolve_evaluation_config(
         session=session,
         config_id=config_id,
         config_version=config_version,
         project_id=project_id,
     )
-    if error:
+    if error or config is None:
         raise HTTPException(
             status_code=400,
             detail=f"Failed to resolve config from stored config: {error}",
         )
-    elif config.completion.provider != LLMProvider.OPENAI:
+    if not config.completion or not config.completion.params:
+        raise HTTPException(
+            status_code=422,
+            detail="Config is missing completion parameters required for evaluation",
+        )
+    if config.completion.provider != LLMProvider.OPENAI:
         raise HTTPException(
             status_code=422,
             detail="Only 'openai' provider is supported for evaluation configs",
         )


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/alembic/versions/041_add_config_in_evals_run_table.py`:
- Line 19: Add Python type hints to the migration functions: update the
signature of upgrade to include a return type (e.g., change def upgrade() to def
upgrade() -> None) and do the same for the corresponding downgrade function (def
downgrade() -> None), ensuring both functions have explicit type hints for
parameters (none) and return values to satisfy the codebase typing guidelines.
♻️ Duplicate comments (2)
backend/app/alembic/versions/041_add_config_in_evals_run_table.py (2)

19-41: Prevent data loss when dropping evaluation_run.config.

Line 41 drops the old JSONB config without migrating existing data into config_id/config_version, which will permanently lose configuration data for existing rows. Please add a data migration/backfill (or explicitly document why data loss is acceptable) before dropping the column.


45-53: Downgrade will fail if rows exist due to non-nullable config.

op.add_column(..., nullable=False) will fail on a non-empty table. Make it nullable (or add a default/backfill before enforcing NOT NULL).

✅ Suggested fix
     op.add_column(
         "evaluation_run",
         sa.Column(
             "config",
             postgresql.JSONB(astext_type=sa.Text()),
             autoincrement=False,
-            nullable=False,
+            nullable=True,
             comment="Evaluation configuration (model, instructions, etc.)",
         ),
     )


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/crud/evaluations/batch.py`:
- Around line 103-110: The body dict currently always sets "instructions":
config.instructions which can send a None value; update the code that constructs
the request body (the body dict) to only add the "instructions" key when
config.instructions is not None, following the same conditional pattern used for
"reasoning" and "tools" (i.e., check config.instructions and only include the
field when present) so that the API never receives "instructions": None.
🧹 Nitpick comments (2)
backend/app/crud/evaluations/batch.py (2)

117-117: Redundant length check.

The expression config.knowledge_base_ids and len(config.knowledge_base_ids) > 0 is redundant. A non-empty list is truthy, so if config.knowledge_base_ids: suffices.

♻️ Proposed simplification
-        if config.knowledge_base_ids and len(config.knowledge_base_ids) > 0:
+        if config.knowledge_base_ids:

154-156: Update docstring to reflect the new type.

The docstring describes config as a "dict with llm, instructions, vector_store_ids" but it is now KaapiLLMParams with fields model, instructions, knowledge_base_ids, etc.

📝 Proposed docstring update
-        config: Evaluation configuration dict with llm, instructions, vector_store_ids
+        config: KaapiLLMParams with model, instructions, knowledge_base_ids, etc.

Collaborator

@kartpop kartpop left a comment


approved with comments

@ProjectTech4DevAI ProjectTech4DevAI deleted a comment from coderabbitai bot Jan 27, 2026

 def build_evaluation_jsonl(
-    dataset_items: list[dict[str, Any]], config: dict[str, Any]
+    dataset_items: list[dict[str, Any]], config: KaapiLLMParams
Collaborator


To make sure the Text, TTS, and STT configs all work, KaapiLLMParams looks like this in the STT PR:

from typing import Literal, Union

from sqlmodel import Field, SQLModel


class TextLLMParams(SQLModel):
    model: str
    instructions: str | None = Field(
        default=None,
    )
    knowledge_base_ids: list[str] | None = Field(
        default=None,
        description="List of vector store IDs to use for knowledge retrieval",
    )
    reasoning: Literal["low", "medium", "high"] | None = Field(
        default=None,
        description="Reasoning configuration or instructions",
    )
    temperature: float | None = Field(
        default=None,
        ge=0.0,
        le=2.0,
    )
    max_num_results: int | None = Field(
        default=None,
        ge=1,
        description="Maximum number of candidate results to return",
    )


class STTLLMParams(SQLModel):
    model: str
    instructions: str
    input_language: str | None = None
    output_language: str | None = None
    response_format: Literal["text"] | None = Field(
        None,
        description="Can take multiple response_format like text, json, verbose_json.",
    )
    temperature: float | None = Field(
        default=0.2,
        ge=0.0,
        le=2.0,
    )


class TTSLLMParams(SQLModel):
    model: str
    voice: str
    language: str
    response_format: Literal["mp3", "wav", "ogg"] | None = "wav"
    speed: float | None = Field(None, ge=0.25, le=4.0)


KaapiLLMParams = Union[TextLLMParams, STTLLMParams, TTSLLMParams]

It is a union of all these related Pydantic models. Maybe we can use Any for now, to avoid forcing type safety, and plan accordingly once the STT PR gets merged.

     session: Session,
     eval_run: EvaluationRun,
-    config: dict[str, Any],
+    config: KaapiLLMParams,
Collaborator


ditto as above


 class LLMProvider:
     OPENAI_NATIVE = "openai-native"
+    OPENAI = "openai"
Collaborator


By design, this LLMProvider takes only OPENAI_NATIVE and GOOGLE_NATIVE so the flow is controlled from the top. Once we also add OPENAI, it will create issues with the unified API. Let's remove this enum for now; once the new STT PR gets merged, this extra field will probably not be required.

Collaborator


Will take this up in the next PR.

@AkhileshNegi AkhileshNegi merged commit 6ac9391 into main Jan 28, 2026
2 of 3 checks passed
@AkhileshNegi AkhileshNegi deleted the evals/config_addition branch January 28, 2026 06:55
@AkhileshNegi AkhileshNegi linked an issue Jan 30, 2026 that may be closed by this pull request

Labels

enhancement New feature or request ready-for-review

Projects

None yet

Development

Successfully merging this pull request may close these issues.

  • Config Management: Evaluation
  • Evaluation: Large Merge Conflict
  • Add config management in Evals

5 participants