Merged
Commits
34 commits
2ca6279
add: web_call for OpenAI provider
maisiukartyom Apr 18, 2025
605ed20
add: Claude web_call (non-stream)
maisiukartyom Apr 18, 2025
cd2ef04
add: custom <tool_call> for non-streaming
maisiukartyom Apr 20, 2025
127a907
add: custom <tool_call> for streaming
maisiukartyom Apr 21, 2025
a894e1b
upd: yaml config
maisiukartyom Apr 21, 2025
73e8ba0
Updated streams
Apr 23, 2025
81a896d
Add tool calls
May 5, 2025
64dcc23
added files
May 5, 2025
f978306
updated
May 8, 2025
b36d714
updated
May 8, 2025
f9038e7
testing
May 8, 2025
4e3c047
udpates
May 8, 2025
599b3af
testing w/ deepseek
May 10, 2025
95ad311
testing w/ deepseek
May 10, 2025
71945d0
changed to custom providers
May 15, 2025
7736c87
add [tool.poetry.group.dev.dependencies]
May 15, 2025
c74351f
poetry lock
May 15, 2025
16a5dc3
add def custom_keys as {}
May 15, 2025
85706a2
add def custom_keys as {}
May 15, 2025
d730d1e
updated files path
May 15, 2025
3215888
updated files path
May 15, 2025
ea264d4
added StopStreamingError
May 16, 2025
a842068
added new version
May 16, 2025
104ff4f
handle StopStreamingError
May 16, 2025
2f66633
added another raise
May 16, 2025
18d46c2
0.1.40a16
May 21, 2025
939d45b
Updated tests for tags writing
May 22, 2025
57b2f93
Updated tests for tags writing
May 22, 2025
980f9a4
updated lib
May 22, 2025
fb5f7e2
publish prerelease
May 22, 2025
abc7615
updated lib
May 22, 2025
ae5982a
Added streaming_content
May 23, 2025
f53435f
fixed saving of the prompt
Jun 16, 2025
3c9a7ea
updated version
Jun 16, 2025
17 changes: 13 additions & 4 deletions .github/workflows/run-unit-tests.yaml
@@ -19,8 +19,11 @@ jobs:
echo OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} >> .env
echo LAMOOM_API_URI=${{ secrets.LAMOOM_API_URI }} >> .env
echo LAMOOM_API_TOKEN=${{ secrets.LAMOOM_API_TOKEN }} >> .env
echo LAMOOM_CUSTOM_PROVIDERS=${{ secrets.LAMOOM_CUSTOM_PROVIDERS }} >> .env
echo NEBIUS_API_KEY=${{ secrets.NEBIUS_API_KEY }} >> .env
echo CUSTOM_API_KEY=${{ secrets.CUSTOM_API_KEY }} >> .env
echo GOOGLE_API_KEY=${{ secrets.GOOGLE_API_KEY }} >> .env
echo SEARCH_ENGINE_ID=${{ secrets.SEARCH_ENGINE_ID }} >> .env
cat .env

- name: Install dependencies
@@ -30,15 +33,21 @@ jobs:
- name: Install Poetry
run: pip install poetry

- name: Cache Poetry Dependencies
uses: actions/cache@v3
with:
path: ~/.cache/pypoetry
key: ${{ runner.os }}-poetry-${{ hashFiles('**/poetry.lock') }}
restore-keys: |
${{ runner.os }}-poetry-

- name: Install Python
uses: actions/setup-python@v3
with:
python-version: 3.11
cache: poetry

- name: Install Python libraries
run: poetry install
run: poetry install --with dev

- name: Run tests with pytest
run: |
poetry run make test
run: poetry run make test
3 changes: 2 additions & 1 deletion .gitignore
@@ -10,4 +10,5 @@ dist
.vscode
.pytest_cache
python
.env.test
.env.test
*/logs/
6 changes: 1 addition & 5 deletions Makefile
@@ -25,11 +25,7 @@ lint:
poetry run isort --settings-path pyproject.toml --check-only .

test:
poetry run pytest --cache-clear -vv tests \
--cov=${PROJECT_FOLDER} \
--cov-config=.coveragerc \
--cov-fail-under=81 \
--cov-report term-missing
poetry run pytest --cache-clear -vv tests

.PHONY: format
format: make-black isort-check flake8 make-mypy
Expand Down
25 changes: 18 additions & 7 deletions README.md
@@ -124,6 +124,7 @@ Mix models easily, and distribute the load across models. The system will automa
- Gemini
- OpenAI (w/ Azure OpenAI models)
- Nebius with (Llama, DeepSeek, Mistral, Mixtral, dolphin, Qwen and others)
- OpenRouter with open source models
- Custom providers

Model string format is the following for Claude, Gemini, OpenAI, Nebius:
@@ -132,16 +133,17 @@ For Azure models format is the following:
`"azure/{realm}/{model_name}"`

```python
response_llm = client.call(agent.id, context, model = "openai/gpt-4o")
response_llm = client.call(agent.id, context, model = "openai/o4-mini")
response_llm = client.call(agent.id, context, model = "azure/useast/gpt-4o")
```

Custom model string format is the following:
`"custom/{model_name}"`
`provider_url` is required
`"custom/{provider_name}/{model_name}"`
where provider is provided in the env variable:
LAMOOM_CUSTOM_PROVIDERS={"provider_name": {"base_url": "https://","key":"key"}}

```python
response_llm = client.call(agent.id, context, model = "custom/gpt-4o", provider_url = "your_model_url")
response_llm = client.call(agent.id, context, model = "custom/provider_name/model_name")
```
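The `LAMOOM_CUSTOM_PROVIDERS` lookup described above can be sketched as follows. This is a minimal illustration of how a `"custom/{provider_name}/{model_name}"` string could be resolved against that env variable; the `resolve_custom_provider` helper is hypothetical, not part of the library's API.

```python
import json
import os

# Same shape as the README's example env variable.
os.environ["LAMOOM_CUSTOM_PROVIDERS"] = json.dumps(
    {"provider_name": {"base_url": "https://", "key": "key"}}
)

def resolve_custom_provider(model: str) -> tuple[dict, str]:
    """Split a "custom/{provider_name}/{model_name}" string and look the
    provider up in LAMOOM_CUSTOM_PROVIDERS (illustrative helper)."""
    prefix, provider_name, model_name = model.split("/", 2)
    if prefix != "custom":
        raise ValueError(f"not a custom model string: {model!r}")
    providers = json.loads(os.environ.get("LAMOOM_CUSTOM_PROVIDERS", "{}"))
    # Each entry carries the base_url and key for that provider.
    return providers[provider_name], model_name

provider, model_name = resolve_custom_provider("custom/provider_name/model_name")
print(provider["base_url"], model_name)
```

Splitting with `maxsplit=2` keeps any extra slashes inside the model name itself, so names like `custom/nebius/meta-llama/Llama-3` would still resolve.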

### Lamoom Keys
@@ -169,14 +171,14 @@ prompt.add("You're {name}. Say Hello and ask what's their name.", role="system")

# Call AI model with Lamoom
context = {"name": "John Doe"}
response = client.call(prompt.id, context, "openai/gpt-4o")
response = client.call(prompt.id, context, "openai/o4-mini")
print(response.content)
```

### Creating Tests While Using Prompts
```python
# Call with test_data to automatically generate tests
response = client.call(prompt.id, context, "openai/gpt-4o", test_data={
response = client.call(prompt.id, context, "openai/o4-mini", test_data={
'ideal_answer': "Hello, I'm John Doe. What's your name?",
'model_name': "gemini/gemini-1.5-flash"
})
@@ -202,6 +204,14 @@ client.add_ideal_answer(
)
```

### To Add Search Credentials
- Add a Search Engine ID from here:
https://programmablesearchengine.google.com/controlpanel/create

- Get a Google Search Key:
https://developers.google.com/custom-search/v1/introduction/?apix=true
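The two credentials above plug into Google's Custom Search JSON API. A minimal sketch of building a request URL from them; the `build_search_url` helper and the demo values are illustrative, not part of this library:

```python
from urllib.parse import urlencode

def build_search_url(query: str, api_key: str, engine_id: str) -> str:
    """Build a Custom Search JSON API request URL: `key` is the Google
    Search Key, `cx` is the Programmable Search Engine ID."""
    params = {"key": api_key, "cx": engine_id, "q": query}
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

# Demo credentials only — substitute GOOGLE_API_KEY / SEARCH_ENGINE_ID values.
print(build_search_url("lamoom", "demo-key", "demo-engine-id"))
```

In the CI workflow these values come from the `GOOGLE_API_KEY` and `SEARCH_ENGINE_ID` secrets written into `.env`.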


### Monitoring and Management
- **Test Dashboard**: Review created tests and scores at https://cloud.lamoom.com/tests
- **Prompt Management**: Update prompts and rerun tests for published or saved versions
Expand All @@ -219,4 +229,5 @@ We welcome contributions! Please see our Contribution Guidelines for more inform
This project is licensed under the Apache2.0 License - see the [LICENSE](LICENSE.txt) file for details.

## Contact
For support or contributions, please contact us via GitHub Issues.
For support or contributions, please contact us via GitHub Issues.

Empty file added claude.config
Empty file.
16 changes: 1 addition & 15 deletions docs/evaluate_prompts_quality/evaluate_prompt_quality.py
@@ -9,20 +9,6 @@

lamoom = Lamoom()

gpt4_behaviour = behaviour.AIModelsBehaviour(
attempts=[
AttemptToCall(
ai_model=AzureAIModel(
realm='useast',
deployment_id="gpt-4o",
max_tokens=C_128K,
support_functions=True,
),
weight=100,
),
]
)


def main():
for prompt in get_all_prompts():
@@ -41,7 +27,7 @@ def main():
'prompt_data': prompt_chats,
'prompt_id': prompt_id,
}
result = lamoom.call(prompt_to_evaluate_prompt.id, context, gpt4_behaviour)
result = lamoom.call(prompt_to_evaluate_prompt.id, context, 'azure/useast/o4-mini')
print(result.content)

if __name__ == '__main__':