feat: add Mistral Medium 3.5 with reasoning support#24996
Conversation
Don't all reasoning Mistral models support the variant, or no? Maybe some models are incorrectly marked as reasoning models? Ideally we aren't quite as explicit, but also ideally we track this in models.dev, which we will do soon.
Nope, only those two. The other reasoning models you have to instruct textually: native (https://docs.mistral.ai/capabilities/reasoning/native, Magistral) vs adjustable (https://docs.mistral.ai/studio-api/conversations/reasoning/adjustable, Mistral Small 4 and Mistral Medium 3.5).
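Roughly, the two modes differ in how reasoning is requested: native models reason on their own (or via a reasoning prompt), while adjustable models take an effort-style setting. A minimal sketch of that branching, where the mode mapping, parameter name (`reasoning_effort`), and request shape are all illustrative assumptions, not Mistral's actual API:

```typescript
type ReasoningMode = "native" | "adjustable"

// Placeholder classification; the real mapping lives in the provider config.
const MODE: Record<string, ReasoningMode> = {
  "magistral-medium": "native",
  "mistral-medium-3.5": "adjustable",
}

// Build a request body for a given model and desired effort.
function buildRequest(model: string, effort: "low" | "high") {
  const body: Record<string, unknown> = { model }
  if (MODE[model] === "adjustable") {
    // Adjustable models accept an explicit effort knob (name assumed).
    body.reasoning_effort = effort
  }
  // Native models reason without an extra parameter; the prompt itself
  // (or a reasoning system prompt) drives the behavior.
  return body
}
```

The point is only that the client has to branch per model family, which is why some models being mis-marked as "reasoning" would matter.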
I thought so. I've sent a PR to Vercel AI as well: vercel/ai#14828 |
@rekram1-node I see the option for "high" in the TUI (toggleable with ^t), but not in the web UI; does that require changes anywhere else?
Huh, I'll check. How are you running the web UI?
Oh, |
Just checked: |
No, they shouldn't differ; I'm just wondering how you were running it, which helps guarantee which version you're on.
I'll ask the web people (I never use it).
Ah, it's because this PR actually doesn't work, it looks like. The real ids would be:
I'm using this model id; does that mean I'm using 'high'?
"There isn't anything to compare." Edit: 👉 I asked because, from my understanding, it is already merged/released and I still can't tweak the thinking variant.
I just merged my PR an hour ago, so no, it's not released yet.
You mean the id matching for the variants should've been on the dated id, not the versioned one? I thought it would be consistent, given the TUI shows "high" and nothing on ^t... Thanks for finding out!
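The mismatch being discussed can be sketched like this: if variant support is keyed to an exact model id, any other id the same model resolves to silently misses the lookup. All ids and names below are placeholders for illustration, not opencode's actual code:

```typescript
// Variant support keyed to a single exact alias (placeholder set).
const REASONING_VARIANT_MODELS = new Set(["mistral-medium-3.5"])

function hasReasoningVariant(modelId: string): boolean {
  return REASONING_VARIANT_MODELS.has(modelId)
}

// The versioned alias matches, but if a client resolves the model to a
// dated id (the id below is a made-up placeholder), the lookup misses
// and no variant toggle is offered.
hasReasoningVariant("mistral-medium-3.5") // matches the alias
hasReasoningVariant("mistral-medium-2512") // misses: dated id not in the set
```

Matching on whichever id the provider actually reports (or on all known aliases) avoids the TUI/web UI inconsistency described above.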
Issue for this PR
(Didn't make one, but this is a follow-up to #19479.)
Type of change
What does this PR do?
Follow-up on #23735, but for the Mistral Medium 3.5 model released today.
How did you verify your code works?
Added a test case to check that mistral-medium-3.5 returns a reasoning variant.
Screenshots / recordings
n/a
Checklist