Conversation
Signed-off-by: rawnly <rawnly@users.noreply.github.com>
Co-authored-by: Ladislas de Toldi <ladislas@detoldi.me>
This won't support passing images to the Vision API, right?

The request got quite a bit more complicated than it used to be. Now the content can be a single string, an array of strings, an image_url object, a text object, or an array of any combination of these. I've been trying to come up with an elegant way to do it, but I don't think there is one without writing custom logic. The other question is whether there is a need to support every single possibility, or to just say the content will always be an array of content objects (image_url, text and others). The endpoint doesn't care, after all, whether you send a single string or a [{"type": "text", "text": "Prompt comes here"}].

I think there's no need to support all the formats; the array with multiple objects should be fine. I thought the same, and that's why I didn't implement anything in this PR.
Can't tell if it's been addressed here, but the new models now support parallel function calling, which means the response can contain an array of …

Nope. The scope of this PR is to support the new models and parameters. Nothing changed in the response format. I aim to make a new PR eventually once this gets merged.

But I'd love to solve the …
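For context on the exchange above: with parallel function calling, the assistant message's tool_calls field is an array, so a single response can request several function invocations at once. This is explicitly out of scope for this PR; the following is only a hypothetical decoding sketch (field names follow the OpenAI API reference; the Swift type names are illustrative, not the library's).

```swift
import Foundation

// Hypothetical sketch of decoding a message carrying parallel tool calls.
// JSON keys ("tool_calls", "function", "arguments") follow the OpenAI API
// reference; AssistantMessage / ToolCall are made-up illustrative names.
struct AssistantMessage: Decodable {
    struct ToolCall: Decodable {
        struct FunctionCall: Decodable {
            let name: String
            let arguments: String // JSON-encoded string, per the API docs
        }
        let id: String
        let type: String // currently always "function"
        let function: FunctionCall
    }

    let role: String
    let content: String?
    let toolCalls: [ToolCall]?

    enum CodingKeys: String, CodingKey {
        case role, content
        case toolCalls = "tool_calls"
    }
}

let json = """
{"role": "assistant", "content": null,
 "tool_calls": [
   {"id": "call_1", "type": "function",
    "function": {"name": "get_weather", "arguments": "{\\"city\\": \\"Rome\\"}"}},
   {"id": "call_2", "type": "function",
    "function": {"name": "get_time", "arguments": "{\\"tz\\": \\"CET\\"}"}}
 ]}
""".data(using: .utf8)!

let message = try! JSONDecoder().decode(AssistantMessage.self, from: json)
print(message.toolCalls?.map(\.function.name) ?? [])
```

The key point is that `toolCalls` is optional and plural: callers would iterate over it rather than expecting a single function_call object as before.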
Amazing work! 😍

Yes, the main idea was to support the new models. Then I added the new params too, since it was very little effort.

    // content is now required
    // @see https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages
    let content: [Content]

    public struct Content: Codable, Equatable {
        let type: ChatContentType
        let value: String

        public enum ChatContentType: String, Codable {
            case text
            case imageUrl = "image_url"
        }
    }
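The "custom logic" discussed above would be needed because the wire format keys the payload by type ({"type": "text", "text": ...} versus {"type": "image_url", "image_url": {"url": ...}}), while the snippet stores a single value string. A minimal sketch of a custom encode(to:) that bridges the two, assuming a hypothetical ChatContentPart type (not the library's actual API):

```swift
import Foundation

// Illustrative sketch only: ChatContentPart is a hypothetical name. It encodes
// each content part to the shape the Chat Completions endpoint expects.
struct ChatContentPart: Encodable {
    enum Kind: String {
        case text
        case imageUrl = "image_url"
    }

    struct ImageURL: Encodable {
        let url: String
    }

    let kind: Kind
    let value: String

    enum CodingKeys: String, CodingKey {
        case type
        case text
        case imageUrl = "image_url"
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(kind.rawValue, forKey: .type)
        switch kind {
        case .text:
            // {"type": "text", "text": "..."}
            try container.encode(value, forKey: .text)
        case .imageUrl:
            // {"type": "image_url", "image_url": {"url": "..."}}
            try container.encode(ImageURL(url: value), forKey: .imageUrl)
        }
    }
}

let parts = [
    ChatContentPart(kind: .text, value: "What is in this image?"),
    ChatContentPart(kind: .imageUrl, value: "https://example.com/cat.png"),
]

let encoder = JSONEncoder()
encoder.outputFormatting = [.sortedKeys] // deterministic output for inspection
let json = String(data: try! encoder.encode(parts), encoding: .utf8)!
print(json)
```

Always sending an array of such parts, even for a plain text prompt, matches the simplification agreed on earlier in the thread: the endpoint accepts it either way.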
@rawnly thanks for your work! I was waiting for this functionality! 👍 @ingvarus-bc will there be a release tag with those changes?

Hi @marcoboerner, sure! I will make a release soon; first it would be great to finish up what we discussed in this PR, and I've got to merge some previous improvements.

I have a working branch with the new content; it's been tricky, but it works (just tested on a local project). All tests are passing, plus a new test to cover the new entity and some encoding/decoding utilities. Tomorrow I can try implementing the new tools too 🙌
Any ETA on when this PR can be merged?

Hi @Arnav-arw, I will do it within this working week for sure, as well as a new release tag with some of the new functionality!

Hey @rawnly, need a hand with the …

Sure, some help is appreciated! The content is ready, but I still have some issues.

Okay, you can make a PR and let's finish it up together, so that we can be all set for the release ✨
Hi @rawnly, it seems like the tests are failing in OpenAITestsDecoder.swift. UPD: there was a quick, easy fix to the tests. 🙌

Kudos, SonarCloud Quality Gate passed!
New Models and Enhancements








What

Models

Added new models and deprecated some old ones.

- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-3.5-turbo-1106
- dall-e-2
- dall-e-3

Options

- ChatQuery now supports responseFormat
- ImageQuery now has model and style properties
- AudioTranscriptionQuery now supports responseFormat
- AudioTranslationQuery now supports responseFormat

Why

New features availability.

Missing

Looking at the OpenAI docs, it seems like the functions and function_call parameters are deprecated and replaced with tools.

Affected Areas
Image Generation and Chat Completions
fixes #111
fixes #112
#113 (partially)
fixes #114
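The Missing section notes that functions and function_call are deprecated in favor of tools. For the follow-up work discussed in the thread, the request side might look roughly like this sketch (key names follow the OpenAI API reference; the Swift types are hypothetical, and the JSON Schema for parameters is flattened to a string map just to keep the sketch short):

```swift
import Foundation

// Hypothetical sketch of the `tools` request parameter that replaces the
// deprecated `functions`/`function_call`. Tool/Function are illustrative
// names, not this library's types.
struct Tool: Encodable {
    struct Function: Encodable {
        let name: String
        let description: String
        // Real code would model the full JSON Schema object here;
        // a flat string map keeps the sketch short.
        let parameters: [String: String]
    }

    let type = "function" // the only tool type currently documented
    let function: Function
}

let tools = [
    Tool(function: .init(
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: ["type": "object"]
    ))
]

let encoder = JSONEncoder()
encoder.outputFormatting = [.sortedKeys]
let body = String(data: try! encoder.encode(tools), encoding: .utf8)!
print(body)
```

Each entry wraps a function definition in a typed envelope, which is what lets the response come back as the matching tool_calls array.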