Use this library to adopt AI APIs in your app. Swift clients for the following providers are included:
- OpenAI
- Gemini
- Anthropic
- Stability AI
- DeepL
- Together AI
- Replicate
- ElevenLabs
- Fal
- Groq
- Perplexity
- Mistral
- EachAI
- OpenRouter
Your initialization code determines whether requests go straight to the provider or are protected through the AIProxy backend.
We only recommend making requests straight to the provider during prototyping and for BYOK use cases.
Requests that are protected through AIProxy have five levels of security applied to keep your API key secure and your AI bill predictable:
- Certificate pinning
- DeviceCheck verification
- Split key encryption
- Per user rate limits
- Per IP rate limits
1. From within your Xcode project, select `File > Add Package Dependencies`
2. Punch `github.com/lzell/aiproxyswift` into the package URL bar, and select the 'main' branch as the dependency rule. Alternatively, you can choose specific releases if you'd like to have finer control of when your dependency gets updated.
See the AIProxy integration video. Note that this is not required if you are shipping an app where the customers provide their own API keys (known as BYOK for "bring your own key").
If you are shipping an app using a personal or company API key, we highly recommend setting up AIProxy as an alternative to building, monitoring, and maintaining your own backend.
- If you set the dependency rule to `main` in step 2 above, then you can ensure the package is up to date by right clicking on the package and selecting 'Update Package'.
- If you selected a version-based rule, inspect the rule in the 'Package Dependencies' section of your project settings. Once the rule is set to include the release version that you'd like to bring in, Xcode should update the package automatically. If it does not, right click on the package in the project tree and select 'Update Package'.
Along with the snippets below, which you can copy and paste into your Xcode project, we also offer full demo apps to jump-start your development. Please see the AIProxyBootstrap repo.
- OpenAI
- Gemini
- Anthropic
- Stability AI
- DeepL
- Together AI
- Replicate
- ElevenLabs
- Fal
- Groq
- Perplexity
- Mistral
- EachAI
- OpenRouter
- Advanced Settings
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let response = try await openAIService.chatCompletionRequest(body: .init(
model: "gpt-4o",
messages: [.system(content: .text("hello world"))]
))
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create OpenAI chat completion: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let requestBody = OpenAIChatCompletionRequestBody(
model: "gpt-4o-mini",
messages: [.user(content: .text("hello world"))]
)
do {
let stream = try await openAIService.streamingChatCompletionRequest(body: requestBody)
for try await chunk in stream {
print(chunk.choices.first?.delta.content ?? "")
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create OpenAI streaming chat completion: \(error.localizedDescription)")
}
On macOS, use `NSImage(named:)` in place of `UIImage(named:)`.
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let image = UIImage(named: "myImage") else {
print("Could not find an image named 'myImage' in your app assets")
return
}
guard let imageURL = AIProxy.encodeImageAsURL(image: image, compressionQuality: 0.8) else {
print("Could not convert image to OpenAI's imageURL format")
return
}
do {
let response = try await openAIService.chatCompletionRequest(body: .init(
model: "gpt-4o",
messages: [
.system(
content: .text("Tell me what you see")
),
.user(
content: .parts(
[
.text("What do you see?"),
.imageURL(imageURL, detail: .auto)
]
)
)
]
))
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create OpenAI multi-modal chat completion: \(error.localizedDescription)")
}
This snippet will print out the URL of an image generated with `dall-e-3`:
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = OpenAICreateImageRequestBody(
prompt: "a skier",
model: "dall-e-3"
)
let response = try await openAIService.createImageRequest(body: requestBody)
print(response.data.first?.url ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not generate an image with OpenAI's DALLE: \(error.localizedDescription)")
}
Use `responseFormat` and specify in the prompt that OpenAI should return JSON only:
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = OpenAIChatCompletionRequestBody(
model: "gpt-4o",
messages: [
.system(content: .text("Return valid JSON only")),
.user(content: .text("Return alice and bob in a list of names"))
],
responseFormat: .jsonObject
)
let response = try await openAIService.chatCompletionRequest(body: requestBody)
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create OpenAI chat completion in JSON mode: \(error.localizedDescription)")
}
This example prompts ChatGPT to construct a color palette and conform to a strict JSON schema in its response:
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let schema: [String: AIProxyJSONValue] = [
"type": "object",
"properties": [
"colors": [
"type": "array",
"items": [
"type": "object",
"properties": [
"name": [
"type": "string",
"description": "A descriptive name to give the color"
],
"hex_code": [
"type": "string",
"description": "The hex code of the color"
]
],
"required": ["name", "hex_code"],
"additionalProperties": false
]
]
],
"required": ["colors"],
"additionalProperties": false
]
let requestBody = OpenAIChatCompletionRequestBody(
model: "gpt-4o-2024-08-06",
messages: [
.system(content: .text("Return valid JSON only, and follow the specified JSON structure")),
.user(content: .text("Return a peaches and cream color palette"))
],
responseFormat: .jsonSchema(
name: "palette_creator",
description: "A list of colors that make up a color pallete",
schema: schema,
strict: true
)
)
let response = try await openAIService.chatCompletionRequest(body: requestBody)
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create OpenAI chat completion with structured outputs: \(error.localizedDescription)")
}
This example is taken from the structured outputs announcement: https://openai.com/index/introducing-structured-outputs-in-the-api/
It asks ChatGPT to call a function with the correct arguments to look up a business's unfulfilled orders:
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let schema: [String: AIProxyJSONValue] = [
"type": "object",
"properties": [
"location": [
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
],
"unit": [
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature. If not specified in the prompt, always default to fahrenheit",
"default": "fahrenheit"
]
],
"required": ["location", "unit"],
"additionalProperties": false
]
let requestBody = OpenAIChatCompletionRequestBody(
model: "gpt-4o-2024-08-06",
messages: [
.user(content: .text("How cold is it today in SF?"))
],
tools: [
.function(
name: "get_weather",
description: "Call this when the user wants the weather",
parameters: schema,
strict: true)
]
)
let response = try await openAIService.chatCompletionRequest(body: requestBody)
if let toolCall = response.choices.first?.message.toolCalls?.first {
let functionName = toolCall.function.name
let arguments = toolCall.function.arguments ?? [:]
print("ChatGPT wants us to call function \(functionName) with arguments: \(arguments)")
} else {
print("Could not get function arguments")
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not make an OpenAI structured output tool call: \(error.localizedDescription)")
}
This example shows streaming tool use, where one tool is for getting the weather and another tool is for chatting with the user.
If the user's prompt is not weather related, then OpenAI will call the 'chat' tool. You can use a strategy like this to build experiences where the main purpose is to call a tool, but which also have chat functionality alongside it.
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
//     unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
//     partialKey: "partial-key-from-your-developer-dashboard",
//     serviceURL: "service-url-from-your-developer-dashboard"
// )
let systemInstructions = """
You are a friendly assistant that chats with the user or provides them
with the weather.
If the user is just trying to chat, use the 'chat' tool.
If the user wants the weather, use the 'get_weather' tool.
"""
let weatherSchema: [String: AIProxyJSONValue] = [
"type": "object",
"properties": [
"location": [
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
],
"unit": [
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature. If not specified in the prompt, always default to fahrenheit",
"default": "fahrenheit"
]
],
"required": ["location", "unit"],
"additionalProperties": false
]
let chatSchema: [String: AIProxyJSONValue] = [
"type": "object",
"properties": [
"message": [
"type": "string",
"description": "A message to chat with the user"
]
],
"required": ["message"],
"additionalProperties": false
]
let requestBody = OpenAIChatCompletionRequestBody(
model: "gpt-4o",
messages: [
.system(content: .text(systemInstructions)),
.user(content: .text("How are you doing?")),
],
tools: [
.function(
name: "get_weather",
description: "Gets the weather",
parameters: weatherSchema,
strict: true
),
.function(
name: "chat",
description: "Chat with the user",
parameters: chatSchema,
strict: true
)
]
)
do {
let stream = try await openAIService.streamingChatCompletionRequest(body: requestBody)
for try await chunk in stream {
let toolCall = chunk.choices.first?.delta.toolCalls?.first
if let functionName = toolCall?.function?.name {
print("ChatGPT wants to call function \(functionName) with arguments...")
}
print(toolCall?.function?.arguments ?? "")
if let usage = chunk.usage {
print(
"""
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.completionTokensDetails?.reasoningTokens ?? 0) reasoning tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not make a streaming tool call to OpenAI: \(error.localizedDescription)")
}
1. Record an audio file in QuickTime and save it as "helloworld.m4a"
2. Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type `cmd-opt-0` to open the inspector panel, and view 'Target Membership'
3. Run this snippet:
```swift
import AIProxy

/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
//     unprotectedAPIKey: "your-openai-key"
// )

/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
//     partialKey: "partial-key-from-your-developer-dashboard",
//     serviceURL: "service-url-from-your-developer-dashboard"
// )

do {
    let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
    let requestBody = OpenAICreateTranscriptionRequestBody(
        file: try Data(contentsOf: url),
        model: "whisper-1",
        responseFormat: "verbose_json",
        timestampGranularities: [.word, .segment]
    )
    let response = try await openAIService.createTranscriptionRequest(body: requestBody)
    if let words = response.words {
        for word in words {
            print("\(word.word) from \(word.start) to \(word.end)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not get word-level timestamps from OpenAI: \(error.localizedDescription)")
}
```
```swift
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = OpenAITextToSpeechRequestBody(
input: "Hello world",
voice: .nova
)
let mpegData = try await openAIService.createTextToSpeechRequest(body: requestBody)
// Do not use a local `let` or `var` for AVAudioPlayer.
// You need the lifecycle of the player to live beyond the scope of this function.
// Instead, use file scope or set the player as a member of a reference type with long life.
// For example, at the top of this file you may define:
//
// fileprivate var audioPlayer: AVAudioPlayer? = nil
//
// And then use the code below to play the TTS result:
audioPlayer = try AVAudioPlayer(data: mpegData)
audioPlayer?.prepareToPlay()
audioPlayer?.play()
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create OpenAI TTS audio: \(error.localizedDescription)")
}
```
```swift
import AIProxy
/* Uncomment for BYOK use cases */
// let openAIService = AIProxy.openAIDirectService(
// unprotectedAPIKey: "your-openai-key"
// )
/* Uncomment for all other production use cases */
// let openAIService = AIProxy.openAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let requestBody = OpenAIModerationRequestBody(
input: [
.text("is this bad"),
],
model: "omni-moderation-latest"
)
do {
let response = try await openAIService.moderationRequest(body: requestBody)
print("Is this content flagged: \(response.results.first?.flagged ?? false)")
//
// For a more detailed assessment of the input content, inspect:
//
// response.results.first?.categories
//
// and
//
// response.results.first?.categoryScores
//
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not perform moderation request to OpenAI")
}
```
You can use all of the OpenAI snippets above with one change. Initialize the OpenAI service with:
import AIProxy
let openAIService = AIProxy.openAIService(
partialKey: "partial-key-from-your-developer-dashboard",
serviceURL: "service-url-from-your-developer-dashboard",
requestFormat: .azureDeployment(apiVersion: "2024-06-01")
)
import AIProxy
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let requestBody = GeminiGenerateContentRequestBody(
contents: [
.init(
parts: [.text("How do I use product xyz?")]
)
],
generationConfig: .init(maxOutputTokens: 1024),
systemInstruction: .init(parts: [.text("Introduce yourself as a customer support person")])
)
do {
let response = try await geminiService.generateContentRequest(
body: requestBody,
model: "gemini-2.0-flash-exp"
)
for part in response.candidates?.first?.content?.parts ?? [] {
switch part {
case .text(let text):
print("Gemini sent: \(text)")
case .functionCall(name: let functionName, args: let arguments):
print("Gemini wants us to call function \(functionName) with arguments: \(arguments ?? [:])")
}
}
if let usage = response.usageMetadata {
print(
"""
Used:
\(usage.promptTokenCount ?? 0) prompt tokens
\(usage.cachedContentTokenCount ?? 0) cached tokens
\(usage.candidatesTokenCount ?? 0) candidate tokens
\(usage.totalTokenCount ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Gemini generate content request: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let functionParameters: [String: AIProxyJSONValue] = [
"type": "OBJECT",
"properties": [
"brightness": [
"description": "Light level from 0 to 100. Zero is off and 100 is full brightness.",
"type": "NUMBER"
],
"colorTemperature": [
"description": "Color temperature of the light fixture which can be `daylight`, `cool` or `warm`.",
"type": "STRING"
]
],
"required": [
"brightness",
"colorTemperature"
]
]
let requestBody = GeminiGenerateContentRequestBody(
contents: [
.init(
parts: [.text("Dim the lights so the room feels cozy and warm.")],
role: "user"
)
],
/* Uncomment this to enforce that a function is called regardless of prompt contents. */
// toolConfig: .init(
// functionCallingConfig: .init(
// allowedFunctionNames: ["controlLight"],
// mode: .anyFunction
// )
// ),
tools: [
.functionDeclarations(
[
.init(
name: "controlLight",
description: "Set the brightness and color temperature of a room light.",
parameters: functionParameters
)
]
)
]
)
do {
let response = try await geminiService.generateContentRequest(
body: requestBody,
model: "gemini-2.0-flash-exp"
)
for part in response.candidates?.first?.content?.parts ?? [] {
switch part {
case .text(let text):
print("Gemini sent: \(text)")
case .functionCall(name: let functionName, args: let arguments):
print("Gemini wants us to call function \(functionName) with arguments: \(arguments ?? [:])")
}
}
if let usage = response.usageMetadata {
print(
"""
Used:
\(usage.promptTokenCount ?? 0) prompt tokens
\(usage.cachedContentTokenCount ?? 0) cached tokens
\(usage.candidatesTokenCount ?? 0) candidate tokens
\(usage.totalTokenCount ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Gemini tool (function) call: \(error.localizedDescription)")
}
It's important that you connect a GCP billing account to your Gemini API key to use this feature. Otherwise, Gemini will return 429s for every call. You can connect your billing account for the API keys you use here.
Consider applying to Google for Startups to gain credits that you can put towards Gemini.
import AIProxy
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let requestBody = GeminiGenerateContentRequestBody(
contents: [
.init(
parts: [.text("What is the price of Google stock today")],
role: "user"
)
],
tools: [
.googleSearchRetrieval(.init(dynamicThreshold: 0.7, mode: .dynamic))
]
)
do {
let response = try await geminiService.generateContentRequest(
body: requestBody,
model: "gemini-1.5-flash"
)
for part in response.candidates?.first?.content?.parts ?? [] {
switch part {
case .text(let text):
print("Gemini sent: \(text)")
case .functionCall(name: let functionName, args: let arguments):
print("Gemini wants us to call function \(functionName) with arguments: \(arguments ?? [:])")
}
}
if let usage = response.usageMetadata {
print(
"""
Used:
\(usage.promptTokenCount ?? 0) prompt tokens
\(usage.cachedContentTokenCount ?? 0) cached tokens
\(usage.candidatesTokenCount ?? 0) candidate tokens
\(usage.totalTokenCount ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Gemini grounding search request: \(error.localizedDescription)")
}
Add a file called `helloworld.m4a` to your Xcode assets before running this sample snippet:
import AIProxy
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a") else {
print("Could not find an audio file named helloworld.m4a in your app bundle")
return
}
do {
let requestBody = GeminiGenerateContentRequestBody(
contents: [
.init(
parts: [
.text("""
Can you transcribe this interview, in the format of timecode, speaker, caption?
Use speaker A, speaker B, etc. to identify speakers.
"""),
.inline(data: try Data(contentsOf: url), mimeType: "audio/mp4")
]
)
]
)
let response = try await geminiService.generateContentRequest(
body: requestBody,
model: "gemini-1.5-flash"
)
for part in response.candidates?.first?.content?.parts ?? [] {
switch part {
case .text(let text):
print("Gemini transcript: \(text)")
case .functionCall(name: let functionName, args: let arguments):
print("Gemini wants us to call function \(functionName) with arguments: \(arguments ?? [:])")
}
}
if let usage = response.usageMetadata {
print(
"""
Used:
\(usage.promptTokenCount ?? 0) prompt tokens
\(usage.cachedContentTokenCount ?? 0) cached tokens
\(usage.candidatesTokenCount ?? 0) candidate tokens
\(usage.totalTokenCount ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create transcript with Gemini: \(error.localizedDescription)")
}
Add a file called 'my-image.jpg' to Xcode app assets. Then run this snippet:
import AIProxy
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let image = NSImage(named: "my-image") else {
print("Could not find an image named 'my-image' in your app assets")
return
}
guard let jpegData = AIProxy.encodeImageAsJpeg(image: image, compressionQuality: 0.9) else {
print("Could not encode image as Jpeg")
return
}
do {
let requestBody = GeminiGenerateContentRequestBody(
contents: [
.init(
parts: [
.text("What do you see?"),
.inline(
data: jpegData,
mimeType: "image/jpeg"
)
]
)
],
safetySettings: [
.init(category: .dangerousContent, threshold: .none),
.init(category: .civicIntegrity, threshold: .none),
.init(category: .harassment, threshold: .none),
.init(category: .hateSpeech, threshold: .none),
.init(category: .sexuallyExplicit, threshold: .none)
]
)
let response = try await geminiService.generateContentRequest(
body: requestBody,
model: "gemini-1.5-flash"
)
for part in response.candidates?.first?.content?.parts ?? [] {
switch part {
case .text(let text):
print("Gemini sees: \(text)")
case .functionCall(name: let functionName, args: let arguments):
print("Gemini wants us to call function \(functionName) with arguments: \(arguments ?? [:])")
}
}
if let usage = response.usageMetadata {
print(
"""
Used:
\(usage.promptTokenCount ?? 0) prompt tokens
\(usage.cachedContentTokenCount ?? 0) cached tokens
\(usage.candidatesTokenCount ?? 0) candidate tokens
\(usage.totalTokenCount ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Gemini generate content request: \(error.localizedDescription)")
}
Add a file called `my-movie.mov` to your Xcode assets before running this sample snippet. If you use a file like `my-movie.mp4`, change the mime type from `video/quicktime` to `video/mp4` in the snippet below.
import AIProxy
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
// Load the movie data from your Xcode assets:
guard let movieAsset = NSDataAsset(name: "my-movie") else {
print("""
Drop my-movie.mov into Assets first.
""")
return
}
do {
let geminiFile = try await geminiService.uploadFile(
fileData: movieAsset.data,
mimeType: "video/quicktime"
)
print("""
Video file uploaded to Gemini's media storage.
It will be available for 48 hours.
Find it at \(geminiFile.uri.absoluteString)
""")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not upload file to Gemini: \(error.localizedDescription)")
}
Use the file URL returned from the snippet above.
import AIProxy
let fileURL = URL(string: "url-from-snippet-above")!
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let requestBody = GeminiGenerateContentRequestBody(
model: "gemini-1.5-flash",
contents: [
.init(
parts: [
.text("Dump the text content in markdown from this video"),
.file(
url: fileURL,
mimeType: "video/quicktime"
)
]
)
],
safetySettings: [
.init(category: .dangerousContent, threshold: .none),
.init(category: .civicIntegrity, threshold: .none),
.init(category: .harassment, threshold: .none),
.init(category: .hateSpeech, threshold: .none),
.init(category: .sexuallyExplicit, threshold: .none)
]
)
do {
let response = try await geminiService.generateContentRequest(
body: requestBody,
model: "gemini-1.5-flash"
)
for part in response.candidates?.first?.content?.parts ?? [] {
switch part {
case .text(let text):
print("Gemini transcript: \(text)")
case .functionCall(name: let functionName, args: let arguments):
print("Gemini wants us to call function \(functionName) with arguments: \(arguments ?? [:])")
}
}
if let usage = response.usageMetadata {
print(
"""
Used:
\(usage.promptTokenCount ?? 0) prompt tokens
\(usage.cachedContentTokenCount ?? 0) cached tokens
\(usage.candidatesTokenCount ?? 0) candidate tokens
\(usage.totalTokenCount ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Gemini vision request: \(error.localizedDescription)")
}
import AIProxy
let fileURL = URL(string: "url-from-snippet-above")!
/* Uncomment for BYOK use cases */
// let geminiService = AIProxy.geminiDirectService(
// unprotectedAPIKey: "your-gemini-key"
// )
/* Uncomment for all other production use cases */
// let geminiService = AIProxy.geminiService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
try await geminiService.deleteFile(fileURL: fileURL)
print("File deleted from \(fileURL.absoluteString)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not delete file from Gemini temporary storage: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
AnthropicInputMessage(content: [.text("hello world")], role: .user)
],
model: "claude-3-5-sonnet-20240620"
))
for content in response.content {
switch content {
case .text(let message):
print("Claude sent a message: \(message)")
case .toolUse(id: _, name: let toolName, input: let toolInput):
print("Claude used a tool \(toolName) with input: \(toolInput)")
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create an Anthropic message: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
.init(
content: [.text("hello world")],
role: .user
)
],
model: "claude-3-5-sonnet-20240620"
)
let stream = try await anthropicService.streamingMessageRequest(body: requestBody)
for try await chunk in stream {
switch chunk {
case .text(let text):
print(text)
case .toolUse(name: let toolName, input: let toolInput):
print("Claude wants to call tool \(toolName) with input \(toolInput)")
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not use Anthropic's message stream: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
.init(
content: [.text("What is nvidia's stock price?")],
role: .user
)
],
model: "claude-3-5-sonnet-20240620",
tools: [
.init(
description: "Call this function when the user wants a stock symbol",
inputSchema: [
"type": "object",
"properties": [
"ticker": [
"type": "string",
"description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
]
],
"required": ["ticker"]
],
name: "get_stock_symbol"
)
]
)
let stream = try await anthropicService.streamingMessageRequest(body: requestBody)
for try await chunk in stream {
switch chunk {
case .text(let text):
print(text)
case .toolUse(name: let toolName, input: let toolInput):
print("Claude wants to call tool \(toolName) with input \(toolInput)")
}
}
print("Done with stream")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not use Anthropic's streaming tool call: \(error.localizedDescription)")
}
On macOS, use `NSImage(named:)` in place of `UIImage(named:)`.
import AIProxy
guard let image = UIImage(named: "myImage") else {
print("Could not find an image named 'myImage' in your app assets")
return
}
guard let jpegData = AIProxy.encodeImageAsJpeg(image: image, compressionQuality: 0.8) else {
print("Could not convert image to jpeg")
return
}
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
AnthropicInputMessage(content: [
.text("Provide a very short description of this image"),
.image(mediaType: .jpeg, data: jpegData.base64EncodedString())
], role: .user)
],
model: "claude-3-5-sonnet-20240620"
))
for content in response.content {
switch content {
case .text(let message):
print("Claude sent a message: \(message)")
case .toolUse(id: _, name: let toolName, input: let toolInput):
print("Claude used a tool \(toolName) with input: \(toolInput)")
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not send a multi-modal message to Anthropic: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
.init(
content: [.text("What is nvidia's stock price?")],
role: .user
)
],
model: "claude-3-5-sonnet-20240620",
tools: [
.init(
description: "Call this function when the user wants a stock symbol",
inputSchema: [
"type": "object",
"properties": [
"ticker": [
"type": "string",
"description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
]
],
"required": ["ticker"]
],
name: "get_stock_symbol"
)
]
)
let response = try await anthropicService.messageRequest(body: requestBody)
for content in response.content {
switch content {
case .text(let message):
print("Claude sent a message: \(message)")
case .toolUse(id: _, name: let toolName, input: let toolInput):
print("Claude used a tool \(toolName) with input: \(toolInput)")
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Anthropic message with tool call: \(error.localizedDescription)")
}
This snippet includes a pdf `mydocument.pdf` in the Anthropic request. Adjust the filename to match the pdf included in your Xcode project. The snippet expects the pdf in the app bundle.
```swift
import AIProxy
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let pdfFileURL = Bundle.main.url(forResource: "mydocument", withExtension: "pdf"),
let pdfData = try? Data(contentsOf: pdfFileURL)
else {
print("""
Drop mydocument.pdf file into your Xcode project first.
""")
return
}
do {
let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
AnthropicInputMessage(content: [.pdf(data: pdfData.base64EncodedString())], role: .user),
AnthropicInputMessage(content: [.text("Summarize this")], role: .user)
],
model: "claude-3-5-sonnet-20241022"
))
for content in response.content {
switch content {
case .text(let message):
print("Claude sent a message: \(message)")
case .toolUse(id: _, name: let toolName, input: let toolInput):
print("Claude used a tool \(toolName) with input: \(toolInput)")
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not use Anthropic's buffered pdf support: \(error.localizedDescription)")
}
```
This snippet includes a pdf `mydocument.pdf` in the Anthropic request. Adjust the filename to match the pdf included in your Xcode project. The snippet expects the pdf in the app bundle.
```swift
import AIProxy
/* Uncomment for BYOK use cases */
// let anthropicService = AIProxy.anthropicDirectService(
// unprotectedAPIKey: "your-anthropic-key"
// )
/* Uncomment for all other production use cases */
// let anthropicService = AIProxy.anthropicService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let pdfFileURL = Bundle.main.url(forResource: "mydocument", withExtension: "pdf"),
let pdfData = try? Data(contentsOf: pdfFileURL)
else {
print("""
Drop mydocument.pdf file into your Xcode project first.
""")
return
}
do {
let stream = try await anthropicService.streamingMessageRequest(body: AnthropicMessageRequestBody(
maxTokens: 1024,
messages: [
AnthropicInputMessage(content: [.pdf(data: pdfData.base64EncodedString())], role: .user),
AnthropicInputMessage(content: [.text("Summarize this")], role: .user)
],
model: "claude-3-5-sonnet-20241022"
))
for try await chunk in stream {
switch chunk {
case .text(let text):
print(text)
case .toolUse(name: let toolName, input: let toolInput):
print("Claude wants to call tool \(toolName) with input \(toolInput)")
}
}
print("Done with stream")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not use Anthropic's streaming pdf support: \(error.localizedDescription)")
}
```
In the snippet below, replace `NSImage` with `UIImage` if you are building on iOS. For a SwiftUI example, see this gist.
import AIProxy
/* Uncomment for BYOK use cases */
// let service = AIProxy.stabilityDirectService(
// unprotectedAPIKey: "your-stability-key"
// )
/* Uncomment for all other production use cases */
// let service = AIProxy.stabilityAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let body = StabilityAIUltraRequestBody(prompt: "Lighthouse on a cliff overlooking the ocean")
let response = try await service.ultraRequest(body: body)
let image = NSImage(data: response.imageData)
// Do something with `image`
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not generate an image with StabilityAI: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let service = AIProxy.deepLDirectService(
// unprotectedAPIKey: "your-deepL-key"
// )
/* Uncomment for all other production use cases */
// let service = AIProxy.deepLService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let body = DeepLTranslateRequestBody(targetLang: "ES", text: ["hello world"])
let response = try await service.translateRequest(body: body)
// Do something with `response.translations`
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create DeepL translation: \(error.localizedDescription)")
}
See the TogetherAI model list for available options to pass as the `model` argument:
import AIProxy
/* Uncomment for BYOK use cases */
// let togetherAIService = AIProxy.togetherAIDirectService(
// unprotectedAPIKey: "your-togetherAI-key"
// )
/* Uncomment for all other production use cases */
// let togetherAIService = AIProxy.togetherAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = TogetherAIChatCompletionRequestBody(
messages: [TogetherAIMessage(content: "Hello world", role: .user)],
model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
)
let response = try await togetherAIService.chatCompletionRequest(body: requestBody)
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create TogetherAI chat completion: \(error.localizedDescription)")
}
See the TogetherAI model list for available options to pass as the `model` argument:
import AIProxy
/* Uncomment for BYOK use cases */
// let togetherAIService = AIProxy.togetherAIDirectService(
// unprotectedAPIKey: "your-togetherAI-key"
// )
/* Uncomment for all other production use cases */
// let togetherAIService = AIProxy.togetherAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = TogetherAIChatCompletionRequestBody(
messages: [TogetherAIMessage(content: "Hello world", role: .user)],
model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
)
let stream = try await togetherAIService.streamingChatCompletionRequest(body: requestBody)
for try await chunk in stream {
print(chunk.choices.first?.delta.content ?? "")
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create TogetherAI streaming chat completion: \(error.localizedDescription)")
}
JSON mode is handy for enforcing that the model returns JSON in a structure that your application expects. You specify the contract using `schema` below. Note that only some models support JSON mode. See this guide for a list.
import AIProxy
/* Uncomment for BYOK use cases */
// let togetherAIService = AIProxy.togetherAIDirectService(
// unprotectedAPIKey: "your-togetherAI-key"
// )
/* Uncomment for all other production use cases */
// let togetherAIService = AIProxy.togetherAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let schema: [String: AIProxyJSONValue] = [
"type": "object",
"properties": [
"colors": [
"type": "array",
"items": [
"type": "object",
"properties": [
"name": [
"type": "string",
"description": "A descriptive name to give the color"
],
"hex_code": [
"type": "string",
"description": "The hex code of the color"
]
],
"required": ["name", "hex_code"],
"additionalProperties": false
]
]
],
"required": ["colors"],
"additionalProperties": false
]
let requestBody = TogetherAIChatCompletionRequestBody(
messages: [
TogetherAIMessage(
content: "You are a helpful assistant that answers in JSON",
role: .system
),
TogetherAIMessage(
content: "Create a peaches and cream color palette",
role: .user
)
],
model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
responseFormat: .json(schema: schema)
)
let response = try await togetherAIService.chatCompletionRequest(body: requestBody)
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create TogetherAI JSON chat completion: \(error.localizedDescription)")
}
We don't currently get the tool call result out of the response! If you need this use case, please open a GitHub issue. This example is a Swift port of this guide:
import AIProxy
/* Uncomment for BYOK use cases */
// let togetherAIService = AIProxy.togetherAIDirectService(
// unprotectedAPIKey: "your-togetherAI-key"
// )
/* Uncomment for all other production use cases */
// let togetherAIService = AIProxy.togetherAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let function = TogetherAIFunction(
description: "Call this when the user wants the weather",
name: "get_weather",
parameters: [
"type": "object",
"properties": [
"location": [
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
],
"num_days": [
"type": "integer",
"description": "The number of days to get the forecast for",
],
],
"required": ["location", "num_days"],
]
)
let toolPrompt = """
You have access to the following functions:
Use the function '\(function.name)' to '\(function.description)':
\(try function.serialize())
If you choose to call a function ONLY reply in the following format with no prefix or suffix:
<function=example_function_name>{{\"example_name\": \"example_value\"}}</function>
Reminder:
- Function calls MUST follow the specified format, start with <function= and end with </function>
- Required parameters MUST be specified
- Only call one function at a time
- Put the entire function call reply on one line
- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls
"""
let requestBody = TogetherAIChatCompletionRequestBody(
messages: [
TogetherAIMessage(
content: toolPrompt,
role: .system
),
TogetherAIMessage(
content: "What's the weather like in Tokyo over the next few days?",
role: .user
)
],
model: "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
temperature: 0,
tools: [
TogetherAITool(function: function)
]
)
let response = try await togetherAIService.chatCompletionRequest(body: requestBody)
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create TogetherAI llama 3.1 tool completion: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let input = ReplicateFluxSchnellInputSchema(
prompt: "Monument valley, Utah"
)
let output = try await replicateService.createFluxSchnellImageURLs(
input: input
)
print("Done creating Flux-Schnell image: ", output.first ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Flux-Schnell image: \(error.localizedDescription)")
}
See the full range of controls for generating an image by viewing ReplicateFluxSchnellInputSchema.swift
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let input = ReplicateFluxDevInputSchema(
prompt: "Monument valley, Utah. High res"
)
let output = try await replicateService.createFluxDevImageURLs(
input: input
)
print("Done creating Flux-Dev image: ", output.first ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Flux-Dev image: \(error.localizedDescription)")
}
See the full range of controls for generating an image by viewing ReplicateFluxDevInputSchema.swift
This snippet generates a version 1.1 image. If you would like to generate version 1, make the following substitutions:
- `ReplicateFluxProInputSchema_v1_1` -> `ReplicateFluxProInputSchema`
- `createFluxProImage_v1_1` -> `createFluxProImage`
```swift
import AIProxy

/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
//     unprotectedAPIKey: "your-replicate-key"
// )

/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
//     partialKey: "partial-key-from-your-developer-dashboard",
//     serviceURL: "service-url-from-your-developer-dashboard"
// )

do {
    let input = ReplicateFluxProInputSchema_v1_1(
        prompt: "Monument valley, Utah. High res"
    )
    let output = try await replicateService.createFluxProImageURL_v1_1(
        input: input
    )
    print("Done creating Flux-Pro image: ", output)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Flux-Pro image: \(error.localizedDescription)")
}
```
See the full range of controls for generating an image by viewing ReplicateFluxProInputSchema_v1_1.swift
On macOS, use `NSImage(named:)` in place of `UIImage(named:)`.
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let image = UIImage(named: "face") else {
print("Could not find an image named 'face' in your app assets")
return
}
guard let imageURL = AIProxy.encodeImageAsURL(image: image, compressionQuality: 0.8) else {
print("Could not convert image to a local data URI")
return
}
do {
let input = ReplicateFluxPulidInputSchema(
mainFaceImage: imageURL,
prompt: "smiling man holding sign with glowing green text 'PuLID for FLUX'",
numOutputs: 1,
startStep: 4
)
let output = try await replicateService.createFluxPulidImage(
input: input
)
print("Done creating Flux-PuLID image: ", output)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create Flux-Pulid images: \(error.localizedDescription)")
}
See the full range of controls for generating an image by viewing ReplicateFluxPulidInputSchema.swift
There are many controls to play with for this use case. Please see `ReplicateFluxDevControlNetInputSchema.swift` for the full range of controls.
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let input = ReplicateFluxDevControlNetInputSchema(
controlImage: URL(string: "https://example.com/your/image")!,
prompt: "a cyberpunk with natural greys and whites and browns",
controlStrength: 0.4
)
let output = try await replicateService.createFluxDevControlNetImage(
input: input
)
print("Done creating Flux-ControlNet image: ", output)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create Flux-ControlNet image: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let input = ReplicateSDXLInputSchema(
prompt: "Monument valley, Utah"
)
let urls = try await replicateService.createSDXLImageURLs(
input: input
)
print("Done creating SDXL image: ", urls.first ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create SDXL image: \(error.localizedDescription)")
}
See the full range of controls for generating an image by viewing ReplicateSDXLInputSchema.swift
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let input = ReplicateSDXLFreshInkInputSchema(
prompt: "A fresh ink TOK tattoo of monument valley, Utah",
negativePrompt: "ugly, broken, distorted"
)
let urls = try await replicateService.createSDXLFreshInkImageURLs(
input: input
)
print("Done creating SDXL fresh ink image: ", urls.first ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create SDXL fresh ink image: \(error.localizedDescription)")
}
See the full range of controls for generating an image by viewing ReplicateSDXLFreshInkInputSchema.swift
Look in the `ReplicateService+Convenience.swift` file for inspiration on how to do this.

1. Generate the Encodable representation of your input schema. Take a look at any of the input schemas used in `ReplicateService+Convenience.swift` for inspiration. Find the schema format that you should conform to using Replicate's web dashboard and tapping through `Your Model > API > Schema > Input Schema`
2. Generate the Decodable representation of your output schema. The output schema is defined on Replicate's site at `Your Model > API > Schema > Output Schema`. I find that unfortunately these schemas are not always accurate, so sometimes you have to look at the network response manually. For simple cases, a typealias will do (for example, if the output schema is just a string or an array of strings). Look at `ReplicateFluxOutputSchema.swift` for inspiration. If you need help doing this, please reach out.
3. Call the `createSynchronousPredictionUsingVersion` or `createSynchronousPredictionUsingOfficialModel` method and grab the `output` off the response. See `createFaceSwapImage` in `ReplicateService+Convenience.swift` as an example.
You'll need to change YourInputSchema
, YourOutputSchema
and your-model-version
in this
snippet:
```
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let input = YourInputSchema(
prompt: "Monument valley, Utah"
)
let apiResult: ReplicateSynchronousAPIOutput<YourOutputSchema> = try await replicateService.createSynchronousPredictionUsingVersion(
modelVersion: "your-model-version",
input: input,
        secondsToWait: 60  // how long to wait for the prediction before timing out
)
guard let output = apiResult.output else {
throw ReplicateError.predictionDidNotIncludeOutput
}
// Do something with output
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create replicate synchronous prediction: \(error.localizedDescription)")
}
```
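For illustration, here is a minimal sketch of what those two types might look like for a hypothetical model that takes a prompt and returns an array of image URLs. The field names here are assumptions; match them to your model's actual schema on Replicate's dashboard:
```
import Foundation

// Hypothetical input schema. The coding keys must match the model's
// Input Schema on Replicate's dashboard; `num_outputs` is just an illustration.
struct YourInputSchema: Encodable {
    let prompt: String
    let numOutputs: Int?

    enum CodingKeys: String, CodingKey {
        case prompt
        case numOutputs = "num_outputs"
    }
}

// If the model's Output Schema is just an array of strings (image URLs),
// a typealias is all you need:
typealias YourOutputSchema = [URL]
```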
Replace <your-account>:
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let modelURL = try await replicateService.createModel(
owner: "<your-account>",
name: "my-model",
description: "My great model",
hardware: "gpu-t4",
visibility: .private
)
print("Your model is at \(modelURL)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create replicate model: \(error.localizedDescription)")
}
Create a zip file called training.zip
and drop it in your Xcode assets.
See the "Prepare your training data" section of this guide
for tips on what to include in the zip file. Then run:
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let trainingData = NSDataAsset(name: "training") else {
print("""
Drop training.zip file into Assets first.
See the 'Prepare your training data' of this guide:
https://replicate.com/blog/fine-tune-flux
""")
return
}
do {
let fileUploadResponse = try await replicateService.uploadTrainingZipFile(
zipData: trainingData.data,
name: "training.zip"
)
print("""
Training file uploaded. Find it at \(fileUploadResponse.urls.get)
You can train with this file until \(fileUploadResponse.expiresAt ?? "")
""")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not upload file to replicate: \(error.localizedDescription)")
}
Use the <training-url> returned from the snippet above, and the <model-name> that you chose in the model creation snippet.
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
// You should experiment with the settings in `ReplicateFluxTrainingInput.swift` to
// find what works best for your use case.
//
// The `layersToOptimizeRegex` argument here speeds training and works well for faces.
// You could optionally remove that argument to see if the final trained model
// works better for your use case.
let trainingInput = ReplicateFluxTrainingInput(
inputImages: URL(string: "<training-url>")!,
layersToOptimizeRegex: "transformer.single_transformer_blocks.(7|12|16|20).proj_out",
steps: 200,
triggerWord: "face"
)
let reqBody = ReplicateTrainingRequestBody(destination: "<model-owner>/<model-name>", input: trainingInput)
// Find valid version numbers here: https://replicate.com/ostris/flux-dev-lora-trainer/train
let training = try await replicateService.createTraining(
modelOwner: "ostris",
modelName: "flux-dev-lora-trainer",
versionID: "d995297071a44dcb72244e6c19462111649ec86a9646c32df56daa7f14801944",
body: reqBody
)
print("Get training status at: \(training.urls?.get?.absoluteString ?? "unknown")")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create replicate training: \(error.localizedDescription)")
}
Use the <url>
that is returned from the snippet above.
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
// This URL comes from the output of the sample above
let url = URL(string: "<url>")!
do {
let training = try await replicateService.pollForTrainingComplete(
url: url,
pollAttempts: 100,
secondsBetweenPollAttempts: 10
)
print("""
Flux training status: \(training.status?.rawValue ?? "unknown")
Your model version is: \(training.output?.version ?? "unknown")
""")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not poll for the replicate training: \(error.localizedDescription)")
}
Use the <version>
string that was returned from the snippet above, but do not include the
model owner and model name in the string.
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let input = ReplicateFluxFineTuneInputSchema(
prompt: "an oil painting of my face on a blimp",
model: .dev,
numInferenceSteps: 28 // Replicate recommends around 28 steps for `.dev` and 4 for `.schnell`
)
do {
let predictionResponse = try await replicateService.createPrediction(
version: "<version>",
input: input,
output: ReplicatePredictionResponseBody<[URL]>.self
)
let predictionOutput: [URL] = try await replicateService.pollForPredictionOutput(
predictionResponse: predictionResponse,
pollAttempts: 30,
secondsBetweenPollAttempts: 5
)
print("Done creating predictionOutput: \(predictionOutput)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create replicate prediction: \(error.localizedDescription)")
}
On macOS, use NSImage(named:) in place of UIImage(named:)
import AIProxy
/* Uncomment for BYOK use cases */
// let replicateService = AIProxy.replicateDirectService(
// unprotectedAPIKey: "your-replicate-key"
// )
/* Uncomment for all other production use cases */
// let replicateService = AIProxy.replicateService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let louFace = UIImage(named: "lou_face") else {
print("Could not find an image named 'lou_face' in your app assets")
return
}
guard let toddFace = UIImage(named: "todd_face") else {
print("Could not find an image named 'todd_face' in your app assets")
return
}
do {
let input = ReplicateFaceSwapInputSchema(
localSource: AIProxy.encodeImageAsURL(image: louFace),
localTarget: AIProxy.encodeImageAsURL(image: toddFace)
)
let output = try await replicateService.createFaceSwapImage(input: input)
if let imageURL = output.imageURL {
print("Done creating xiankgx/face-swap image: ", imageURL)
} else {
print("face-swap returned status \(output.status) with error: \(output.msg ?? "unspecified")")
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create xiankgx/face-swap image: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let elevenLabsService = AIProxy.elevenLabsDirectService(
// unprotectedAPIKey: "your-elevenLabs-key"
// )
/* Uncomment for all other production use cases */
// let elevenLabsService = AIProxy.elevenLabsService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let body = ElevenLabsTTSRequestBody(
text: "Hello world"
)
let mpegData = try await elevenLabsService.ttsRequest(
voiceID: "EXAVITQu4vr4xnSDxMaL",
body: body
)
// Do not use a local `let` or `var` for AVAudioPlayer.
// You need the lifecycle of the player to live beyond the scope of this function.
// Instead, use file scope or set the player as a member of a reference type with long life.
// For example, at the top of this file you may define:
//
// fileprivate var audioPlayer: AVAudioPlayer? = nil
//
// And then use the code below to play the TTS result:
audioPlayer = try AVAudioPlayer(data: mpegData)
audioPlayer?.prepareToPlay()
audioPlayer?.play()
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create ElevenLabs TTS audio: \(error.localizedDescription)")
}
- See the full range of TTS controls by viewing ElevenLabsTTSRequestBody.swift.
- See https://api.elevenlabs.io/v1/voices for the IDs that you can pass to voiceID.
- Record an audio file in QuickTime and save it as "helloworld.m4a"
- Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type cmd-opt-0 to open the inspector panel, and view Target Membership
- Run this snippet:

import AIProxy
/* Uncomment for BYOK use cases */
// let elevenLabsService = AIProxy.elevenLabsDirectService(
//     unprotectedAPIKey: "your-elevenLabs-key"
// )
/* Uncomment for all other production use cases */
// let elevenLabsService = AIProxy.elevenLabsService(
//     partialKey: "partial-key-from-your-developer-dashboard",
//     serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let localAudioURL = Bundle.main.url(forResource: "helloworld", withExtension: "m4a") else {
    print("Could not find an audio file named helloworld.m4a in your app bundle")
    return
}
do {
    let body = ElevenLabsSpeechToSpeechRequestBody(
        audio: try Data(contentsOf: localAudioURL),
        modelID: "eleven_english_sts_v2",
        removeBackgroundNoise: true
    )
    let mpegData = try await elevenLabsService.speechToSpeechRequest(
        voiceID: "EXAVITQu4vr4xnSDxMaL",
        body: body
    )
    // Do not use a local `let` or `var` for AVAudioPlayer.
    // You need the lifecycle of the player to live beyond the scope of this function.
    // Instead, use file scope or set the player as a member of a reference type with long life.
    // For example, at the top of this file you may define:
    //
    //     fileprivate var audioPlayer: AVAudioPlayer? = nil
    //
    // And then use the code below to play the TTS result:
    audioPlayer = try AVAudioPlayer(data: mpegData)
    audioPlayer?.prepareToPlay()
    audioPlayer?.play()
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create ElevenLabs STS audio: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let falService = AIProxy.falDirectService(
// unprotectedAPIKey: "your-fal-key"
// )
/* Uncomment for all other production use cases */
// let falService = AIProxy.falService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let input = FalFastSDXLInputSchema(
prompt: "Yosemite Valley",
enableSafetyChecker: false
)
do {
let output = try await falService.createFastSDXLImage(input: input)
print("""
The first output image is at \(output.images?.first?.url?.absoluteString ?? "")
It took \(output.timings?.inference ?? Double.nan) seconds to generate.
""")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create Fal SDXL image: \(error.localizedDescription)")
}
See the full range of controls for generating an image by viewing FalFastSDXLInputSchema.swift
The garmentImage and modelImage arguments may be:

- A remote URL to the image hosted on a public site
- A local data URL that you construct using AIProxy.encodeImageAsURL

import AIProxy
/* Uncomment for BYOK use cases */
// let falService = AIProxy.falDirectService(
//     unprotectedAPIKey: "your-fal-key"
// )
/* Uncomment for all other production use cases */
// let falService = AIProxy.falService(
//     partialKey: "partial-key-from-your-developer-dashboard",
//     serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let garmentImage = NSImage(named: "garment-image"),
      let garmentImageURL = AIProxy.encodeImageAsURL(image: garmentImage) else {
    print("Could not find an image named 'garment-image' in your app assets")
    return
}
guard let modelImage = NSImage(named: "model-image"),
      let modelImageURL = AIProxy.encodeImageAsURL(image: modelImage) else {
    print("Could not find an image named 'model-image' in your app assets")
    return
}
let input = FalTryonInputSchema(
    category: .tops,
    garmentImage: garmentImageURL,
    modelImage: modelImageURL
)
do {
    let output = try await falService.createTryonImage(input: input)
    print("Tryon image is available at: \(output.images.first?.url.absoluteString ?? "No URL")")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create fashn/tryon image on Fal: \(error.localizedDescription)")
}
Your training data must be a zip file of images. You can either pull the zip from assets (what I do here), or construct the zip in memory:
import AIProxy
/* Uncomment for BYOK use cases */
// let falService = AIProxy.falDirectService(
// unprotectedAPIKey: "your-fal-key"
// )
/* Uncomment for all other production use cases */
// let falService = AIProxy.falService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
// Get the images to train with:
guard let trainingData = NSDataAsset(name: "training") else {
print("Drop training.zip file into Assets first")
return
}
do {
let url = try await falService.uploadTrainingZipFile(
zipData: trainingData.data,
name: "training.zip"
)
print("Training file uploaded. Find it at \(url.absoluteString)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not upload file to Fal: \(error.localizedDescription)")
}
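If you'd rather not use the asset catalog, you could read the zip from disk instead. This is a minimal sketch, assuming your app previously saved a training.zip to its documents directory (the file name and location are hypothetical):
```
import Foundation

// Load the training zip from the app's documents directory instead of NSDataAsset.
// Point this at wherever your app actually stores the zip.
let documentsDir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let zipURL = documentsDir.appendingPathComponent("training.zip")
do {
    let zipData = try Data(contentsOf: zipURL)
    // Pass `zipData` to `falService.uploadTrainingZipFile(zipData:name:)` as shown above.
} catch {
    print("Could not read training.zip from disk: \(error.localizedDescription)")
}
```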
Using the URL returned in the step above:
let input = FalFluxLoRAFastTrainingInputSchema(
imagesDataURL: <url-from-step-above>,
triggerWord: "face"
)
do {
let output = try await falService.createFluxLoRAFastTraining(input: input)
print("""
Fal's Flux LoRA fast trainer is complete.
Your weights are at: \(output.diffusersLoraFile?.url?.absoluteString ?? "")
""")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create Fal Flux training: \(error.localizedDescription)")
}
See FalFluxLoRAFastTrainingInputSchema.swift
for the full range of training controls.
Using the LoRA URL returned in the step above:
let inputSchema = FalFluxLoRAInputSchema(
prompt: "face on a blimp over Monument Valley, Utah",
loras: [
.init(
path: <lora-url-from-step-above>,
scale: 0.9
)
],
numImages: 2,
outputFormat: .jpeg
)
do {
let output = try await falService.createFluxLoRAImage(input: inputSchema)
print("""
Fal's Flux LoRA inference is complete.
Your images are at: \(output.images?.compactMap {$0.url?.absoluteString} ?? [])
""")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create Fal LoRA image: \(error.localizedDescription)")
}
See FalFluxLoRAInputSchema.swift
for the full range of inference controls
import AIProxy
/* Uncomment for BYOK use cases */
// let groqService = AIProxy.groqDirectService(
// unprotectedAPIKey: "your-groq-key"
// )
/* Uncomment for all other production use cases */
// let groqService = AIProxy.groqService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let response = try await groqService.chatCompletionRequest(body: .init(
messages: [.assistant(content: "hello world")],
model: "mixtral-8x7b-32768"
))
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create Groq chat completion: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let groqService = AIProxy.groqDirectService(
// unprotectedAPIKey: "your-groq-key"
// )
/* Uncomment for all other production use cases */
// let groqService = AIProxy.groqService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let stream = try await groqService.streamingChatCompletionRequest(body: .init(
messages: [.assistant(content: "hello world")],
model: "mixtral-8x7b-32768"
)
)
for try await chunk in stream {
print(chunk.choices.first?.delta.content ?? "")
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
print("Could not create Groq streaming chat completion: \(error.localizedDescription)")
}
- Record an audio file in QuickTime and save it as "helloworld.m4a"
- Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type cmd-opt-0 to open the inspector panel, and view Target Membership
- Run this snippet:

import AIProxy
/* Uncomment for BYOK use cases */
// let groqService = AIProxy.groqDirectService(
//     unprotectedAPIKey: "your-groq-key"
// )
/* Uncomment for all other production use cases */
// let groqService = AIProxy.groqService(
//     partialKey: "partial-key-from-your-developer-dashboard",
//     serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
    let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
    let requestBody = GroqTranscriptionRequestBody(
        file: try Data(contentsOf: url),
        model: "whisper-large-v3-turbo",
        responseFormat: "json"
    )
    let response = try await groqService.createTranscriptionRequest(body: requestBody)
    let transcript = response.text ?? "None"
    print("Groq transcribed: \(transcript)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not get audio transcription from Groq: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let perplexityService = AIProxy.perplexityDirectService(
// unprotectedAPIKey: "your-perplexity-key"
// )
/* Uncomment for all other production use cases */
// let perplexityService = AIProxy.perplexityService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let response = try await perplexityService.chatCompletionRequest(body: .init(
messages: [.user(content: "How many national parks in the US?")],
model: "llama-3.1-sonar-small-128k-online"
))
print(
"""
Received from Perplexity:
\(response.choices.first?.message?.content ?? "no content")
With citations:
\(response.citations ?? ["none"])
Using:
\(response.usage?.promptTokens ?? 0) prompt tokens
\(response.usage?.completionTokens ?? 0) completion tokens
\(response.usage?.totalTokens ?? 0) total tokens
"""
)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create perplexity chat completion: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let perplexityService = AIProxy.perplexityDirectService(
// unprotectedAPIKey: "your-perplexity-key"
// )
/* Uncomment for all other production use cases */
// let perplexityService = AIProxy.perplexityService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let stream = try await perplexityService.streamingChatCompletionRequest(body: .init(
messages: [.user(content: "How many national parks in the US?")],
model: "llama-3.1-sonar-small-128k-online"
))
var lastChunk: PerplexityChatCompletionResponseBody?
for try await chunk in stream {
print(chunk.choices.first?.delta?.content ?? "")
lastChunk = chunk
}
if let lastChunk = lastChunk {
print(
"""
Citations:
\(lastChunk.citations ?? ["none"])
Using:
\(lastChunk.usage?.promptTokens ?? 0) prompt tokens
\(lastChunk.usage?.completionTokens ?? 0) completion tokens
\(lastChunk.usage?.totalTokens ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create perplexity streaming chat completion: \(error.localizedDescription)")
}
Use api.mistral.ai
as the proxy domain when creating your AIProxy service in the developer dashboard.
import AIProxy
/* Uncomment for BYOK use cases */
// let mistralService = AIProxy.mistralDirectService(
// unprotectedAPIKey: "your-mistral-key"
// )
/* Uncomment for all other production use cases */
// let mistralService = AIProxy.mistralService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let response = try await mistralService.chatCompletionRequest(body: .init(
messages: [.user(content: "Hello world")],
model: "mistral-small-latest"
))
print(response.choices.first?.message.content ?? "")
if let usage = response.usage {
print(
"""
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create mistral chat completion: \(error.localizedDescription)")
}
Use api.mistral.ai
as the proxy domain when creating your AIProxy service in the developer dashboard.
import AIProxy
/* Uncomment for BYOK use cases */
// let mistralService = AIProxy.mistralDirectService(
// unprotectedAPIKey: "your-mistral-key"
// )
/* Uncomment for all other production use cases */
// let mistralService = AIProxy.mistralService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let stream = try await mistralService.streamingChatCompletionRequest(body: .init(
messages: [.user(content: "Hello world")],
model: "mistral-small-latest"
))
for try await chunk in stream {
print(chunk.choices.first?.delta.content ?? "")
if let usage = chunk.usage {
print(
"""
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not create mistral streaming chat completion: \(error.localizedDescription)")
}
Use flows.eachlabs.ai
as the proxy domain when creating your AIProxy service in the developer dashboard.
import AIProxy
/* Uncomment for BYOK use cases */
// let eachAIService = AIProxy.eachAIDirectService(
// unprotectedAPIKey: "your-eachAI-key"
// )
/* Uncomment for all other production use cases */
// let eachAIService = AIProxy.eachAIService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
// Update the arguments here based on your eachlabs use case:
let workflowID = "your-workflow-id"
let requestBody = EachAITriggerWorkflowRequestBody(
parameters: [
"img": "https://storage.googleapis.com/magicpoint/models/women.png"
]
)
do {
let triggerResponse = try await eachAIService.triggerWorkflow(
workflowID: workflowID,
body: requestBody
)
let executionResponse = try await eachAIService.pollForWorkflowExecutionComplete(
workflowID: workflowID,
triggerID: triggerResponse.triggerID
)
print("Workflow result is available at \(executionResponse.output ?? "output missing")")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not execute EachAI workflow: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let openRouterService = AIProxy.openRouterDirectService(
// unprotectedAPIKey: "your-openRouter-key"
// )
/* Uncomment for all other production use cases */
// let openRouterService = AIProxy.openRouterService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let requestBody = OpenRouterChatCompletionRequestBody(
messages: [.user(content: .text("hello world"))],
models: [
"deepseek/deepseek-chat",
"google/gemini-2.0-flash-exp:free",
// ...
],
route: .fallback
)
let response = try await openRouterService.chatCompletionRequest(body: requestBody)
print("""
Received: \(response.choices.first?.message.content ?? "")
Served by \(response.provider ?? "unspecified")
using model \(response.model ?? "unspecified")
"""
)
if let usage = response.usage {
print(
"""
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not get OpenRouter buffered chat completion: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let openRouterService = AIProxy.openRouterDirectService(
// unprotectedAPIKey: "your-openRouter-key"
// )
/* Uncomment for all other production use cases */
// let openRouterService = AIProxy.openRouterService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
let requestBody = OpenRouterChatCompletionRequestBody(
messages: [.user(content: .text("hello world"))],
models: [
"deepseek/deepseek-chat",
"google/gemini-2.0-flash-exp:free",
// ...
],
route: .fallback
)
do {
let stream = try await openRouterService.streamingChatCompletionRequest(body: requestBody)
for try await chunk in stream {
print(chunk.choices.first?.delta.content ?? "")
if let usage = chunk.usage {
print(
"""
Served by \(chunk.provider ?? "unspecified")
using model \(chunk.model ?? "unspecified")
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not get OpenRouter streaming chat completion: \(error.localizedDescription)")
}
import AIProxy
/* Uncomment for BYOK use cases */
// let openRouterService = AIProxy.openRouterDirectService(
// unprotectedAPIKey: "your-openRouter-key"
// )
/* Uncomment for all other production use cases */
// let openRouterService = AIProxy.openRouterService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
do {
let schema: [String: AIProxyJSONValue] = [
"type": "object",
"properties": [
"colors": [
"type": "array",
"items": [
"type": "object",
"properties": [
"name": [
"type": "string",
"description": "A descriptive name to give the color"
],
"hex_code": [
"type": "string",
"description": "The hex code of the color"
]
],
"required": ["name", "hex_code"],
"additionalProperties": false
]
]
],
"required": ["colors"],
"additionalProperties": false
]
let requestBody = OpenRouterChatCompletionRequestBody(
messages: [
.system(content: .text("Return valid JSON only, and follow the specified JSON structure")),
.user(content: .text("Return a peaches and cream color palette"))
],
models: [
"cohere/command-r7b-12-2024",
"meta-llama/llama-3.3-70b-instruct",
// ...
],
responseFormat: .jsonSchema(
name: "palette_creator",
description: "A list of colors that make up a color pallete",
schema: schema,
strict: true
),
route: .fallback
)
let response = try await openRouterService.chatCompletionRequest(body: requestBody)
print("""
Received: \(response.choices.first?.message.content ?? "")
Served by \(response.provider ?? "unspecified")
using model \(response.model ?? "unspecified")
"""
)
if let usage = response.usage {
print(
"""
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not get structured outputs response from OpenRouter: \(error.localizedDescription)")
}
On macOS, use NSImage(named:)
in place of UIImage(named:)
import AIProxy
/* Uncomment for BYOK use cases */
// let openRouterService = AIProxy.openRouterDirectService(
// unprotectedAPIKey: "your-openRouter-key"
// )
/* Uncomment for all other production use cases */
// let openRouterService = AIProxy.openRouterService(
// partialKey: "partial-key-from-your-developer-dashboard",
// serviceURL: "service-url-from-your-developer-dashboard"
// )
guard let image = NSImage(named: "myImage") else {
print("Could not find an image named 'myImage' in your app assets")
return
}
guard let imageURL = AIProxy.encodeImageAsURL(image: image) else {
print("Could not encode image as a data URI")
return
}
do {
let response = try await openRouterService.chatCompletionRequest(body: .init(
messages: [
.system(
content: .text("Tell me what you see")
),
.user(
content: .parts(
[
.text("What do you see?"),
.imageURL(imageURL)
]
)
)
],
models: [
"x-ai/grok-2-vision-1212",
"openai/gpt-4o"
],
route: .fallback
))
print("""
Received: \(response.choices.first?.message.content ?? "")
Served by \(response.provider ?? "unspecified")
using model \(response.model ?? "unspecified")
"""
)
if let usage = response.usage {
print(
"""
Used:
\(usage.promptTokens ?? 0) prompt tokens
\(usage.completionTokens ?? 0) completion tokens
\(usage.totalTokens ?? 0) total tokens
"""
)
}
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
print("Could not make a vision request to OpenRouter: \(error.localizedDescription)")
}
This pattern is slightly different from the others, because OpenMeteo has an official lib that we'd like to rely on. To run the snippet below, you'll need to add AIProxySwift and OpenMeteoSDK to your Xcode project. Add OpenMeteoSDK:

- In Xcode, go to File > Add Package Dependencies
- Enter the package URL https://github.com/open-meteo/sdk
- Choose your dependency rule (e.g. the main branch for the most up-to-date package)
Next, use AIProxySwift's core functionality to get a URLRequest and URLSession, and pass those into the OpenMeteoSDK:
import AIProxy
import OpenMeteoSdk
do {
let request = try await AIProxy.request(
partialKey: "partial-key-from-your-aiproxy-developer-dashboard",
serviceURL: "service-url-from-your-aiproxy-developer-dashboard",
proxyPath: "/v1/forecast?latitude=52.52&longitude=13.41&hourly=temperature_2m&format=flatbuffers"
)
let session = AIProxy.session()
let responses = try await WeatherApiResponse.fetch(request: request, session: session)
// Do something with `responses`. For a usage example, follow these instructions:
// 1. Navigate to https://open-meteo.com/en/docs
// 2. Scroll to the 'API response' section
// 3. Tap on Swift
// 4. Scroll to 'Usage'
print(responses)
} catch {
print("Could not fetch the weather: \(error.localizedDescription)")
}
If your app already has client or user IDs that you want to annotate AIProxy requests with, pass a second argument to the provider's service initializer. For example:
let openAIService = AIProxy.openAIService(
partialKey: "partial-key-from-your-developer-dashboard",
serviceURL: "service-url-from-your-developer-dashboard",
clientID: "<your-id>"
)
Requests that are made using openAIService
will be annotated on the AIProxy backend, so that
when you view top users, or the timeline of requests, your client IDs will be familiar.
If you do not have existing client or user IDs, no problem! Leave the clientID
argument
out, and we'll generate IDs for you. See AIProxyIdentifier.swift
if you would like to see
ID generation specifics.
We use Foundation's URL
types such as URLRequest
and URLSession
for all connections to
AIProxy. You can view the various errors that Foundation may raise by viewing NSURLError.h
(which is easiest to find by punching cmd-shift-o
in Xcode and searching for it).
Some errors may be more interesting to you, and worth their own error handler to pop UI for
your user. For example, to catch NSURLErrorTimedOut
, NSURLErrorNetworkConnectionLost
and
NSURLErrorNotConnectedToInternet
, you could use the following try/catch structure:
import AIProxy
let openAIService = AIProxy.openAIService(
partialKey: "partial-key-from-your-developer-dashboard",
serviceURL: "service-url-from-your-developer-dashboard"
)
do {
let response = try await openAIService.chatCompletionRequest(body: .init(
model: "gpt-4o-mini",
messages: [.assistant(content: .text("hello world"))]
))
print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch let err as URLError where err.code == URLError.timedOut {
print("Request for OpenAI buffered chat completion timed out")
} catch let err as URLError where [.notConnectedToInternet, .networkConnectionLost].contains(err.code) {
print("Could not make buffered chat request. Please check your internet connection")
} catch {
print("Could not get buffered chat completion: \(error.localizedDescription)")
}
Occasionally, Xcode fails to automatically add the AIProxy library to your target's dependency
list. If you receive the No such module 'AIProxy'
error, first ensure that you have added
the package to Xcode using the Installation steps.
Next, select your project in the Project Navigator (cmd-1
), select your target, and scroll to
the Frameworks, Libraries, and Embedded Content
section. Tap on the plus icon:
And add the AIProxy library:
If you encounter the error
networkd_settings_read_from_file Sandbox is preventing this process from reading networkd settings file at "/Library/Preferences/com.apple.networkd.plist", please add an exception.
or
A server with the specified hostname could not be found
Modify your macOS project settings by tapping on your project in the Xcode project tree, then
select Signing & Capabilities
and enable Outgoing Connections (client)
If you use the snippets above and encounter the error
'async' call in a function that does not support concurrency
it is because we assume you are in a structured concurrency context. If you encounter this error, you can use the escape hatch of wrapping your snippet in a Task {}, as shown below.
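For example, here is a minimal sketch of wrapping a buffered chat completion in a Task so it can be kicked off from a synchronous function (the function name is hypothetical; the service setup mirrors the snippets above):
```
import AIProxy

func fetchGreeting() {
    // Task provides the concurrency context that the compiler is asking for.
    Task {
        let openAIService = AIProxy.openAIService(
            partialKey: "partial-key-from-your-developer-dashboard",
            serviceURL: "service-url-from-your-developer-dashboard"
        )
        do {
            let response = try await openAIService.chatCompletionRequest(body: .init(
                model: "gpt-4o-mini",
                messages: [.user(content: .text("hello world"))]
            ))
            print(response.choices.first?.message.content ?? "")
        } catch {
            print("Could not get buffered chat completion: \(error.localizedDescription)")
        }
    }
}
```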
If you'd like to do UI testing and allow the test cases to execute real API requests, you must
set the AIPROXY_DEVICE_CHECK_BYPASS
env variable in your test plan and forward the env
variable from the test case to the host simulator (Apple does not do this by default, which I
consider a bug). Here is how to set it up:
- Set the AIPROXY_DEVICE_CHECK_BYPASS env variable in your test environment:
- Important: Edit your test cases to forward on the env variable to the host simulator:
func testExample() throws {
let app = XCUIApplication()
app.launchEnvironment = [
"AIPROXY_DEVICE_CHECK_BYPASS": ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"]!
]
app.launch()
}
AIProxy uses Apple's DeviceCheck to ensure that requests received by the backend originated from your app on a legitimate Apple device. However, the iOS simulator cannot produce DeviceCheck tokens. Rather than requiring you to constantly build and run on device during development, AIProxy provides a way to skip the DeviceCheck integrity check. The token is intended for use by developers only. If an attacker gets the token, they can make requests to your AIProxy project without including a DeviceCheck token, and thus remove one level of protection.
The AIPROXY_DEVICE_CHECK_BYPASS token is intended for the simulator only. Do not let it leak into
a distribution build of your app (including a TestFlight distribution). If you follow the
integration steps we provide, then the
constant won't leak because env variables are not packaged into the app bundle.
This constant is intended to be included in the distributed version of your app. As the name implies, it is a partial representation of your OpenAI key. Specifically, it is one half of an encrypted version of your key. The other half resides on AIProxy's backend. As your app makes requests to AIProxy, the two encrypted parts are paired, decrypted, and used to fulfill the request to OpenAI.
Contributions are welcome! This library uses the MIT license.
- Services should conform to a NameService protocol that defines the interface that the direct service and proxied service adopt. Factory methods on AIProxy.swift are typed to return an existential (e.g. NameService) rather than a concrete type (e.g. NameProxiedService). Why do we do this? Two reasons:
  - We want to make it as easy as possible for lib users to swap between the BYOK use case and the proxied use case. By returning an existential, callers can use conditional logic in their app to select which service to use:
    let service = byok ? AIProxy.openaiDirectService() : AIProxy.openaiProxiedService()
  - We prevent the direct and proxied concrete types from diverging in the public interface. As we add more functionality to the service's protocol, the compiler helps us ensure that the functionality is implemented for our two major use cases. A sketch of this pattern is shown below.
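This sketch is illustrative only; NameService and the method signature are hypothetical placeholders (see AIProxy.swift and any concrete service in the lib for the real definitions):
```
// The protocol defines the full public interface once.
protocol NameService {
    func chatCompletionRequest(prompt: String) async throws -> String
}

struct NameDirectService: NameService {
    // Calls the provider directly with the user's own key (BYOK).
    func chatCompletionRequest(prompt: String) async throws -> String {
        return "response from the provider"
    }
}

struct NameProxiedService: NameService {
    // Routes the request through the AIProxy backend.
    func chatCompletionRequest(prompt: String) async throws -> String {
        return "response via AIProxy"
    }
}

// Factory methods return the existential, so both use cases share one call site:
func nameDirectService() -> NameService { NameDirectService() }
func nameProxiedService() -> NameService { NameProxiedService() }

// Callers can flip between BYOK and proxied with a single conditional:
let byok = false
let service: NameService = byok ? nameDirectService() : nameProxiedService()
```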
- In codable representations, fields that are required by the API should be above fields that are optional. Within the two groups (required and optional), all fields should be alphabetically ordered. See the sketch below.
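A minimal sketch of the ordering convention (the struct and all of its fields are hypothetical):
```
// Hypothetical request body illustrating the ordering convention.
struct ProviderImageRequestBody: Encodable {
    // Required by the API, alphabetized:
    let model: String
    let prompt: String

    // Optional, alphabetized:
    let guidanceScale: Double?
    let numOutputs: Int?
    let seed: Int?
}
```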
- Decodables should all have optional properties. Why? We don't want to fail decoding in live apps if the provider changes something out from under us (which can happen purposefully due to deprecations, or by accident due to regressions). If we use non-optionals in decodable definitions, then a provider removing a field, changing the type of a field, or removing an enum case would cause decoding to fail. You may think this isn't too bad, since the JSONDecoder throws anyway, and therefore client code will already be wrapped in a do/catch. However, we always want to give the best chance that decoding succeeds for the properties that the client actually uses. That is, if the provider changes out the enum case of a property unused by the client, we want the client application to continue functioning correctly, not to throw an error and enter the catch branch of the client's call site. A sketch of this follows.
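A minimal sketch of why this matters (the type and payload are hypothetical):
```
import Foundation

// Hypothetical response body. Every property is optional, so decoding still
// succeeds if the provider drops or renames `finishReason`, and clients that
// only read `text` keep working.
struct ProviderCompletionResponseBody: Decodable {
    let finishReason: String?
    let text: String?
}

// `finishReason` is absent from this payload, but decoding succeeds:
let json = #"{"text": "hello"}"#.data(using: .utf8)!
let body = try? JSONDecoder().decode(ProviderCompletionResponseBody.self, from: json)
print(body?.text ?? "")  // prints "hello"
```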
- When a request or response object is deeply nested by the API provider, create nested types in the same namespace as the top level struct. For example:

public struct ProviderResponseBody: Decodable {
    public let status: Status?
    // ... other fields ...
}

extension ProviderResponseBody {
    public enum Status: String, Decodable {
        case succeeded
        case failed
        case canceled
    }
}
This pattern avoids collisions, works well with Xcode's cmd-click to jump to definition, and improves source understanding for folks that use ctrl-6 to navigate through a file. You may wonder why we don't nest all types within the original top level type definition:

public struct ProviderResponseBody: Decodable {
    public enum Status: String, Decodable { ... }
}
This approach is readable when the nested types are small and the nesting level is not too deep. When either of those conditions flip, readability suffers. This is particularly true for nested types that require their own coding keys and encodable/decodable logic, which balloon line count with implementation detail that a user of the top level type has no interest in.
- If you are implementing an API contract that could reuse a provider's nested structure, and it's reasonable to suppose that the two objects will change together, then pull the nested struct into its own file and give it a longer prefix. The example above would become:

// ProviderResponseBody.swift
public struct ProviderResponseBody: Decodable {
    // An example status
    public let status: ProviderStatus?
    // ... other fields ...
}

// ProviderStatus.swift
public enum ProviderStatus: String, Decodable {
    case succeeded
    case failed
    case canceled
}