Refactor to use SpeziLLM, SpeziChat, and Spezi 1.x #36

Merged 27 commits on Mar 28, 2024. Changes shown below are from 15 of the 27 commits.

Commits
de5a6e7 Refactor to use SpeziLLM (vishnuravi, Mar 17, 2024)
11d0dd8 Update Settings View (vishnuravi, Mar 17, 2024)
4bda922 Add text to speech settings (vishnuravi, Mar 17, 2024)
88bed89 Update UI test (vishnuravi, Mar 17, 2024)
4fc8b86 Update model settings (vishnuravi, Mar 17, 2024)
5aafa8d Update model options (vishnuravi, Mar 18, 2024)
97b14cf Fix settings view (vishnuravi, Mar 18, 2024)
051cf4f Updates SpeziLLM to latest version (vishnuravi, Mar 23, 2024)
91481a5 Removes unnecessary password autofill disable during onboarding UI test (vishnuravi, Mar 23, 2024)
bc61a15 Migrate to string catalogue (vishnuravi, Mar 23, 2024)
8dbd5a5 Fix REUSE and remove unused plist (vishnuravi, Mar 23, 2024)
327d5f9 Add LLM mock mode for UI testing (vishnuravi, Mar 23, 2024)
9c4033f Adds UI tests (vishnuravi, Mar 23, 2024)
406bc05 Add reset chat option and update tests (vishnuravi, Mar 23, 2024)
ebc74d8 Fixes SwiftLint error (vishnuravi, Mar 23, 2024)
971a1cb Address feedback (vishnuravi, Mar 27, 2024)
56c479e Better error handling in HealthGPTView (vishnuravi, Mar 27, 2024)
e708998 Fix model switch option (vishnuravi, Mar 27, 2024)
2bca9cd Convert HealthDataFetcher into a module and inject into HealthDataInt… (vishnuravi, Mar 27, 2024)
f8ce3f8 Should not invoke LLM query for system message (vishnuravi, Mar 27, 2024)
4063082 Update README (vishnuravi, Mar 27, 2024)
84fa508 Remove redundant authorization (vishnuravi, Mar 28, 2024)
68db0b4 Update README (vishnuravi, Mar 28, 2024)
0256a8e Refresh data when resetting chat (vishnuravi, Mar 28, 2024)
8ef1477 Allow exporting chat as a text file (vishnuravi, Mar 28, 2024)
72c694f Add reset button to toolbar and update README with new screenshots (vishnuravi, Mar 28, 2024)
132cd49 Fix feature flag for resetting secure storage (vishnuravi, Mar 28, 2024)
100 changes: 45 additions & 55 deletions HealthGPT.xcodeproj/project.pbxproj

Large diffs are not rendered by default.

@@ -1,75 +1,121 @@
{
"originHash" : "98a3d0589c1bc602c69147b70f8a44c2f8b44cfc1469b8ed82df6bf8db92de53",
"pins" : [
{
"identity" : "llama.cpp",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordBDHG/llama.cpp",
"state" : {
"revision" : "7bfd6d4b5bbc9fd47bd023bdbb35f96c827977f3",
"version" : "0.2.1"
}
},
{
"identity" : "openai",
"kind" : "remoteSourceControl",
"location" : "https://github.com/MacPaw/OpenAI.git",
"state" : {
"revision" : "c45f3320ffa760f043c0239f724850c0e4f8bde5",
"version" : "0.2.4"
"revision" : "35afc9a6ee127b8f22a85a31aec2036a987478af",
"version" : "0.2.6"
}
},
{
"identity" : "spezi",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/Spezi.git",
"state" : {
"revision" : "9ad506d4d2e36eb7a0c1ff8cc6bb0ef9c972724c",
"version" : "0.7.1"
"revision" : "c43e4fa3d3938a847de2b677091a34ddaea5bc76",
"version" : "1.2.3"
}
},
{
"identity" : "spezichat",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziChat",
"state" : {
"revision" : "2334583105224b0c04fc36989db82b000021d31d",
"version" : "0.1.9"
}
},
{
"identity" : "spezifoundation",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziFoundation",
"state" : {
"revision" : "01af5b91a54f30ddd121258e81aff2ddc2a99ff9",
"version" : "1.0.4"
}
},
{
"identity" : "spezihealthkit",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziHealthKit.git",
"state" : {
"revision" : "f8f664549e81c8fa107a1fff616e0eaca6e8a6fa",
"version" : "0.3.1"
"revision" : "1e9cb5a6036ac7f4ff37ea1c3ed4898103339ad1",
"version" : "0.5.3"
}
},
{
"identity" : "speziml",
"identity" : "spezillm",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziML.git",
"location" : "https://github.com/StanfordSpezi/SpeziLLM",
"state" : {
"revision" : "63cb6659876c58529407d7fa3556228345f9faa1",
"version" : "0.2.6"
"revision" : "dc37b91ed55c9d50eaf58e645d454cb62e3681d1",
"version" : "0.7.2"
}
},
{
"identity" : "spezionboarding",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziOnboarding.git",
"state" : {
"revision" : "0ea46a66c17615e1a933481a07434bfd41717c54",
"version" : "0.6.0"
"revision" : "4971a82e94996ce0c3d8ecf64fdeec874a1f20d6",
"version" : "1.1.1"
}
},
{
"identity" : "spezispeech",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziSpeech",
"state" : {
"revision" : "60b8cdbf6f3d58b0d75eadf30db50f88848069aa",
"version" : "1.0.1"
}
},
{
"identity" : "spezistorage",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziStorage.git",
"state" : {
"revision" : "77666f57cc0b7f148bc3949f173db8498ef9b4a6",
"version" : "0.4.1"
"revision" : "b958df9b31f24800388a7bfc28f457ce7b82556c",
"version" : "1.0.2"
}
},
{
"identity" : "speziviews",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordSpezi/SpeziViews",
"state" : {
"revision" : "4b7cc423fd823123d354ec1d541ca7d2e0a9d6e3",
"version" : "0.5.1"
"revision" : "4d2a724d97c8f19ac7de7aa2c046b1cb3ef7b279",
"version" : "1.3.1"
}
},
{
"identity" : "swift-collections",
"kind" : "remoteSourceControl",
"location" : "https://github.com/apple/swift-collections.git",
"state" : {
"revision" : "94cf62b3ba8d4bed62680a282d4c25f9c63c2efb",
"version" : "1.1.0"
}
},
{
"identity" : "xctestextensions",
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordBDHG/XCTestExtensions.git",
"state" : {
"revision" : "625477e0937294cb3fd6e7bbf72b78f951644b1d",
"version" : "0.4.6"
"revision" : "1fe9b8e76aeb7a132af37bfa0892160c9b662dcc",
"version" : "0.4.10"
}
},
{
@@ -86,10 +132,10 @@
"kind" : "remoteSourceControl",
"location" : "https://github.com/StanfordBDHG/XCTRuntimeAssertions.git",
"state" : {
"revision" : "9226052589b8faece98861bc3d7b33b3ebfe4f5a",
"version" : "0.2.5"
"revision" : "51da3403f128b120705571ce61e0fe190f8889e6",
"version" : "1.0.1"
}
}
],
"version" : 2
"version" : 3
}
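The updated pins above reflect the migration from SpeziML/Spezi 0.x to SpeziLLM and Spezi 1.x. As a rough sketch of how such pins relate to declared dependencies (the package name, product list, and exact version requirements below are assumptions for illustration, not taken from the PR), the corresponding SwiftPM entries would look like:

```swift
// swift-tools-version:5.9
import PackageDescription

// Hypothetical excerpt: the Spezi 1.x pins above expressed as SwiftPM
// dependency declarations. Version requirements mirror the resolved pins.
let package = Package(
    name: "HealthGPTSketch",
    dependencies: [
        .package(url: "https://github.com/StanfordSpezi/Spezi.git", from: "1.2.3"),
        .package(url: "https://github.com/StanfordSpezi/SpeziChat", from: "0.1.9"),
        .package(url: "https://github.com/StanfordSpezi/SpeziLLM", from: "0.7.2"),
        .package(url: "https://github.com/StanfordSpezi/SpeziSpeech", from: "1.0.1"),
        .package(url: "https://github.com/StanfordSpezi/SpeziStorage.git", from: "1.0.2")
    ]
)
```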
6 changes: 6 additions & 0 deletions HealthGPT.xcodeproj/xcshareddata/xcschemes/HealthGPT.xcscheme
@@ -73,6 +73,12 @@
ReferencedContainer = "container:HealthGPT.xcodeproj">
</BuildableReference>
</BuildableProductRunnable>
<CommandLineArguments>
<CommandLineArgument
argument = "--mockMode"
isEnabled = "NO">
</CommandLineArgument>
</CommandLineArguments>
</LaunchAction>
<ProfileAction
buildConfiguration = "Release"
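The scheme change above adds a `--mockMode` launch argument (disabled by default) that UI tests can enable to run against a mock LLM. A minimal sketch of how such a flag could be surfaced in code, assuming a `FeatureFlags` helper like the one referenced in `HealthDataInterpreter.swift` (the exact implementation is not shown in this PR excerpt):

```swift
import Foundation

/// Hypothetical sketch: exposing scheme launch arguments as feature flags.
/// The type and property names are assumptions for illustration.
enum FeatureFlags {
    /// True when the app is launched with `--mockMode`,
    /// e.g. by a UI test that enables the scheme's command-line argument.
    static let mockMode = CommandLine.arguments.contains("--mockMode")
}
```

A UI test would typically pass the flag via `XCUIApplication().launchArguments += ["--mockMode"]` before calling `launch()`.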
109 changes: 55 additions & 54 deletions HealthGPT/HealthGPT/HealthDataInterpreter.swift
@@ -8,73 +8,74 @@

import Foundation
import Spezi
import SpeziOpenAI
import SpeziChat
import SpeziLLM
import SpeziLLMOpenAI
import SpeziSpeechSynthesizer


class HealthDataInterpreter: DefaultInitializable, Component, ObservableObject, ObservableObjectProvider {
@Dependency var openAIComponent = OpenAIComponent()

@Observable
class HealthDataInterpreter: DefaultInitializable, Module, EnvironmentAccessible {
@ObservationIgnored @Dependency private var llmRunner: LLMRunner

var querying = false {
willSet {
_Concurrency.Task { @MainActor in
objectWillChange.send()
}
}
}

var runningPrompt: [Chat] = [] {
willSet {
_Concurrency.Task { @MainActor in
objectWillChange.send()
}
}
didSet {
_Concurrency.Task {
if runningPrompt.last?.role == .user {
do {
try await queryOpenAI()
} catch {
print(error)
}
}
}
}
}
var llm: (any LLMSession)?
var systemPrompt = ""

required init() { }

required init() {}


func generateMainPrompt() async throws {
private func generateSystemPrompt() async throws -> String {
let healthDataFetcher = HealthDataFetcher()
let healthData = try await healthDataFetcher.fetchAndProcessHealthData()

let generator = PromptGenerator(with: healthData)
let mainPrompt = generator.buildMainPrompt()
runningPrompt = [Chat(role: .system, content: mainPrompt)]
return PromptGenerator(with: healthData).buildMainPrompt()
}

func queryOpenAI() async throws {
querying = true
/// Creates an `LLMSchema`, sets it up for use with an `LLMRunner`, injects the system prompt
/// into the context, and assigns the resulting `LLMSession` to the `llm` property. For more
/// information, please refer to the [`SpeziLLM`](https://swiftpackageindex.com/StanfordSpezi/SpeziLLM/documentation/spezillm) documentation.
///
/// If the `--mockMode` feature flag is set, this function uses `LLMMockSchema()`; otherwise it
/// uses `LLMOpenAISchema` with the model type specified in the `model` parameter.
/// - Parameter model: the type of OpenAI model to use
@MainActor
func prepareLLM(with model: LLMOpenAIModelType) async throws {
guard llm == nil else {
return

}

var llmSchema: any LLMSchema

let chatStreamResults = try await openAIComponent.queryAPI(withChat: runningPrompt)
if FeatureFlags.mockMode {
llmSchema = LLMMockSchema()
} else {
llmSchema = LLMOpenAISchema(parameters: .init(modelType: model))

}

for try await chatStreamResult in chatStreamResults {
for choice in chatStreamResult.choices {
if runningPrompt.last?.role == .assistant {
let previousChatMessage = runningPrompt.last ?? Chat(role: .assistant, content: "")
runningPrompt[runningPrompt.count - 1] = Chat(
role: .assistant,
content: (previousChatMessage.content ?? "") + (choice.delta.content ?? "")
)
} else {
runningPrompt.append(Chat(role: .assistant, content: choice.delta.content ?? ""))
}
}
let llm = llmRunner(with: llmSchema)

systemPrompt = try await generateSystemPrompt()
llm.context.append(systemMessage: systemPrompt)
self.llm = llm
}

/// Queries the LLM using the current session in the `llm` property and adds the output to the context.
@MainActor
func queryLLM() async throws {
guard let llm,
llm.context.last?.role == .user || !(llm.context.contains(where: { $0.role == .assistant }) ) else {
return
}

querying = false
let stream = try await llm.generate()

for try await token in stream {
llm.context.append(assistantOutput: token)
}
}

/// Resets the LLM context and re-injects the system prompt.
@MainActor
func resetChat() {
llm?.context.reset()
llm?.context.append(systemMessage: systemPrompt)
}
}
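The refactored `HealthDataInterpreter` above is meant to be prepared once and then queried after each user message. A hedged sketch of a consumer, assuming SpeziLLM's environment-based module access and a `.gpt4` model constant (the view name and wiring below are illustrative assumptions, not code from the PR):

```swift
import SwiftUI
import SpeziLLMOpenAI

/// Hypothetical consumer of `HealthDataInterpreter`, showing the intended
/// call order: prepare the LLM session once, then query after user input.
struct ChatSketchView: View {
    @Environment(HealthDataInterpreter.self) private var interpreter

    var body: some View {
        Text("HealthGPT")
            .task {
                do {
                    // Build the LLMSession (mock or OpenAI) and inject
                    // the health-data system prompt into its context.
                    try await interpreter.prepareLLM(with: .gpt4)
                    // Stream the assistant's reply into the session context.
                    try await interpreter.queryLLM()
                } catch {
                    print("LLM error: \(error)")
                }
            }
    }
}
```

Note that `queryLLM()` guards against running when the last context entry is not a user message, which is why the PR's commit f8ce3f8 ("Should not invoke LLM query for system message") moved that check into the interpreter.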