
[editor] Model Selector #662

Merged · 1 commit · Dec 29, 2023
Conversation

@rholinshead (Contributor) commented Dec 29, 2023

[editor] Model Selector

Implement the model selector and add a placeholder call to the future API endpoint for updating the model.

Screen.Recording.2023-12-28.at.8.29.42.PM.mov

Stack created with Sapling. Best reviewed with ReviewStack.

```typescript
const onUpdatePromptModelSettings = useCallback(
  async (promptIndex: number, newModelSettings: any) => {
    dispatch({
      type: "UPDATE_PROMPT_MODEL_SETTINGS",
      index: promptIndex,
      modelSettings: newModelSettings,
    });
    // TODO: Call server-side endpoint to update model settings
    // TODO: Call server-side endpoint to update model
```
@rholinshead (Author) commented Dec 29, 2023:
Not setting this up just yet (unlike the model name below), since there is a higher potential for bugs in managing the settings object and I don't want to add friction for whoever is adding the server-side endpoint. I'll complete this once the endpoint is implemented.

```typescript
updateModel: (
  promptName?: string,
  modelData?: string | ModelMetadata
) => Promise<void /*{ aiconfig: AIConfig }*/>;
```
@rossdanlm (Contributor):

Why not return aiconfig?

@rholinshead (Author):

It will once the endpoint is implemented -- right now there's no updated aiconfig to return.

Comment on lines +71 to +74

```typescript
// return await ufetch.post(ROUTE_TABLE.UPDATE_MODEL, {
//   prompt_name: promptName,
//   model_data: modelData,
// });
```
@rossdanlm (Contributor):

Are you commenting this out to add it later, once the update_model() API is built on the backend?

@rholinshead (Author):

Yep

@rossdanlm (Contributor) commented:

By the way, not sure if you're aware, but I refactored the update_model() Python SDK to take model_name and model_settings as two separate function args: https://github.com/lastmile-ai/aiconfig/blob/ca59d2c1657fe6e2655f1a0c25180f9407b97cab/python/src/aiconfig/schema.py#L492C34-L492C34

PR for full details: #507

```typescript
debounce(
  (promptName?: string, modelMetadata?: string | ModelMetadata) =>
    callbacks.updateModel(promptName, modelMetadata),
  250
```
@rossdanlm (Contributor):

Should we define this as a const somewhere? I see it's also 250 for addPrompt

@rholinshead (Author):

I can pull it out to a const for now in a separate PR, but it could technically depend on where it's used.
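For reference, pulling the shared delay into a constant could look like the sketch below. `DEBOUNCE_MS` is a hypothetical name, and the minimal `debounce` helper is included only so the sketch is self-contained; the editor presumably uses an existing debounce utility.

```typescript
// Hypothetical shared constant; the PR currently hard-codes 250 in two places.
const DEBOUNCE_MS = 250;

// Minimal debounce helper: delays fn until delayMs of quiet have passed,
// resetting the timer on each call.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

Usage would mirror the snippet above, e.g. `debounce((name?: string) => callbacks.updateModel(name), DEBOUNCE_MS)`.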

```typescript
    callbacks.updateModel(promptName, modelMetadata),
    250
  ),
  [callbacks.updatePrompt]
```
@rossdanlm (Contributor):

Should this be .updateModel?

@rholinshead (Author):

Oh, yep, thanks

```typescript
);
const currentModel = prompt.metadata?.model;
let modelData: string | ModelMetadata | undefined = newModel;
if (newModel && currentModel && typeof currentModel !== "string") {
```
@rossdanlm (Contributor):

I think with the new SDK you won't have to do individual checks, because we're passing model_name and settings as two separate args; hopefully this makes it easier for you.

@rholinshead (Author):

Ah, ok. With the updated config it seems we can't support the following scenario: change the model name for a prompt without setting any settings (i.e. set model to the string name instead of `{name: <name>, settings: {}}`). This is another case where we have multiple ways of representing the same thing in aiconfig; it might be annoying to set the model name and clear the settings but always end up with `settings: {}` set by the editor, instead of just `model: name` in the config.

On top of that, if we change the model and it already had settings set, we would ideally want to keep only the settings supported by the new model (e.g. if temperature and topK are used by both models), as defined in the prompt schema associated with the model provider, and drop anything not supported. This might need to wait until post-MVP, though. For now we can either clear the settings or keep them entirely; I was doing the former here, but open to opinions.
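A rough sketch of the post-MVP idea described above — carry over only the settings the new model supports, and fall back to the compact string form when nothing remains. The `changeModel` helper and the `supportedSettings` parameter are illustrative, not code from this PR:

```typescript
interface ModelMetadata {
  name: string;
  settings?: Record<string, unknown>;
}

// Hypothetical helper: change the model name, keeping only the settings the
// new model supports (as its prompt schema would define), and prefer the
// compact string representation when no settings remain.
function changeModel(
  current: string | ModelMetadata | undefined,
  newName: string,
  supportedSettings: Set<string>
): string | ModelMetadata {
  const settings = typeof current === "object" ? current.settings : undefined;
  if (!settings) {
    return newName; // nothing to carry over; avoid a redundant settings: {}
  }
  const kept = Object.fromEntries(
    Object.entries(settings).filter(([key]) => supportedSettings.has(key))
  );
  return Object.keys(kept).length > 0 ? { name: newName, settings: kept } : newName;
}
```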

```typescript
  // TODO: Consolidate
} catch (err: any) {
  showNotification({
    title: "Error updating prompt model",
```
@rossdanlm (Contributor):

Would it be possible to read the HTTP message that we return and display it? Can be done in a future PR.

```typescript
} catch (err: any) {
  showNotification({
    title: "Error updating prompt model",
    message: err.message,
```
@rossdanlm (Contributor):

Oh I guess it's done here?

Comment on lines +316 to 317

```typescript
onUpdateModel={onUpdatePromptModel}
onUpdateModelSettings={onUpdatePromptModelSettings}
```
@rossdanlm (Contributor):

Oh ok cool, so we are keeping them as separate actions?

@rholinshead (Author):

Ya, we should keep them separate since there's different logic for each:

  • update settings --> keep the same model name but set a new model settings object
  • update model --> change the model name and either keep the settings or clear them
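A simplified sketch of that split. The types are trimmed-down stand-ins for the editor's real state, and the "clear settings on model change" behaviour follows the approach taken in this PR:

```typescript
type ModelMetadata = { name: string; settings?: object };
type Prompt = { metadata?: { model?: string | ModelMetadata } };

// UPDATE_PROMPT_MODEL: change the model name; settings are cleared for now.
function updatePromptModel(prompt: Prompt, modelName?: string): Prompt {
  return {
    ...prompt,
    metadata: {
      ...prompt.metadata,
      model: modelName ? { name: modelName } : undefined,
    },
  };
}

// UPDATE_PROMPT_MODEL_SETTINGS: keep the model name; replace only the settings.
function updatePromptModelSettings(prompt: Prompt, settings: object): Prompt {
  const current = prompt.metadata?.model;
  const name = typeof current === "string" ? current : current?.name;
  if (!name) return prompt; // no model set; nothing to attach settings to
  return {
    ...prompt,
    metadata: { ...prompt.metadata, model: { name, settings } },
  };
}
```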

```
@@ -37,6 +38,12 @@ export type UpdatePromptNameAction = {
  name: string;
};

export type UpdatePromptModelAction = {
```
@rossdanlm (Contributor) commented Dec 29, 2023:

Is this being used anywhere in this PR? Does the dispatch type being set to "onUpdatePromptModel" mean that it's automatically mapped to this action type?

@rholinshead (Author):

When we dispatch the "UPDATE_PROMPT_MODEL" action, that action is implicitly typed as this UpdatePromptModelAction type. We add it to MutateAIConfigAction above, and that does two things:

  1. Makes it part of the AIConfigReducerAction via the union for that type, which means our reducer below will require a case "UPDATE_PROMPT_MODEL" to handle it and ensure the state is updated -- this is the automatic mapping you're referring to
  2. Allows us to dispatch CONSOLIDATE_AICONFIG after we get the response, in order to update the client state with the response state (not done here because we don't have an aiconfig response to do that with just yet)
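A minimal illustration of that union-driven mapping; the action shapes here are simplified stand-ins for the editor's real types:

```typescript
type UpdatePromptNameAction = {
  type: "UPDATE_PROMPT_NAME";
  index: number;
  name: string;
};

type UpdatePromptModelAction = {
  type: "UPDATE_PROMPT_MODEL";
  index: number;
  modelName?: string;
};

// Adding UpdatePromptModelAction to the union is what forces the reducer
// to handle the new case.
type MutateAIConfigAction = UpdatePromptNameAction | UpdatePromptModelAction;

function describeAction(action: MutateAIConfigAction): string {
  switch (action.type) {
    case "UPDATE_PROMPT_NAME":
      return `rename prompt ${action.index} to ${action.name}`;
    case "UPDATE_PROMPT_MODEL":
      return `set prompt ${action.index} model to ${action.modelName ?? "(none)"}`;
    default: {
      // Exhaustiveness check: if a new member is added to the union but not
      // handled above, this assignment no longer type-checks.
      const unreachable: never = action;
      return unreachable;
    }
  }
}
```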

@rossdanlm (Contributor):

Nice, thanks for the explanation!

Comment on lines +149 to +154

```typescript
model: action.modelName
  ? {
      name: action.modelName,
      // TODO: Consolidate settings based on schema union
    }
  : undefined,
```
@rossdanlm (Contributor):

Hopefully we won't need this logic with the updated SDK.

@rholinshead (Author):

I'll revisit this once the endpoint is implemented. We still need some UI state change here (this happens before the request is sent and before the response is returned, to ensure a responsive UI). Probably what will need to be done is:

  • this action sets just the name
  • the aiconfig response comes back with full metadata -- either cleared settings, or settings updated to match what's supported per the schema
  • from the aiconfig response, we consolidate the UI state model metadata with the updated settings
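That optimistic-update-then-consolidate flow could be sketched as below. The `dispatch` stub and `updateModelEndpoint` stand-in are assumptions for illustration; the real endpoint call is the commented-out `ufetch.post(ROUTE_TABLE.UPDATE_MODEL, ...)` discussed earlier:

```typescript
type AIConfig = { prompts: { name: string; model?: string }[] };
type Action =
  | { type: "UPDATE_PROMPT_MODEL"; index: number; modelName?: string }
  | { type: "CONSOLIDATE_AICONFIG"; config: AIConfig };

// Stub dispatch that records actions, standing in for the reducer dispatch.
const dispatched: Action[] = [];
const dispatch = (action: Action) => dispatched.push(action);

// Stand-in for the not-yet-implemented server endpoint, which would return
// the authoritative aiconfig.
async function updateModelEndpoint(
  promptName: string,
  modelName?: string
): Promise<{ aiconfig: AIConfig }> {
  return { aiconfig: { prompts: [{ name: promptName, model: modelName }] } };
}

async function onUpdatePromptModel(
  promptIndex: number,
  promptName: string,
  modelName?: string
): Promise<void> {
  // 1. Optimistic client-side update so the UI stays responsive.
  dispatch({ type: "UPDATE_PROMPT_MODEL", index: promptIndex, modelName });
  // 2. Persist server-side, then consolidate client state with the
  //    authoritative aiconfig from the response.
  const res = await updateModelEndpoint(promptName, modelName);
  dispatch({ type: "CONSOLIDATE_AICONFIG", config: res.aiconfig });
}
```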

```
import { memo, useCallback, useEffect, useState } from "react";
import { showNotification } from "@mantine/notifications";
import { memo, useCallback, useState } from "react";
import useLoadModels from "../../hooks/useLoadModels";
```
@rossdanlm (Contributor):

Nice, much cleaner

```
@@ -41,30 +41,14 @@ function ModelMenuItems({

export default memo(function AddPromptButton({ addPrompt, getModels }: Props) {
  const [modelSearch, setModelSearch] = useState("");
```
@rossdanlm (Contributor) commented Dec 29, 2023:

Can you clarify what the modelSearch string is supposed to represent?

@rholinshead (Author):

There's a search input, and modelSearch is the text of that input.

Comment on lines +57 to +65

```typescript
filter={(searchValue: string, item: AutocompleteItem) => {
  if (showAll) {
    return true;
  }

  const modelName: string = item.value;
  return modelName
    .toLocaleLowerCase()
    .includes(searchValue.toLocaleLowerCase().trim());
```
@rossdanlm (Contributor):

Alright, so what's the difference between this vs. the functionality in the getModels callback?

```typescript
const getModels = useCallback(async (search: string) => {
  // For now, rely on caching and handle client-side search filtering
  // We will use server-side search filtering for Gradio
  const res = await ufetch.get(ROUTE_TABLE.LIST_MODELS);
  const models = res.data;
  if (search && search.length > 0) {
    return models.filter((model: string) => model.indexOf(search) >= 0);
  }
  return models;
}, []);
```

@rholinshead (Author):

This is specific to this Autocomplete UI component, because it needs to handle the search/filtering of the provided data on its own.


```typescript
useEffect(() => {
  loadModels(modelSearch);
}, [loadModels, modelSearch]);
```
@rossdanlm (Contributor):

How come we need to pass in modelSearch as a dependency here but not getModels as a dependency on L21? Is it because we're implicitly assuming that getModels should only be defined once and never change?

@rholinshead (Author):

Ah, good catch. getModels should be added above. Something must be up with my linter because it's missing these deps :(

```typescript
onSelectModel(value);
models.some((model) => {
  if (model === value) {
    setShowAll(true);
```
@rossdanlm (Contributor) commented Dec 29, 2023:

Ok, just to clarify: does this onChange only occur when a value is selected from the autocomplete options, or also when the user changes the search string? If it fires when the user changes the text and the text happens to match a model name, we wouldn't want setShowAll(true), right? I noticed that even when we enter gpt-4 as in your test plan, the bottom of the list still shows model names that don't match, like dall-e-3.

(Screenshot from Dec 29, 2023: dropdown still listing non-matching models)

@rholinshead (Author) commented Dec 29, 2023:

This is mainly copied from our model selector in Workbooks and is sort of a workaround to match our desired behaviour. By default, the component will only ever show dropdown results that match the text in the input box. So, if the aiconfig has the model set to gpt-4 when you load it, the dropdown would only ever show models containing the gpt-4 substring. This isn't what we want, though, since users may not know there are other, non-gpt-4 models to use. The behaviour we want is:

  • if the user is typing to 'search' for a model, the dropdown should filter on that search substring
  • otherwise, if the model was selected from the list / defaulted from the config, we want to show all the model options to change to
@rossdanlm (Contributor):

Oh, I generally assumed the user would just click the x button on the right to see everything, hmm. I don't think it's a blocker, but it feels a bit strange that auto-completions don't remain filtered after selection. We can do this as a P2 someday.

@rossdanlm (Contributor) left a review:

This generally looks good to me. The biggest point is my comment about the SDK being refactored to take name + settings separately, but we can fix forward instead of worrying about merge conflicts. I also have a question about the autocompletion filtering.

This'll also be easier for you to test once I add the endpoint on the backend for you.

https://github.com/lastmile-ai/aiconfig/assets/5060851/812f3a10-aaac-4ec3-9f1f-4d21ebc99e9f
@rholinshead rholinshead merged commit 2bda80c into main Dec 29, 2023
2 checks passed
rholinshead added a commit that referenced this pull request Dec 29, 2023
# [editor] Support Deleting Prompt

Add the UI side for deleting a prompt, with placeholder to call the
server endpoint once it's implemented



https://github.com/lastmile-ai/aiconfig/assets/5060851/b3e682dd-a3ed-45c1-b153-b4b6e2094616

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/665).
* #667
* #666
* __->__ #665
* #662
rholinshead added a commit that referenced this pull request Dec 29, 2023
# [editor] Run Prompt (non-streaming)

With non-streaming /run endpoint properly returning the response now, we
can update the client config with the resulting output. We can also
leverage the client-side `_ui` state for maintaining 'isRunning' state
to show a spinner / disable the execute button for now (TODO: we may
need to handle canceling execution as well).



https://github.com/lastmile-ai/aiconfig/assets/5060851/67fa793a-9a60-4285-a7d1-69fff633d6a6

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/667).
* __->__ #667
* #666
* #665
* #662
@rossdanlm rossdanlm deleted the pr662 branch December 29, 2023 21:21