
[editor][easy] Add 'model' to relevant Prompt Schemas #801

Merged (3 commits) on Jan 8, 2024
Conversation

@rholinshead (Contributor) commented Jan 6, 2024


There is still some investigation needed to figure out how to handle model vs. parser nicely in the editor UX. For now, let's just expose the 'model' option in the settings for those parsers that support it, to ensure usage of those parsers isn't blocked.


Stack created with Sapling. Best reviewed with ReviewStack.

Comment on lines +13 to +15
model: {
type: "string",
},
Contributor:

I don't understand what these are used for. Why do we need to have this type: "string" field?

Contributor Author (@rholinshead):

This determines how the setting property 'model' is rendered in the nice settings renderer:
[Screenshot 2024-01-08 at 10:42:56 AM: the 'model' setting rendered in the settings renderer]

The PromptSchema is a way for model parsers to define how their input, model settings, and metadata are structured, because they are otherwise entirely generic. Without this static schema, we have no way of knowing what to render for these prompts.
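As a rough sketch, a parser's PromptSchema declaring a 'model' settings property might look like the following (the type and property names here are illustrative assumptions, not the editor's actual definitions):

```typescript
// Illustrative sketch of a PromptSchema shape; these names are assumptions
// for this example, not the editor's real types.
type PropertySchema = {
  type: string;
  description?: string;
};

type PromptSchema = {
  input: PropertySchema;
  model_settings: Record<string, PropertySchema>;
};

const exampleSchema: PromptSchema = {
  input: { type: "string" },
  model_settings: {
    // Declaring 'model' with type "string" tells the settings renderer
    // to show a plain text input for it.
    model: { type: "string" },
    temperature: { type: "number" },
  },
};
```

The renderer can then walk `model_settings` and pick a widget per declared type, which is why the `type: "string"` field is needed at all.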

Contributor:

Can we add a description in the info text so people know what this is for? I'm still not really sure. How is this different from the 'select model' top-right selector on each prompt?

Contributor Author (@rholinshead), Jan 8, 2024:

We can add a description, but descriptions are specific to each parser's prompt schema implementation. For OpenAI, for example, the model description from https://platform.openai.com/docs/api-reference/chat/create is:

ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

Description tooltips don't support markdown, so there's no way to nicely render an external link as a hyperlink (also, it's unclear whether we'd want to support that, since custom parsers could then technically add malicious hyperlinks).

> How is this different from the select model top right selector on each prompt?

The top-right model selector is technically not even a model right now, since AIConfig allows us to register arbitrary IDs for models when registering model parsers. Take HuggingFaceTextGenerationParser, for example: we register "HuggingFaceTextGenerationParser": "HuggingFaceTextGenerationParser" as both the parser and the model ID. So if we don't have a 'model' option in the settings, there is no way to specify which model to actually use for the API call. The same applies to AnyscaleEndpoint.

This is also technically the case for gpt-4, etc. (until #783 lands), since we register the model/parser as "gpt-4": "gpt-4" but pull the actual model for the API call from the settings (not from the model name in the prompt metadata).

#782 goes into more detail, but as-is we can't really use the top-right model selector to specify the actual model for the prompt API request, as a result of the parser/model confusion.
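The parser/model confusion can be illustrated with a minimal sketch (the registry shape and function names here are hypothetical, not AIConfig's actual API):

```typescript
// Hypothetical sketch: the registry maps a "model" ID to a parser ID.
// For parsers like HuggingFaceTextGenerationParser, both IDs are the same
// string, so the actual model has to come from the prompt's settings.
const parserRegistry = new Map<string, string>([
  ["HuggingFaceTextGenerationParser", "HuggingFaceTextGenerationParser"],
  ["gpt-4", "gpt-4"],
]);

function resolveModelForRequest(
  registeredId: string,
  settings: { model?: string }
): string {
  // Prefer the explicit 'model' setting; fall back to the registered ID.
  return settings.model ?? registeredId;
}
```

Under this sketch, a prompt registered as "HuggingFaceTextGenerationParser" only resolves to a real model if `settings.model` is set, which is exactly why the 'model' setting needs to be exposed.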

Ryan Holinshead added 3 commits January 8, 2024 10:34
# [editor] ErrorBoundary Renderer for PromptInput

With the ability to set arbitrary JSON for the prompt input, if we have a PromptSchema that states what the input should look like, we should be able to do some basic validation that the input at least matches the general type (for now, comparing a 'string' PromptInput vs. an object; in the future, we could do basic validation against the schema).
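The basic type check described above could be sketched like this (names are illustrative, not the editor's actual code):

```typescript
// Sketch of the basic validation: check whether the prompt input's general
// type (string vs. object) matches what the PromptSchema declares.
type PromptInput = string | Record<string, unknown>;

function inputMatchesSchemaType(
  input: PromptInput,
  declaredType: "string" | "object"
): boolean {
  if (declaredType === "string") {
    return typeof input === "string";
  }
  // Treat only plain objects as valid object inputs.
  return typeof input === "object" && input !== null && !Array.isArray(input);
}
```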

If there is any error with the schema-based rendering of the input, we can use an ErrorBoundary to fall back to an error message informing the user that the format is invalid and that they should toggle to the JSON editor to fix it.
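The fallback behavior can be sketched without React as a try/catch around the render attempt (a simplified stand-in for what an ErrorBoundary does for a component tree):

```typescript
// Simplified stand-in for the ErrorBoundary behavior: try the schema-based
// render, and on any thrown error return a fallback message instead.
function renderWithFallback(
  render: () => string,
  fallbackMessage: string
): string {
  try {
    return render();
  } catch {
    return fallbackMessage;
  }
}

// A render that throws (e.g. input doesn't match the schema) yields the fallback.
const message = renderWithFallback(
  () => {
    throw new Error("input does not match schema");
  },
  "Invalid format: toggle to the JSON editor to fix the input."
);
```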

<img width="1083" alt="Screenshot 2024-01-05 at 11 01 07 PM" src="https://github.com/lastmile-ai/aiconfig/assets/5060851/d0a817f5-b85b-4501-9ca5-28b324061ed4">

Toggling to the JSON editor will clear the error state and try to render the correct Schema renderer when toggling back.

Note that in dev mode, these errors are still propagated to a top-level error screen which you can dismiss to show the fallback. In prod, that won't happen and will just show the fallback and an error in the console.

We will do the same for the SettingsRenderer in a subsequent PR.

## Testing:
- Updated `OpenAIChatModelParserPromptSchema` input to match that of `OpenAIChatVisionModelParserPromptSchema` while the input is still string. See the error fallback
- Toggle to JSON editor and back, ensure error fallback still shows
- Toggle to JSON editor and update to valid input
```json
{
  "attachments": [
    {
      "data": "https://s3.amazonaws.com/files.uploads.lastmileai.com/uploads/cldxsqbel0000qs8owp8mkd0z/2023_12_1_21_23_24/942/Screenshot 2023-11-28 at 11.11.25 AM.png",
      "mime_type": "image/png"
    },
    {
      "data": "https://s3.amazonaws.com/files.uploads.lastmileai.com/uploads/cldxsqbel0000qs8owp8mkd0z/2023_12_1_21_23_24/8325/Screenshot 2023-11-28 at 1.51.52 PM.png",
      "mime_type": "image/png"
    }
  ],
  "data": "What do these images show?"
}
```
and see proper schema rendering


https://github.com/lastmile-ai/aiconfig/assets/5060851/1343a201-e5eb-46a0-a7f5-a4bdb228d120
# [editor] ErrorBoundary Renderer for ModelSettings

With the ability to set arbitrary JSON for the model settings, if we have a PromptSchema that states what the setting types should be, the settings renderer will assume those types match the config settings JSON. If the JSON doesn't actually match the schema types, the settings renderer may throw an error. If that happens, we can use an ErrorBoundary to fall back to an error message informing the user that the format is invalid and that they should toggle to the JSON editor to fix it.
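The type mismatch that trips the renderer can be checked up front with a simple comparison against the declared type (a sketch, not the editor's actual code):

```typescript
// Sketch: verify a config setting's JSON value against the type the
// PromptSchema declares, e.g. frequency_penalty must be a number.
function settingMatchesSchema(
  value: unknown,
  declaredType: "string" | "number" | "boolean"
): boolean {
  return typeof value === declaredType;
}
```

For example, `frequency_penalty` set to the string `"0.5"` fails this check against a declared `"number"`, which is exactly the scenario the ErrorBoundary fallback covers.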

<img width="510" alt="Screenshot 2024-01-06 at 3 56 50 PM" src="https://github.com/lastmile-ai/aiconfig/assets/5060851/1eec492b-d342-4a24-ae27-a83441c827cd">

Toggling to the JSON editor will clear the error state and try to render the correct Schema renderer when toggling back.

Note that in dev mode, these errors are still propagated to a top-level error screen which you can dismiss to show the fallback. In prod, that won't happen and will just show the fallback and an error in the console.

## Testing:
- Use the JSON editor for model settings to set frequency_penalty to a string; the expected value is a number, so the settings renderer will error. See the error fallback
- Toggle to JSON editor and back, ensure error fallback still shows
- Toggle to JSON editor and update to valid settings and then toggle back to see proper schema rendering


https://github.com/lastmile-ai/aiconfig/assets/5060851/060ddb20-129d-450e-97ab-cf2939269f90
# [editor][easy] Add 'model' to relevant Prompt Schemas

There is still some investigation needed to figure out how to handle model vs. parser nicely in the editor UX. For now, let's just expose the 'model' option in the settings for those parsers that support it, to ensure usage of those parsers isn't blocked.
rholinshead added a commit that referenced this pull request Jan 8, 2024
[editor] Add Output Error Rendering

# [editor] Add Output Error Rendering

Adding some basic rendering for Error output types if they're ever added to any configs. Also, propagate server run errors through the client as (client-only) Error outputs to display for the relevant prompt:
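A sketch of converting a server run error into a client-only Error output (the field names follow Jupyter-style error outputs and are assumptions here, not necessarily AIConfig's exact schema):

```typescript
// Hypothetical sketch: wrap a thrown server error as an Error output object
// so it can be rendered alongside the prompt's other outputs.
type ErrorOutput = {
  output_type: "error";
  ename: string;
  evalue: string;
};

function toErrorOutput(err: Error): ErrorOutput {
  return { output_type: "error", ename: err.name, evalue: err.message };
}
```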

## Testing:
- Raise an exception in the api/run method on the server and ensure it propagates to the output
<img width="992" alt="Screenshot 2024-01-06 at 4 33 55 PM"
src="https://github.com/lastmile-ai/aiconfig/assets/5060851/93309f64-4ce5-47dd-b553-c6fb44daaca0">

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/803).
* __->__ #803
* #802
* #801
* #800
* #799
@rholinshead rholinshead merged commit 4f96967 into main Jan 8, 2024
2 checks passed