
Add inferno-ml-server-types package #102

Merged: 69 commits into main from rory-inferno-ml-server-types, Feb 26, 2024
Conversation

ngua (Contributor) commented Feb 26, 2024

Adds the inferno-ml-server-types package containing the API for a server that can evaluate Inferno scripts using ML primitives. There is no actual implementation of inferno-ml-server here.

This, like the rest of the Inferno ML infrastructure, is still a WIP. However, a WIP version needs to be merged here in order to merge some WIP changes in other repos. At a minimum, the following are not yet complete:

  • making a polymorphic prelude for inferno-ml that doesn't depend on hasktorch (helpful for typechecking in cases where we can't use hasktorch)
  • golden and other tests for the new types; however, since many of the types are polymorphic, it probably makes more sense to add these tests in one of the places where inferno-ml-server or the bridge server is implemented
  • a metrics endpoint for inferno-ml-server

I've also made a few changes to inferno-ml, which are listed in the changelog.

:<|> "inference" :> "cancel" :> Put '[JSON] ()
-- Register the bridge. This is an `inferno-ml-server` endpoint, not a
-- bridge endpoint
:<|> "bridge" :> ReqBody '[JSON] BridgeInfo :> Post '[JSON] ()
ngua (Contributor Author):

We might want to just embed the bridge info directly into the servers (if we use a stable domain name, for example). But doing it this way allows for the bridge info to be updated dynamically (i.e. while the inferno-ml-server is running)
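For illustration, a minimal sketch of what dynamic registration could look like server-side, assuming a hypothetical BridgeInfo record (the real type lives in inferno-ml-server-types) and an MVar holding the current registration:

```haskell
import Control.Concurrent.MVar

-- Hypothetical stand-in for the real BridgeInfo (host and port of the bridge)
data BridgeInfo = BridgeInfo {biHost :: String, biPort :: Int}
  deriving (Eq, Show)

-- The handler behind `POST /bridge` could simply swap the stored info, so
-- the bridge can be (re)registered while inferno-ml-server is running
registerBridge :: MVar (Maybe BridgeInfo) -> BridgeInfo -> IO ()
registerBridge ref bi = modifyMVar_ ref (const (pure (Just bi)))
```

This keeps the "stable domain name" option open too: embedding the info statically would just mean initializing the MVar once and never calling the handler.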


{- ORMOLU_DISABLE -}
languagesByCode :: Map (Char, Char) Text
languagesByCode =
ngua (Contributor Author):

I don't know if we actually need these, but it might be nice for an eventual frontend to only display valid ISO 639-1 codes. (There's a package for this on Hackage, but it's ancient and uses a type with 100+ constructors to represent the codes)
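A sketch of the Char-pair approach, using String instead of Text to keep the fragment dependency-light (the entries shown are just examples, not the real table):

```haskell
import qualified Data.Map.Strict as Map

-- Two-letter ISO 639-1 codes keyed as Char pairs, avoiding a sum type with
-- one constructor per language
languagesByCode :: Map.Map (Char, Char) String
languagesByCode =
  Map.fromList
    [ (('e', 'n'), "English")
    , (('f', 'r'), "French")
    , (('j', 'a'), "Japanese")
    ]

-- A frontend could use membership to reject invalid codes
isValidCode :: (Char, Char) -> Bool
isValidCode = (`Map.member` languagesByCode)
```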

= VTensor T.Tensor
| VModel T.ScriptModule
| VExtended x
ngua (Contributor Author):

This is pretty awful and hacky, but I can't think of a better way to allow additional types (which is necessary in my WIP inferno-ml-server implementation). The problem is that VCustom is fixed to MlValue, but we need the custom type to be able to represent additional things, not just MlValues. It would be possible to create another type that can hold either an MlValue or some other type, but then it would be necessary to map over all of the ML prelude functions. Or perhaps this will become possible once I add the more polymorphic ML prelude.

If anyone has a better idea, please let me know!
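To make the shape of the hack concrete, here is a self-contained sketch of the parameterised value type, with stub types standing in for the hasktorch tensors and modules (names and helpers are illustrative, not the actual API):

```haskell
-- Stubs standing in for hasktorch's Tensor and ScriptModule
data Tensor = Tensor deriving (Eq, Show)
data ScriptModule = ScriptModule deriving (Eq, Show)

-- `x` is a hole that a downstream consumer (e.g. an inferno-ml-server
-- implementation) can fill with its own extra value types
data MlValue x
  = VTensor Tensor
  | VModel ScriptModule
  | VExtended x
  deriving (Eq, Show)

-- A downstream server can project out its extension without touching the
-- ML constructors
fromExtended :: MlValue x -> Maybe x
fromExtended (VExtended x) = Just x
fromExtended _ = Nothing
```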

siddharth-krishna (Collaborator):

Yeah I knew this would be a problem when I introduced VCustom... Is there a way to use a typeclass to make this nicer? E.g.

zerosFun :: (MonadThrow m, CustomValue c, HasTensorValue c) => Value c m

ngua (Contributor Author):

Yes, that could work. Maybe something like

class HasMlTypes c where
  type Tensor c
  type Model c
  ... 

would work for both of the required types
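A minimal sketch of that idea with associated type families, using a toy instance over lists in place of hasktorch tensors (all names here are hypothetical):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- The suggested class: each custom value type `c` declares what its tensor
-- and model representations are, plus conversions for the tensor case
class HasMlTypes c where
  type Tensor c
  type Model c
  toTensor :: Tensor c -> c
  fromTensor :: c -> Maybe (Tensor c)

-- Toy value type: lists of doubles stand in for real tensors
data ToyValue = VT [Double] | VOther
  deriving (Eq, Show)

instance HasMlTypes ToyValue where
  type Tensor ToyValue = [Double]
  type Model ToyValue = ()
  toTensor = VT
  fromTensor (VT xs) = Just xs
  fromTensor _ = Nothing
```

Prelude functions could then be written against `HasMlTypes c` constraints rather than a concrete MlValue, which is roughly what the `zerosFun` signature above suggests.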

*Note*: we use https://pvp.haskell.org/ (MAJOR.MAJOR.MINOR.PATCH)

## 0.0.1
* Initial pre-release
ngua (Contributor Author):

I would prefer to include a textual tag indicating a pre-release, but Cabal doesn't like that

siddharth-krishna (Collaborator) left a comment:

Partial review: reviewed changes to inferno-ml, now looking through inferno-ml-server-types


Copy link
Collaborator

@siddharth-krishna siddharth-krishna left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Looks good, thanks. (And thanks especially for the great documentation!)

I've left some small questions.

import Servant.Client.Streaming (ClientM, client)

-- | Get the status of the server. @Nothing@ indicates that an inference job
-- is being evaluated. @Just ()@ means the server is idle
siddharth-krishna (Collaborator):

Interesting choice of encoding the status... I'd have expected Nothing to indicate server is idle

ngua (Contributor Author):

Yes, I agree it's counterintuitive. It just comes from my laziness in dealing with the MVar in the actual implementation (whether the MVar is empty or not). I could swap it.
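For context, assuming the server holds an `MVar ()` that is taken (emptied) while an inference job runs, the status handler might literally be `tryReadMVar`, which is why the encoding fell out this way:

```haskell
import Control.Concurrent.MVar

-- Sketch of the current encoding: the MVar is full when the server is idle
-- and empty while a job is evaluating, so `tryReadMVar` directly yields
-- `Just ()` (idle) or `Nothing` (busy)
getStatus :: MVar () -> IO (Maybe ())
getStatus = tryReadMVar
```

Swapping the encoding would just mean mapping over the result (e.g. reporting the job, not the idle token, as `Just`).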

statusC :: ClientM (Maybe ())

-- | Run an inference parameter
inferenceC :: Id (InferenceParam uid gid p s) -> Maybe Int64 -> ClientM ()
siddharth-krishna (Collaborator):

The second argument is a resolution?

ngua (Contributor Author):

Yes, I should document that

-- (e.g. a UUID for use with @inferno-lsp@)
--
-- For existing inference params, this is the foreign key for the specific
-- script in the 'InferenceScript' table
siddharth-krishna (Collaborator):

Curious about this -- it looks like it's for backwards compatibility but we haven't yet launched inferno-ml -- is this something we intend to remove before the public launch?

ngua (Contributor Author):

Actually, it's so that we can support posting all of the information to create an inference param without needing to have a script field be a VCObjectHash. So for example, it could be the UUID of a script parsed by inferno-lsp

By an "existing" param, I mean that when we save a param to the DB, this field will be a VCObjectHash -- not that there are params saved somewhere already that we need to maintain backwards compatibility with. I will rephrase it to make that clearer.

| IDouble Double
| ITuple (IValue, IValue)
| ITime EpochTime
| IEmpty
siddharth-krishna (Collaborator):

We might also want booleans. Also, perhaps we should sync this with the inferno-tachdb value conversions in plow-inferno, to make sure both support the same set of value types

ngua (Contributor Author):

Yes, I agree. We should probably add a type representing a limited subset of Inferno values to one of the OSS packages and use that in both places
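A sketch of what that shared, limited value type could look like with the suggested boolean constructor added (Int64 stands in for EpochTime to keep the fragment self-contained; the real set of constructors would need to match the inferno-tachdb conversions):

```haskell
import Data.Int (Int64)

-- Hypothetical shared subset of Inferno values, with booleans included
data IValue
  = IBool Bool
  | IDouble Double
  | ITuple (IValue, IValue)
  | ITime Int64 -- EpochTime stand-in
  | IEmpty
  deriving (Eq, Show)
```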

ngua (Contributor Author) commented Feb 26, 2024

Thanks @siddharth-krishna, I will address some of the comments/ideas as I keep working on this

ngua merged commit 9be95fa into main on Feb 26, 2024 (1 check passed)
ngua deleted the rory-inferno-ml-server-types branch on February 26, 2024 at 09:21