Add inferno-ml-server-types package #102

Conversation
:<|> "inference" :> "cancel" :> Put '[JSON] ()
-- Register the bridge. This is an `inferno-ml-server` endpoint, not a
-- bridge endpoint
:<|> "bridge" :> ReqBody '[JSON] BridgeInfo :> Post '[JSON] ()
We might want to just embed the bridge info directly into the servers (if we use a stable domain name, for example). But doing it this way allows for the bridge info to be updated dynamically (i.e. while the inferno-ml-server is running).
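For illustration, the dynamic-registration route could be paired with a client along these lines. This is a hedged sketch: the `BridgeInfo` fields are invented for the example (only the type name appears in the diff), and the real record lives in inferno-ml-server-types.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE TypeOperators #-}

import Data.Aeson (ToJSON)
import Data.Proxy (Proxy (..))
import GHC.Generics (Generic)
import Servant.API
import Servant.Client (ClientM, client)

-- Assumed fields for illustration; the real BridgeInfo is defined in
-- inferno-ml-server-types
data BridgeInfo = BridgeInfo
  { bridgeHost :: String
  , bridgePort :: Int
  }
  deriving (Generic)

instance ToJSON BridgeInfo

type RegisterBridge =
  "bridge" :> ReqBody '[JSON] BridgeInfo :> Post '[JSON] ()

-- Calling this again at runtime re-registers the bridge, which is what
-- makes the "update dynamically" scheme work
registerBridgeC :: BridgeInfo -> ClientM ()
registerBridgeC = client (Proxy :: Proxy RegisterBridge)
```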
{- ORMOLU_DISABLE -}
languagesByCode :: Map (Char, Char) Text
languagesByCode =
I don't know if we actually need these, but it might be nice for an eventual frontend to only display valid ISO 639-1 codes. (There's a package for this on Hackage, but it's ancient and uses a type with 100+ constructors to represent the code)
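A minimal sketch of what such a table could look like, with only a handful of illustrative ISO 639-1 entries (and String in place of Text, to keep the sketch dependency-free):

```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- Sketch: map two-letter ISO 639-1 codes to language names. Only a tiny
-- illustrative subset; the real table would cover all assigned codes.
languagesByCode :: Map (Char, Char) String
languagesByCode =
  Map.fromList
    [ (('e', 'n'), "English")
    , (('f', 'r'), "French")
    , (('j', 'a'), "Japanese")
    ]

-- A frontend could use this to reject invalid codes before display
lookupLanguage :: (Char, Char) -> Maybe String
lookupLanguage code = Map.lookup code languagesByCode
```

Representing the code as a pair of `Char`s keeps the key type small, instead of the 100+-constructor enum used by the old Hackage package.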
= VTensor T.Tensor
| VModel T.ScriptModule
| VExtended x
This is pretty awful and hacky, but I can't think of a better way to allow additional types (which is necessary in my WIP inferno-ml-server implementation). The problem is that VCustom is set to MlValue, but we need the custom type to be able to represent additional things (not just MlValues). I guess it would be possible to create another type that can hold an MlValue or another type, but then it would be necessary to map over all of the ML prelude functions. Or perhaps it might be possible once I add the more polymorphic ML prelude.

If anyone has a better idea, please let me know!
Yeah I knew this would be a problem when I introduced VCustom... Is there a way to use a typeclass to make this nicer? E.g.

zerosFun :: (MonadThrow m, CustomValue c, HasTensorValue c) => Value c m
Yes, that could work. Maybe something like

class HasMlTypes c where
  type Tensor c
  type Model c
  ...

would work for both of the required types
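A rough, hypothetical elaboration of that class, using associated type families so prelude functions can be written once against the class. The toy instance below uses lists as "tensors" purely so the sketch is self-contained; a real instance would pick T.Tensor / T.ScriptModule from Hasktorch.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Hypothetical: the custom value type chooses its own tensor and model
-- representations, so VCustom no longer has to be fixed to MlValue.
class HasMlTypes c where
  type Tensor c
  type Model c
  toTensor :: Tensor c -> c
  fromTensor :: c -> Maybe (Tensor c)

-- Toy instance for illustration only (a real one would use Hasktorch types)
newtype ToyValue = ToyValue [Double]

instance HasMlTypes ToyValue where
  type Tensor ToyValue = [Double]
  type Model ToyValue = ()
  toTensor = ToyValue
  fromTensor (ToyValue xs) = Just xs
```

Prelude functions like zerosFun could then be typed as `(MonadThrow m, HasMlTypes c) => Value c m` without committing the custom type to MlValue.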
*Note*: we use https://pvp.haskell.org/ (MAJOR.MAJOR.MINOR.PATCH)

## 0.0.1
* Initial pre-release
I would prefer to include a textual tag indicating a pre-release, but Cabal doesn't like that
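For reference, Cabal's version syntax only admits dot-separated numeric components, so a pre-release can only be signalled numerically (hypothetical fragment):

```cabal
-- A textual suffix such as 0.0.1-alpha is rejected by Cabal's version
-- parser; a 0.0.x version is the closest way to mark a pre-release
name:    inferno-ml-server-types
version: 0.0.1
```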
Partial review: reviewed changes to inferno-ml, now looking through inferno-ml-server-types
Looks good, thanks. (And thanks especially for the great documentation!)
I've left some small questions.
import Servant.Client.Streaming (ClientM, client)

-- | Get the status of the server. @Nothing@ indicates that an inference job
-- is being evaluated. @Just ()@ means the server is idle
Interesting choice of encoding the status... I'd have expected Nothing to indicate the server is idle
Yes, I agree it's counterintuitive. It's just from my laziness dealing with the MVar in the actual implementation (i.e. whether the MVar is empty or not). I could swap it
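For what it's worth, the counterintuitive encoding falls directly out of tryReadMVar on a job lock, along the lines of this sketch (the lock and function names are hypothetical; the source only describes the MVar informally):

```haskell
import Control.Concurrent.MVar

-- Sketch: the server empties (takes) a lock MVar while an inference job
-- runs. tryReadMVar then yields Nothing = busy, Just () = idle, which is
-- exactly the encoding in the API above.
getStatus :: MVar () -> IO (Maybe ())
getStatus = tryReadMVar

-- Swapping the meaning (Nothing = idle) would just be one more map:
-- \lock -> maybe (Just ()) (const Nothing) <$> tryReadMVar lock
```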
statusC :: ClientM (Maybe ())

-- | Run an inference parameter
inferenceC :: Id (InferenceParam uid gid p s) -> Maybe Int64 -> ClientM ()
The second argument is a resolution?
Yes, I should document that
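One option for making the argument self-documenting at the type level would be a newtype wrapper. This is purely a suggestion sketch; the name `Resolution` and its `Int64` payload are assumptions based on the discussion above.

```haskell
import Data.Int (Int64)

-- Hypothetical: wrapping the Int64 makes the client signature explain
-- itself without a doc comment
newtype Resolution = Resolution {unResolution :: Int64}

-- The signature would then read:
-- inferenceC ::
--   Id (InferenceParam uid gid p s) -> Maybe Resolution -> ClientM ()
```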
-- (e.g. a UUID for use with @inferno-lsp@)
--
-- For existing inference params, this is the foreign key for the specific
-- script in the 'InferenceScript' table
Curious about this -- it looks like it's for backwards compatibility but we haven't yet launched inferno-ml -- is this something we intend to remove before the public launch?
Actually, it's so that we can support posting all of the information to create an inference param without needing the script field to be a VCObjectHash. So, for example, it could be the UUID of a script parsed by inferno-lsp.

By "existing" param, I mean that when we save a param to the DB, this field will be a VCObjectHash, not that there are params saved somewhere already that we need to maintain backwards compatibility with. I will rephrase it to make that clearer.
| IDouble Double
| ITuple (IValue, IValue)
| ITime EpochTime
| IEmpty
We might also want booleans. Also, perhaps we should sync this with the inferno-tachdb value conversions in plow-inferno, to make sure both support the same set of value types
Yes, I agree. We should probably add a type representing a limited subset of Inferno values to one of the OSS packages and use that in both places
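A hedged sketch of such a shared subset type, with booleans added. The existing constructor names follow the excerpt above; `IBool` is the addition under discussion, and the JSON instances both packages would need are omitted here.

```haskell
import System.Posix.Types (EpochTime)

-- Sketch: a limited subset of Inferno values shared between the server
-- types and the plow-inferno value conversions, so both sides support the
-- same set. IBool is the proposed addition.
data IValue
  = IDouble Double
  | IBool Bool -- proposed addition
  | ITuple (IValue, IValue)
  | ITime EpochTime
  | IEmpty
```

Keeping this in one OSS package would make it impossible for the two conversion layers to drift apart.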
Co-authored-by: Siddharth Krishna <[email protected]>
Thanks @siddharth-krishna, I will address some of the comments/ideas as I keep working on this
Adds the inferno-ml-server-types package containing the API for a server that can evaluate Inferno scripts using ML primitives. There is no actual implementation of inferno-ml-server here.

This and the whole Inferno ML infrastructure is still a WIP. However, a WIP implementation will need to be merged here in order to merge some WIPs in other repos. The following at least are not yet complete:

* inferno-ml that doesn't depend on hasktorch (helpful for typechecking in cases where we can't use hasktorch)
* inferno-ml-server or the bridge server are implemented
* inferno-ml-server

I've also made a few changes to inferno-ml, which are listed in the changelog.