> [!NOTE]
> While working on several breaking changes to tiny-decoders, I tried out releasing them piece by piece. The idea was that you could either upgrade version by version, only having to deal with one or a few breaking changes at a time, or wait and do a bunch of them at the same time. That’s why there are so many breaking changes in such a short time.
> Currently there are no more breaking changes planned.
This release renames `fieldsAuto` to `fields`. Both of those functions used to exist, but `fields` was deprecated in version 11.0.0 and removed in version 14.0.0. There’s no need for the `Auto` suffix anymore.
This release renames `fieldsUnion` to `taggedUnion`, since that better describes what it is, and it goes along better with the `tag` function.
This release renames `nullable` to `nullOr`, to be consistent with `undefinedOr`.
This release adds a `JSON` object with `parse` and `stringify` methods, similar to the standard global `JSON` object. The difference is that tiny-decoders’ versions also take a `Codec`, which makes them safer. Read more about it in the documentation.
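To illustrate the idea, here is a minimal, self-contained sketch of what a codec-aware `JSON.parse` amounts to. The `Codec` and `DecoderResult` types match the ones introduced in this changelog; the `parseWithCodec` helper and `numberCodec` are hypothetical illustrations, not tiny-decoders’ actual implementation.

```typescript
type DecoderResult<T> =
  | { tag: "DecoderError"; error: unknown }
  | { tag: "Valid"; value: T };

type Codec<Decoded, Encoded = unknown> = {
  decoder: (value: unknown) => DecoderResult<Decoded>;
  encoder: (value: Decoded) => Encoded;
};

// An example codec for numbers.
const numberCodec: Codec<number> = {
  decoder: (value) =>
    typeof value === "number"
      ? { tag: "Valid", value }
      : { tag: "DecoderError", error: "Expected a number" },
  encoder: (value) => value,
};

// Like the global JSON.parse, but the result is validated by the codec,
// and parse errors become DecoderResults instead of thrown exceptions:
function parseWithCodec<T>(
  codec: Codec<T>,
  jsonString: string,
): DecoderResult<T> {
  try {
    return codec.decoder(JSON.parse(jsonString));
  } catch (error) {
    return { tag: "DecoderError", error };
  }
}

const ok = parseWithCodec(numberCodec, "123");
const bad = parseWithCodec(numberCodec, '"hello"');
```

The payoff is that the parsed value has a precise type instead of `any`, and failures are ordinary values rather than exceptions.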
This release adds more support for primitives.
These are the primitive types:
```ts
type primitive = bigint | boolean | number | string | symbol | null | undefined;
```
- `stringUnion` has been renamed to `primitiveUnion` and now works with literals of any primitive type, not just strings. You can now create a codec for a union of numbers, for example.
- `tag` now accepts literals of any primitive type, not just strings. For example, this allows for easily decoding a tagged union where the discriminator is `isAdmin: true` and `isAdmin: false`, or a tagged union where the tags are numbers.
- A `bigint` codec has been added – a codec for `bigint` values. There are now codecs for all primitive types, except:
  - `symbol`: I don’t think this is useful. Use `const mySymbol: unique symbol = Symbol(); primitiveUnion([mySymbol])` instead.
  - `undefined`: Use `primitiveUnion([undefined])` if needed.
  - `null`: Use `primitiveUnion([null])` if needed.
- `multi` now supports `bigint` and `symbol`, covering all primitive types. Additionally, since `multi` is basically the JavaScript `typeof` operator as a codec, it now also supports `function`.
- `repr` now recognizes `bigint` and prints for example `123n` instead of `BigInt`. It has supported symbols (and all other primitive types) since before.
- The `DecoderError` type had slight changes due to the above. If all you do with errors is `format(error)`, you won’t notice.
In short, all you need to do to upgrade is change `stringUnion` into `primitiveUnion`.
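To make the upgrade concrete, here is a hypothetical, self-contained sketch of what a `primitiveUnion`-style decoder does – accept any primitive literals, not just strings. The `DecoderResult` type matches this changelog; the `primitiveUnionSketch` function is an illustrative assumption, not tiny-decoders’ source.

```typescript
type DecoderResult<T> =
  | { tag: "DecoderError"; error: string }
  | { tag: "Valid"; value: T };

type primitive = bigint | boolean | number | string | symbol | null | undefined;

// Accepts a list of primitive literals and returns a decoder that only
// succeeds when the input is one of them.
function primitiveUnionSketch<T extends primitive>(
  variants: readonly T[],
): (value: unknown) => DecoderResult<T> {
  return (value) =>
    variants.includes(value as T)
      ? { tag: "Valid", value: value as T }
      : {
          tag: "DecoderError",
          error: `Expected one of: ${variants.map(String).join(", ")}`,
        };
}

// Unlike the old stringUnion, this works with numbers too:
const severityDecoder = primitiveUnionSketch([1, 2, 3]);
```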
This release introduces `Codec`:
```ts
type Codec<Decoded, Encoded = unknown> = {
  decoder: (value: unknown) => DecoderResult<Decoded>;
  encoder: (value: Decoded) => Encoded;
};
```
A codec is an object with a decoder and an encoder.
The decoder of a codec is the `Decoder` type from previous versions of tiny-decoders. An encoder is a function that turns `Decoded` back into what the input looked like. You can think of it as “turning `Decoded` back into `unknown`”, but usually the `Encoded` type variable is inferred to something more precise.
All functions in tiny-decoders have been changed to work with `Codec`s instead of `Decoder`s (and the `Decoder` type does not exist anymore – it is only part of the new `Codec` type). Overall, most things are the same. Things accept and return `Codec`s instead of `Decoder`s now, but in many cases this does not affect your code.
The biggest changes are:
- Unlike a `Decoder`, a `Codec` is not callable. You need to add `.decoder`. For example, change `myDecoder(data)` to `myDecoder.decoder(data)`. Then rename to `myCodec.decoder(data)` for clarity.
- `map` and `flatMap` now take two functions: the same function as before for transforming the decoded data, but now also a second function for turning the data back again. This is usually trivial to implement.
- A custom `Decoder` was just a function. A custom `Codec` is an object with `decoder` and `encoder` fields. Wrap your existing decoder function in such an object, and then implement the encoder (the inverse of the decoder). This is usually trivial as well.
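The third point can be sketched like this, self-contained. The `Codec` and `DecoderResult` types are from this release; the `dateDecoder`/`dateCodec` example is an illustrative assumption.

```typescript
type DecoderResult<T> =
  | { tag: "DecoderError"; error: string }
  | { tag: "Valid"; value: T };

type Codec<Decoded, Encoded = unknown> = {
  decoder: (value: unknown) => DecoderResult<Decoded>;
  encoder: (value: Decoded) => Encoded;
};

// Old style: a custom decoder was just a function.
const dateDecoder = (value: unknown): DecoderResult<Date> =>
  typeof value === "string" && !Number.isNaN(Date.parse(value))
    ? { tag: "Valid", value: new Date(value) }
    : { tag: "DecoderError", error: "Expected an ISO 8601 string" };

// New style: wrap the existing decoder function in an object,
// and add the encoder – the inverse of the decoder.
const dateCodec: Codec<Date, string> = {
  decoder: dateDecoder,
  encoder: (date) => date.toISOString(),
};
```

The encoder simply reverses what the decoder did, so decode followed by encode round-trips the original input.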
Finally, this release adds a couple of small things:
- The `InferEncoded` utility type. `Infer` still infers the type for the decoder. `InferEncoded` infers the type for the encoder.
- The `unknown` codec. It’s occasionally useful, and now that you need to specify both a decoder and an encoder, it crossed the triviality threshold for being included in the package.
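The `unknown` codec is indeed trivial: the decoder always succeeds, and the encoder passes the value straight through. Here is a self-contained sketch of that idea, along with one plausible way `InferEncoded` could be expressed – both are assumptions about shape, not tiny-decoders’ actual source.

```typescript
type DecoderResult<T> =
  | { tag: "DecoderError"; error: string }
  | { tag: "Valid"; value: T };

type Codec<Decoded, Encoded = unknown> = {
  decoder: (value: unknown) => DecoderResult<Decoded>;
  encoder: (value: Decoded) => Encoded;
};

// One plausible definition of InferEncoded (assumption):
type InferEncoded<T extends Codec<any, any>> =
  T extends Codec<any, infer Encoded> ? Encoded : never;

// The unknown codec: always succeeds, encodes as-is.
const unknownCodec: Codec<unknown> = {
  decoder: (value) => ({ tag: "Valid", value }),
  encoder: (value) => value,
};
```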
The motivations for codecs are:
- TypeScript can now find some edge case errors that it couldn’t before: extra fields in `fieldsAuto` and inconsistent encoded common fields in `fieldsUnion`.
- If you use `map`, `flatMap`, `field` and `tag` to turn JSON into nicer or more type-safe types, you can now easily reverse that again when you need to serialize back to JSON.
This release removes the second type variable from `Decoder`.
Before:
```ts
type Decoder<T, U = unknown> = (value: U) => DecoderResult<T>;
```
After:
```ts
type Decoder<T> = (value: unknown) => DecoderResult<T>;
```
This change unlocks further changes that will come in future releases.
Fixed: `fieldsAuto` now reports the correct field name when there’s an error in a renamed field.
```ts
const decoder = fieldsAuto({
  firstName: field(string, { renameFrom: "first_name" }),
});

decoder({ first_name: false });
```
Before:

```
At root["firstName"]:
Expected a string
Got: false
```

After:

```
At root["first_name"]:
Expected a string
Got: false
```
This release removes the second argument from `undefinedOr` and `nullable`, which was a default value to use in place of `undefined` or `null`, respectively. You now need to use `map` instead. This change unlocks further changes that will come in future releases.
Before:

```ts
const decoder1 = undefinedOr(string, "default value");
const decoder2 = nullable(string, undefined);
```

After:

```ts
const decoder1 = map(undefinedOr(string), (value) => value ?? "default value");
const decoder2 = map(nullable(string), (value) => value ?? undefined);
```
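To see why `map` is a full replacement for the removed default-value argument, here is a self-contained sketch of how `map` and `undefinedOr` compose. The helper implementations below are illustrative assumptions (tiny-decoders’ real ones differ), but the `DecoderResult` shape matches this changelog.

```typescript
type DecoderResult<T> =
  | { tag: "DecoderError"; error: string }
  | { tag: "Valid"; value: T };

type Decoder<T> = (value: unknown) => DecoderResult<T>;

const string: Decoder<string> = (value) =>
  typeof value === "string"
    ? { tag: "Valid", value }
    : { tag: "DecoderError", error: "Expected a string" };

// Accept undefined in addition to whatever the inner decoder accepts.
function undefinedOr<T>(decoder: Decoder<T>): Decoder<T | undefined> {
  return (value) =>
    value === undefined ? { tag: "Valid", value: undefined } : decoder(value);
}

// Transform the decoded value; pass errors through untouched.
function map<T, U>(decoder: Decoder<T>, transform: (value: T) => U): Decoder<U> {
  return (value) => {
    const result = decoder(value);
    return result.tag === "Valid"
      ? { tag: "Valid", value: transform(result.value) }
      : result;
  };
}

// The replacement pattern from above:
const decoder1 = map(undefinedOr(string), (value) => value ?? "default value");
```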
This release changes decoders from throwing errors to returning a `DecoderResult`:
```ts
type Decoder<T> = (value: unknown) => DecoderResult<T>;

type DecoderResult<T> =
  | {
      tag: "DecoderError";
      error: DecoderError;
    }
  | {
      tag: "Valid";
      value: T;
    };
```
This change is nice because:
- It avoids `try-catch` when you run a decoder, which was annoying because the caught error is typed as `any` or `unknown`, requiring an `error instanceof DecoderError` check.
- You previously needed to remember to use the `.format()` method of `DecoderError`s, but now it’s more obvious how to deal with errors.
- The type definition of `Decoder` tells the whole story: now it’s explicit that decoders can fail, while previously it was implicit.
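Handling a result then becomes an ordinary exhaustive `switch`, with no `try-catch` in sight. A self-contained sketch, using the types above (the `string` decoder and the `"custom"` error variant here are illustrative stand-ins):

```typescript
type DecoderError = {
  tag: "custom";
  message: string;
  got: unknown;
  path: Array<number | string>;
};

type DecoderResult<T> =
  | { tag: "DecoderError"; error: DecoderError }
  | { tag: "Valid"; value: T };

type Decoder<T> = (value: unknown) => DecoderResult<T>;

const string: Decoder<string> = (value) =>
  typeof value === "string"
    ? { tag: "Valid", value }
    : {
        tag: "DecoderError",
        error: { tag: "custom", message: "Expected a string", got: value, path: [] },
      };

// TypeScript checks that both variants of the result are handled.
function describe(result: DecoderResult<string>): string {
  switch (result.tag) {
    case "Valid":
      return `ok: ${result.value}`;
    case "DecoderError":
      return `error: ${result.error.message}`;
  }
}
```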
`DecoderError` is now a plain object instead of a class, and `DecoderErrorVariant` is no longer exposed – there’s just `DecoderError` now. Use the new `format` function to turn a `DecoderError` into a string, similar to what `DecoderError.prototype.format` did before.
You now have to use the `Infer` utility type (added in version 15.1.0) instead of `ReturnType`. `ReturnType` gives you a `DecoderResult<T>`, while `Infer` gives you just `T`.
`chain` has been removed and replaced with `map` and `flatMap`. In all places you used `chain`, you need to switch to `map` if the operation cannot fail (you just transform the data), or `flatMap` if it can fail. For `flatMap`, you should not throw errors but instead return a `DecoderResult`. You might need to use a `try-catch` to do this. For example, if you used the `RegExp` constructor in `chain` before to create a regex, you might have relied on tiny-decoders catching the errors for invalid regex syntax. Now you need to catch those yourself. Note that TypeScript won’t help you with what you need to catch. Similarly, you also need to return a `DecoderError` instead of throwing in custom decoders.
This function can potentially help you migrate tricky decoders where you’re not sure if something might throw errors. It wraps a given decoder in a `try-catch` and returns a new decoder that swallows everything as `DecoderError`s.
```ts
function catcher<T>(decoder: Decoder<T>): Decoder<T> {
  return (value) => {
    try {
      return decoder(value);
    } catch (error) {
      return {
        tag: "DecoderError",
        error: {
          tag: "custom",
          message: error instanceof Error ? error.message : String(error),
          got: value,
          path: [],
        },
      };
    }
  };
}
```
This release adds the `Infer` utility type. It’s currently basically just an alias to the TypeScript built-in `ReturnType` utility type, but in a future version of tiny-decoders it’ll need to do a little bit more than just `ReturnType`. If you’d like to reduce the amount of migration work when upgrading to that future version, change all your `ReturnType<typeof myDecoder>` to `Infer<typeof myDecoder>` now!
This release changes the options parameter of `fieldsAuto` and `fieldsUnion` from:

```ts
{ exact = "allow extra" }: { exact?: "allow extra" | "throw" } = {}
```

To:

```ts
{ allowExtraFields = true }: { allowExtraFields?: boolean } = {}
```
This is because:
- A future tiny-decoders version will be return value based instead of throwing errors, so `"throw"` will not make sense anymore.
- tiny-decoders used to have a third alternative for that option – that’s why it’s currently a string union rather than a boolean. While at it, we could just as well simplify it into a boolean.
This release removes the `fields` function, which was deprecated in version 11.0.0. See the release notes for version 11.0.0 for how to replace `fields` with `fieldsAuto`, `chain` and custom decoders.
> [!WARNING]
> This release contains a breaking change, but no TypeScript errors! Be careful!
Version 11.0.0 made changes to `fieldsAuto`, but had a temporary behavior for backwards compatibility, awaiting the changes to `fieldsUnion` in version 12.0.0. This release (13.0.0) removes that temporary behavior.
You need to be on the lookout for these two patterns:
```ts
fieldsAuto({
  field1: undefinedOr(someDecoder),
  field2: () => someValue,
});
```
Previously, the above decoder would succeed even if `field1` or `field2` were missing.
- If `field1` was missing, the temporary behavior in `fieldsAuto` would call the decoder at `field1` with `undefined`, which would succeed due to `undefinedOr`. If you did the version 11.0.0 migration perfectly, this shouldn’t matter. But upgrading to 13.0.0 might uncover some places where you used `undefinedOr(someDecoder)` but meant to use `field(someDecoder, { optional: true })` or `field(undefinedOr(someDecoder), { optional: true })` (the latter is the “safest” approach, in that it is the most permissive).
- If `field2` was missing, the temporary behavior in `fieldsAuto` would call the decoder at `field2` with `undefined`, which would succeed due to that decoder ignoring its input and always succeeding with the same value.
Here’s an example of how to upgrade the “always succeed” pattern:
```ts
const productDecoder: Decoder<Product> = fieldsAuto({
  name: string,
  price: number,
  version: () => 1,
});
```
Use `chain` instead:
```ts
const productDecoder: Decoder<Product> = chain(
  fieldsAuto({
    name: string,
    price: number,
  }),
  (props) => ({ ...props, version: 1 }),
);
```
It’s a little bit more verbose, but unlocks further changes that will come in future releases.
This release changes how `fieldsUnion` works. The new way should be easier to use, and it looks more similar to the type definition of a tagged union.
- Changed: The first argument to `fieldsUnion` is no longer the common field name used in the JSON, but the common field name used in TypeScript. This doesn’t matter if you use the same common field name in both JSON and TypeScript. But if you did use different names – don’t worry: you’ll get TypeScript errors so you won’t forget to update something.
- Changed: The second argument to `fieldsUnion` is now an array of objects, instead of an object with decoders. The objects in the array are “`fieldsAuto` objects” – they fit when passed to `fieldsAuto` as well. All of those objects must have the first argument to `fieldsUnion` as a key, and use the new `tag` function on that key.
- Added: The `tag` function. Used with `fieldsUnion`, once for each variant of the union. `tag("MyTag")` returns a `Field` with a decoder that requires the input `"MyTag"` and returns `"MyTag"`. The metadata of the `Field` also advertises that the tag value is `"MyTag"`, which `fieldsUnion` uses to know what to do. The `tag` function also lets you use a different common field name in JSON than in TypeScript (similar to the `field` function for other fields).
Here’s an example of how to upgrade:
Before:

```ts
fieldsUnion("tag", {
  Circle: fieldsAuto({
    tag: () => "Circle" as const,
    radius: number,
  }),
  Rectangle: fields((field) => ({
    tag: "Rectangle" as const,
    width: field("width_px", number),
    height: field("height_px", number),
  })),
});
```
After:
```ts
fieldsUnion("tag", [
  {
    tag: tag("Circle"),
    radius: number,
  },
  {
    tag: tag("Rectangle"),
    width: field(number, { renameFrom: "width_px" }),
    height: field(number, { renameFrom: "height_px" }),
  },
]);
```
And here’s an example of how to upgrade a case where the JSON and TypeScript names are different:
Before:

```ts
fieldsUnion("type", {
  circle: fieldsAuto({
    tag: () => "Circle" as const,
    radius: number,
  }),
  square: fieldsAuto({
    tag: () => "Square" as const,
    size: number,
  }),
});
```
After:
```ts
fieldsUnion("tag", [
  {
    tag: tag("Circle", { renameTagFrom: "circle", renameFieldFrom: "type" }),
    radius: number,
  },
  {
    tag: tag("Square", { renameTagFrom: "square", renameFieldFrom: "type" }),
    size: number,
  },
]);
```
This release deprecates `fields`, and makes `fieldsAuto` more powerful so that it can do most of what only `fields` could before. Removing `fields` unlocks further changes that will come in future releases. It’s also nice to have just one way of decoding objects (`fieldsAuto`), instead of two. Finally, the changes to `fieldsAuto` get rid of a flawed design choice, which solves several reported bugs: #22 and #24.
- Changed: `optional` has been removed and replaced by `undefinedOr` and a new function called `field`. The `optional` function did two things: it made a decoder also accept `undefined`, and it marked fields as optional. Now there’s one function for each use case.
- Added: The new `field` function returns a `Field` type, which is a decoder with some metadata. The metadata tells whether the field is optional, and whether the field has a different name in the JSON object.
- Changed: `fieldsAuto` takes an object like before, where the values are `Decoder`s like before, but now the values can be `Field`s as well (returned from the `field` function). Passing a plain `Decoder` instead of a `Field` is just a convenience shortcut for passing a `Field` with the default metadata (the field is required, and has the same name both in TypeScript and in JSON).
- Changed: `fieldsAuto` no longer computes which fields are optional by checking if the type of the field includes `| undefined`. Instead, it’s based purely on the `Field` metadata.
- Changed: `const myDecoder = fieldsAuto<MyType>({ /* ... */ })` now needs to be written as `const myDecoder: Decoder<MyType> = fieldsAuto({ /* ... */ })`. It is no longer recommended to specify the generic of `fieldsAuto`, and doing so does not mean the same thing anymore. Either annotate the decoder like any other, or don’t and infer the type.
- Added: `recursive`. It’s needed when making a decoder for a recursive data structure using `fieldsAuto`. (Previously, the recommendation was to use `fields` for recursive objects.)
- Changed: TypeScript 5+ is now required, because the above uses const type parameters (added in 5.0), and leads to the `exactOptionalPropertyTypes` option (added in 4.4) in `tsconfig.json` being recommended (see the documentation for the `field` function for why).
The motivations for the changes are:

- Supporting TypeScript’s `exactOptionalPropertyTypes` option. That option decouples optional fields (`field?:`) and union with undefined (`| undefined`). Now tiny-decoders has done that too.
- Supporting generic decoders. Marking fields as optional was previously done by looking for fields with `| undefined` in their type. However, if the type of a field is generic, TypeScript can’t know if the type is going to have `| undefined` until the generic type is instantiated with a concrete type. As such, it couldn’t know whether the field should be optional yet either. This made it very difficult and ugly to write a type annotation for a generic function returning a decoder – in practice it was unusable without forcing TypeScript into the wanted type annotation. #24
- Stop setting all optional fields to `undefined` when they are missing (rather than leaving them out). #22
- Better error messages for missing required fields.

  Before:

  ```
  At root["firstName"]: Expected a string Got: undefined
  ```

  After:

  ```
  At root: Expected an object with a field called: "firstName" Got: { "id": 1, "first_name": "John" }
  ```

  In other words, `fieldsAuto` now checks if fields exist, rather than trying to access them regardless. Previously, `fieldsAuto` ran `decoderAtKey(object[key])` even when `key` did not exist in `object`, which is equivalent to `decoderAtKey(undefined)`. Whether or not that succeeded depended on whether `decoderAtKey` was using `optional` or not. This resulted in the worse (but technically correct) error message. The new version of `fieldsAuto` knows if the field is supposed to be optional or not, thanks to the `Field` type and the `field` function mentioned above.

  > [!WARNING]
  > Temporary behavior: If a field is missing and not marked as optional, `fieldsAuto` still tries the decoder at the field (passing `undefined` to it). If the decoder succeeds (because it allows `undefined` or succeeds for any input), that value is used. If it fails, the regular “missing field” error is thrown. This means that `fieldsAuto({ name: undefinedOr(string) })` successfully produces `{ name: undefined }` if given `{}` as input. It is supposed to fail in that case (because a required field is missing), but temporarily it does not fail. This is to support how `fieldsUnion` is used currently. When `fieldsUnion` is updated to a new API in an upcoming version of tiny-decoders, this temporary behavior in `fieldsAuto` will be removed.
- Being able to rename fields with `fieldsAuto`. Now you don’t need to refactor from `fieldsAuto` to `fields` anymore if you need to rename a field. This is done by using the `field` function.
- Getting rid of `fields` unlocks further changes that will come in future releases. (Note: `fields` is only deprecated in this release, not removed.)
Here’s an example illustrating the difference between optional fields and accepting `undefined`:
```ts
fieldsAuto({
  // Required field.
  a: string,
  // Optional field.
  b: field(string, { optional: true }),
  // Required field that can be set to `undefined`:
  c: undefinedOr(string),
  // Optional field that can be set to `undefined`:
  d: field(undefinedOr(string), { optional: true }),
});
```
The inferred type of the above is:
```ts
type Inferred = {
  a: string;
  b?: string;
  c: string | undefined;
  d?: string | undefined;
};
```
In all places where you use `optional(x)` currently, you need to figure out if you should use `undefinedOr(x)`, `field(x, { optional: true })`, or `field(undefinedOr(x), { optional: true })`.
The `field` function also lets you rename fields. This means that you can refactor:
```ts
fields((field) => ({
  firstName: field("first_name", string),
}));
```
Into:
```ts
fieldsAuto({
  firstName: field(string, { renameFrom: "first_name" }),
});
```
If you used `fields` for other reasons, you can refactor it away by using `recursive`, `chain`, and custom decoders.
Read the documentation for `fieldsAuto` and `field` to learn more about how they work.
Changed: `multi` has a new API.
Before:
```ts
type Id = { tag: "Id"; id: string } | { tag: "LegacyId"; id: number };

const idDecoder: Decoder<Id> = multi({
  string: (id) => ({ tag: "Id" as const, id }),
  number: (id) => ({ tag: "LegacyId" as const, id }),
});
```
After:
```ts
type Id = { tag: "Id"; id: string } | { tag: "LegacyId"; id: number };

const idDecoder: Decoder<Id> = chain(multi(["string", "number"]), (value) => {
  switch (value.type) {
    case "string":
      return { tag: "Id" as const, id: value.value };
    case "number":
      return { tag: "LegacyId" as const, id: value.value };
  }
});
```
Like before, you specify the types you want (`string` and `number` above), but now you get a tagged union back (`{ type: "string", value: string } | { type: "number", value: number }`) instead of supplying functions to call for each type. You typically want to pair this with `chain`, switching on the different variants of the tagged union.
This change unlocks further changes that will come in future releases.
Changed: `repr` now prints objects and arrays slightly differently, and some options have changed.
tiny-decoders has always printed representations of values on a single line. This stems back to when tiny-decoders used to print a “stack trace” (showing you a little of each parent object and array) – then it was useful to have a very short, one-line representation. Since that’s not a thing anymore, it’s more helpful to print objects and arrays multi-line: One array item or object key–value per line.
Here’s how the options have changed:
- `recurse: boolean`: Replaced by `depth: number`. Defaults to 0 (which prints the current object or array, but does not recurse).
- `recurseMaxLength`: Removed. `maxLength` is now always used. This is because values are printed multi-line; apart from the indentation, there’s the same amount of space available regardless of how deeply nested a value is.
- `maxObjectChildren`: The default has changed from 3 to 5, which is the same as for `maxArrayChildren`.
- Finally, the new `indent: string` option is the indent used when recursing. It defaults to `"  "` (two spaces).
Before:

```
At root["user"]:
Expected a string
Got: {"firstName": "John", "lastName": "Doe", "dateOfBirth": Date, (4 more)}
```

After:

```
At root["user"]:
Expected a string
Got: {
  "firstName": "John",
  "lastName": "Doe",
  "dateOfBirth": Date,
  "tags": Array(2),
  "likes": 42,
  (2 more)
}
```
Changed: `stringUnion` now takes an array instead of an object.
Before:

```ts
stringUnion({ green: null, red: null });
```

After:

```ts
stringUnion(["green", "red"]);
```
This is clearer, and made the implementation of `stringUnion` simpler.
If you have an object and want to use its keys for a string union there’s an example of that in the type inference file.
- Fixed: The TypeScript definitions can now be found if you use `"type": "module"` in your package.json and `"module": "Node16"` or `"module": "NodeNext"` in your tsconfig.json.
- Changed: Removed “tolerant decoding”:
  - Decoders no longer take an optional second `errors` parameter.
  - The `mode` option has been removed from `array`, `record` and `field`.
  - The `"push"` value has been removed from the `exact` option of `fields` and `fieldsAuto`.

  Out of all the projects I’ve used tiny-decoders in, only one of them has used this feature. And even in that case it was overkill; regular all-or-nothing decoding is enough. Removing this feature makes tiny-decoders easier to understand, and tinier, which is the goal.
- Changed: `stringUnion` now accepts `Record<string, unknown>` instead of `Record<string, null>`. If you already have an object with the correct keys but non-null values, then it can be handy to be able to use that object.
- Improved: `.message` of `DecoderError`s now links to the docs, which point you to using `.format()` instead for better error messages.
- Improved: Sensitive formatting now includes `(Actual values are hidden in sensitive mode.)` in the message, to make it clearer that it is possible to get the actual values in the messages.
- Removed: Flow support. This package has been re-written in TypeScript and is now TypeScript only.
- Changed: New API.
- Renamed: `map` → `chain`
- Renamed: `dict` → `record`
- Renamed: `pair` → `tuple`
- Renamed: `triple` → `tuple`
- Renamed: `autoFields` → `fieldsAuto`
- Removed: `lazy`. Use `fields` or `multi` instead.
- Removed: `either`. Use `multi` or `fields` instead.
- Removed: `constant`. I have not found any use case for it.
- Removed: `WithUndefinedAsOptional`. `fields` and `fieldsAuto` do that (adding `?` to optional fields) automatically.
- Removed: `repr.sensitive`. `repr` now takes a `sensitive: boolean` option instead, since you’re in control of formatting via `DecoderError`. For example, call `error.format({ sensitive: true })` on a caught `error` to format it sensitively.
- Added: `multi`
- Added: `tuple`
- Added: `stringUnion`
- Added: `fieldsUnion`
- Added: `nullable`
- Added: The `exact` option for `fields` and `fieldsAuto`, which lets you error on extraneous properties.
- Changed: `optional` now only deals with `undefined`, not `null`. Use `nullable` for `null`. Use both if you want to handle both `undefined` and `null`.
- Changed: Decoders now work on either objects or arrays, not both. For example, `array` only accepts `Array`s, not array-like types. For array-like types, `instanceof`-check instead. `fields` still lets you work on arrays if you pass the `{ allow: "array" }` option, for cases where `tuple` won’t cut it.
- Changed: Decoders that take options now take an object of options. For example, change `array(string, { default: undefined })` into `array(string, { mode: { default: undefined } })`.
- Changed: A few modern JavaScript features such as `class` and `...` spread are now used (which should be supported in all evergreen browsers, but not Internet Explorer).
- Changed: Slightly different error messages.
- Fixed: The package now works in both ESM and CJS.
- Fixed: `record` and `fieldsAuto` now avoid assigning to `__proto__`. The TypeScript types won’t even let you do it!
- Improved: The decoders now throw `DecoderError`s, which you can format in any way you like. Or just call `.format()` on them to go with the default formatting.
- Changed: `record` is now called `fields` and now works with both objects and arrays. Besides being more flexible, this reduces the footprint of the library and means there’s one thing less to learn.
- Removed: `tuple`. Use `fields` instead.
- Changed: `pair`, `triple` and `array` now work with any array-like objects, not just `Array`s.
- Removed: `mixedArray` and `mixedDict`. Because of the above changes, `mixedArray` isn’t used internally anymore, and `mixedDict` had to change to allow arrays. I haven’t really had a need for these outside tiny-decoders, so I decided to remove them both.
- Added: The `WithUndefinedAsOptional` helper type for TypeScript. When inferring types from `fields` and `autoRecord` decoders, all fields are inferred as required, even ones where you use the `optional` decoder. The helper type lets you turn fields that can be undefined into optional fields, by changing all `key: T | undefined` to `key?: T | undefined`.
- Removed: The “stack trace”, showing you a little of each parent object and array in error messages, is now gone. After using tiny-decoders for a while, I noticed this not being super useful. It’s nicer to look at the whole object in a tool of your choice, and just use the error message to understand where the error is and what is wrong.
- Changed: `repr.short` is now called `repr.sensitive` because of the above change.
- Removed: The `key` option of `repr`. It’s not needed since the “stack traces” were removed.
- Changed: Object keys in the part showing you where an error occurred are no longer truncated.
- Changed: Literals, such as strings, are now allowed to be 100 characters long before being truncated. Inside objects and arrays, the limit is 20 characters, just like before. The idea is that printed values are at most 100–120 characters roughly. Now, strings and other literals can use more of that space (rather than always being clipped already at 20 characters).
- Added: The `maxLength` and `recurseMaxLength` options of `repr`, which control the above change.
- Added: You can now set `repr.short = true` to get shorter error messages, containing only where the error happened and the actual and expected types, but not showing any actual values. This is useful if you’re dealing with sensitive data, such as email addresses, passwords or social security numbers – you might not want that data to potentially appear in error logs. Another use case is if you simply prefer a shorter, one-line message.
- Improved: Documentation on type inference in TypeScript.
- Fixed an oversight regarding the recommended type annotation for `autoRecord` decoders in Flow. No code changes.
After using this library for a while in a real project, I found a bunch of things that could be better. This version brings some bigger changes to the API, making it more powerful and easier to use, and working better with TypeScript.
The new features add half a kilobyte to the bundle, but it’s worth it.
- Added: When decoding arrays and objects, you can now opt into tolerant decoding, where you can recover from errors, either by skipping values or providing defaults. Whenever that happens, the message of the error that would otherwise have been thrown is pushed to an `errors` array (`Array<string>`, if provided), allowing you to inspect what was ignored.
Added: A new
record
function. This makes renaming and combining fields much easier, and allows decoding by type name easily without having to learn aboutandThen
andfieldAndThen
.field
has been integrated intorecord
rather than being its own decoder. The oldrecord
function is now calledautoRecord
. -
Added:
tuple
. It’s likerecord
, but for arrays/tuples. -
Added:
pair
andtriple
. These are convenience functions for decoding tuples of length 2 and 3. I found myself decoding quite a few pairs and the old way of doing it felt overly verbose. And the newtuple
API wasn’t short enough either for these common cases. -
Changed:
record
has been renamed toautoRecord
. (A new function has been added, and it’s calledrecord
but does not work like the oldrecord
.)autoRecord
also has a new TypeScript type annotation, which is better and easier to understand. -
Changed:
fieldDeep
has been renamed to justdeep
, sincefield
has been removed. -
Removed:
group
. There’s no need for it with the new API. It was mostly used to decode objects/records while renaming some keys. Many times the migration is easy:// Before: group({ firstName: field("first_name", string), lastName: field("last_name", string), }); // After: record((field) => ({ firstName: field("first_name", string), lastName: field("last_name", string), }));
- Removed: `field`. It is now part of the new `record` and `tuple` functions (for `tuple` it’s called `item`). If you used `field` to pluck a single value, you can migrate as follows:

  ```ts
  // Before:
  field("name", string);
  field(0, string);

  // After:
  record((field) => field("name", string));
  tuple((item) => item(0, string));
  ```
- Removed: `andThen`. I found no use cases for it after the new `record` function was added.
function was added. -
Removed:
fieldAndThen
. There’s no need for it with the newrecord
function. Here’s an example migration:Before:
type Shape = | { type: "Circle"; radius: number; } | { type: "Rectangle"; width: number; height: number; }; function getShapeDecoder(type: string): (value: unknown) => Shape { switch (type) { case "Circle": return record({ type: () => "Circle", radius: number, }); case "Rectangle": return record({ type: () => "Rectangle", width: number, height: number, }); default: throw new TypeError(`Invalid Shape type: ${repr(type)}`); } } const shapeDecoder = fieldAndThen("type", string, getShapeDecoder);
After:
type Shape = | { type: "Circle"; radius: number; } | { type: "Rectangle"; width: number; height: number; }; function getShapeDecoder(type: string): Decoder<Shape> { switch (type) { case "Circle": return autoRecord({ type: () => "Circle", radius: number, }); case "Rectangle": return autoRecord({ type: () => "Rectangle", width: number, height: number, }); default: throw new TypeError(`Invalid Shape type: ${repr(type)}`); } } const shapeDecoder = record((field, fieldError, obj, errors) => { const decoder = field("type", getShapeDecoder); return decoder(obj, errors); });
Alternatively:
type Shape = | { type: "Circle"; radius: number; } | { type: "Rectangle"; width: number; height: number; }; const shapeDecoder = record((field, fieldError): Shape => { const type = field("type", string); switch (type) { case "Circle": return { type: "Circle", radius: field("radius", number), }; case "Rectangle": return autoRecord({ type: "Rectangle", width: field("width", number), height: field("height", number), }); default: throw fieldError("type", `Invalid Shape type: ${repr(type)}`); } });
- Changed: `mixedArray` now returns `$ReadOnlyArray<mixed>` instead of `Array<mixed>`. See this Flow issue for more information: facebook/flow#7684
- Changed: `mixedDict` now returns `{ +[string]: mixed }` (readonly) instead of `{ [string]: mixed }`. See this Flow issue for more information: facebook/flow#7685
- Initial release.