Releases · gabrielmfern/intricate
Intricate v0.2.2
- Fix a small miscalculation in the tanh activation function's differential
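For reference, the derivative of tanh is 1 - tanh(x)^2. A minimal sketch of the corrected differential; the function name is hypothetical, not Intricate's actual code:

```rust
// d/dx tanh(x) = 1 - tanh(x)^2
// Hypothetical helper, not Intricate's actual function.
fn tanh_differential(x: f32) -> f32 {
    let t = x.tanh();
    1.0 - t * t
}

fn main() {
    // tanh'(0) = 1 - tanh(0)^2 = 1
    assert!((tanh_differential(0.0) - 1.0).abs() < 1e-6);
}
```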
Full Changelog: v0.2.1...v0.2.2
Intricate v0.2.1
- Remove a small if statement, carried over from the wgpu compute example, that would panic for graphics cards from a certain vendor; not sure why it was there.
Full Changelog: v0.2.0...v0.2.1
Intricate v0.2.0
- Add the ability to save and load layers
- Update the README to explain some things about the saving and loading, as well as some things about how it works behind the scenes
- Make the 'get_last_inputs' and 'get_last_outputs' of the layers return references instead of copies
- Implement an approximate equality check for matrices and vectors, used in tests such as the Softmax one so they don't fail on tiny floating-point differences when the code is actually correct
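A minimal sketch of what such an approximate comparison can look like; the helper names and the tolerance are assumptions, not Intricate's actual implementation:

```rust
/// True when every pair of corresponding elements differs by less
/// than `epsilon`. Hypothetical helper, not Intricate's API.
fn approx_eq_vec(a: &[f32], b: &[f32], epsilon: f32) -> bool {
    a.len() == b.len()
        && a.iter().zip(b).all(|(x, y)| (x - y).abs() < epsilon)
}

fn approx_eq_matrix(a: &[Vec<f32>], b: &[Vec<f32>], epsilon: f32) -> bool {
    a.len() == b.len()
        && a.iter().zip(b).all(|(r, s)| approx_eq_vec(r, s, epsilon))
}

fn main() {
    // GPU and CPU Softmax outputs rarely match bit-for-bit, so tests
    // compare within a tolerance instead of using exact equality.
    let cpu = vec![vec![0.09003057, 0.24472847, 0.66524096]];
    let gpu = vec![vec![0.09003058, 0.24472846, 0.66524100]];
    assert!(approx_eq_matrix(&cpu, &gpu, 1e-6));
}
```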
Full Changelog: v0.1.4...v0.2.0
Intricate v0.1.4
- Make the workgroup sizes of all current shaders (16, 16, 1) so they make the best use of the GPU.
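For a wgpu compute shader this is the workgroup_size attribute on the entry point. A minimal sketch, assuming the shaders are WGSL embedded as Rust strings; this is not one of Intricate's actual shader files:

```rust
// WGSL entry point with a 16x16x1 workgroup, embedded the way wgpu
// shader sources usually are. Illustrative only.
const SHADER_SOURCE: &str = r#"
    @compute @workgroup_size(16, 16, 1)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        // one invocation per matrix element; a real shader would
        // index into its bindings with id.x and id.y here
    }
"#;

fn main() {
    // with 16x16 workgroups, covering an n x m matrix takes
    // ceil(n / 16) x ceil(m / 16) workgroups per dispatch
    let (n, m) = (100u32, 100u32);
    println!("dispatch {} x {} workgroups", (n + 15) / 16, (m + 15) / 16);
    println!("shader is {} bytes of WGSL", SHADER_SOURCE.len());
}
```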
Full Changelog: v0.1.3...v0.1.4
Intricate v0.1.3
- Make ModelF64 instantiate the GPU device the same way ModelF32 does.
- Fix the Categorical Cross Entropy differential, which was being calculated as the negative of what it should have been (see the sketch after this list).
- Fix a small problem in the XoR example, which was using TrainingOptionsF64 instead of TrainingOptionsF32.
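For one-hot targets y and predictions ŷ, categorical cross entropy is L = -Σ yᵢ ln(ŷᵢ), so its differential with respect to each prediction is -yᵢ/ŷᵢ. A minimal sketch of the corrected sign; the function name is hypothetical, not Intricate's actual code:

```rust
// Categorical cross entropy: L = -sum(y_i * ln(yhat_i)), so
// dL/dyhat_i = -y_i / yhat_i. The bug was this sign being flipped.
// Hypothetical helper, not Intricate's actual function.
fn cce_differential(expected: &[f32], predicted: &[f32]) -> Vec<f32> {
    expected
        .iter()
        .zip(predicted)
        .map(|(y, yhat)| -(y / yhat))
        .collect()
}

fn main() {
    // with a one-hot target, only the true class gets a nonzero gradient
    let grad = cce_differential(&[0.0, 1.0, 0.0], &[0.2, 0.5, 0.3]);
    assert_eq!(grad[1], -2.0);
}
```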
Full Changelog: v0.1.2...v0.1.3
Intricate v0.1.2
- Add a Dense CPU layer that computes with f32 numbers
- Implement Matrix Operations for Vec<Vec<f32>>
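A minimal sketch of what a matrix-operations trait on Vec<Vec<f32>> can look like; the trait and method names are assumptions, not Intricate's actual API:

```rust
// Hypothetical trait sketch; names are illustrative, not Intricate's.
// Assumes non-empty, rectangular matrices.
trait MatrixOperations {
    fn transpose(&self) -> Self;
    fn dot(&self, other: &Self) -> Self;
}

impl MatrixOperations for Vec<Vec<f32>> {
    fn transpose(&self) -> Self {
        let (rows, cols) = (self.len(), self[0].len());
        (0..cols)
            .map(|j| (0..rows).map(|i| self[i][j]).collect())
            .collect()
    }

    // naive O(n^3) matrix product, enough for a CPU Dense layer
    fn dot(&self, other: &Self) -> Self {
        self.iter()
            .map(|row| {
                (0..other[0].len())
                    .map(|j| row.iter().zip(other).map(|(a, r)| a * r[j]).sum())
                    .collect()
            })
            .collect()
    }
}

fn main() {
    let a: Vec<Vec<f32>> = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let b: Vec<Vec<f32>> = vec![vec![5.0, 6.0], vec![7.0, 8.0]];
    assert_eq!(a.transpose(), vec![vec![1.0, 3.0], vec![2.0, 4.0]]);
    assert_eq!(a.dot(&b), vec![vec![19.0, 22.0], vec![43.0, 50.0]]);
}
```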
Full Changelog: v0.1.1...v0.1.2
Intricate v0.1.1
Improve usability in a few places and update the README and the XoR example accordingly.
Intricate v0.1.0
Added GPU acceleration and F32 versions of things.
Some cool things that still need to be done are:
- writing some kind of macro to generate the f32 and f64 versions of certain structs and traits, to avoid duplicated code (see the sketch after this list).
- making the 'get' methods return slices instead of copies of the vectors, so nothing is duplicated in RAM and as much memory as possible is saved for very large models.
- improving the GPU shaders, perhaps by finding a way to send the full unflattened matrices to the GPU instead of just a flattened array.
- creating GPU-accelerated activations and loss functions, so that everything is GPU accelerated.
- perhaps writing a shader to calculate the gradient (derivatives) of the model's loss with respect to its outputs.
- implementing convolutional layers and perhaps even solving some image classification problems in an example.
- adding an example that uses GPU acceleration.
So still a lot of work to do.
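On the macro idea above: a declarative macro can stamp out the f32 and f64 variants from one definition. A minimal sketch of that approach; every name here is hypothetical, nothing from the crate itself:

```rust
// Hypothetical sketch of generating f32 and f64 variants of a struct
// with one declarative macro; names are illustrative, not Intricate's.
macro_rules! define_dense {
    ($name:ident, $float:ty) => {
        pub struct $name {
            pub weights: Vec<Vec<$float>>,
            pub biases: Vec<$float>,
        }

        impl $name {
            pub fn new(inputs: usize, outputs: usize) -> Self {
                Self {
                    weights: vec![vec![0.0; outputs]; inputs],
                    biases: vec![0.0; outputs],
                }
            }
        }
    };
}

define_dense!(DenseF32, f32);
define_dense!(DenseF64, f64);

fn main() {
    let layer32 = DenseF32::new(2, 3);
    let layer64 = DenseF64::new(2, 3);
    println!("f32 dense: {}x{} weights, {} biases",
        layer32.weights.len(), layer32.weights[0].len(), layer32.biases.len());
    println!("f64 dense: {}x{} weights, {} biases",
        layer64.weights.len(), layer64.weights[0].len(), layer64.biases.len());
}
```

A generic struct over a float trait (for example num-traits' Float) would avoid the macro entirely; the macro route just keeps concrete ModelF32/ModelF64-style names like the ones the crate already uses.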