Releases: gabrielmfern/intricate

Intricate v0.2.2

02 Aug 01:00
  • Fix a small miscalculation in the tanh activation function's derivative

Full Changelog: v0.2.1...v0.2.2

Intricate v0.2.1

31 Jul 20:47
  • Remove a small if statement, taken from the wgpu compute example, that would panic for graphics cards from a certain vendor; it is unclear why it was there.

Full Changelog: v0.2.0...v0.2.1

Intricate v0.2.0

31 Jul 04:57
  • Add the possibility to save and load layers
  • Update the README to explain some things about saving and loading, as well as how they work behind the scenes
  • Make the 'get_last_inputs' and 'get_last_outputs' methods of the layers return references instead of copies
  • Implement approximate equality for matrices and vectors, used in tests such as Softmax's so they do not fail on floating-point rounding differences
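The approximate-equality idea can be sketched roughly as below; the function name and signature are hypothetical, not Intricate's actual API:

```rust
/// Compare two f32 matrices element-wise within a tolerance, so tests
/// don't fail on harmless floating-point rounding differences.
/// Hypothetical sketch, not Intricate's actual implementation.
fn approx_equal_matrix(a: &[Vec<f32>], b: &[Vec<f32>], tolerance: f32) -> bool {
    a.len() == b.len()
        && a.iter().zip(b.iter()).all(|(row_a, row_b)| {
            row_a.len() == row_b.len()
                && row_a
                    .iter()
                    .zip(row_b.iter())
                    .all(|(x, y)| (x - y).abs() <= tolerance)
        })
}

fn main() {
    // Values that differ only in the last bit or two should still compare equal.
    let expected = vec![vec![0.09003057_f32, 0.24472847, 0.66524096]];
    let computed = vec![vec![0.09003058_f32, 0.24472846, 0.66524094]];
    assert!(approx_equal_matrix(&expected, &computed, 1e-6));
}
```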

Full Changelog: v0.1.4...v0.2.0

Intricate v0.1.4

30 Jul 22:09
  • Set the workgroup size for all current shaders to (16, 16, 1) to make the most of the GPU.
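With a shader declared as `@workgroup_size(16, 16, 1)`, the host side dispatches one workgroup per 16×16 tile of the matrix, rounding up. A small sketch of that dispatch math (hypothetical helper, not Intricate's actual code):

```rust
/// Each shader workgroup covers a 16x16 tile of the matrix, so the
/// number of workgroups to dispatch is the matrix dimensions divided
/// by 16, rounded up. Hypothetical sketch of the dispatch math.
const WORKGROUP_SIZE: u32 = 16;

fn dispatch_counts(rows: u32, cols: u32) -> (u32, u32, u32) {
    let x = (cols + WORKGROUP_SIZE - 1) / WORKGROUP_SIZE; // ceil(cols / 16)
    let y = (rows + WORKGROUP_SIZE - 1) / WORKGROUP_SIZE; // ceil(rows / 16)
    (x, y, 1)
}

fn main() {
    // A 100x30 matrix needs 2 workgroups across and 7 down.
    assert_eq!(dispatch_counts(100, 30), (2, 7, 1));
}
```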

Full Changelog: v0.1.3...v0.1.4

Intricate v0.1.3

30 Jul 19:54
  • Make ModelF64 instantiate the GPU device the same way ModelF32 does.
  • Fix the Categorical Cross Entropy derivative, which was being calculated as the negative of what it should have been.
  • Fix a small problem in the XoR example, which was using TrainingOptionsF64 instead of TrainingOptionsF32.
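The sign fix matters because for categorical cross entropy, L = -Σ yᵢ·ln(pᵢ), the derivative with respect to each prediction is -yᵢ/pᵢ; flipping the sign would push gradient descent in the wrong direction. A hedged sketch of the standard formula (not Intricate's actual implementation):

```rust
/// Categorical cross entropy: L = -sum(y_i * ln(p_i)).
/// Its derivative with respect to each prediction is dL/dp_i = -y_i / p_i.
/// Hypothetical sketch of the standard formula, not Intricate's actual code.
fn cce_derivative(expected: &[f32], predicted: &[f32]) -> Vec<f32> {
    expected
        .iter()
        .zip(predicted.iter())
        .map(|(y, p)| -y / p)
        .collect()
}

fn main() {
    // The gradient is negative where the expected probability is 1,
    // pushing that prediction upward during gradient descent.
    let grad = cce_derivative(&[0.0, 1.0, 0.0], &[0.2, 0.5, 0.3]);
    assert_eq!(grad, vec![-0.0, -2.0, -0.0]);
}
```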

Full Changelog: v0.1.2...v0.1.3

Intricate v0.1.2

30 Jul 16:29
  • Add a Dense CPU layer that computes with f32 numbers
  • Implement matrix operations for Vec<Vec<f32>>
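Implementing matrix operations directly on nested vectors could look roughly like this; the trait and method names are hypothetical, not Intricate's actual API:

```rust
/// Sketch of a matrix operation implemented directly on Vec<Vec<f32>>.
/// The trait and method names here are hypothetical.
trait MatrixOps {
    fn multiply(&self, other: &Self) -> Self;
}

impl MatrixOps for Vec<Vec<f32>> {
    /// Naive O(n*m*p) matrix multiplication of an n x m matrix by an m x p one.
    fn multiply(&self, other: &Self) -> Self {
        let (n, m, p) = (self.len(), other.len(), other[0].len());
        (0..n)
            .map(|i| {
                (0..p)
                    .map(|j| (0..m).map(|k| self[i][k] * other[k][j]).sum())
                    .collect()
            })
            .collect()
    }
}

fn main() {
    let a = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let b = vec![vec![5.0, 6.0], vec![7.0, 8.0]];
    assert_eq!(a.multiply(&b), vec![vec![19.0, 22.0], vec![43.0, 50.0]]);
}
```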

Full Changelog: v0.1.1...v0.1.2

Intricate v0.1.1

30 Jul 12:08

Improve the ergonomics of some things, and update the README and the XoR example accordingly.

Intricate v0.1.0

30 Jul 02:00

Added GPU acceleration and f32 versions of the existing types.

Some cool things that still need to be done are:

  • write some kind of macro to generate the f32 and f64 versions of certain structs and traits, to avoid duplicated code.
  • make the 'get' methods return slices instead of copies of the vectors, so data is not duplicated in RAM, saving as much memory as possible for very large models.
  • improve the GPU shaders, perhaps finding a way to send the full unflattened matrices to the GPU instead of just a flattened array.
  • create GPU-accelerated activations and loss functions, so that everything is GPU accelerated.
  • perhaps write a shader to calculate the gradient of the Model's loss with respect to its outputs.
  • implement convolutional layers, and perhaps even solve some image classification problems in an example.
  • add an example that uses GPU acceleration.

So there is still a lot of work to do.
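The slices-instead-of-copies item can be sketched as below; the struct and method names are hypothetical, not Intricate's actual API:

```rust
/// Sketch of the "return slices instead of copies" to-do item: a layer
/// stores its last outputs and hands back a borrowed slice rather than a
/// cloned Vec, so large models don't duplicate data in RAM.
/// Struct and method names are hypothetical.
struct DenseLayer {
    last_outputs: Vec<f32>,
}

impl DenseLayer {
    /// Returns a copy: allocates and duplicates the data on every call.
    fn get_last_outputs_copied(&self) -> Vec<f32> {
        self.last_outputs.clone()
    }

    /// Returns a borrowed slice: no allocation, no duplication.
    fn get_last_outputs(&self) -> &[f32] {
        &self.last_outputs
    }
}

fn main() {
    let layer = DenseLayer {
        last_outputs: vec![0.1, 0.9],
    };
    assert_eq!(layer.get_last_outputs(), &[0.1, 0.9]);
    assert_eq!(layer.get_last_outputs_copied(), vec![0.1, 0.9]);
}
```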