Simplify and fix some links
FrancescAlted committed Dec 13, 2024
1 parent 30df5d7 commit 6330806
Showing 5 changed files with 7 additions and 10 deletions.
4 changes: 3 additions & 1 deletion conf.py
@@ -633,7 +633,9 @@
# relative URL.
#
# If you don't need any of these, just set to []
-REDIRECTIONS = []
+REDIRECTIONS = [
+    ("/pages/btune", "https://ironarray.io/btune"),
+]

# Presets of commands to execute to deploy. Can be anything, for
# example, you may use rsync:
7 changes: 1 addition & 6 deletions pages/blosc-in-depth.rst
@@ -40,12 +40,7 @@ Blosc2 also includes `NDim, a container with multi-dimensional capabilities <htt
:width: 75%
:align: center

-Finally, `Python-Blosc2 <https://github.com/Blosc/python-blosc2>`_ is not only a Python wrapper for C-Blosc2, but also a powerful computing engine that can perform advanced computations on compressed data. It is designed to work transparently with NumPy arrays, while leveraging both NumPy and numexpr for achieving great performance. Among the main differences between the new computing engine and NumPy or numexpr, you can find:
-
-* Support for ndarrays that are compressed in-memory, on-disk or `on the network <https://github.com/ironArray/Caterva2>`_.
-* Can perform many kind of math expressions, including reductions, indexing, filters and more.
-* Support for NumPy ufunc mechanism, allowing to mix and match NumPy and Blosc2 computations.
-* Excellent integration with Numba and Cython via User Defined Functions.
+Finally, there is `Python-Blosc2 <https://github.com/Blosc/python-blosc2>`_ which, besides being a Python wrapper for C-Blosc2, also brings a `powerful computing engine <https://www.blosc.org/python-blosc2/getting_started/overview.html#operating-with-ndarrays>`_ that can perform advanced computations on compressed data that can be arbitrarily large and potentially `distributed <https://ironarray.io/caterva2>`_.

Find more information in the `Python-Blosc2 documentation <https://www.blosc.org/python-blosc2>`_.
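
A minimal sketch of what the computing engine mentioned above enables (illustrative only: it assumes a recent python-blosc2 where `blosc2.asarray()` creates compressed NDArrays and arithmetic on them builds lazy expressions; the array sizes are arbitrary)::

    import numpy as np
    import blosc2

    # Two compressed, in-memory NDArrays built from NumPy data
    a = blosc2.asarray(np.linspace(0, 1, 1_000_000))
    b = blosc2.asarray(np.linspace(1, 2, 1_000_000))

    # Building the expression is lazy: nothing is computed or decompressed yet
    expr = a**2 + b**2 + 2 * a * b

    # Slicing evaluates the expression chunk by chunk and returns a NumPy array
    result = expr[:100]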

2 changes: 1 addition & 1 deletion posts/blosc2-lossy-compression.rst
@@ -86,7 +86,7 @@ Lossy compression is a powerful tool for optimizing storage space, reducing band

With its advanced compression methodologies and adept memory management, Blosc2 empowers users to strike a harmonious balance between compression ratio, speed, and fidelity. This attribute renders it especially suitable for scenarios where resource limitations or performance considerations hold significant weight.

-Finally, there are ongoing efforts towards integrating fidelity into our `BTune AI tool <http://btune.blosc.org/>`_. This enhancement will empower the tool to autonomously identify the most suitable codecs and filters, balancing compression level, precision, and **fidelity** according to user-defined preferences. Keep an eye out for updates!
+Finally, there are ongoing efforts towards integrating fidelity into our `BTune AI tool <https://ironarray.io/btune>`_. This enhancement will empower the tool to autonomously identify the most suitable codecs and filters, balancing compression level, precision, and **fidelity** according to user-defined preferences. Keep an eye out for updates!

Whether you're working with scientific data, multimedia content, or large-scale datasets, Blosc2 offers a comprehensive solution for efficient data compression and handling.

2 changes: 1 addition & 1 deletion posts/pytables-b2nd-slicing.rst
@@ -67,6 +67,6 @@ The benchmarks above show how optimized Blosc2 NDim's two-level partitioning com

It is worth noting that these techniques still have some limitations: they only work with contiguous slices (that is, with step 1 on every dimension), and on datasets with the same byte ordering as the host machine. Also, although results are good indeed, there may still be room for implementation improvement, but that will require extra code profiling and parameter adjustments.

-Finally, as mentioned in the `Blosc2 NDim`_ post, if you need help in `finding the best parameters <http://btune.blosc.org/>`_ for your use case, feel free to reach out to the Blosc team at `contact (at) blosc.org`.
+Finally, as mentioned in the `Blosc2 NDim`_ post, if you need help in `finding the best parameters <https://ironarray.io/btune>`_ for your use case, feel free to reach out to the Blosc team at `contact (at) blosc.org`.

Enjoy data!
2 changes: 1 addition & 1 deletion posts/python-blosc2-improvements.rst
@@ -18,7 +18,7 @@ Continue reading for knowing the new features a bit more in depth.
Retrieve data with `__getitem__` and `get_slice`
------------------------------------------------

-The most general way to store data in Python-Blosc2 is through a `SChunk` (super-chunk) object. Here the data is split into chunks of the same size. So until now, the only way of working with it was chunk by chunk (see `the basics tutorial <https://github.com/Blosc/python-blosc2/blob/main/examples/tutorial-basics.ipynb>`_).
+The most general way to store data in Python-Blosc2 is through a `SChunk` (super-chunk) object. Here the data is split into chunks of the same size. So until now, the only way of working with it was chunk by chunk (see `tutorial <https://www.blosc.org/python-blosc2/getting_started/tutorials/07.schunk-basics.html>`_).

With the new version, you can get general data slices with the handy `__getitem__()` method without having to mess with chunks manually. The only inconvenience is that this returns a bytes object, which is difficult to read by humans. To overcome this, we have also implemented the `get_slice()` method; it comes with two optional params: `start` and `stop` for selecting the slice you are interested in. Also, you can pass to `out` any Python object supporting the `Buffer Protocol <http://jakevdp.github.io/blog/2014/05/05/introduction-to-the-python-buffer-protocol/>`_ and it will be filled with the data slice. One common example is to pass a NumPy array in the `out` argument::
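
    # A rough, illustrative sketch of that pattern (not necessarily the
    # post's original snippet): it assumes an `SChunk` holding 4-byte items
    # (cparams typesize=4); chunk counts, sizes and values here are made up.
    import numpy as np
    import blosc2

    # A super-chunk with 1000 int32 items (4000 bytes) per chunk
    schunk = blosc2.SChunk(chunksize=1000 * 4, cparams={"typesize": 4})
    for i in range(10):
        schunk.append_data(np.full(1000, i, dtype=np.int32))

    # Fill a pre-allocated NumPy array with items 500..1999,
    # crossing chunk boundaries transparently
    out = np.empty(1500, dtype=np.int32)
    schunk.get_slice(start=500, stop=2000, out=out)

    # __getitem__ returns the same range as a bytes object
    raw = schunk[500:2000]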

