diff --git a/release_docs/HISTORY-1_14_0-1_16_0.txt b/release_docs/HISTORY-1_14_0-1_16_0.txt new file mode 100644 index 00000000000..815dbfa8fe3 --- /dev/null +++ b/release_docs/HISTORY-1_14_0-1_16_0.txt @@ -0,0 +1,6387 @@ +HDF5 History +============ + +This file contains development history of the HDF5 1.14 branch + +06. Release Information for hdf5-1.14.5 +05. Release Information for hdf5-1.14.4 +04. Release Information for hdf5-1.14.3 +03. Release Information for hdf5-1.14.2 +02. Release Information for hdf5-1.14.1 +01. Release Information for hdf5-1.14.0 + +[Search on the string '%%%%' for section breaks of each release.] + +%%%%1.14.5%%%% + +HDF5 version 1.14.5 released on 2024-09-30 +================================================================================ + + +INTRODUCTION +============ + +This document describes the differences between this release and the previous +HDF5 release. It contains information on the platforms tested and known +problems in this release. For more details check the HISTORY*.txt files in the +HDF5 source. + +Note that documentation in the links below will be updated at the time of each +final release. + +Links to HDF5 documentation can be found on: + + https://support.hdfgroup.org/releases/hdf5/latest-docs.html + +The official HDF5 releases can be obtained from: + + https://support.hdfgroup.org/downloads/index.html + +Changes from Release to Release and New Features in the HDF5-1.14.x release series +can be found at: + + https://support.hdfgroup.org/releases/hdf5/documentation/release_specific_info.md + +If you have any questions or comments, please send them to the HDF Help Desk: + + help@hdfgroup.org + + +CONTENTS +======== + +- New Features +- Support for new platforms and languages +- Bug Fixes since HDF5-1.14.4 +- Platforms Tested +- Known Problems +- CMake vs. Autotools installations + + +New Features +============ + + Configuration: + ------------- + - Added signed Windows msi binary and signed Apple dmg binary files. + + The release process now provides signed Windows and Mac installation + binaries in addition to the Debian and rpm installation binaries. The Mac + binaries are built as universal binaries on an ARM-based Mac. Installer + files are no longer compressed into packaged archives. + + - Moved examples to the HDF5Examples folder in the source tree. + + Moved the C++ and Fortran examples from the examples folder to the HDF5Examples + folder and renamed to TUTR, tutorial. This is referenced from the LearnBasics + doxygen page. + + - Added support for using zlib-ng package as the zlib library: + + CMake: HDF5_USE_ZLIB_NG + Autotools: --enable-zlibng + + Added the option HDF5_USE_ZLIB_NG to allow the replacement of the + default ZLib package by the zlib-ng package as a built-in compression library. + + - Disable CMake UNITY_BUILD for hdf5 + + CMake added a target property, UNITY_BUILD, that when set to true, the target + source files will be combined into batches for faster compilation. By default, + the setting is OFF, but could be enabled by a project that includes HDF5 as a subproject. + + HDF5 has disabled this feature by setting the property to OFF in the HDFMacros.cmake file. + + - Removed "function/code stack" debugging configuration option: + + CMake: HDF5_ENABLE_CODESTACK + Autotools: --enable-codestack + + This was used to debug memory leaks internal to the library, but has been + broken for >1.5 years and is now easily replaced with third-party tools + (e.g. 
libbacktrace: https://github.com/ianlancetaylor/libbacktrace) on an + as-needed basis when debugging an issue. + + - Added configure options for enabling/disabling non-standard programming + language features + + - Added the CMake variable HDF5_ENABLE_ROS3_VFD to the HDF5 CMake config + file hdf5-config.cmake. This allows it to easily detect if the library + has been built with or without read-only S3 functionality. + + + Library: + -------- + - Added new routines for interacting with error stacks: H5Epause_stack, + H5Eresume_stack, and H5Eis_paused. These routines can be used to + indicate that errors from a call to an HDF5 routine should not be + pushed on to an error stack. Primarily targeted toward third-party + developers of Virtual File Drivers (VFDs) and Virtual Object Layer (VOL) + connectors, these routines allow developers to perform "speculative" + operations (such as trying to open a file or object) without requiring + that the error stack be cleared after a speculative operation fails. + + + Parallel Library: + ----------------- + - + + + Fortran Library: + ---------------- + + - Add Fortran H5R APIs: + h5rcreate_attr_f, h5rcreate_object_f, h5rcreate_region_f, + h5ropen_attr_f, h5ropen_object_f, h5ropen_region_f, + h5rget_file_name_f, h5rget_attr_name_f, h5rget_obj_name_f, + h5rcopy_f, h5requal_f, h5rdestroy_f, h5rget_type_f + + + C++ Library: + ------------ + - + + + Java Library: + ------------- + - + + + Tools: + ------ + - Added doxygen files for the tools + + Implement the tools usage text as pages in doxygen. + + - Added option to adjust the page buffer size in tools + + The page buffer cache size for a file can now be adjusted using the + --page-buffer-size=N + option in the h5repack, h5diff, h5dump, h5ls, and h5stat tools. This + will call the H5Pset_page_buffer_size() API function with the specified + size in bytes. + + - Allowed h5repack to reserve space for a user block without a file + + This is useful for users who want to reserve space in the file for + future use without requiring a file to copy. + + + High-Level APIs: + ---------------- + - + + + C Packet Table API: + ------------------- + - + + + Internal header file: + --------------------- + - + + + Documentation: + -------------- + - Documented that leaving HDF5 threads running at termination is unsafe + + Added doc/threadsafety-warning.md as a warning that threads which use HDF5 + resources must be closed before either process exit or library close. + If HDF5 threads are alive during either of these operations, their resources + will not be cleaned up properly and undefined behavior is possible. + + This document also includes a discussion on potential ways to mitigate this issue. + + + +Support for new platforms, languages and compilers +================================================== + - + + +Bug Fixes since HDF5-1.14.4 release +=================================== + Library + ------- + - Fixed a memory leak in H5F__accum_write() + + The memory was allocated in H5F__accum_write() and was to be freed in + H5F__accum_reset() during the closing process but a failure occurred just + before the deallocation, leaving the memory un-freed. The problem is + now fixed. + + Fixes GitHub #4585 + + - Fixed an incorrect returned value by H5LTfind_dataset() + + H5LTfind_dataset() returned true for non-existing datasets because it only + compared up to the length of the searched string, such as "Day" vs "DayNight". + Applied the user's patch to correct this behavior. 
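+
+      A minimal sketch of the corrected behavior (the file and dataset
+      names here are illustrative only, not taken from the patch):
+
+          #include "hdf5.h"
+          #include "hdf5_hl.h"
+
+          int main(void)
+          {
+              hsize_t dims[1] = {4};
+              double  data[4] = {0.0, 1.0, 2.0, 3.0};
+              hid_t   fid     = H5Fcreate("find.h5", H5F_ACC_TRUNC,
+                                          H5P_DEFAULT, H5P_DEFAULT);
+
+              /* The file contains a dataset named "DayNight" only */
+              H5LTmake_dataset_double(fid, "DayNight", 1, dims, data);
+
+              /* Returned 1 (found) before the fix; now correctly returns 0 */
+              herr_t found = H5LTfind_dataset(fid, "Day");
+
+              H5Fclose(fid);
+              return (int)found;
+          }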
+ + Fixes GitHub #4780 + + - Fixed a segfault by H5Gmove2, extended to fix H5Lcopy and H5Lmove + + A user's application segfaulted when it passed in an invalid location ID + to H5Gmove2. The src and dst location IDs must be either a file or a group + ID. The fix was also applied to H5Lcopy and H5Lmove. Now, all these + three functions will fail if either the src or dst location ID is not a file + or a group ID. + + Fixes GitHub #4737 + + - Fixed a segfault by H5Lget_info() + + A user's program generated a segfault when the ID passed into H5Lget_info() + was a datatype ID. This was caused by non-VOL functions being used internally + where VOL functions should have been. This correction was extended to many + other functions to prevent potential issue in the future. + + Fixes GitHub #4730 + + - Fixed a segfault by H5Fget_intent(), extended to fix several other functions + + A user's program generated a segfault when the ID passed into H5Fget_intent() + was not a file ID. In addition to H5Fget_intent(), a number of APIs also failed + to detect an incorrect ID being passed in, which can potentially cause various + failures, including segfault. The affected functions are listed below and now + properly detect incorrect ID parameters: + + H5Fget_intent() + H5Fget_fileno() + H5Fget_freespace() + H5Fget_create_plist() + H5Fget_access_plist() + H5Fget_vfd_handle() + H5Dvlen_get_buf_size() + H5Fget_mdc_config() + H5Fset_mdc_config() + H5Freset_mdc_hit_rate_stats() + + Fixes GitHub #4656 and GitHub #4662 + + - Fixed a bug with large external datasets + + When performing a large I/O on an external dataset, the library would only + issue a single read or write system call. This could cause errors or cause + the data to be incorrect. These calls do not guarantee that they will + process the entire I/O request, and may need to be called multiple times + to complete the I/O, advancing the buffer and reducing the size by the + amount actually processed by read or write each time. Implemented this + algorithm for external datasets in both the read and write cases. + + Fixes GitHub #4216 + Fixes h5py GitHub #2394 + + - Fixed a bug in the Subfiling VFD that could cause a buffer over-read + and memory allocation failures + + When performing vector I/O with the Subfiling VFD, making use of the + vector I/O size extension functionality could cause the VFD to read + past the end of the "I/O sizes" array that is passed in. When an entry + in the "I/O sizes" array has the value 0 and that entry is at an array + index greater than 0, this signifies that the value in the preceding + array entry should be used for the rest of the I/O vectors, effectively + extending the last valid I/O size across the remaining entries. This + allows an application to save a bit on memory by passing in a smaller + "I/O sizes" array. The Subfiling VFD didn't implement a check for this + functionality in the portion of the code that generates I/O vectors, + causing it to read past the end of the "I/O sizes" array when it was + shorter than expected. This could also result in memory allocation + failures, as the nearby memory allocations are based off the values + read from that array, which could be uninitialized. + + - Fixed H5Rget_attr_name to return the length of the attribute's name + without the null terminator + + H5Rget_file_name and H5Rget_obj_name both return the name's length + without the null terminator. H5Rget_attr_name now behaves consistently + with the other two APIs. 
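+
+      A short sketch of the usual calling pattern (a fragment, with error
+      checking omitted; "ref" is assumed to be an H5R_ref_t attribute
+      reference obtained elsewhere, e.g. from H5Rcreate_attr()):
+
+          /* First call queries the length, which excludes the NUL terminator */
+          ssize_t len  = H5Rget_attr_name(&ref, NULL, 0);
+          char   *name = (char *)malloc((size_t)len + 1);
+
+          /* Second call retrieves the name; the size passed includes the NUL */
+          H5Rget_attr_name(&ref, name, (size_t)len + 1);
+          free(name);
+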
+      Going forward, all HDF5 APIs that return character strings will
+      follow this convention when reporting the length of a character string.
+
+      Fixes GitHub #4447
+
+    - Fixed heap-buffer-overflow in h5dump
+
+      h5dump aborted when provided with a malformed input file. This was
+      because the buffer size for the checksum was smaller than
+      H5_SIZEOF_CHKSUM, causing an overflow while calculating the offset to
+      the checksum in the buffer. A check was added so that H5F_get_checksums
+      fails appropriately in all of its occurrences.
+
+      Fixes GitHub #4434
+
+    - Fixed library to allow usage of page buffering feature for serial file
+      access with parallel builds of HDF5
+
+      Previously, when HDF5 was built with parallel support enabled, the
+      library disallowed any usage of page buffering, even if a file was not
+      opened with parallel access. The library now allows usage of page
+      buffering for serial file access with parallel builds of HDF5. Usage of
+      page buffering is still disabled for any form of parallel file access,
+      even if only 1 MPI process is used.
+
+    - Fixed function H5Requal to actually compare the reference pointers
+
+      Fixed an issue with H5Requal always returning true because the
+      function was only comparing the ref2_ptr to itself.
+
+    - Fixed an infinite loop on library close after h5dump was run on a
+      user-provided test file
+
+      The library's metadata cache calls the "get_final_load_size" client
+      callback to find out the actual size of the object header. Because the
+      size obtained exceeds the file's EOA, an error is thrown, but the object
+      header structure allocated through the client callback is not freed,
+      causing the infinite loop.
+
+      The fix is to (1) free the structure allocated in the object header
+      client callback after saving the needed information in udata, and
+      (2) deserialize the object header prefix in the object header's
+      "deserialize" callback regardless.
+
+      Fixes GitHub #3790
+
+
+    Java Library
+    ------------
+    -
+
+
+    Configuration
+    -------------
+    - Fixed usage issue with FindZLIB.cmake module
+
+      When building HDF5 with CMake and relying on the FindZLIB.cmake module,
+      the Find module would correctly find the ZLIB library but not set an
+      OUTPUT_NAME on the target. Also, the returned target, ZLIB::ZLIB, was
+      not in the ZLIB_LIBRARIES variable. This caused issues when requesting
+      the OUTPUT_NAME of the target in the pkg-config settings.
+
+      Similar to the HDF5_USE_LIBAEC_STATIC ("Find static AEC library")
+      option, we added a new option, HDF5_USE_ZLIB_STATIC ("Find static zlib
+      library"). These options allow a user to specify whether to use a static
+      or shared version of the compression library in a find_package call.
+
+    - Corrected usage of FetchContent in the HDFLibMacros.cmake file.
+
+      CMake version 3.30 changed the behavior of the FetchContent module to
+      deprecate the use of FetchContent_Populate() in favor of
+      FetchContent_MakeAvailable(). Therefore, the copying of the HDF
+      specialized CMakeLists.txt files to the dependent project's source
+      was implemented in the FetchContent_Declare() call.
+
+    - Fixed/reverted an Autotools configure hack that causes problems on MacOS
+
+      A sed line in configure.ac was added in the past to paper over some
+      problems with older versions of the Autotools that would add incorrect
+      linker flags. This used the -i option in a way that caused silent
+      errors on MacOS that did not break the build.
+ + The original fix for this problem (in 1.14.4) removed the sed line + entirely, but it turns out that the sed cleanup is still necessary + on some systems, where empty -l options will be added to the libtool + script. + + This sed line has been restored and reworked to not use -i. + + Fixes GitHub issues #3843 and #4448 + + - Fixed a list index out of range issue in the runTest.cmake file + + Fixed an issue in config/cmake/runTest.cmake where the CMake logic + would try to access an invalid list index if the number of lines in + a test's output and reference files don't match. + + - Fix Autotools -Werror cleanup + + The Autotools temporarily scrub -Werror(=whatever) from CFLAGS, etc. + so configure checks don't trip over warnings generated by configure + check programs. The sed line originally only scrubbed -Werror but not + -Werror=something, which would cause errors when the '=something' was + left behind in CFLAGS. + + The sed line has been updated to handle -Werror=something lines. + + Fixes one issue raised in #3872 + + - Changed default of 'Error on HDF5 doxygen warnings' DOXYGEN_WARN_AS_ERROR option. + + The default setting of DOXYGEN_WARN_AS_ERROR to 'FAIL_ON_WARNINGS' has been changed + to 'NO'. It was decided that the setting was too aggressive and should be a user choice. + The github actions and scripts have been updated to reflect this. + + * HDF5_ENABLE_DOXY_WARNINGS: ON/OFF (Default: OFF) + * --enable-doxygen-errors: enable/disable (Default: disable) + + + Tools + ----- + - Fixed several issues in ph5diff + + The parallel logic for the ph5diff tool inside the shared h5diff code was + refactored and cleaned up to fix several issues with the ph5diff tool. This + fixed: + + - several concurrency issues in ph5diff that can result in interleaved + output, + - an issue where output can sometimes be dropped when it ends up in + ph5diff's output overflow file, and + - an issue where MPI_Init was called after HDF5 had been initialized, + preventing the library from setting up an MPI communicator attribute + to perform library cleanup on MPI_Finalize. + + + Performance + ------------- + - + + + Fortran API + ----------- + - + + + High-Level Library + ------------------ + - + + + Fortran High-Level APIs + ----------------------- + - + + + Documentation + ------------- + - + + + F90 APIs + -------- + - + + + C++ APIs + -------- + - + + + Testing + ------- + - + + +Platforms Tested +=================== + + - HDF5 is tested with the two latest macOS versions that are available + on github runners. As new major macOS versions become available, HDF5 + will discontinue support for the older version and add the new latest + version to its list of compatible systems, along with the previous + version. 
+ + Linux 6.8.0-1010-aws GNU gcc, gfortran, g++ + #10-Ubuntu SMP 2024 x86_64 (Ubuntu 13.2.0-23ubuntu4) 13.2.0 + GNU/Linux Ubuntu 24.04 Ubuntu clang version 18.1.3 (1ubuntu1) + Intel(R) oneAPI DPC++/C++ Compiler 2024.2.0 + ifx (IFX) 2024.2.0 20240602 + (cmake and autotools) + + Linux 6.5.0-1018-aws GNU gcc, gfortran, g++ + #18-Ubuntu SMP x86_64 GNU/Linux (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 + Ubuntu 22.04 Ubuntu clang version 14.0.0-1ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 + ifx (IFX) 2024.0.2 20231213 + (cmake and autotools) + + Linux 5.14.21-cray_shasta_c cray-mpich/8.1.28 + #1 SMP x86_64 GNU/Linux cce/15.0.0 + (frontier) gcc/13.2 + (cmake) + + Linux 5.14.0-427.24.1.el9_4 GNU gcc, gfortran, g++ (Red Hat 11.4.1-3) + #1 SMP x86_64 GNU/Linux clang version 17.0.6 + Rocky 9 Intel(R) oneAPI DPC++/C++ Compiler 2024.2.0 + ifx (IFX) 2024.2.0 + (cmake and autotools) + + Linux-4.18.0-553.16.1.1toss.t4 openmpi/4.1.2 + #1 SMP x86_64 GNU/Linux clang 14.0.6 + (corona, dane) GCC 12.1.1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.2.1 + ifx (IFX) 2023.2.1 + + Linux-4.18.0-553.5.1.1toss.t4 openmpi/4.1/4.1.6 + #1 SMP x86_64 GNU/Linux clang 16.0.6 + (eclipse) GCC 12.3.0 + Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 + ifx (IFX) 2024.0.2 + (cmake) + + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 17.0.6 + (vortex) GCC 12.2.1 + nvhpc 24.1 + XL 2023.06.28 + (cmake) + + Linux-4.14.0-115.35.1 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 14.0.5, 15.0.6 + (lassen) GCC 8.3.1 + XL 2021.09.22, 2022.08.05 + (cmake) + + Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + + Linux 3.10.0-1160.80.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) + (jelly/kituo/moohan) Version 4.9.3, Version 7.2.0, Version 8.3.0, + Version 9.1.0, Version 10.2.0 + Intel(R) C (icc), C++ (icpc), Fortran (icc) + compilers: + Version 17.0.0.098 Build 20160721 + GNU C (gcc) and C++ (g++) 4.8.5 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + MPICH 3.1.4 compiled with GCC 4.9.3 + MPICH 3.3 compiled with GCC 7.2.0 + OpenMPI 3.1.3 compiled with GCC 7.2.0 and 4.1.2 + compiled with GCC 9.1.0 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Versions 18.4.0 and 19.10-0 + NVIDIA nvc, nvfortran and nvc++ version 22.5-0 + (autotools and cmake) + + + Linux-3.10.0-1160.119.1.1chaos openmpi/4.1.4 + #1 SMP x86_64 GNU/Linux clang 16.0.6 + (skybridge) Intel(R) oneAPI DPC++/C++ Compiler 2023.2.0 + ifx (IFX) 2023.2.0 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux clang 16.0.6 + (attaway) GCC 12.1.0 + Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 + ifx (IFX) 2024.0.2 + (cmake) + + Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos6 Version 4.4.7 20120313 + (platypus) Version 4.9.3, 5.3.0, 6.2.0 + MPICH 3.1.4 compiled with GCC 4.9.3 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + + Windows 10 x64 Visual Studio 2019 w/ clang 12.0.0 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2019 w/ Intel (C/C++ only - cmake) + Visual Studio 2022 w/ clang 17.0.3 + with MSVC-like command-line 
(C/C++ only - cmake) + Visual Studio 2022 w/ Intel C/C++ oneAPI 2023 (cmake) + Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake) + + +Known Problems +============== + + - When building with the NAG Fortran compiler using the Autotools and libtool + 2.4.2 or earlier, the -shared flag will be missing '-Wl,', which will cause + compilation to fail. This is due to a bug in libtool that was fixed in 2012 + and released in 2.4.4 in 2014. + + - When the library detects and builds in support for the _Float16 datatype, an + issue has been observed on at least one MacOS 14 system where the library + fails to initialize due to not being able to detect the byte order of the + _Float16 type (https://github.com/HDFGroup/hdf5/issues/4310): + + #5: H5Tinit_float.c line 308 in H5T__fix_order(): failed to detect byte order + major: Datatype + minor: Unable to initialize object + + If this issue is encountered, support for the _Float16 type can be disabled + with a configuration option: + + CMake: HDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16=OFF + Autotools: --disable-nonstandard-feature-float16 + + - When HDF5 is compiled with NVHPC versions 23.5 - 23.9 (additional versions may + also be applicable) and with -O2 (or higher) and -DNDEBUG, test failures occur + in the following tests: + + H5PLUGIN-filter_plugin + H5TEST-flush2 + H5TEST-testhdf5-base + MPI_TEST_t_filters_parallel + + Sporadic failures (even with lower -O levels): + Java JUnit-TestH5Pfapl + Java JUnit-TestH5D + + Also, NVHPC will fail to compile the test/tselect.c test file with a compiler + error of 'use of undefined value' when the optimization level is -O2 or higher. + + This is confirmed to be a bug in the nvc compiler that has been fixed as of + 23.11. If you are using an affected version of the NVidia compiler, the + work-around is to set the optimization level to -O1. + + https://forums.developer.nvidia.com/t/hdf5-no-longer-compiles-with-nv-23-9/269045 + + - CMake files do not behave correctly with paths containing spaces. + Do not use spaces in paths because the required escaping for handling spaces + results in very complex and fragile build files. + + - At present, metadata cache images may not be generated by parallel + applications. Parallel applications can read files with metadata cache + images, but since this is a collective operation, a deadlock is possible + if one or more processes do not participate. + + - The subsetting option in ph5diff currently will fail and should be avoided. + The subsetting option works correctly in serial h5diff. + + - Flang Fortran compilation will fail (last check version 17) due to not yet + implemented: (1) derived type argument passed by value (H5VLff.F90), + and (2) support for REAL with KIND = 2 in intrinsic SPACING used in testing. + + - Fortran tests HDF5_1_8.F90 and HDF5_F03.F90 will fail with Cray compilers + greater than version 16.0 due to a compiler bug. The latest version verified + as failing was version 17.0. + + - Several tests currently fail on certain platforms: + MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms. + + MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with + cray-mpich on theta and with XL compilers on ppc64le platforms. + + MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on cori and theta. + + - File space may not be released when overwriting or deleting certain nested + variable length or reference types. + + - Known problems in previous releases can be found in the HISTORY*.txt files + in the HDF5 source. 
Please report any new problems found to + help@hdfgroup.org. + + +CMake vs. Autotools installations +================================= +While both build systems produce similar results, there are differences. +Each system produces the same set of folders on Linux (only CMake works +on standard Windows); bin, include, lib and share. Autotools places the +COPYING and RELEASE.txt file in the root folder, CMake places them in +the share folder. + +The bin folder contains the tools and the build scripts. Additionally, CMake +creates dynamic versions of the tools with the suffix "-shared". Autotools +installs one set of tools depending on the "--enable-shared" configuration +option. + build scripts + ------------- + Autotools: h5c++, h5cc, h5fc + CMake: h5c++, h5cc, h5hlc++, h5hlcc + +The include folder holds the header files and the fortran mod files. CMake +places the fortran mod files into separate shared and static subfolders, +while Autotools places one set of mod files into the include folder. Because +CMake produces a tools library, the header files for tools will appear in +the include folder. + +The lib folder contains the library files, and CMake adds the pkgconfig +subfolder with the hdf5*.pc files used by the bin/build scripts created by +the CMake build. CMake separates the C interface code from the fortran code by +creating C-stub libraries for each Fortran library. In addition, only CMake +installs the tools library. The names of the szip libraries are different +between the build systems. + +The share folder will have the most differences because CMake builds include +a number of CMake specific files for support of CMake's find_package and support +for the HDF5 Examples CMake project. + +The issues with the gif tool are: + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 +These CVE issues have not yet been addressed and are avoided by not building +the gif tool by default. Enable building the High-Level tools with these options: + autotools: --enable-hlgiftools + cmake: HDF5_BUILD_HL_GIF_TOOLS=ON + + +%%%%1.14.4%%%% + +HDF5 version 1.14.4-2 released on 2024-04-15 +================================================================================ + + +INTRODUCTION +============ + +This document describes the differences between this release and the previous +HDF5 release. It contains information on the platforms tested and known +problems in this release. For more details check the HISTORY*.txt files in the +HDF5 source. + +Note that documentation in the links below will be updated at the time of each +final release. + +Links to HDF5 documentation can be found on: + + https://portal.hdfgroup.org/documentation/ + +The official HDF5 releases can be obtained from: + + https://www.hdfgroup.org/downloads/hdf5/ + +Changes from release to release and new features in the HDF5-1.14.x release series +can be found at: + + https://portal.hdfgroup.org/documentation/hdf5-docs/release_specific_info.html + +If you have any questions or comments, please send them to the HDF Help Desk: + + help@hdfgroup.org + + +CONTENTS +======== + +- New Features +- Support for new platforms and languages +- Bug Fixes since HDF5-1.14.3 +- Platforms Tested +- Known Problems +- CMake vs. 
Autotools installations + + +New Features +============ + + Configuration: + ------------- + - Added configure options for enabling/disabling non-standard programming + language features + + * Added a new configuration option that allows enabling or disabling of + support for features that are extensions to programming languages, such + as support for the _Float16 datatype: + + CMake: HDF5_ENABLE_NONSTANDARD_FEATURES (ON/OFF) (Default: ON) + Autotools: --enable-nonstandard-features (yes/no) (Default: yes) + + When this option is enabled, configure time checks are still performed + to ensure that a feature can be used properly, but these checks may not + be sufficient when compiler support for a feature is incomplete or broken, + resulting in library build failures. When set to OFF/no, this option + provides a way to disable support for all non-standard features to avoid + these issues. Individual features can still be re-enabled with their + respective configuration options. + + * Added a new configuration option that allows enabling or disabling of + support for the _Float16 C datatype: + + CMake: HDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16 (ON/OFF) (Default: ON) + Autotools: --enable-nonstandard-feature-float16 (yes/no) (Default: yes) + + While support for the _Float16 C datatype can generally be detected and + used properly, some compilers have incomplete support for the datatype + and will pass configure time checks while still failing to build HDF5. + This option provides a way to disable support for the _Float16 datatype + when the compiler doesn't have the proper support for it. + + - Deprecate bin/cmakehdf5 script + + With the improvements made in CMake since version 3.23 and the addition + of CMake preset files, this script is no longer necessary. + + See INSTALL_CMake.txt file, Section X: Using CMakePresets.json for compiling + + - Overhauled LFS support checks + + In 2024, we can assume that Large File Support (LFS) exists on all + systems we support, though it may require flags to enable it, + particularly when building 32-bit binaries. The HDF5 source does + not use any of the 64-bit specific API calls (e.g., ftello64) + or explicit 64-bit offsets via off64_t. + + Autotools + + * We now use AC_SYS_LARGEFILE to determine how to support LFS. We + previously used a custom m4 script for this. + + CMake + + * The HDF_ENABLE_LARGE_FILE option (advanced) has been removed + * We no longer run a test program to determine if LFS works, which + will help with cross-compiling + * On Linux we now unilaterally set -D_LARGEFILE_SOURCE and + -D_FILE_OFFSET_BITS=64, regardless of 32/64 bit system. CMake + doesn't offer a nice equivalent to AC_SYS_LARGEFILE and since + those options do nothing on 64-bit systems, this seems safe and + covers all our bases. We don't set -D_LARGEFILE64_SOURCE since + we don't use any of the POSIX 64-bit specific API calls like + ftello64, as noted above. + * We didn't test for LFS support on non-Linux platforms. We've added + comments for how LFS should probably be supported on AIX and Solaris, + which seem to be alive, though uncommon. PRs would be appreciated if + anyone wishes to test this. + + This overhaul also fixes GitHub #2395, which points out that the LFS flags + used when building with CMake differ based on whether CMake has been + run before. The LFS check program that caused this problem no longer exists. + + - The CMake HDF5_ENABLE_DEBUG_H5B option has been removed + + This enabled some additional version-1 B-tree checks. 
These have been + removed so the option is no longer necessary. + + This option was CMake-only and marked as advanced. + + - New option for building with static CRT in Windows + + The following option has been added: + HDF5_BUILD_STATIC_CRT_LIBS "Build With Static Windows CRT Libraries" OFF + Because our minimum CMake is 3.18, the macro to change runtime flags no longer + works as CMake changed the default behavior in CMake 3.15. + + Fixes GitHub issue #3984 + + - Added support for the new MSVC preprocessor + + Microsoft added support for a new, standards-conformant preprocessor + to MSVC, which can be enabled with the /Zc:preprocessor option. This + preprocessor would trip over our HDopen() variadic function-like + macro, which uses a feature that only works with the legacy preprocessor. + + ifdefs have been added that select the correct HDopen() form and + allow building HDF5 with the /Zc:preprocessor option. + + The HDopen() macro is located in an internal header file and only + affects building the HDF5 library from source. + + Fixes GitHub #2515 + + - Renamed HDF5_ENABLE_USING_MEMCHECKER to HDF5_USING_ANALYSIS_TOOL + + The HDF5_USING_ANALYSIS_TOOL is used to indicate to test macros that + an analysis tool is being used and that the tests should not use + the runTest.cmake macros and it's variations. The analysis tools, + like valgrind, test the macro code instead of the program under test. + + HDF5_ENABLE_USING_MEMCHECKER is still used for controlling the HDF5 + define, H5_USING_MEMCHECKER. + + - New option for building and naming tools in CMake + + The following option has been added: + HDF5_BUILD_STATIC_TOOLS "Build Static Tools Not Shared Tools" OFF + + The default will build shared tools unless BUILD_SHARED_LIBS = OFF. + Tools will no longer have "-shared" as only one set of tools will be created. + + - Incorporated HDF5 examples repository into HDF5 library. + + The HDF5Examples folder is equivalent to the hdf5-examples repository. + This enables building and testing the examples + during the library build process or after the library has been installed. + Previously, the hdf5-examples archives were downloaded + for packaging with the library. Now the examples can be built + and tested without a packaged install of the library. + + However, to maintain the ability to use the HDF5Examples with an installed + library, it is necessary to map the option names used by the library + to those used by the examples. The typical pattern is: + = + HDF_BUILD_FORTRAN = ${HDF5_BUILD_FORTRAN} + + - Added new option for CMake to mark tests as SKIPPED. + + HDF5_DISABLE_TESTS_REGEX is a REGEX string that will be checked with + test names and if there is a match then that test's property will be + set to DISABLED. HDF5_DISABLE_TESTS_REGEX can be initialized on the + command line: "-DHDF5_DISABLE_TESTS_REGEX:STRING=" + See CMake documentation for regex-specification. + + - Added defaults to CMake for long double conversion checks + + HDF5 performs a couple of checks at build time to see if long double + values can be converted correctly (IBM's Power architecture uses a + special format for long doubles). These checks were performed using + TRY_RUN, which is a problem when cross-compiling. + + These checks now use default values appropriate for most non-Power + systems when cross-compiling. The cache values can be pre-set if + necessary, which will preempt both the TRY_RUN and the default. 
+ + Affected values: + H5_LDOUBLE_TO_LONG_SPECIAL (default no) + H5_LONG_TO_LDOUBLE_SPECIAL (default no) + H5_LDOUBLE_TO_LLONG_ACCURATE (default yes) + H5_LLONG_TO_LDOUBLE_CORRECT (default yes) + H5_DISABLE_SOME_LDOUBLE_CONV (default no) + + Fixes GitHub #3585 + + + Library: + -------- + - Relaxed behavior of H5Pset_page_buffer_size() when opening files + + This API call sets the size of a file's page buffer cache. This call + was extremely strict about matching its parameters to the file strategy + and page size used to create the file, requiring a separate open of the + file to obtain these parameters. + + These requirements have been relaxed when using the fapl to open + a previously-created file: + + * When opening a file that does not use the H5F_FSPACE_STRATEGY_PAGE + strategy, the setting is ignored and the file will be opened, but + without a page buffer cache. This was previously an error. + + * When opening a file that has a page size larger than the desired + page buffer cache size, the page buffer cache size will be increased + to the file's page size. This was previously an error. + + The behavior when creating a file using H5Pset_page_buffer_size() is + unchanged. + + Fixes GitHub issue #3382 + + - Added support for _Float16 16-bit half-precision floating-point datatype + + Support for the _Float16 C datatype has been added on platforms where: + + - The _Float16 datatype and its associated macros (FLT16_MIN, FLT16_MAX, + FLT16_EPSILON, etc.) are available + - A simple test program that converts between the _Float16 datatype and + other datatypes with casts can be successfully compiled and run at + configure time. Some compilers appear to be buggy or feature-incomplete + in this regard and will generate calls to compiler-internal functions + for converting between the _Float16 datatype and other datatypes, but + will not link these functions into the build, resulting in build + failures. + + The following new macros have been added: + + H5_HAVE__FLOAT16 - This macro is defined in H5pubconf.h and will have + the value 1 if support for the _Float16 datatype is + available. It will not be defined otherwise. + + H5_SIZEOF__FLOAT16 - This macro is defined in H5pubconf.h and will have + a value corresponding to the size of the _Float16 + datatype, as computed by sizeof(). It will have the + value 0 if support for the _Float16 datatype is not + available. + + H5_HAVE_FABSF16 - This macro is defined in H5pubconf.h and will have the + value 1 if the fabsf16 function is available for use. + + H5_LDOUBLE_TO_FLOAT16_CORRECT - This macro is defined in H5pubconf.h and + will have the value 1 if the platform can + correctly convert long double values to + _Float16. Some compilers have issues with + this. + + H5T_NATIVE_FLOAT16 - This macro maps to the ID of an HDF5 datatype representing + the native C _Float16 datatype for the platform. If + support for the _Float16 datatype is not available, the + macro will map to H5I_INVALID_HID and should not be used. + + H5T_IEEE_F16BE - This macro maps to the ID of an HDF5 datatype representing + a big-endian IEEE 754 16-bit floating-point datatype. This + datatype is available regardless of whether _Float16 support + is available or not. + + H5T_IEEE_F16LE - This macro maps to the ID of an HDF5 datatype representing + a little-endian IEEE 754 16-bit floating-point datatype. + This datatype is available regardless of whether _Float16 + support is available or not. 
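+
+      A rough usage sketch (a fragment; "file_id" is assumed to be an
+      already-open file and the dataset name is illustrative only; the
+      #ifdef guard is needed because the example uses the _Float16 type
+      directly in application code):
+
+          #ifdef H5_HAVE__FLOAT16
+              _Float16 val     = (_Float16)1.5f;
+              hsize_t  dims[1] = {1};
+              hid_t    space   = H5Screate_simple(1, dims, NULL);
+
+              /* Store little-endian IEEE 754 half-precision values in the file */
+              hid_t dset = H5Dcreate2(file_id, "half_data", H5T_IEEE_F16LE, space,
+                                      H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+              /* Write from the native C _Float16 type in memory */
+              H5Dwrite(dset, H5T_NATIVE_FLOAT16, H5S_ALL, H5S_ALL, H5P_DEFAULT, &val);
+
+              H5Dclose(dset);
+              H5Sclose(space);
+          #endif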
+ + The following new hard datatype conversion paths have been added, but + will only be used when _Float16 support is available: + + H5T_NATIVE_SCHAR <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_UCHAR <-> H5T_NATIVE_FLOAT16 + H5T_NATIVE_SHORT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_USHORT <-> H5T_NATIVE_FLOAT16 + H5T_NATIVE_INT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_UINT <-> H5T_NATIVE_FLOAT16 + H5T_NATIVE_LONG <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_ULONG <-> H5T_NATIVE_FLOAT16 + H5T_NATIVE_LLONG <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_ULLONG <-> H5T_NATIVE_FLOAT16 + H5T_NATIVE_FLOAT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_DOUBLE <-> H5T_NATIVE_FLOAT16 + H5T_NATIVE_LDOUBLE <-> H5T_NATIVE_FLOAT16 + + The H5T_NATIVE_LDOUBLE -> H5T_NATIVE_FLOAT16 hard conversion path will only + be available and used if H5_LDOUBLE_TO_FLOAT16_CORRECT has a value of 1. Otherwise, + the conversion will be emulated in software by the library. + + Note that in the absence of any compiler flags for architecture-specific + tuning, the generated code for datatype conversions with the _Float16 type + may perform conversions by first promoting the type to float. Use of + architecture-specific tuning compiler flags may instead allow for the + generation of specialized instructions, such as AVX512-FP16 instructions, + if available. + + - Made several improvements to the datatype conversion code + + * The datatype conversion code was refactored to use pointers to + H5T_t datatype structures internally rather than IDs wrapping + the pointers to those structures. These IDs are needed if an + application-registered conversion function or conversion exception + function are involved during the conversion process. For simplicity, + the conversion code simply passed these IDs down and let the internal + code unwrap the IDs as necessary when needing to access the wrapped + H5T_t structures. However, this could cause a significant amount of + repeated ID lookups for compound datatypes and other container-like + datatypes. The code now passes down pointers to the datatype + structures and only creates IDs to wrap those pointers as necessary. + Quick testing showed an average ~3x to ~10x improvement in performance + of conversions on container-like datatypes, depending on the + complexity of the datatype. + + * A conversion "context" structure was added to hold information about + the current conversion being performed. This allows conversions on + container-like datatypes to be optimized better by skipping certain + portions of the conversion process that remain relatively constant + when multiple elements of the container-like datatype are being + converted. + + * After refactoring the datatype conversion code to use pointers + internally rather than IDs, several copies of datatypes that were + made by higher levels of the library were able to be removed. The + internal IDs that were previously registered to wrap those copied + datatypes were also able to be removed. + + - Implemented optimized support for vector I/O in the Subfiling VFD + + Previously, the Subfiling VFD would handle vector I/O requests by + breaking them down into individual I/O requests, one for each entry + in the I/O vectors provided. This could result in poor I/O performance + for features in HDF5 that utilize vector I/O, such as parallel I/O + to filtered datasets. The Subfiling VFD now properly handles vector + I/O requests in their entirety, resulting in fewer I/O calls, improved + vector I/O performance and improved vector I/O memory efficiency. 
+ + - Added support for in-place type conversion in most cases + + In-place type conversion allows the library to perform type conversion + without an intermediate type conversion buffer. This can improve + performance by allowing I/O in a single operation over the entire + selection instead of being limited by the size of the intermediate buffer. + Implemented for I/O on contiguous and chunked datasets when the selection + is contiguous in memory and when the memory datatype is not smaller than + the file datatype. + + - Changed selection I/O to be on by default when using the MPIO file driver + + - Added support for selection I/O in the MPIO file driver + + Previously, only vector I/O operations were supported. Support for + selection I/O should improve performance and reduce memory uses in some + cases. + + - Changed the error handling for a not found path in the find plugin process. + + While attempting to load a plugin the HDF5 library will fail if one of the + directories in the plugin paths does not exist, even if there are more paths + to check. Instead of exiting the function with an error, just logged the error + and continue processing the list of paths to check. + + - Implemented support for temporary security credentials for the Read-Only + S3 (ROS3) file driver. + + When using temporary security credentials, one also needs to specify a + session/security token next to the access key id and secret access key. + This token can be specified by the new API function H5Pset_fapl_ros3_token(). + The API function H5Pget_fapl_ros3_token() can be used to retrieve + the currently set token. + + - Added a Subfiling VFD configuration file prefix environment variable + + The Subfiling VFD now checks for values set in a new environment + variable "H5FD_SUBFILING_CONFIG_FILE_PREFIX" to determine if the + application has specified a pathname prefix to apply to the file + path for its configuration file. For example, this can be useful + for cases where the application wishes to write subfiles to a + machine's node-local storage while placing the subfiling configuration + file on a file system readable by all machine nodes. + + - Added H5Pset_selection_io(), H5Pget_selection_io(), and + H5Pget_no_selection_io_cause() API functions to manage the selection I/O + feature. This can be used to enable collective I/O with type conversion, + or it can be used with custom VFDs that support vector or selection I/O. + + - Added H5Pset_modify_write_buf() and H5Pget_modify_write_buf() API + functions to allow the library to modify the contents of write buffers, in + order to avoid malloc/memcpy. Currently only used for type conversion + with selection I/O. + + + Parallel Library: + ----------------- + - + + + Fortran Library: + ---------------- + - Added Fortran H5E APIs: + h5eregister_class_f, h5eunregister_class_f, h5ecreate_msg_f, h5eclose_msg_f + h5eget_msg_f, h5epush_f, h5eget_num_f, h5ewalk_f, h5eget_class_name_f, + h5eappend_stack_f, h5eget_current_stack_f, h5eset_current_stack_f, h5ecreate_stack_f, + h5eclose_stack_f, h5epop_f, h5eprint_f (C h5eprint v2 signature) + + - Added API support for Fortran MPI_F08 module definitions: + Adds support for MPI's MPI_F08 module datatypes: type(MPI_COMM) and type(MPI_INFO) for HDF5 APIs: + H5PSET_FAPL_MPIO_F, H5PGET_FAPL_MPIO_F, H5PSET_MPI_PARAMS_F, H5PGET_MPI_PARAMS_F + Ref. 
#3951 + + - Added Fortran APIs: + H5FGET_INTENT_F, H5SSEL_ITER_CREATE_F, H5SSEL_ITER_GET_SEQ_LIST_F, + H5SSEL_ITER_CLOSE_F, H5S_mp_H5SSEL_ITER_RESET_F + + - Added Fortran Parameters: + H5S_SEL_ITER_GET_SEQ_LIST_SORTED_F, H5S_SEL_ITER_SHARE_WITH_DATASPACE_F + + - Added Fortran Parameters: + H5S_BLOCK_F and H5S_PLIST_F + + - The configuration definitions file, H5config_f.inc, is now installed + and the HDF5 version number has been added to it. + + - Added Fortran APIs: + h5fdelete_f + + - Added Fortran APIs: + h5vlnative_addr_to_token_f and h5vlnative_token_to_address_f + + + C++ Library: + ------------ + - + + + Java Library: + ------------- + - + + + Tools: + ------ + - + + + High-Level APIs: + ---------------- + - + + + C Packet Table API: + ------------------- + - + + + Internal header file: + --------------------- + - + + + Documentation: + -------------- + - + + +Support for new platforms, languages and compilers +================================================== + - + + +Bug Fixes since HDF5-1.14.3 release +=================================== + Configuration: + ------------- + - Fix Autotools -Werror cleanup + + The Autotools temporarily scrub -Werror(=whatever) from CFLAGS, etc. + so configure checks don't trip over warnings generated by configure + check programs. The sed line originally only scrubbed -Werror but not + -Werror=something, which would cause errors when the '=something' was + left behind in CFLAGS. + + The sed line has been updated to handle -Werror=something lines. + + Fixes one issue raised in #3872 + + Library + ------- + - Fixed a leak of datatype IDs created internally during datatype conversion + + Fixed an issue where the library could leak IDs that it creates internally + for compound datatype members during datatype conversion. When the library's + table of datatype conversion functions is modified (such as when a new + conversion function is registered with the library from within an application), + the compound datatype conversion function has to recalculate data that it + has cached. When recalculating that data, the library was registering new + IDs for each of the members of the source and destination compound datatypes + involved in the conversion process and was overwriting the old cached IDs + without first closing them. This would result in use-after-free issues due + to multiple IDs pointing to the same internal H5T_t structure, as well as + crashes due to the library not gracefully handling partially initialized or + partially freed datatypes on library termination. + + Fixes h5py GitHub #2419 + + - Fixed many (future) CVE issues + + A partner organization corrected many potential security issues, which + were fixed and reported to us before submission to MITRE. These do + not have formal CVE issues assigned to them yet, so the numbers assigned + here are just placeholders. We will update the HDF5 1.14 CVE list (link + below) when official MITRE CVE tracking numbers are assigned. + + These CVE issues are generally of the same form as other reported HDF5 + CVE issues, and rely on the library failing while attempting to read + a malformed file. Most of them cause the library to segfault and will + probably be assigned "medium (~5/10)" scores by NIST, like the other + HDF5 CVE issues. + + The issues that were reported to us have all been fixed in this release, + so HDF5 will continue to have no unfixed public CVE issues. 
+ + NOTE: HDF5 versions earlier than 1.14.4 should be considered vulnerable + to these issues and users should upgrade to 1.14.4 as soon as + possible. Note that it's possible to build the 1.14 library with + HDF5 1.8, 1.10, etc. API bindings for people who wish to enjoy + the benefits of a more secure library but don't want to upgrade + to the latest API. We will not be bringing the CVE fixes to earlier + versions of the library (they are no longer supported). + + LIST OF CVE ISSUES FIXED IN THIS RELEASE: + + * CVE-2024-0116-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5D__scatter_mem resulting in causing denial of service or potential + code execution + + * CVE-2024-0112-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5S__point_deserialize resulting in the corruption of the + instruction pointer and causing denial of service or potential code + execution + + * CVE-2024-0111-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5T__conv_struct_opt resulting in causing denial of service or + potential code execution + + * CVE-2023-1208-002 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5O__mtime_new_encode resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1208-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5O__layout_encode resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1207-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5O__dtype_encode_helper causing denial of service or potential + code execution + + * CVE-2023-1205-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5VM_array_fill resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1202-002 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5T__get_native_type resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1202-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5T__ref_mem_setnull resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1130-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5T_copy_reopen resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1125-001 + HDF5 versions <= 1.14.3 contain a heap buffer overflow in + H5Z__nbit_decompress_one_byte caused by the earlier use of an + initialized pointer. 
This may result in Denial of Service or + potential code execution + + * CVE-2023-1114-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5HG_read resulting in the corruption of the instruction pointer + and causing denial of service or potential code execution + + * CVE-2023-1113-002 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5F_addr_decode_len resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1113-001 + HDF5 versions <= 1.14.3 contain a heap buffer overflow caused by + the unsafe use of strdup in H5MM_xstrdup, resulting in denial of + service or potential code execution + + * CVE-2023-1108-001 + HDF5 versions <= 1.14.3 contain a out-of-bounds read operation in + H5FL_arr_malloc resulting in denial of service or potential code + execution + + * CVE-2023-1104-004 + HDF5 versions <= 1.14.3 contain a out-of-bounds read operation in + H5T_close_real resulting in denial of service or potential code + execution + + * CVE-2023-1104-003 + HDF5 library versions <=1.14.3 contain a heap buffer overflow flaw + in the function H5HL__fl_deserialize resulting in denial of service + or potential code execution + + * CVE-2023-1104-002 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5HL__fl_deserialize resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1104-001 + HDF5 library versions <=1.14.3 contains a stack overflow in the + function H5E_printf_stack resulting in denial of service or + potential code execution + + * CVE-2023-1023-001 + HDF5 library versions <=1.14.3 heap buffer overflow in + H5VM_memcpyvv which may result in denial of service or code + execution + + * CVE-2023-1019-001 + HDF5 library versions <=1.14.3 contain a stack buffer overflow in + H5VM_memcpyvv resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1018-001 + HDF5 library versions <=1.14.3 contain a memory corruption in + H5A__close resulting in the corruption of the instruction pointer + and causing denial of service or potential code execution + + * CVE-2023-1017-002 + HDF5 library versions <=1.14.3 may use an uninitialized value + H5A__attr_release_table resulting in denial of service + + * CVE-2023-1017-001 + HDF5 library versions <=1.14.3 may attempt to dereference + uninitialized values in h5tools_str_sprint, which will lead to + denial of service + + * CVE-2023-1013-004 + HDF5 versions <= 1.13.3 contain a stack buffer overflow in + H5HG_read resulting in denial of service or potential code + execution + + * CVE-2023-1013-003 + HDF5 library versions <=1.14.3 contain a buffer overrun in + H5Z__filter_fletcher32 resulting in the corruption of the + instruction pointer and causing denial of service or potential + code execution + + * CVE-2023-1013-002 + HDF5 library versions <=1.14.3 contain a buffer overrun in + H5O__linfo_decode resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1013-001 + HDF5 library versions <=1.14.3 contain a buffer overrun in + H5Z__filter_scaleoffset resulting in the corruption of the + instruction pointer and causing denial of service or potential + code execution + + * CVE-2023-1012-001 + HDF5 library versions <=1.14.3 contain a stack buffer overflow in + H5R__decode_heap resulting in the corruption of the instruction + 
pointer and causing denial of service or potential code execution + + * CVE-2023-1010-001 + HDF5 library versions <=1.14.3 contain a stack buffer overflow in + H5FL_arr_malloc resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1009-001 + HDF5 library versions <=1.14.3 contain a stack buffer overflow in + H5FL_arr_malloc resulting in the corruption of the instruction + pointer and causing denial of service or potential code execution + + * CVE-2023-1006-004 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5A__attr_release_table resulting in the corruption of the + instruction pointer and causing denial of service or potential code + execution + + * CVE-2023-1006-003 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5T__bit_find resulting in the corruption of the instruction pointer + and causing denial of service or potential code execution. + + * CVE-2023-1006-002 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5HG_read resulting in the corruption of the instruction pointer + and causing denial of service or potential code execution + + * CVE-2023-1006-001 + HDF5 library versions <=1.14.3 contain a heap buffer overflow in + H5HG__cache_heap_deserialize resulting in the corruption of the + instruction pointer and causing denial of service or potential code + execution + + FULL OFFICIAL HDF5 CVE list (from mitre.org): + + https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=HDF5 + + 1.14.x CVE tracking list: + + https://github.com/HDFGroup/hdf5/blob/hdf5_1_14/CVE_list_1_14.md + + HDF5 CVE regression test suite (includes proof-of-concept files): + + https://github.com/HDFGroup/cve_hdf5 + + - Fixed a divide-by-zero issue when a corrupt file sets the page size to 0 + + If a corrupt file sets the page buffer size in the superblock to zero, + the library could attempt to divide by zero when allocating space in + the file. The library now checks for valid page buffer sizes when + reading the superblock message. + + Fixes oss-fuzz issue 58762 + + - Fixed a bug when using array datatypes with certain parent types + + Array datatype conversion would never use a background buffer, even if the + array's parent type (what the array is an array of) required a background + buffer for conversion. This resulted in crashes in some cases when using + an array of compound, variable length, or reference datatypes. Array types + now use a background buffer if needed by the parent type. + + - Fixed potential buffer read overflows in H5PB_read + + H5PB_read previously did not account for the fact that the size of the + read it's performing could overflow the page buffer pointer, depending + on the calculated offset for the read. This has been fixed by adjusting + the size of the read if it's determined that it would overflow the page. + + - Fixed CVE-2017-17507 + + This CVE was previously declared fixed, but later testing with a static + build of HDF5 showed that it was not fixed. + + When parsing a malformed (fuzzed) compound type containing variable-length + string members, the library could produce a segmentation fault, crashing + the library. + + This was fixed after GitHub PR #4234 + + Fixes GitHub issue #3446 + + - Fixed a cache assert with very large metadata objects + + If the library tries to load a metadata object that is above a + certain size, this would trip an assert in debug builds. 
This could + happen if you create a very large number of links in an old-style + group that uses local heaps. + + There is no need for this assert. The library's metadata cache + can handle large objects. The assert has been removed. + + Fixes GitHub #3762 + + - Fixed an issue with the Subfiling VFD and multiple opens of a + file + + An issue with the way the Subfiling VFD handles multiple opens + of the same file caused the file structures for the extra opens + to occasionally get mapped to an incorrect subfiling context + object. The VFD now correctly maps the file structures for + additional opens of an already open file to the same context + object. + + - Fixed a bug that causes the library to incorrectly identify + the endian-ness of 16-bit and smaller C floating-point datatypes + + When detecting the endian-ness of an in-memory C floating-point + datatype, the library previously always assumed that the type + was at least 32 bits in size. This resulted in invalid memory + accesses and would usually cause the library to identify the + datatype as having an endian-ness of H5T_ORDER_VAX. This has + now been fixed. + + - Fixed a bug that causes an invalid memory access issue when + converting 16-bit floating-point values to integers with the + library's software conversion function + + The H5T__conv_f_i function previously always assumed that + floating-point values were at least 32 bits in size and would + access invalid memory when attempting to convert 16-bit + floating-point values to integers. To fix this, parts of the + H5T__conv_f_i function had to be rewritten, which also resulted + in a significant speedup when converting floating-point values + to integers where the library does not have a hard conversion + path. This is the case for any floating-point values with a + datatype not represented by H5T_NATIVE_FLOAT16 (if _Float16 is + supported), H5T_NATIVE_FLOAT, H5T_NATIVE_DOUBLE or + H5T_NATIVE_LDOUBLE. + + - Fixed a bug that can cause incorrect data when overflows occur + while converting integer values to floating-point values with + the library's software conversion function + + The H5T__conv_i_f function had a bug which previously caused it + to return incorrect data when an overflow occurs and an application's + conversion exception callback function decides not to handle the + overflow. Rather than return positive infinity, the library would + return truncated data. This has now been fixed. + + - Corrected H5Soffset_simple() when offset is NULL + + The reference manual states that the offset parameter of H5Soffset_simple() + can be set to NULL to reset the offset of a simple dataspace to 0. This + has never been true, and passing NULL was regarded as an error. + + The library will now accept NULL for the offset parameter and will + correctly set the offset to zero. + + Fixes HDFFV-9299 + + - Fixed an issue where the Subfiling VFD's context object cache could + grow too large + + The Subfiling VFD keeps a cache of its internal context objects to + speed up access to a context object for a particular file, as well + as access to that object across multiple opens of the same file. + However, opening a large amount of files with the Subfiling VFD over + the course of an application's lifetime could cause this cache to grow + too large and result in the application running out of available MPI + communicator objects. On file close, the Subfiling VFD now simply + evicts context objects out of its cache and frees them. 
It is assumed + that multiple opens of a file will be a less common use case for the + Subfiling VFD, but this can be revisited if it proves to be an issue + for performance. + + - Fixed error when overwriting certain nested variable length types + + Previously, when using a datatype that included a variable length type + within a compound or array within another variable length type, and + overwriting data with a shorter (top level) variable length sequence, an + error could occur. This has been fixed. + + - Take user block into account in H5Dchunk_iter() and H5Dget_chunk_info() + + The address reported by the following functions did not correctly + take the user block into account: + + * H5Dchunk_iter() <-- addr passed to callback + * H5Dget_chunk_info() <-- addr parameter + * H5Dget_chunk_info_by_coord() <-- addr parameter + + This means that these functions reported logical HDF5 file addresses, + which would only be equal to the physical addresses when there is no + user block prepended to the HDF5 file. This is unfortunate, as the + primary use of these functions is to get physical addresses in order + to directly access the chunks. + + The listed functions now correctly take the user block into account, + so they will emit physical addresses that can be used to directly + access the chunks. + + Fixes #3003 + + - Fixed asserts raised by large values of H5Pset_est_link_info() parameters + + If large values for est_num_entries and/or est_name_len were passed + to H5Pset_est_link_info(), the library would attempt to create an + object header NIL message to reserve enough space to hold the links in + compact form (i.e., concatenated), which could exceed allowable object + header message size limits and trip asserts in the library. + + This bug only occurred when using the HDF5 1.8 file format or later and + required the product of the two values to be ~64k more than the size + of any links written to the group, which would cause the library to + write out a too-large NIL spacer message to reserve the space for the + unwritten links. + + The library now inspects the phase change values to see if the dataset + is likely to be compact and checks the size to ensure any NIL spacer + messages won't be larger than the library allows. + + Fixes GitHub #1632 + + - Fixed a bug where H5Tset_fields does not account for any offset + set for a floating-point datatype when determining if values set + for spos, epos, esize, mpos and msize make sense for the datatype + + Previously, H5Tset_fields did not take datatype offsets into account + when determining if the values set make sense for the datatype. + This would cause the function to fail when the precision for a + datatype is correctly set such that the offset bits are not included. + This has now been fixed. + + - Fixed H5Fget_access_plist so that it returns the file locking + settings for a file + + When H5Fget_access_plist (and the internal H5F_get_access_plist) + is called on a file, the returned File Access Property List has + the library's default file locking settings rather than any + settings set for the file. This causes two problems: + + - Opening an HDF5 file through an external link using H5Gopen, + H5Dopen, etc. with H5P_DEFAULT for the Dataset/Group/etc. + Access Property List will cause the external file to be opened + with the library's default file locking settings rather than + inheriting them from the parent file. 
This can be surprising + when a file is opened with file locking disabled, but its + external files are opened with file locking enabled. + + - An application cannot make use of the H5Pset_elink_fapl + function to match file locking settings between an external + file and its parent file without knowing the correct setting + ahead of time, as calling H5Fget_access_plist on the parent + file will not return the correct settings. + + This has been fixed by copying a file's file locking settings + into the newly-created File Access Property List in H5F_get_access_plist. + + This fix partially addresses GitHub issue #4011 + + - Memory usage growth issue + + Starting with the HDF5 1.12.1 release, an issue (GitHub issue #1256) + was observed where running a simple program that has a loop of opening + a file, reading from an object with a variable-length datatype and + then closing the file would result in the process fairly quickly + running out of memory. Upon further investigation, it was determined + that this memory was being kept around in the library's datatype + conversion pathway cache that is used to speed up datatype conversions + which are repeatedly used within an HDF5 application's lifecycle. For + conversions involving variable-length or reference datatypes, each of + these cached pathway entries keeps a reference to its associated file + for later use. Since the file was being closed and reopened on each + loop iteration, and since the library compares for equality between + instances of opened files (rather than equality of the actual files) + when determining if it can reuse a cached conversion pathway, it was + determining that no cached conversion pathways could be reused and was + creating a new cache entry on each loop iteration during I/O. This + would lead to constant growth of that cache and the memory it consumed, + as well as constant growth of the memory consumed by each cached entry + for the reference to its associated file. + + To fix this issue, the library now removes any cached datatype + conversion path entries for variable-length or reference datatypes + associated with a particular file when that file is closed. + + Fixes GitHub #1256 + + - Suppressed floating-point exceptions in H5T init code + + The floating-point datatype initialization code in H5Tinit_float.c + could raise FE_INVALID exceptions while munging bits and performing + comparisons that might involve NaN. This was not a problem when the + initialization code was executed in H5detect at compile time (prior + to 1.14.3), but now that the code is executed at library startup + (1.14.3+), these exceptions can be caught by user code, as is the + default in the NAG Fortran compiler. + + Starting in 1.14.4, we now suppress floating-point exceptions while + initializing the floating-point types and clear FE_INVALID before + restoring the original environment. + + Fixes GitHub #3831 + + - Fixed a file handle leak in the core VFD + + When opening a file with the core VFD and a file image, if the file + already exists, the file check would leak the POSIX file handle. + + Fixes GitHub issue #635 + + - Dropped support for MPI-2 + + The MPI-2 supporting artifacts have been removed due to the cessation + of MPI-2 maintenance and testing since version HDF5 1.12. 
+ + + - Fixed a segfault when using a user-defined conversion function between compound datatypes + + During type info initialization for compound datatype conversion, the library checked if the + datatypes are subsets of one another in order to perform special conversion handling. + This check uses information that is only defined if a library conversion function is in use. + The library now skips this check for user-defined conversion functions. + + Fixes Github issue #3840 + + Java Library + ------------ + - + + + Configuration + ------------- + - Changed default of 'Error on HDF5 doxygen warnings' DOXYGEN_WARN_AS_ERROR option. + + The default setting of DOXYGEN_WARN_AS_ERROR to 'FAIL_ON_WARNINGS' has been changed + to 'NO'. It was decided that the setting was too aggressive and should be a user choice. + The github actions and scripts have been updated to reflect this. + + * HDF5_ENABLE_DOXY_WARNINGS: ON/OFF (Default: OFF) + * --enable-doxygen-errors: enable/disable (Default: disable) + + - Removed an Autotools configure hack that causes problems on MacOS + + A sed line in configure.ac was added in the past to paper over some + problems with older versions of the Autotools that would add incorrect + linker flags. This hack is not needed with recent versions of the + Autotools and the sed line errors on MacOS (though this was a silent + error that didn't break the build) so the hack has been removed. + + Fixes GitHub issue #3843 + + - Fixed an issue where the h5tools_test_utils test program was being + installed on the system for Autotools builds of HDF5 + + The h5tools_test_utils test program was mistakenly added to bin_PROGRAMS + in its Makefile.am configuration file, causing the executable to be + installed on the system. The executable is now added to noinst_PROGRAMS + instead and will no longer be installed on the system for Autotools builds + of HDF5. The CMake configuration code already avoids installing the + executable on the system. + + + Tools + ----- + - Renamed h5fuse.sh to h5fuse + + Addresses Discussion #3791 + + + Performance + ------------- + - + + + Fortran API + ----------- + - Fixed: HDF5 fails to compile with -Werror=lto-type-mismatch + + Removed the use of the offending C stub wrapper. + + Fixes GitHub issue #3987 + + + High-Level Library + ------------------ + - Fixed a memory leak in H5LTopen_file_image with H5LT_FILE_IMAGE_DONT_COPY flag + + When the H5LT_FILE_IMAGE_DONT_COPY flag is passed to H5LTopen_file_image, the + internally-allocated udata structure gets leaked as the core file driver doesn't + have a way to determine when or if it needs to call the "udata_free" callback. + This has been fixed by freeing the udata structure when the "image_free" callback + gets made during file close, where the file is holding the last reference to the + udata structure. + + Fixes GitHub issue #827 + + + Fortran High-Level APIs + ----------------------- + - + + + Documentation + ------------- + - + + + F90 APIs + -------- + - + + + C++ APIs + -------- + - + + + Testing + ------- + - Fixed a bug in the dt_arith test when H5_WANT_DCONV_EXCEPTION is not + defined + + The dt_arith test program's test_particular_fp_integer sub-test tries + to ensure that the library correctly raises a datatype conversion + exception when converting a floating-point value to an integer overflows. + However, this test would run even when H5_WANT_DCONV_EXCEPTION isn't + defined, causing the test to fail due to the library not raising + datatype conversion exceptions. 
This has now been fixed by not running + the test when H5_WANT_DCONV_EXCEPTION is not defined. + + - Fixed a testing failure in testphdf5 on Cray machines + + On some Cray machines, what appears to be a bug in Cray MPICH was causing + calls to H5Fis_accessible to create a 0-byte file with strange Unix + permissions. This was causing an H5Fdelete file deletion test in the + testphdf5 program to fail due to a just-deleted HDF5 file appearing to + still be accessible on the file system. The issue in Cray MPICH has been + worked around for the time being by resetting the MPI_Info object on the + File Access Property List used to MPI_INFO_NULL before passing it to the + H5Fis_accessible call. + + - A bug was fixed in the HDF5 API test random datatype generation code + + A bug in the random datatype generation code could cause test failures + when trying to generate an enumeration datatype that has duplicated + name/value pairs in it. This has now been fixed. + + - A bug was fixed in the HDF5 API test VOL connector registration checking code + + The HDF5 API test code checks to see if the VOL connector specified by the + HDF5_VOL_CONNECTOR environment variable (if any) is registered with the library + before attempting to run tests with it so that testing can be skipped and an + error can be returned when a VOL connector fails to register successfully. + Previously, this code didn't account for VOL connectors that specify extra + configuration information in the HDF5_VOL_CONNECTOR environment variable and + would incorrectly report that the specified VOL connector isn't registered + due to including the configuration information as part of the VOL connector + name being checked for registration status. This has now been fixed. + + - Fixed Fortran 2003 test with gfortran-v13, optimization levels O2,O3 + + Fixes failing Fortran 2003 test with gfortran, optimization level O2,O3 + with -fdefault-real-16. Fixes GH #2928. + + +Platforms Tested +=================== + + - HDF5 supports the latest macOS versions, including the current and two + preceding releases. As new major macOS versions become available, HDF5 + will discontinue support for the oldest version and add the latest + version to its list of compatible systems, along with the previous two + releases. 
+ + Linux 5.16.14-200.fc35 GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + Fedora35 clang version 13.0.0 (Fedora 13.0.0-3.fc35) + (cmake and autotools) + + Linux 5.19.0-1023-aws GNU gcc, gfortran, g++ + #24-Ubuntu SMP x86_64 GNU/Linux (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 + Ubuntu 22.04 Ubuntu clang version 14.0.0-1ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 + ifort (IFORT) 2021.9.0 20230302 + (cmake and autotools) + + Linux 5.14.21-cray_shasta_c cray-mpich/8.1.23 + #1 SMP x86_64 GNU/Linux cce/15.0.0 + (frontier) gcc/12.2.0 + (cmake) + + Linux 5.11.0-34-generic GNU gcc (GCC) 9.4.0-1ubuntu1 + #36-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 9.4.0-1ubuntu1 + Ubuntu 20.04 Ubuntu clang version 10.0.0-4ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 + ifort (IFORT) 2021.9.0 20230302 + (cmake and autotools) + + Linux 4.14.0-115.35.1.1chaos aue/openmpi/4.1.4-arm-22.1.0.12 + #1 SMP aarch64 GNU/Linux Arm C/C++/Fortran Compiler version 22.1 + (stria) (based on LLVM 13.0.1) + (cmake) + + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1 + (vortex) GCC 8.3.1 + XL 2021.09.22 + (cmake) + + Linux-4.14.0-115.21.2 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1, 14.0.5 + (lassen) GCC 8.3.1 + XL 16.1.1.2, 2021.09.22, 2022.08.05 + (cmake) + + Linux-4.12.14-197.99-default cray-mpich/7.7.14 + #1 SMP x86_64 GNU/Linux cce 12.0.3 + (theta) GCC 11.2.0 + llvm 9.0 + Intel 19.1.2 + + Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + + Linux 3.10.0-1160.24.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) + (jelly/kituo/moohan) Version 4.9.3, Version 7.2.0, Version 8.3.0, + Version 9.1.0, Version 10.2.0 + Intel(R) C (icc), C++ (icpc), Fortran (icc) + compilers: + Version 17.0.0.098 Build 20160721 + GNU C (gcc) and C++ (g++) 4.8.5 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + MPICH 3.1.4 compiled with GCC 4.9.3 + MPICH 3.3 compiled with GCC 7.2.0 + OpenMPI 3.1.3 compiled with GCC 7.2.0 and 4.1.2 + compiled with GCC 9.1.0 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Versions 18.4.0 and 19.10-0 + NVIDIA nvc, nvfortran and nvc++ version 22.5-0 + (autotools and cmake) + + + Linux-3.10.0-1160.0.0.1chaos openmpi-4.1.2 + #1 SMP x86_64 GNU/Linux clang 6.0.0, 11.0.1 + (quartz) GCC 7.3.0, 8.1.0 + Intel 19.0.4, 2022.2, oneapi.2022.2 + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (skybridge) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (attaway) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi-intel/4.1 + #1 SMP x86_64 GNU/Linux Intel/19.1.2, 21.3.0 and 22.2.0 + (chama) (cmake) + + macOS Apple M1 11.6 Apple clang version 12.0.5 (clang-1205.0.22.11) + Darwin 20.6.0 arm64 gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0 + (macmini-m1) Intel icc/icpc/ifort version 2021.3.0 202106092021.3.0 20210609 + + macOS Big Sur 11.3.1 Apple clang version 12.0.5 (clang-1205.0.22.9) + Darwin 20.4.0 x86_64 gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0 + (bigsur-1) Intel icc/icpc/ifort 
version 2021.2.0 20210228 + + Mac OS X El Capitan 10.11.6 Apple clang version 7.3.0 from Xcode 7.3 + 64-bit gfortran GNU Fortran (GCC) 5.2.0 + (osx1011test) Intel icc/icpc/ifort version 16.0.2 + + Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos6 Version 4.4.7 20120313 + (platypus) Version 4.9.3, 5.3.0, 6.2.0 + MPICH 3.1.4 compiled with GCC 4.9.3 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + + Windows 10 x64 Visual Studio 2019 w/ clang 12.0.0 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2019 w/ Intel (C/C++ only - cmake) + Visual Studio 2022 w/ clang 15.0.1 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2022 w/ Intel C/C++/Fortran oneAPI 2023 (cmake) + Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake) + + +Known Problems +============== + + - When building with the NAG Fortran compiler using the Autotools and libtool + 2.4.2 or earlier, the -shared flag will be missing '-Wl,', which will cause + compilation to fail. This is due to a bug in libtool that was fixed in 2012 + and released in 2.4.4 in 2014. + + - When the library detects and builds in support for the _Float16 datatype, an + issue has been observed on at least one MacOS 14 system where the library + fails to initialize due to not being able to detect the byte order of the + _Float16 type (https://github.com/HDFGroup/hdf5/issues/4310): + + #5: H5Tinit_float.c line 308 in H5T__fix_order(): failed to detect byte order + major: Datatype + minor: Unable to initialize object + + If this issue is encountered, support for the _Float16 type can be disabled + with a configuration option: + + CMake: HDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16=OFF + Autotools: --disable-nonstandard-feature-float16 + + - When HDF5 is compiled with NVHPC versions 23.5 - 23.9 (additional versions may + also be applicable) and with -O2 (or higher) and -DNDEBUG, test failures occur + in the following tests: + + H5PLUGIN-filter_plugin + H5TEST-flush2 + H5TEST-testhdf5-base + MPI_TEST_t_filters_parallel + + Sporadic failures (even with lower -O levels): + Java JUnit-TestH5Pfapl + Java JUnit-TestH5D + + Also, NVHPC will fail to compile the test/tselect.c test file with a compiler + error of 'use of undefined value' when the optimization level is -O2 or higher. + + This is confirmed to be a bug in the nvc compiler that has been fixed as of + 23.11. If you are using an affected version of the NVidia compiler, the + work-around is to set the optimization level to -O1. + + https://forums.developer.nvidia.com/t/hdf5-no-longer-compiles-with-nv-23-9/269045 + + - CMake files do not behave correctly with paths containing spaces. + Do not use spaces in paths because the required escaping for handling spaces + results in very complex and fragile build files. + + - At present, metadata cache images may not be generated by parallel + applications. Parallel applications can read files with metadata cache + images, but since this is a collective operation, a deadlock is possible + if one or more processes do not participate. + + - The subsetting option in ph5diff currently will fail and should be avoided. + The subsetting option works correctly in serial h5diff. + + - Flang Fortran compilation will fail (last check version 17) due to not yet + implemented: (1) derived type argument passed by value (H5VLff.F90), + and (2) support for REAL with KIND = 2 in intrinsic SPACING used in testing. 
+ + - Fortran tests HDF5_1_8.F90 and HDF5_F03.F90 will fail with Cray compilers + greater than version 16.0 due to a compiler bug. The latest version verified + as failing was version 17.0. + + - Several tests currently fail on certain platforms: + MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms. + + MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with + cray-mpich on theta and with XL compilers on ppc64le platforms. + + MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on cori and theta. + + - File space may not be released when overwriting or deleting certain nested + variable length or reference types. + + - Known problems in previous releases can be found in the HISTORY*.txt files + in the HDF5 source. Please report any new problems found to + help@hdfgroup.org. + + +CMake vs. Autotools installations +================================= +While both build systems produce similar results, there are differences. +Each system produces the same set of folders on Linux (only CMake works +on standard Windows); bin, include, lib and share. Autotools places the +COPYING and RELEASE.txt file in the root folder, CMake places them in +the share folder. + +The bin folder contains the tools and the build scripts. Additionally, CMake +creates dynamic versions of the tools with the suffix "-shared". Autotools +installs one set of tools depending on the "--enable-shared" configuration +option. + build scripts + ------------- + Autotools: h5c++, h5cc, h5fc + CMake: h5c++, h5cc, h5hlc++, h5hlcc + +The include folder holds the header files and the fortran mod files. CMake +places the fortran mod files into separate shared and static subfolders, +while Autotools places one set of mod files into the include folder. Because +CMake produces a tools library, the header files for tools will appear in +the include folder. + +The lib folder contains the library files, and CMake adds the pkgconfig +subfolder with the hdf5*.pc files used by the bin/build scripts created by +the CMake build. CMake separates the C interface code from the fortran code by +creating C-stub libraries for each Fortran library. In addition, only CMake +installs the tools library. The names of the szip libraries are different +between the build systems. + +The share folder will have the most differences because CMake builds include +a number of CMake specific files for support of CMake's find_package and support +for the HDF5 Examples CMake project. + +The issues with the gif tool are: + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 +These CVE issues have not yet been addressed and are avoided by not building +the gif tool by default. Enable building the High-Level tools with these options: + autotools: --enable-hlgiftools + cmake: HDF5_BUILD_HL_GIF_TOOLS=ON + + +%%%%1.14.3%%%% + +HDF5 version 1.14.3 released on 2023-10-27 +================================================================================ + + +INTRODUCTION +============ + +This document describes the differences between this release and the previous +HDF5 release. It contains information on the platforms tested and known +problems in this release. For more details check the HISTORY*.txt files in the +HDF5 source. + +Note that documentation in the links below will be updated at the time of each +final release. 
+ +Links to HDF5 documentation can be found on The HDF5 web page: + + https://portal.hdfgroup.org/display/HDF5/HDF5 + +The official HDF5 releases can be obtained from: + + https://www.hdfgroup.org/downloads/hdf5/ + +Changes from release to release and new features in the HDF5-1.14.x release series +can be found at: + + https://portal.hdfgroup.org/display/HDF5/Release+Specific+Information + +If you have any questions or comments, please send them to the HDF Help Desk: + + help@hdfgroup.org + + +CONTENTS +======== + +- New Features +- Support for new platforms and languages +- Bug Fixes since HDF5-1.14.2 +- Platforms Tested +- Known Problems +- CMake vs. Autotools installations + + +New Features +============ + + Configuration: + ------------- + - Improved support for Intel oneAPI + + * Separates the old 'classic' Intel compiler settings and warnings + from the oneAPI settings + * Uses `-check nouninit` in debug builds to avoid false positives + when building H5_buildiface with `-check all` + * Both Autotools and CMake + + - Added new options for CMake and Autotools to control the Doxygen + warnings as errors setting. + + * HDF5_ENABLE_DOXY_WARNINGS: ON/OFF (Default: ON) + * --enable-doxygen-errors: enable/disable (Default: enable) + + The default will fail to compile if the doxygen parsing generates warnings. + The option can be disabled for certain versions of doxygen with parsing + issues. i.e. 1.9.5, 1.9.8. + + Addresses GitHub issue #3398 + + - Added support for AOCC and classic Flang w/ the Autotools + + * Adds a config/clang-fflags options file to support Flang + * Corrects missing "-Wl," from linker options in the libtool wrappers + when using Flang, the MPI Fortran compiler wrappers, and building + the shared library. This would often result in unrecognized options + like -soname. + * Enable -nomp w/ Flang to avoid linking to the OpenMPI library. + + CMake can build the parallel, shared library w/ Fortran using AOCC + and Flang, so no changes were needed for that build system. + + Fixes GitHub issues #3439, #1588, #366, #280 + + - Converted the build of libaec and zlib to use FETCH_CONTENT with CMake. + + Using the CMake FetchContent module, the external filters can populate + content at configure time via any method supported by the ExternalProject + module. Whereas ExternalProject_Add() downloads at build time, the + FetchContent module makes content available immediately, allowing the + configure step to use the content in commands like add_subdirectory(), + include() or file() operations. + + Removed HDF options for using FETCH_CONTENT explicitly: + BUILD_SZIP_WITH_FETCHCONTENT:BOOL + BUILD_ZLIB_WITH_FETCHCONTENT:BOOL + + - Thread-safety + static library disabled on Windows w/ CMake + + The thread-safety feature requires hooks in DllMain(), which is only + present in the shared library. + + We previously just warned about this, but now any CMake configuration + that tries to build thread-safety and the static library will fail. + This cannot be overridden with ALLOW_UNSUPPORTED. + + Fixes GitHub issue #3613 + + - Autotools builds now build the szip filter by default when an appropriate + library is found + + Since libaec is prevalent and BSD-licensed for both encoding and + decoding, we build the szip filter by default now. + + Both autotools and CMake build systems will process the szip filter the same as + the zlib filter is processed. 
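+      For example, a build can be pointed at a specific szip-capable library
+      explicitly. The lines below are only an illustration (DIR stands in for
+      a local libaec/szip install prefix; the option names are those used by
+      the 1.14 Autotools and CMake builds):
+
+        Autotools: ./configure --with-szlib=DIR
+        CMake:     -DHDF5_ENABLE_SZIP_SUPPORT:BOOL=ON -DHDF5_ENABLE_SZIP_ENCODING:BOOL=ON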
+ + - Removed CMake cross-compiling variables + + * HDF5_USE_PREGEN + * HDF5_BATCH_H5DETECT + + These were used to work around H5detect and H5make_libsettings and + are no longer required. + + - Running H5make_libsettings is no longer required for cross-compiling + + The functionality of H5make_libsettings is now handled via template files, + so H5make_libsettings has been removed. + + - Running H5detect is no longer required for cross-compiling + + The functionality of H5detect is now exercised at library startup, + so H5detect has been removed. + + + Library: + -------- + - Added a simple cache to the read-only S3 (ros3) VFD + + The read-only S3 VFD now caches the first N bytes of a file stored + in S3 to avoid a lot of small I/O operations when opening files. + This cache is per-file and created when the file is opened. + + N is currently 16 MiB or the size of the file, whichever is smaller. + + Addresses GitHub issue #3381 + + - Added new API function H5Pget_actual_selection_io_mode() + + This function allows the user to determine if the library performed + selection I/O, vector I/O, or scalar (legacy) I/O during the last HDF5 + operation performed with the provided DXPL. + + + Parallel Library: + ----------------- + - Added optimized support for the parallel compression feature when + using the multi-dataset I/O API routines collectively + + Previously, calling H5Dwrite_multi/H5Dread_multi collectively in parallel + with a list containing one or more filtered datasets would cause HDF5 to + break out of the optimized multi-dataset I/O mode and instead perform I/O + by looping over each dataset in the I/O request. The library has now been + updated to perform I/O in a more optimized manner in this case by first + performing I/O on all the filtered datasets at once and then performing + I/O on all the unfiltered datasets at once. + + - Changed H5Pset_evict_on_close so that it can be called with a parallel + build of HDF5 + + Previously, H5Pset_evict_on_close would always fail when called from a + parallel build of HDF5, stating that the feature is not supported with + parallel HDF5. This failure would occur even if a parallel build of HDF5 + was used with a serial HDF5 application. H5Pset_evict_on_close can now + be called regardless of the library build type and the library will + instead fail during H5Fcreate/H5Fopen if the "evict on close" property + has been set to true and the file is being opened for parallel access + with more than 1 MPI process. + + + Fortran Library: + ---------------- + - Fixed an uninitialized error return value for hdferr + to return the error state of the h5aopen_by_idx_f API. + + - Added h5pget_vol_cap_flags_f and related Fortran VOL + capability definitions. + + - Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added. + + - Added Fortran APIs: + h5pset_selection_io_f, h5pget_selection_io_f, + h5pget_actual_selection_io_mode_f, + h5pset_modify_write_buf_f, h5pget_modify_write_buf_f + + - Added Fortran APIs: + h5get_free_list_sizes_f, h5dwrite_chunk_f, h5dread_chunk_f, + h5fget_info_f, h5lvisit_f, h5lvisit_by_name_f, + h5pget_no_selection_io_cause_f, h5pget_mpio_no_collective_cause_f, + h5sselect_shape_same_f, h5sselect_intersect_block_f, + h5pget_file_space_page_size_f, h5pset_file_space_page_size_f, + h5pget_file_space_strategy_f, h5pset_file_space_strategy_f + + - Removed "-commons" linking option on Darwin, as COMMON and EQUIVALENCE + are no longer used in the Fortran source. 
+ + Fixes GitHub issue #3571 + + C++ Library: + ------------ + - + + + Java Library: + ------------- + - + + + Tools: + ------ + - + + + High-Level APIs: + ---------------- + - Added Fortran HL API: h5doappend_f + + + C Packet Table API: + ------------------- + - + + + Internal header file: + --------------------- + - + + + Documentation: + -------------- + - + + +Support for new platforms, languages and compilers +================================================== + - + + +Bug Fixes since HDF5-1.14.2 release +=================================== + Library + ------- + - Fixed some issues with chunk index metadata not getting read + collectively when collective metadata reads are enabled + + When looking up dataset chunks during I/O, the parallel library + temporarily disables collective metadata reads since it's generally + unlikely that the application will read the same chunks from all + MPI ranks. Leaving collective metadata reads enabled during + chunk lookups can lead to hangs or other bad behavior depending + on the chunk indexing structure used for the dataset in question. + However, due to the way that dataset chunk index metadata was + previously loaded in a deferred manner, this could mean that + the metadata for the main chunk index structure or its + accompanying pieces of metadata (e.g., fixed array data blocks) + could end up being read independently if these chunk lookup + operations are the first chunk index-related operation that + occurs on a dataset. This behavior is generally observed when + opening a dataset for which the metadata isn't in the metadata + cache yet and then immediately performing I/O on that dataset. + This behavior is not generally observed when creating a dataset + and then performing I/O on it, as the relevant metadata will + usually be in the metadata cache as a side effect of creating + the chunk index structures during dataset creation. + + This issue has been fixed by adding callbacks to the different + chunk indexing structure classes that allow more explicit control + over when chunk index metadata gets loaded. When collective + metadata reads are enabled, the necessary index metadata will now + get loaded collectively by all MPI ranks at the start of dataset + I/O to ensure that the ranks don't unintentionally read this + metadata independently further on. These changes fix collective + loading of the main chunk index structure, as well as v2 B-tree + root nodes, extensible array index blocks and fixed array data + blocks. There are still pieces of metadata that cannot currently + be loaded collectively, however, such as extensible array data + blocks, data block pages and super blocks, as well as fixed array + data block pages. These pieces of metadata are not necessarily + read in by all MPI ranks since this depends on which chunks the + ranks have selected in the dataset. Therefore, reading of these + pieces of metadata remains an independent operation. + + - Fixed potential hangs in parallel library during collective I/O with + independent metadata writes + + When performing collective parallel writes to a dataset where metadata + writes are requested as (or left as the default setting of) independent, + hangs could potentially occur during metadata cache sync points. This + was due to incorrect management of the internal state tracking whether + an I/O operation should be collective or not, causing the library to + attempt collective writes of metadata when they were meant to be + independent writes. 
During the metadata cache sync points, if the number + of cache entries being flushed was a multiple of the number of MPI ranks + in the MPI communicator used to access the HDF5 file, an equal amount of + collective MPI I/O calls were made and the dataset write call would be + successful. However, when the number of cache entries being flushed was + NOT a multiple of the number of MPI ranks, the ranks with more entries + than others would get stuck in an MPI_File_set_view call, while other + ranks would get stuck in a post-write MPI_Barrier call. This issue has + been fixed by correctly switching to independent I/O temporarily when + writing metadata independently during collective dataset I/O. + + - Fixed a bug with the way the Subfiling VFD assigns I/O concentrators + + During a file open operation, the Subfiling VFD determines the topology + of the application and uses that to select a subset of MPI ranks that + I/O will be forwarded to, called I/O concentrators. The code for this + had previously assumed that the parallel job launcher application (e.g., + mpirun, srun, etc.) would distribute MPI ranks sequentially to a node's + processors until all processors on that node have been assigned before + going on to the next node. When the launcher application mapped MPI ranks + to nodes in a different fashion, such as round-robin, this could cause + the Subfiling VFD to incorrectly map MPI ranks as I/O concentrators, + leading to missing subfiles. + + - Fixed a file space allocation bug in the parallel library for chunked + datasets + + With the addition of support for incremental file space allocation for + chunked datasets with filters applied to them that are created/accessed + in parallel, a bug was introduced to the library's parallel file space + allocation code. This could cause file space to not be allocated correctly + for datasets without filters applied to them that are created with serial + file access and later opened with parallel file access. In turn, this could + cause parallel writes to those datasets to place incorrect data in the file. + + - Fixed an assertion failure in Parallel HDF5 when a file can't be created + due to an invalid library version bounds setting + + An assertion failure could occur in H5MF_settle_raw_data_fsm when a file + can't be created with Parallel HDF5 due to specifying the use of a paged, + persistent file free space manager + (H5Pset_file_space_strategy(..., H5F_FSPACE_STRATEGY_PAGE, 1, ...)) with + an invalid library version bounds combination + (H5Pset_libver_bounds(..., H5F_LIBVER_EARLIEST, H5F_LIBVER_V18)). This + has now been fixed. + + - Fixed an assertion in a previous fix for CVE-2016-4332 + + An assert could fail when processing corrupt files that have invalid + shared message flags (as in CVE-2016-4332). + + The assert statement in question has been replaced with pointer checks + that don't raise errors. Since the function is in cleanup code, we do + our best to close and free things, even when presented with partially + initialized structs. + + Fixes CVE-2016-4332 and HDFFV-9950 (confirmed via the cve_hdf5 repo) + + - Fixed performance regression with some compound type conversions + + In-place type conversion was introduced for most use cases in 1.14.2. + While being able to use the read buffer for type conversion potentially + improves performance by performing the entire I/O at once, it also + disables the optimized compound type conversion used when the destination + is a subset of the source. 
Disabled in-place type conversion when using + this optimized conversion and there is no benefit in terms of the I/O + size. + + - Reading a H5std_string (std::string) via a C++ DataSet previously + truncated the string at the first null byte as if reading a C string. + Fixed length datasets are now read into H5std_string as a fixed length + string of the appropriate size. Variable length datasets will still be + truncated at the first null byte. + + Fixes Github issue #3034 + + - Fixed write buffer overflow in H5O__alloc_chunk + + The overflow was found by OSS-Fuzz https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=58658 + + Java Library + ------------ + - + + + Configuration + ------------- + - Fixes the ordering of INCLUDES when building with CMake + + Include directories in the source or build tree should come before other + directories to prioritize headers in the sources over installed ones. + + Fixes GitHub #1027 + + - The accum test now passes on macOS 12+ (Monterey) w/ CMake + + Due to changes in the way macOS handles LD_LIBRARY_PATH, the accum test + started failing on macOS 12+ when building with CMake. CMake has been + updated to set DYLD_LIBRARY_PATH on macOS and the test now passes. + + Fixes GitHub #2994, #2261, and #1289 + + - Changed the default settings used by CMake for the GZIP filter + + The default for the option HDF5_ENABLE_Z_LIB_SUPPORT was OFF. Now the default is ON. + This was done to match the defaults used by the autotools configure.ac. + In addition, the CMake message level for not finding a suitable filter library was + changed from FATAL_ERROR (which would halt the build process) to WARNING (which + will print a message to stderr). Associated files and documentation were changed to match. + + In addition, the default settings in the config/cmake/cacheinit.cmake file were changed to + allow CMake to disable building the filters if the tgz file could not be found. The option + to allow CMake to download the file from the original Github location requires setting + the ZLIB_USE_LOCALCONTENT option to OFF for gzip. And setting the LIBAEC_USE_LOCALCONTENT + option to OFF for libaec (szip). + + Fixes GitHub issue #2926 + + + Tools + ----- + - Fixed an issue with unmatched MPI messages in ph5diff + + The "manager" MPI rank in ph5diff was unintentionally sending "program end" + messages to its workers twice, leading to an error from MPICH similar to the + following: + + Abort(810645519) on node 1 (rank 1 in comm 0): Fatal error in internal_Finalize: Other MPI error, error stack: + internal_Finalize(50)...........: MPI_Finalize failed + MPII_Finalize(394)..............: + MPIR_Comm_delete_internal(1224).: Communicator (handle=44000000) being freed has 1 unmatched message(s) + MPIR_Comm_release_always(1250)..: + MPIR_finalize_builtin_comms(154): + + + Performance + ------------- + - + + + Fortran API + ----------- + - + + + High-Level Library + ------------------ + - + + + Fortran High-Level APIs + ----------------------- + - + + + Documentation + ------------- + - + + + F90 APIs + -------- + - + + + C++ APIs + -------- + - + + + Testing + ------- + - Disabled running of MPI Atomicity tests for OpenMPI major versions < 5 + + Support for MPI atomicity operations is not implemented for major + versions of OpenMPI less than version 5. This would cause the MPI + atomicity tests for parallel HDF5 to sporadically fail when run + with OpenMPI. Testphdf5 now checks if OpenMPI is being used and will + skip running the atomicity tests if the major version of OpenMPI is + < 5. 
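+      For context, the atomicity support exercised by these tests is the
+      feature an application requests through H5Fset_mpi_atomicity() and
+      H5Fget_mpi_atomicity(). A minimal sketch of that usage (the file name is
+      illustrative; assumes a parallel HDF5 build and an MPI library that
+      implements atomicity):
+
+          #include "hdf5.h"
+          #include <mpi.h>
+
+          int main(int argc, char **argv)
+          {
+              MPI_Init(&argc, &argv);
+
+              hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
+              H5Pset_fapl_mpio(fapl_id, MPI_COMM_WORLD, MPI_INFO_NULL);
+
+              hid_t file_id = H5Fcreate("example_atomic.h5", H5F_ACC_TRUNC,
+                                        H5P_DEFAULT, fapl_id);
+
+              /* Ask for atomic MPI I/O semantics for subsequent raw data access */
+              H5Fset_mpi_atomicity(file_id, 1);
+
+              hbool_t atomic = 0;
+              H5Fget_mpi_atomicity(file_id, &atomic); /* 1 if the request took effect */
+
+              H5Fclose(file_id);
+              H5Pclose(fapl_id);
+              MPI_Finalize();
+              return 0;
+          }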
+ + - Fixed Fortran 2003 test with gfortran-v13, optimization levels O2,O3 + + Fixes failing Fortran 2003 test with gfortran, optimization level O2,O3 + with -fdefault-real-16. Fixes GH #2928. + + +Platforms Tested +=================== + + Linux 5.19.0-1023-aws GNU gcc, gfortran, g++ + #24-Ubuntu SMP x86_64 GNU/Linux (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 + Ubuntu 22.04 Ubuntu clang version 14.0.0-1ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 + ifort (IFORT) 2021.9.0 20230302 + (cmake and autotools) + + Linux 5.16.14-200.fc35 GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + Fedora35 clang version 13.0.0 (Fedora 13.0.0-3.fc35) + (cmake and autotools) + + Linux 5.14.21-cray_shasta_c cray-mpich/8.1.27 + #1 SMP x86_64 GNU/Linux cce/15.0.0 + (frontier) gcc/12.2.0 + (cmake) + + Linux 5.11.0-34-generic GNU gcc (GCC) 9.4.0-1ubuntu1 + #36-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 9.4.0-1ubuntu1 + Ubuntu 20.04 Ubuntu clang version 10.0.0-4ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 + ifort (IFORT) 2021.9.0 20230302 + (cmake and autotools) + + Linux 4.14.0-115.35.1.1chaos aue/openmpi/4.1.4-arm-22.1.0.12 + #1 SMP aarch64 GNU/Linux Arm C/C++/Fortran Compiler version 22.1 + (stria) (based on LLVM 13.0.1) + (cmake) + + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1 + (vortex) GCC 8.3.1 + XL 2021.09.22 + (cmake) + + Linux-4.14.0-115.21.2 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1, 14.0.5 + (lassen) GCC 8.3.1 + XL 16.1.1.2, 2021.09.22, 2022.08.05 + (cmake) + + Linux-4.12.14-197.99-default cray-mpich/7.7.14 + #1 SMP x86_64 GNU/Linux cce 12.0.3 + (theta) GCC 11.2.0 + llvm 9.0 + Intel 19.1.2 + + Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + + Linux 3.10.0-1160.24.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) + (jelly/kituo/moohan) Version 4.9.3, Version 7.2.0, Version 8.3.0, + Version 9.1.0, Version 10.2.0 + Intel(R) C (icc), C++ (icpc), Fortran (icc) + compilers: + Version 17.0.0.098 Build 20160721 + GNU C (gcc) and C++ (g++) 4.8.5 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + MPICH 3.1.4 compiled with GCC 4.9.3 + MPICH 3.3 compiled with GCC 7.2.0 + OpenMPI 3.1.3 compiled with GCC 7.2.0 and 4.1.2 + compiled with GCC 9.1.0 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Versions 18.4.0 and 19.10-0 + NVIDIA nvc, nvfortran and nvc++ version 22.5-0 + (autotools and cmake) + + + Linux-3.10.0-1160.0.0.1chaos openmpi-4.1.2 + #1 SMP x86_64 GNU/Linux clang 6.0.0, 11.0.1 + (quartz) GCC 7.3.0, 8.1.0 + Intel 19.0.4, 2022.2, oneapi.2022.2 + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (skybridge) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (attaway) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi-intel/4.1 + #1 SMP x86_64 GNU/Linux Intel/19.1.2, 21.3.0 and 22.2.0 + (chama) (cmake) + + macOS Apple M1 11.6 Apple clang version 12.0.5 (clang-1205.0.22.11) + Darwin 20.6.0 arm64 gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0 + (macmini-m1) Intel 
icc/icpc/ifort version 2021.3.0 202106092021.3.0 20210609 + + macOS Big Sur 11.3.1 Apple clang version 12.0.5 (clang-1205.0.22.9) + Darwin 20.4.0 x86_64 gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0 + (bigsur-1) Intel icc/icpc/ifort version 2021.2.0 20210228 + + Mac OS X El Capitan 10.11.6 Apple clang version 7.3.0 from Xcode 7.3 + 64-bit gfortran GNU Fortran (GCC) 5.2.0 + (osx1011test) Intel icc/icpc/ifort version 16.0.2 + + Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos6 Version 4.4.7 20120313 + (platypus) Version 4.9.3, 5.3.0, 6.2.0 + MPICH 3.1.4 compiled with GCC 4.9.3 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + + Windows 10 x64 Visual Studio 2019 w/ clang 12.0.0 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2019 w/ Intel oneAPI 2023.2 C/C++ only - cmake) + Visual Studio 2022 w/ clang 16.0.5 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2022 w/ Intel oneAPI 2023.2 (C/C++ only - cmake) + Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake) + + +Known Problems +============== + + Building HDF5 Fortran on Windows with Intel oneAPI 2023.2 currently fails for + this release with link errors. As a result, Windows binaries for this release + will not include Fortran. The problem will be addressed in HDF5 1.14.4. + + IEEE standard arithmetic enables software to raise exceptions such as overflow, + division by zero, and other illegal operations without interrupting or halting + the program flow. The HDF5 C library intentionally performs these exceptions. + Therefore, the "-ieee=full" nagfor switch is necessary when compiling a program + to avoid stopping on an exception. + + CMake files do not behave correctly with paths containing spaces. + Do not use spaces in paths because the required escaping for handling spaces + results in very complex and fragile build files. + ADB - 2019/05/07 + + At present, metadata cache images may not be generated by parallel + applications. Parallel applications can read files with metadata cache + images, but since this is a collective operation, a deadlock is possible + if one or more processes do not participate. + + CPP ptable test fails on both VS2017 and VS2019 with Intel compiler, JIRA + issue: HDFFV-10628. This test will pass with VS2015 with Intel compiler. + + The subsetting option in ph5diff currently will fail and should be avoided. + The subsetting option works correctly in serial h5diff. + + Several tests currently fail on certain platforms: + MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms. + + MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with + cray-mpich on theta and with XL compilers on ppc64le platforms. + + MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on cori and theta. + + Known problems in previous releases can be found in the HISTORY*.txt files + in the HDF5 source. Please report any new problems found to + help@hdfgroup.org. + + +CMake vs. Autotools installations +================================= +While both build systems produce similar results, there are differences. +Each system produces the same set of folders on linux (only CMake works +on standard Windows); bin, include, lib and share. Autotools places the +COPYING and RELEASE.txt file in the root folder, CMake places them in +the share folder. + +The bin folder contains the tools and the build scripts. Additionally, CMake +creates dynamic versions of the tools with the suffix "-shared". 
Autotools +installs one set of tools depending on the "--enable-shared" configuration +option. + build scripts + ------------- + Autotools: h5c++, h5cc, h5fc + CMake: h5c++, h5cc, h5hlc++, h5hlcc + +The include folder holds the header files and the fortran mod files. CMake +places the fortran mod files into separate shared and static subfolders, +while Autotools places one set of mod files into the include folder. Because +CMake produces a tools library, the header files for tools will appear in +the include folder. + +The lib folder contains the library files, and CMake adds the pkgconfig +subfolder with the hdf5*.pc files used by the bin/build scripts created by +the CMake build. CMake separates the C interface code from the fortran code by +creating C-stub libraries for each Fortran library. In addition, only CMake +installs the tools library. The names of the szip libraries are different +between the build systems. + +The share folder will have the most differences because CMake builds include +a number of CMake specific files for support of CMake's find_package and support +for the HDF5 Examples CMake project. + +The issues with the gif tool are: + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 +These CVE issues have not yet been addressed and are avoided by not building +the gif tool by default. Enable building the High-Level tools with these options: + autotools: --enable-hlgiftools + cmake: HDF5_BUILD_HL_GIF_TOOLS=ON + + +%%%%1.14.2%%%% + +HDF5 version 1.14.2 released on 2023-08-11 +================================================================================ + + +INTRODUCTION +============ + +This document describes the differences between this release and the previous +HDF5 release. It contains information on the platforms tested and known +problems in this release. For more details check the HISTORY*.txt files in the +HDF5 source. + +Note that documentation in the links below will be updated at the time of each +final release. + +Links to HDF5 documentation can be found on The HDF5 web page: + + https://portal.hdfgroup.org/display/HDF5/HDF5 + +The official HDF5 releases can be obtained from: + + https://www.hdfgroup.org/downloads/hdf5/ + +Changes from release to release and new features in the HDF5-1.14.x release series +can be found at: + + https://portal.hdfgroup.org/display/HDF5/Release+Specific+Information + +If you have any questions or comments, please send them to the HDF Help Desk: + + help@hdfgroup.org + + +CONTENTS +======== + +- New Features +- Support for new platforms and languages +- Bug Fixes since HDF5-1.14.1 +- Platforms Tested +- Known Problems +- CMake vs. Autotools installations + + +New Features +============ + + Configuration: + ------------- + - Updated HDF5 API tests CMake code to support VOL connectors + + * Implemented support for fetching, building and testing HDF5 + VOL connectors during the library build process and documented + the feature under doc/cmake-vols-fetchcontent.md + + * Implemented the HDF5_TEST_API_INSTALL option that enables + installation of the HDF5 API tests on the system + + + Library: + -------- + - Added support for in-place type conversion in most cases + + In-place type conversion allows the library to perform type conversion + without an intermediate type conversion buffer. This can improve + performance by allowing I/O in a single operation over the entire + selection instead of being limited by the size of the intermediate buffer. 
+ Implemented for I/O on contiguous and chunked datasets when the selection + is contiguous in memory and when the memory datatype is not smaller than + the file datatype. + + - Changed selection I/O to be on by default when using the MPIO file driver + + - Added support for selection I/O in the MPIO file driver + + Previously, only vector I/O operations were supported. Support for + selection I/O should improve performance and reduce memory use in some + cases. + + - Changed the error handling for a not-found path in the plugin search process. + + Previously, while attempting to load a plugin, the HDF5 library would fail if + one of the directories in the plugin search path did not exist, even if there + were more paths to check. Instead of exiting the function with an error, the + library now logs the error and continues processing the remaining paths in the list. + + + Parallel Library: + ----------------- + - + + + Fortran Library: + ---------------- + - + + + C++ Library: + ------------ + - + + + Java Library: + ------------- + - + + + Tools: + ------ + - + + + High-Level APIs: + ---------------- + - + + + C Packet Table API: + ------------------- + - + + + Internal header file: + --------------------- + - + + + Documentation: + -------------- + - + + +Support for new platforms, languages and compilers +================================================== + - Linux 5.14.21-cray_shasta_c + #1 SMP x86_64 GNU/Linux + (frontier) + + +Bug Fixes since HDF5-1.14.1 release +=================================== + Library + ------- + - Fixed bugs in selection I/O + + Previously, the library could fail in some cases when performing selection + I/O with type conversion. + + - Fixed CVE-2018-13867 + + A corrupt file containing an invalid local heap datablock address + could trigger an assert failure when the metadata cache attempted + to load the datablock from storage. + + The local heap now verifies that the datablock address is valid + when the local heap header information is parsed. + + - Fixed CVE-2018-11202 + + A malformed file could result in chunk index memory leaks. Under most + conditions (i.e., when the --enable-using-memchecker option is NOT + used), this would result in a small memory leak and an infinite loop + and abort when shutting down the library. The infinite loop would be + due to the "free list" package not being able to clear its resources + so the library couldn't shut down. When the "using a memory checker" + option is used, the free lists are disabled so there is just a memory + leak with no abort on library shutdown. + + The chunk index resources are now correctly cleaned up when reading + misparsed files and valgrind confirms no memory leaks. + + - Fixed an issue where an assert statement was converted to an + incorrect error check statement + + An assert statement in the library dealing with undefined dataset data + fill values was converted to an improper error check that would always + trigger when a dataset's fill value was set to NULL (undefined). This + has now been fixed. + + - Fixed an assertion failure when attempting to use the Subfiling IOC + VFD directly + + The Subfiling feature makes use of two Virtual File Drivers, the + Subfiling VFD and the IOC (I/O Concentrator) VFD. The two VFDs are + intended to be stacked together such that the Subfiling VFD sits + "on top" of the IOC VFD and routes I/O requests through it; using the + IOC VFD alone is currently unsupported. The IOC VFD has been fixed so + that an error message is displayed in this situation rather than causing + an assertion failure.
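+      For clarity, the supported pattern is to select the Subfiling VFD on a
+      File Access Property List and let it stack onto the IOC VFD internally.
+      A minimal sketch of that pattern (the file name is illustrative; assumes
+      an MPI-enabled build with the Subfiling VFD compiled in):
+
+          #include "hdf5.h"
+          #include "H5FDsubfiling.h"
+          #include <mpi.h>
+
+          int main(int argc, char **argv)
+          {
+              int provided = 0;
+
+              /* The Subfiling VFD requires MPI_THREAD_MULTIPLE */
+              MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
+
+              hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
+              H5Pset_mpi_params(fapl_id, MPI_COMM_WORLD, MPI_INFO_NULL);
+
+              /* NULL selects the default subfiling configuration; the
+               * Subfiling VFD stacks itself on top of the IOC VFD */
+              H5Pset_fapl_subfiling(fapl_id, NULL);
+
+              hid_t file_id = H5Fcreate("example_subfiled.h5", H5F_ACC_TRUNC,
+                                        H5P_DEFAULT, fapl_id);
+
+              /* ... dataset creation and I/O ... */
+
+              H5Fclose(file_id);
+              H5Pclose(fapl_id);
+              MPI_Finalize();
+              return 0;
+          }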
+ + - Fixed a potential bug when copying empty enum datatypes + + Copying an empty enum datatype (including implicitly, as when an enum + is a part of a compound datatype) would fail in an assert in debug + mode and could fail in release mode depending on how the platform + handles undefined behavior regarding size 0 memory allocations and + using memcpy with a NULL src pointer. + + The library is now more careful about using memory operations when + copying empty enum datatypes and will not error or raise an assert. + + - Added an AAPL check to H5Acreate + + A check was added to H5Acreate to ensure that a failure is correctly + returned when an invalid Attribute Access Property List is passed + in to the function. The HDF5 API tests were failing for certain + build types due to this condition not being checked previously. + + + Java Library + ------------ + - Fixed switch case 'L' block missing a break statement. + + The HDF5Array.arrayify method is missing a break statement in the case 'L': section + which causes it to fall through and throw an HDF5JavaException when attempting to + read an Array[Array[Long]]. + + The error was fixed by inserting a break statement at the end of the case 'L': sections. + + Fixes GitHub issue #3056 + + + Configuration + ------------- + - Fixed a configuration issue that prevented building of the Subfiling VFD on macOS + + Checks were added to the CMake and Autotools code to verify that CLOCK_MONOTONIC_COARSE, + PTHREAD_MUTEX_ADAPTIVE_NP and pthread_condattr_setclock() are available before attempting + to use them in Subfiling VFD-related utility code. Without these checks, attempting + to build the Subfiling VFD on macOS would fail. + + + Tools + ----- + - Fixed an issue in h5repack for variable-length typed datasets + + When repacking datasets into a new file, h5repack tries to determines whether + it can use H5Ocopy to copy each dataset into the new file, or if it needs to + manually re-create the dataset, then read data from the old dataset and write + it to the new dataset. H5repack was previously using H5Ocopy for datasets with + variable-length datatypes, but this can be problematic if the global heap + addresses involved do not match exactly between the old and new files. These + addresses could change for a variety of reasons, such as the command-line options + provided to h5repack, how h5repack allocates space in the repacked file, etc. + Since H5Ocopy does not currently perform any translation when these addresses + change, datasets that were repacked with H5Ocopy could become unreadable in the + new file. H5repack has been fixed to repack variable-length typed datasets without + using H5Ocopy to ensure that the new datasets always have the correct global heap + addresses. + + + Performance + ------------- + - + + + Fortran API + ----------- + - + + High-Level Library + ------------------ + - + + + Fortran High-Level APIs + ----------------------- + - + + + Documentation + ------------- + - + + + F90 APIs + -------- + - + + + C++ APIs + -------- + - + + + Testing + ------- + - Fixed a testing failure in testphdf5 on Cray machines + + On some Cray machines, what appears to be a bug in Cray MPICH was causing + calls to H5Fis_accessible to create a 0-byte file with strange Unix + permissions. This was causing an H5Fdelete file deletion test in the + testphdf5 program to fail due to a just-deleted HDF5 file appearing to + still be accessible on the file system. 
The issue in Cray MPICH has been + worked around for the time being by resetting the MPI_Info object on the + File Access Property List used to MPI_INFO_NULL before passing it to the + H5Fis_accessible call. + + - A bug was fixed in the HDF5 API test random datatype generation code + + A bug in the random datatype generation code could cause test failures + when trying to generate an enumeration datatype that has duplicated + name/value pairs in it. This has now been fixed. + + - A bug was fixed in the HDF5 API test VOL connector registration checking code + + The HDF5 API test code checks to see if the VOL connector specified by the + HDF5_VOL_CONNECTOR environment variable (if any) is registered with the library + before attempting to run tests with it so that testing can be skipped and an + error can be returned when a VOL connector fails to register successfully. + Previously, this code didn't account for VOL connectors that specify extra + configuration information in the HDF5_VOL_CONNECTOR environment variable and + would incorrectly report that the specified VOL connector isn't registered + due to including the configuration information as part of the VOL connector + name being checked for registration status. This has now been fixed. + + +Platforms Tested +=================== + + Linux 5.19.0-1023-aws GNU gcc, gfortran, g++ + #24-Ubuntu SMP x86_64 GNU/Linux (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 + Ubuntu 22.04 Ubuntu clang version 14.0.0-1ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 + ifort (IFORT) 2021.9.0 20230302 + (cmake and autotools) + + Linux 5.16.14-200.fc35 GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + Fedora35 clang version 13.0.0 (Fedora 13.0.0-3.fc35) + (cmake and autotools) + + Linux 5.14.21-cray_shasta_c cray-mpich/8.1.23 + #1 SMP x86_64 GNU/Linux cce/15.0.0 + (frontier) gcc/12.2.0 + (cmake) + + Linux 5.11.0-34-generic GNU gcc (GCC) 9.4.0-1ubuntu1 + #36-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 9.4.0-1ubuntu1 + Ubuntu 20.04 Ubuntu clang version 10.0.0-4ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.1.0 + ifort (IFORT) 2021.9.0 20230302 + (cmake and autotools) + + Linux 4.14.0-115.35.1.1chaos aue/openmpi/4.1.4-arm-22.1.0.12 + #1 SMP aarch64 GNU/Linux Arm C/C++/Fortran Compiler version 22.1 + (stria) (based on LLVM 13.0.1) + (cmake) + + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1 + (vortex) GCC 8.3.1 + XL 2021.09.22 + (cmake) + + Linux-4.14.0-115.21.2 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1, 14.0.5 + (lassen) GCC 8.3.1 + XL 16.1.1.2, 2021.09.22, 2022.08.05 + (cmake) + + Linux-4.12.14-197.99-default cray-mpich/7.7.14 + #1 SMP x86_64 GNU/Linux cce 12.0.3 + (theta) GCC 11.2.0 + llvm 9.0 + Intel 19.1.2 + + Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + + Linux 3.10.0-1160.24.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) + (jelly/kituo/moohan) Version 4.9.3, Version 7.2.0, Version 8.3.0, + Version 9.1.0, Version 10.2.0 + Intel(R) C (icc), C++ (icpc), Fortran (icc) + compilers: + Version 17.0.0.098 Build 20160721 + GNU C (gcc) and C++ (g++) 4.8.5 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + Intel(R) C (icc) and C++ 
(icpc) 17.0.0.098 compilers + with NAG Fortran Compiler Release 7.1(Hanzomon) + MPICH 3.1.4 compiled with GCC 4.9.3 + MPICH 3.3 compiled with GCC 7.2.0 + OpenMPI 3.1.3 compiled with GCC 7.2.0 and 4.1.2 + compiled with GCC 9.1.0 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Versions 18.4.0 and 19.10-0 + NVIDIA nvc, nvfortran and nvc++ version 22.5-0 + (autotools and cmake) + + + Linux-3.10.0-1160.0.0.1chaos openmpi-4.1.2 + #1 SMP x86_64 GNU/Linux clang 6.0.0, 11.0.1 + (quartz) GCC 7.3.0, 8.1.0 + Intel 19.0.4, 2022.2, oneapi.2022.2 + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (skybridge) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (attaway) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.90.1.1chaos openmpi-intel/4.1 + #1 SMP x86_64 GNU/Linux Intel/19.1.2, 21.3.0 and 22.2.0 + (chama) (cmake) + + macOS Apple M1 11.6 Apple clang version 12.0.5 (clang-1205.0.22.11) + Darwin 20.6.0 arm64 gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0 + (macmini-m1) Intel icc/icpc/ifort version 2021.3.0 202106092021.3.0 20210609 + + macOS Big Sur 11.3.1 Apple clang version 12.0.5 (clang-1205.0.22.9) + Darwin 20.4.0 x86_64 gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0 + (bigsur-1) Intel icc/icpc/ifort version 2021.2.0 20210228 + + Mac OS X El Capitan 10.11.6 Apple clang version 7.3.0 from Xcode 7.3 + 64-bit gfortran GNU Fortran (GCC) 5.2.0 + (osx1011test) Intel icc/icpc/ifort version 16.0.2 + + + Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos6 Version 4.4.7 20120313 + (platypus) Version 4.9.3, 5.3.0, 6.2.0 + MPICH 3.1.4 compiled with GCC 4.9.3 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + + Windows 10 x64 Visual Studio 2019 w/ clang 12.0.0 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2019 w/ Intel C/C++ only cmake) + Visual Studio 2022 w/ clang 15.0.1 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2022 w/ Intel C/C++/Fortran oneAPI 2023 (cmake) + Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake) + + +Known Problems +============== + + CMake files do not behave correctly with paths containing spaces. + Do not use spaces in paths because the required escaping for handling spaces + results in very complex and fragile build files. + ADB - 2019/05/07 + + At present, metadata cache images may not be generated by parallel + applications. Parallel applications can read files with metadata cache + images, but since this is a collective operation, a deadlock is possible + if one or more processes do not participate. + + CPP ptable test fails on both VS2017 and VS2019 with Intel compiler, JIRA + issue: HDFFV-10628. This test will pass with VS2015 with Intel compiler. + + The subsetting option in ph5diff currently will fail and should be avoided. + The subsetting option works correctly in serial h5diff. + + Several tests currently fail on certain platforms: + MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms. + + MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with + cray-mpich on theta and with XL compilers on ppc64le platforms. + + MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on theta. + + Known problems in previous releases can be found in the HISTORY*.txt files + in the HDF5 source. Please report any new problems found to + help@hdfgroup.org. + + +CMake vs. 
Autotools installations +================================= +While both build systems produce similar results, there are differences. +Each system produces the same set of folders on linux (only CMake works +on standard Windows); bin, include, lib and share. Autotools places the +COPYING and RELEASE.txt file in the root folder, CMake places them in +the share folder. + +The bin folder contains the tools and the build scripts. Additionally, CMake +creates dynamic versions of the tools with the suffix "-shared". Autotools +installs one set of tools depending on the "--enable-shared" configuration +option. + build scripts + ------------- + Autotools: h5c++, h5cc, h5fc + CMake: h5c++, h5cc, h5hlc++, h5hlcc + +The include folder holds the header files and the fortran mod files. CMake +places the fortran mod files into separate shared and static subfolders, +while Autotools places one set of mod files into the include folder. Because +CMake produces a tools library, the header files for tools will appear in +the include folder. + +The lib folder contains the library files, and CMake adds the pkgconfig +subfolder with the hdf5*.pc files used by the bin/build scripts created by +the CMake build. CMake separates the C interface code from the fortran code by +creating C-stub libraries for each Fortran library. In addition, only CMake +installs the tools library. The names of the szip libraries are different +between the build systems. + +The share folder will have the most differences because CMake builds include +a number of CMake specific files for support of CMake's find_package and support +for the HDF5 Examples CMake project. + +The issues with the gif tool are: + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 +These CVE issues have not yet been addressed and are avoided by not building +the gif tool by default. Enable building the High-Level tools with these options: + autotools: --enable-hlgiftools + cmake: HDF5_BUILD_HL_GIF_TOOLS=ON + + +%%%%1.14.1%%%% + +HDF5 version 1.14.1-2 released on 2023-05-11 +================================================================================ +HDF5 1.14.1-2 is a patch release for HDF5 1.14.1. The only change in the patch +release is that Autoconf 2.71 was used to generate the Autotools build files, +which allows building with Intel's oneAPI. + + +INTRODUCTION +============ + +This document describes the differences between this release and the previous +HDF5 release. It contains information on the platforms tested and known +problems in this release. For more details check the HISTORY*.txt files in the +HDF5 source. + +Note that documentation in the links below will be updated at the time of each +final release. + +Links to HDF5 documentation can be found on The HDF5 web page: + + https://portal.hdfgroup.org/display/HDF5/HDF5 + +The official HDF5 releases can be obtained from: + + https://www.hdfgroup.org/downloads/hdf5/ + +Changes from release to release and new features in the HDF5-1.14.x release series +can be found at: + + https://portal.hdfgroup.org/display/HDF5/Release+Specific+Information + +If you have any questions or comments, please send them to the HDF Help Desk: + + help@hdfgroup.org + + +CONTENTS +======== + +- New Features +- Support for new platforms and languages +- Bug Fixes since HDF5-1.14.0 +- Platforms Tested +- Known Problems +- CMake vs. 
Autotools installations + + +New Features +============ + + Configuration: + ------------- + - Added new CMake options for building and running HDF5 API tests + (Experimental) + + HDF5 API tests are an experimental feature, primarily targeted + toward HDF5 VOL connector authors, that is currently being developed. + These tests exercise the HDF5 API and are being integrated back + into the HDF5 library from the HDF5 VOL tests repository + (https://github.com/HDFGroup/vol-tests). To support this feature, + the following new options have been added to CMake: + + * HDF5_TEST_API: ON/OFF (Default: OFF) + + Controls whether the HDF5 API tests will be built. These tests + will only be run during testing of HDF5 if the HDF5_TEST_SERIAL + (for serial tests) and HDF5_TEST_PARALLEL (for parallel tests) + options are enabled. + + * HDF5_TEST_API_INSTALL: ON/OFF (Default: OFF) + + Controls whether the HDF5 API test executables will be installed + on the system alongside the HDF5 library. This option is currently + not functional. + + * HDF5_TEST_API_ENABLE_ASYNC: ON/OFF (Default: OFF) + + Controls whether the HDF5 Async API tests will be built. These + tests will only be run if the VOL connector used supports Async + operations. + + * HDF5_TEST_API_ENABLE_DRIVER: ON/OFF (Default: OFF) + + Controls whether to build the HDF5 API test driver program. This + test driver program is useful for VOL connectors that use a + client/server model where the server needs to be up and running + before the VOL connector can function. This option is currently + not functional. + + * HDF5_TEST_API_SERVER: String (Default: "") + + Used to specify a path to the server executable that the test + driver program should execute. + + - Added support for CMake presets file. + + CMake supports two main files, CMakePresets.json and CMakeUserPresets.json, + that allow users to specify common configure options and share them with others. + HDF added a CMakePresets.json file of a typical configuration and support + file, config/cmake-presets/hidden-presets.json. + Also added a section to INSTALL_CMake.txt with very basic explanation of the + process to use CMakePresets. + + - Deprecated and removed old SZIP library in favor of LIBAEC library + + LIBAEC library has been used in HDF5 binaries as the szip library of choice + for a few years. We are removing the options for using the old SZIP library. + + Also removed the config/cmake/FindSZIP.cmake file. + + - Enabled instrumentation of the library by default in CMake for parallel + debug builds + + HDF5 can be configured to instrument portions of the parallel library to + aid in debugging. Autotools builds of HDF5 turn this capability on by + default for parallel debug builds and off by default for other build types. + CMake has been updated to match this behavior. + + - Added new option to build libaec and zlib inline with CMake. + + Using the CMake FetchContent module, the external filters can populate + content at configure time via any method supported by the ExternalProject + module. Whereas ExternalProject_Add() downloads at build time, the + FetchContent module makes content available immediately, allowing the + configure step to use the content in commands like add_subdirectory(), + include() or file() operations. 
+ + The HDF options (and defaults) for using this are: + BUILD_SZIP_WITH_FETCHCONTENT:BOOL=OFF + LIBAEC_USE_LOCALCONTENT:BOOL=OFF + BUILD_ZLIB_WITH_FETCHCONTENT:BOOL=OFF + ZLIB_USE_LOCALCONTENT:BOOL=OFF + + The CMake variables to control the path and file names: + LIBAEC_TGZ_ORIGPATH:STRING + LIBAEC_TGZ_ORIGNAME:STRING + ZLIB_TGZ_ORIGPATH:STRING + ZLIB_TGZ_ORIGNAME:STRING + + See the CMakeFilters.cmake and config/cmake/cacheinit.cmake files for usage. + + + Library: + -------- + - Added a Subfiling VFD configuration file prefix environment variable + + The Subfiling VFD now checks for values set in a new environment + variable "H5FD_SUBFILING_CONFIG_FILE_PREFIX" to determine if the + application has specified a pathname prefix to apply to the file + path for its configuration file. For example, this can be useful + for cases where the application wishes to write subfiles to a + machine's node-local storage while placing the subfiling configuration + file on a file system readable by all machine nodes. + + - Added H5Pset_selection_io(), H5Pget_selection_io(), and + H5Pget_no_selection_io_cause() API functions to manage the selection I/O + feature. This can be used to enable collective I/O with type conversion, + or it can be used with custom VFDs that support vector or selection I/O. + + - Added H5Pset_modify_write_buf() and H5Pget_modify_write_buf() API + functions to allow the library to modify the contents of write buffers, in + order to avoid malloc/memcpy. Currently only used for type conversion + with selection I/O. + + + Parallel Library: + ----------------- + - + + + Fortran Library: + ---------------- + - Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added. + + - Added Fortran APIs: + h5pset_selection_io_f, h5pget_selection_io_f + h5pset_modify_write_buf_f, h5pget_modify_write_buf_f + + C++ Library: + ------------ + - + + + Java Library: + ------------- + - + + + Tools: + ------ + - + + + High-Level APIs: + ---------------- + - + + + C Packet Table API: + ------------------- + - + + + Internal header file: + --------------------- + - + + + Documentation: + -------------- + - Ported the existing VOL Connector Author Guide document to doxygen. + + Added new dox file, VOLConnGuide.dox. + + +Support for new platforms, languages and compilers +================================================== + - + + +Bug Fixes since HDF5-1.14.0 release +=================================== + Library + ------- + - Fixed a bug in H5Ocopy that could generate invalid HDF5 files + + H5Ocopy was missing a check to determine whether the new object's + object header version is greater than version 1. Without this check, + copying of objects with object headers that are smaller than a + certain size would cause H5Ocopy to create an object header for the + new object that has a gap in the header data. According to the + HDF5 File Format Specification, this is not allowed for version + 1 of the object header format. + + Fixes GitHub issue #2653 + + - Fixed H5Pget_vol_cap_flags and H5Pget_vol_id to accept H5P_DEFAULT + + H5Pget_vol_cap_flags and H5Pget_vol_id were updated to correctly + accept H5P_DEFAULT for the 'plist_id' FAPL parameter. Previously, + they would fail if provided with H5P_DEFAULT as the FAPL. 
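+
+      As an illustration only (not code from the library's tests), both calls
+      can now be made directly against the default property list:
+
+          uint64_t cap_flags = 0;
+          hid_t    vol_id    = H5I_INVALID_HID;
+
+          if (H5Pget_vol_cap_flags(H5P_DEFAULT, &cap_flags) < 0)
+              /* handle error */ ;
+          if (H5Pget_vol_id(H5P_DEFAULT, &vol_id) < 0)
+              /* handle error */ ;
+
+          /* vol_id refers to the connector set on the default FAPL and should
+           * be released with H5VLclose() when no longer needed. */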
+ + - Fixed ROS3 VFD anonymous credential usage with h5dump and h5ls + + ROS3 VFD anonymous credential functionality became broken in h5dump + and h5ls in the HDF5 1.14.0 release with the added support for VFD + plugins, which changed the way that the tools handled setting of + credential information that the VFD uses. The tools could be + provided the command-line option of "--s3-cred=(,,)" as a workaround + for anonymous credential usage, but the documentation for this + option stated that anonymous credentials could be used by simply + omitting the option. The latter functionality has been restored. + + Fixes GitHub issue #2406 + + - Fixed memory leaks when processing malformed object header continuation messages + + Malformed object header continuation messages can result in a too-small + buffer being passed to the decode function, which could lead to reading + past the end of the buffer. Additionally, errors in processing these + malformed messages can lead to allocated memory not being cleaned up. + + This fix adds bounds checking and cleanup code to the object header + continuation message processing. + + Fixes GitHub issue #2604 + + - Fixed memory leaks, aborts, and overflows in H5O EFL decode + + The external file list code could call assert(), read past buffer + boundaries, and not properly clean up resources when parsing malformed + external data files messages. + + This fix cleans up allocated memory, adds buffer bounds checks, and + converts asserts to HDF5 error checking. + + Fixes GitHub issue #2605 + + - Fixed potential heap buffer overflow in decoding of link info message + + Detections of buffer overflow were added for decoding version, index + flags, link creation order value, and the next three addresses. The + checkings will remove the potential invalid read of any of these + values that could be triggered by a malformed file. + + Fixes GitHub issue #2603 + + - Memory leak + + Memory leak was detected when running h5dump with "pov". The memory was allocated + via H5FL__malloc() in hdf5/src/H5FL.c + + The fuzzed file "pov" was an HDF5 file containing an illegal continuation message. + When deserializing the object header chunks for the file, memory is allocated for the + array of continuation messages (cont_msg_info->msgs) in continuation message info struct. + As error is encountered in loading the illegal message, the memory allocated for + cont_msg_info->msgs needs to be freed. + + Fixes GitHub issue #2599 + + - Fixed memory leaks that could occur when reading a dataset from a + malformed file + + When attempting to read layout, pline, and efl information for a + dataset, memory leaks could occur if attempting to read pline/efl + information threw an error, which is due to the memory that was + allocated for pline and efl not being properly cleaned up on error. + + Fixes GitHub issue #2602 + + - Fixed potential heap buffer overrun in group info header decoding from malformed file + + H5O__ginfo_decode could sometimes read past allocated memory when parsing a + group info message from the header of a malformed file. + + It now checks buffer size before each read to properly throw an error in these cases. + + Fixes GitHub issue #2601 + + - Fixed potential buffer overrun issues in some object header decode routines + + Several checks were added to H5O__layout_decode and H5O__sdspace_decode to + ensure that memory buffers don't get overrun when decoding buffers read from + a (possibly corrupted) HDF5 file. 
+ + - Fixed issues in the Subfiling VFD when using the SELECT_IOC_EVERY_NTH_RANK + or SELECT_IOC_TOTAL I/O concentrator selection strategies + + Multiple bugs involving these I/O concentrator selection strategies + were fixed, including: + + * A bug that caused the selection strategy to be altered when + criteria for the strategy was specified in the + H5FD_SUBFILING_IOC_SELECTION_CRITERIA environment variable as + a single value, rather than in the old and undocumented + 'integer:integer' format + * Two bugs which caused a request for 'N' I/O concentrators to + result in 'N - 1' I/O concentrators being assigned, which also + lead to issues if only 1 I/O concentrator was requested + + Also added a regression test for these two I/O concentrator selection + strategies to prevent future issues. + + - Fixed a heap buffer overflow that occurs when reading from + a dataset with a compact layout within a malformed HDF5 file + + During opening of a dataset that has a compact layout, the + library allocates a buffer that stores the dataset's raw data. + The dataset's object header that gets written to the file + contains information about how large of a buffer the library + should allocate. If this object header is malformed such that + it causes the library to allocate a buffer that is too small + to hold the dataset's raw data, future I/O to the dataset can + result in heap buffer overflows. To fix this issue, an extra + check is now performed for compact datasets to ensure that + the size of the allocated buffer matches the expected size + of the dataset's raw data (as calculated from the dataset's + dataspace and datatype information). If the two sizes do not + match, opening of the dataset will fail. + + Fixes GitHub issue #2606 + + - Fixed a memory corruption issue that can occur when reading + from a dataset using a hyperslab selection in the file + dataspace and a point selection in the memory dataspace + + When reading from a dataset using a hyperslab selection in + the dataset's file dataspace and a point selection in the + dataset's memory dataspace where the file dataspace's "rank" + is greater than the memory dataspace's "rank", memory corruption + could occur due to an incorrect number of selection points + being copied when projecting the point selection onto the + hyperslab selection's dataspace. + + - Fixed an issue with collective metadata writes of global heap data + + New test failures in parallel netCDF started occurring with debug + builds of HDF5 due to an assertion failure and this was reported in + GitHub issue #2433. The assertion failure began happening after the + collective metadata write pathway in the library was updated to use + vector I/O so that parallel-enabled HDF5 Virtual File Drivers (other + than the existing MPI I/O VFD) can support collective metadata writes. + + The assertion failure was fixed by updating collective metadata writes + to treat global heap metadata as raw data, as done elsewhere in the + library. + + Fixes GitHub issue #2433 + + - Fix CVE-2021-37501 / GHSA-rfgw-5vq3-wrjf + + Check for overflow when calculating on-disk attribute data size. + + A bogus hdf5 file may contain dataspace messages with sizes + which lead to the on-disk data sizes to exceed what is addressable. + When calculating the size, make sure, the multiplication does not + overflow. + The test case was crafted in a way that the overflow caused the + size to be 0. + + Fixes GitHub issue #2458 + + - Fixed buffer overflow error in image decoding function. 
+ + The error occurred in the function for decoding address from the specified + buffer, which is called many times from the function responsible for image + decoding. The length of the buffer is known in the image decoding function, + but no checks are produced, so the buffer overflow can occur in many places, + including callee functions for address decoding. + + The error was fixed by inserting corresponding checks for buffer overflow. + + Fixes GitHub issue #2432 + + + Java Library + ------------ + - + + + Configuration + ------------- + - Fixed syntax of generator expressions used by CMake + + Add quotes around the generator expression should allow CMake to + correctly parse the expression. Generator expressions are typically + parsed after command arguments. If a generator expression contains + spaces, new lines, semicolons or other characters that may be + interpreted as command argument separators, the whole expression + should be surrounded by quotes when passed to a command. Failure to + do so may result in the expression being split and it may no longer + be recognized as a generator expression. + + Fixes GitHub issue #2906 + + - Fixed improper include of Subfiling VFD build directory + + With the release of the Subfiling Virtual File Driver feature, compiler + flags were added to the Autotools build's CPPFLAGS and AM_CPPFLAGS + variables to always include the Subfiling VFD source code directory, + regardless of whether the VFD is enabled and built or not. These flags + are needed because the header files for the VFD contain macros that are + assumed to always be available, such as H5FD_SUBFILING_NAME, so the + header files are unconditionally included in the HDF5 library. However, + these flags are only needed when building HDF5, so they belong in the + H5_CPPFLAGS variable instead. Inclusion in the CPPFLAGS and AM_CPPFLAGS + variables would export these flags to the h5cc and h5c++ wrapper scripts, + as well as the libhdf5.settings file, which would break builds of software + that use HDF5 and try to use or parse information out of these files after + deleting temporary HDF5 build directories. + + Fixes GitHub issues #2422 and #2621 + + - Correct the CMake generated pkg-config file + + The pkg-config file generated by CMake had the order and placement of the + libraries wrong. Also added support for debug library names. + + Changed the order of Libs.private libraries so that dependencies come after + dependents. Did not move the compression libraries into Requires.private + because there was not a way to determine if the compression libraries had + supported pkconfig files. Still recommend that the CMake config file method + be used for building projects with CMake. + + Fixes GitHub issues #1546 and #2259 + + - Force lowercase Fortran module file names + + The Cray Fortran compiler uses uppercase Fortran module file names, which + caused CMake installs to fail. A compiler option was added to use lowercase + instead. + + + Tools + ----- + - Names of objects with square brackets will have trouble without the + special argument, --no-compact-subset, on the h5dump command line. + + h5diff did not have this option and now it has been added. + + Fixes GitHub issue #2682 + + - In the tools traverse function - an error in either visit call + will bypass the cleanup of the local data variables. + + Replaced the H5TOOLS_GOTO_ERROR with just H5TOOLS_ERROR. 
+ + Fixes GitHub issue #2598 + + + Performance + ------------- + - + + + Fortran API + ----------- + - + + High-Level Library + ------------------ + - + + + Fortran High-Level APIs + ----------------------- + - + + + Documentation + ------------- + - + + + F90 APIs + -------- + - + + + C++ APIs + -------- + - + + + Testing + ------- + - + + +Platforms Tested +=================== + + Linux 5.16.14-200.fc35 GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + Fedora35 clang version 13.0.0 (Fedora 13.0.0-3.fc35) + (cmake and autotools) + + Linux 5.11.0-34-generic GNU gcc (GCC) 9.3.0-17ubuntu1 + #36-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 9.3.0-17ubuntu1 + Ubuntu 20.04 Ubuntu clang version 10.0.0-4 + (cmake and autotools) + + Linux 5.3.18-150300-cray_shasta_c cray-mpich/8.1.16 + #1 SMP x86_64 GNU/Linux Cray clang 14.0.0 + (crusher) GCC 11.2.0 + (cmake) + + Linux 4.14.0-115.35.1.1chaos openmpi 4.0.5 + #1 SMP aarch64 GNU/Linux GCC 9.2.0 (ARM-build-5) + (stria) GCC 7.2.0 (Spack GCC) + (cmake) + + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1 + (vortex) GCC 8.3.1 + XL 16.1.1 + (cmake) + + Linux-4.14.0-115.21.2 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1, 14.0.5 + (lassen) GCC 8.3.1 + XL 16.1.1.2, 2021,09.22, 2022.08.05 + (cmake) + + Linux-4.12.14-197.99-default cray-mpich/7.7.14 + #1 SMP x86_64 GNU/Linux cce 12.0.3 + (theta) GCC 11.2.0 + llvm 9.0 + Intel 19.1.2 + + Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + + Linux 3.10.0-1160.24.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) + (jelly/kituo/moohan) Version 4.9.3, Version 5.3.0, Version 6.3.0, + Version 7.2.0, Version 8.3.0, Version 9.1.0 + Intel(R) C (icc), C++ (icpc), Fortran (icc) + compilers: + Version 17.0.0.098 Build 20160721 + GNU C (gcc) and C++ (g++) 4.8.5 compilers + with NAG Fortran Compiler Release 6.1(Tozai) + Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers + with NAG Fortran Compiler Release 6.1(Tozai) + MPICH 3.1.4 compiled with GCC 4.9.3 + MPICH 3.3 compiled with GCC 7.2.0 + OpenMPI 2.1.6 compiled with icc 18.0.1 + OpenMPI 3.1.3 and 4.0.0 compiled with GCC 7.2.0 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + (autotools and cmake) + + Linux-3.10.0-1160.0.0.1chaos openmpi-4.1.2 + #1 SMP x86_64 GNU/Linux clang 6.0.0, 11.0.1 + (quartz) GCC 7.3.0, 8.1.0 + Intel 19.0.4, 2022.2, oneapi.2022.2 + + Linux-3.10.0-1160.71.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (skybridge) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.66.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (attaway) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.59.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux Intel/19.1 + (chama) (cmake) + + macOS Apple M1 11.6 Apple clang version 12.0.5 (clang-1205.0.22.11) + Darwin 20.6.0 arm64 gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0 + (macmini-m1) Intel icc/icpc/ifort version 2021.3.0 202106092021.3.0 20210609 + + macOS Big Sur 11.3.1 Apple clang version 12.0.5 (clang-1205.0.22.9) + Darwin 20.4.0 x86_64 gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0 + (bigsur-1) Intel icc/icpc/ifort version 2021.2.0 20210228 + + macOS High Sierra 10.13.6 Apple LLVM version 
10.0.0 (clang-1000.10.44.4) + 64-bit gfortran GNU Fortran (GCC) 6.3.0 + (bear) Intel icc/icpc/ifort version 19.0.4.233 20190416 + + macOS Sierra 10.12.6 Apple LLVM version 9.0.0 (clang-900.39.2) + 64-bit gfortran GNU Fortran (GCC) 7.4.0 + (kite) Intel icc/icpc/ifort version 17.0.2 + + Mac OS X El Capitan 10.11.6 Apple clang version 7.3.0 from Xcode 7.3 + 64-bit gfortran GNU Fortran (GCC) 5.2.0 + (osx1011test) Intel icc/icpc/ifort version 16.0.2 + + + Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos6 Version 4.4.7 20120313 + (platypus) Version 4.9.3, 5.3.0, 6.2.0 + MPICH 3.1.4 compiled with GCC 4.9.3 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + + Windows 10 x64 Visual Studio 2015 w/ Intel C/C++/Fortran 18 (cmake) + Visual Studio 2017 w/ Intel C/C++/Fortran 19 (cmake) + Visual Studio 2019 w/ clang 12.0.0 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2019 w/ Intel C/C++/Fortran oneAPI 2022 (cmake) + Visual Studio 2022 w/ clang 15.0.1 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2022 w/ Intel C/C++/Fortran oneAPI 2022 (cmake) + Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake) + + +Known Problems +============== + + CMake files do not behave correctly with paths containing spaces. + Do not use spaces in paths because the required escaping for handling spaces + results in very complex and fragile build files. + ADB - 2019/05/07 + + At present, metadata cache images may not be generated by parallel + applications. Parallel applications can read files with metadata cache + images, but since this is a collective operation, a deadlock is possible + if one or more processes do not participate. + + CPP ptable test fails on both VS2017 and VS2019 with Intel compiler, JIRA + issue: HDFFV-10628. This test will pass with VS2015 with Intel compiler. + + The subsetting option in ph5diff currently will fail and should be avoided. + The subsetting option works correctly in serial h5diff. + + Several tests currently fail on certain platforms: + MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms. + + MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with + cray-mpich on theta and with XL compilers on ppc64le platforms. + + MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on cori and theta. + + Known problems in previous releases can be found in the HISTORY*.txt files + in the HDF5 source. Please report any new problems found to + help@hdfgroup.org. + + +CMake vs. Autotools installations +================================= +While both build systems produce similar results, there are differences. +Each system produces the same set of folders on linux (only CMake works +on standard Windows); bin, include, lib and share. Autotools places the +COPYING and RELEASE.txt file in the root folder, CMake places them in +the share folder. + +The bin folder contains the tools and the build scripts. Additionally, CMake +creates dynamic versions of the tools with the suffix "-shared". Autotools +installs one set of tools depending on the "--enable-shared" configuration +option. + build scripts + ------------- + Autotools: h5c++, h5cc, h5fc + CMake: h5c++, h5cc, h5hlc++, h5hlcc + +The include folder holds the header files and the fortran mod files. CMake +places the fortran mod files into separate shared and static subfolders, +while Autotools places one set of mod files into the include folder. 
Because +CMake produces a tools library, the header files for tools will appear in +the include folder. + +The lib folder contains the library files, and CMake adds the pkgconfig +subfolder with the hdf5*.pc files used by the bin/build scripts created by +the CMake build. CMake separates the C interface code from the fortran code by +creating C-stub libraries for each Fortran library. In addition, only CMake +installs the tools library. The names of the szip libraries are different +between the build systems. + +The share folder will have the most differences because CMake builds include +a number of CMake specific files for support of CMake's find_package and support +for the HDF5 Examples CMake project. + +The issues with the gif tool are: + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 +These CVE issues have not yet been addressed and are avoided by not building +the gif tool by default. Enable building the High-Level tools with these options: + autotools: --enable-hlgiftools + cmake: HDF5_BUILD_HL_GIF_TOOLS=ON + + +%%%%1.14.0%%%% + +HDF5 version 1.14.0 released on 2022-12-28 +================================================================================ + + +INTRODUCTION +============ + +This document describes the differences between this release and the previous +HDF5 release. It contains information on the platforms tested and known +problems in this release. For more details check the HISTORY*.txt files in the +HDF5 source. + +Note that documentation in the links below will be updated at the time of each +final release. + +Links to HDF5 documentation can be found on The HDF5 web page: + + https://portal.hdfgroup.org/display/HDF5/HDF5 + +The official HDF5 releases can be obtained from: + + https://www.hdfgroup.org/downloads/hdf5/ + +Changes from Release to Release and New Features in the HDF5-1.13.x release series +can be found at: + + https://portal.hdfgroup.org/display/HDF5/Release+Specific+Information + +If you have any questions or comments, please send them to the HDF Help Desk: + + help@hdfgroup.org + + +CONTENTS +======== + +- New Features +- Support for new platforms and languages +- Bug Fixes since HDF5-1.12.0 +- Platforms Tested +- Known Problems +- CMake vs. Autotools installations + + +New Features +============ + + Configuration: + ------------- + - Removal of MPE support + + The ability to build with MPE instrumentation has been removed along with + the following configure options: + + Autotools: + --with-mpe= + + CMake has never supported building with MPE support. + + (DER - 2022/11/08) + + - Removal of dmalloc support + + The ability to build with dmalloc support has been removed along with + the following configure options: + + Autotools: + --with-dmalloc= + + CMake: + HDF5_ENABLE_USING_DMALLOC + + (DER - 2022/11/08) + + - Removal of memory allocation sanity checks configure options + + With the removal of the memory allocation sanity checks feature, the + following configure options are no longer necessary and have been + removed: + + Autotools: + --enable-memory-alloc-sanity-check + + CMake: + HDF5_MEMORY_ALLOC_SANITY_CHECK + HDF5_ENABLE_MEMORY_STATS + + (DER - 2022/11/03) + + - Add new CMake configuration variable HDF5_USE_GNU_DIRS + + HDF5_USE_GNU_DIRS (default OFF) selects the use of GNU Coding Standard install + directory variables by including the CMake module, GNUInstallDirs(see CMake + documentation for details). 
The HDF_DIR_PATHS macro in the HDFMacros.cmake file + sets various PATH variables for use during the build, test and install processes. + By default, the historical settings for these variables will be used. + + (ADB - 2022/10/21, GH-2175, GH-1716) + + - Update CMake minimum version to 3.18 + + Updated CMake minimum version from 3.12 to 3.18 and removed version checks + which were added for Windows features not yet available in version 3.12. Also + removed configure macros and code checks for old style code compile checks. + + (ADB - 2022/08/29, HDFFV-11329) + + - Correct the usage of CMAKE_Fortran_MODULE_DIRECTORY and where to + install Fortran mod files. + + The Fortran modules files, ending in .mod are files describing a + Fortran 90 (and above) module API and ABI. These are not like C + header files describing an API, they are compiler dependent and + arch dependent, and not easily readable by a human being. They are + nevertheless searched for in the includes directories by gfortran + (in directories specified with -I). + + Autotools configure uses the -fmoddir option to specify the folder. + CMake will use "mod" folder by default unless overridden by the CMake + variable; HDF5_INSTALL_MODULE_DIR. + + (ADB - 2022/07/21) + + - HDF5 memory allocation sanity checking is now off by default for + Autotools debug builds + + HDF5 can be configured to perform sanity checking on internal memory + allocations by adding heap canaries to these allocations. However, + enabling this option can cause issues with external filter plugins + when working with (reallocating/freeing/allocating and passing back) + buffers. + + Previously, this option was off by default for all CMake build types, + but only off by default for non-debug Autotools builds. Since debug + is the default build mode for HDF5 when built from source with + Autotools, this can result in surprising segfaults that don't occur + when an application is built against a release version of HDF5. + Therefore, this option is now off by default for all build types + across both CMake and Autotools. + + (JTH - 2022/03/01) + + - Reworked corrected path searched by CMake find_package command + + The install path for cmake find_package files had been changed to use + "share/cmake" + for all platforms. However setting the HDF5_ROOT variable failed to locate + the configuration files. The build variable HDF5_INSTALL_CMAKE_DIR is now + set to the /cmake folder. The location of the configuration + files can still be specified by the "HDF5_DIR" variable. + + (ADB - 2022/02/02) + + - CPack will now generate RPM/DEB packages. + + Enabled the RPM and DEB CPack generators on linux. In addition to + generating STGZ and TGZ packages, CPack will try to package the + library for RPM and DEB packages. This is the initial attempt and + may change as issues are resolved. + + (ADB - 2022/01/27) + + - Added new option to the h5cc scripts produced by CMake. + + Add -showconfig option to h5cc scripts to cat the + libhdf5.settings file to the standard output. + + (ADB - 2022/01/25) + + - CMake will now run the PowerShell script tests in test/ by default + on Windows. + + The test directory includes several shell script tests that previously + were not run by CMake on Windows. These are now run by default. + If TEST_SHELL_SCRIPTS is ON and PWSH is found, the PowerShell scripts + will execute. Similar to the bash scripts on unix platforms. + + (ADB - 2021/11/23) + + - Added new configure option to support building parallel tools. 
+ See Tools below (autotools - CMake): + --enable-parallel-tools HDF5_BUILD_PARALLEL_TOOLS + + (RAW - 2021/10/25) + + - Added new configure options to enable dimension scales APIs (H5DS*) to + use new object references with the native VOL connector (aka native HDF5 + library). New references are always used for non-native terminal VOL + connectors (e.g., DAOS). + + Autotools --enable-dimension-scales-with-new-ref + CMake HDF5_DIMENSION_SCALES_NEW_REF=ON + + (EIP - 2021/10/25, HDFFV-11180) + + - Refactored the utils folder. + + Added subfolder test and moved the 'swmr_check_compat_vfd.c file' + from test into utils/test. Deleted the duplicate swmr_check_compat_vfd.c + file in hl/tools/h5watch folder. Also fixed vfd check options. + + (ADB - 2021/10/18) + + - Changed autotools and CMake configurations to derive both + compilation warnings-as-errors and warnings-only-warn configurations + from the same files, 'config/*/*error*'. Removed redundant files + 'config/*/*noerror*'. + + (DCY - 2021/09/29) + + - Adds C++ Autotools configuration file for Intel + + * Checks for icpc as the compiler + * Sets std=c++11 + * Copies most non-warning flags from intel-flags + + (DER - 2021/06/02) + + - Adds C++ Autotools configuration file for PGI + + * Checks for pgc++ as the compiler name (was: pgCC) + * Sets -std=c++11 + * Other options basically match new C options (below) + + (DER - 2021/06/02) + + - Updates PGI C options + + * -Minform set to warn (was: inform) to suppress spurious messages + * Sets -gopt -O2 as debug options + * Sets -O4 as 'high optimization' option + * Sets -O0 as 'no optimization' option + * Removes specific settings for PGI 9 and 10 + + (DER - 2021/06/02) + + - A C++11-compliant compiler is now required to build the C++ wrappers + + CMAKE_CXX_STANDARD is now set to 11 when building with CMake and + -std=c++11 is added when building with clang/gcc via the Autotools. + + (DER - 2021/05/27) + + - CMake will now run the shell script tests in test/ by default + + The test directory includes several shell script tests that previously + were not run by CMake. These are now run by default. TEST_SHELL_SCRIPTS + has been set to ON and SH_PROGRAM has been set to bash (some test + scripts use bash-isms). Platforms without bash (e.g., Windows) will + ignore the script tests. + + (DER - 2021/05/23) + + - Removed unused HDF5_ENABLE_HSIZET option from CMake + + This has been unused for some time and has no effect. + + (DER - 2021/05/23) + + - CMake no longer builds the C++ library by default + + HDF5_BUILD_CPP_LIB now defaults to OFF, which is in line with the + Autotools build defaults. + + (DER - 2021/04/20) + + - Removal of pre-VS2015 work-arounds + + HDF5 now requires Visual Studio 2015 or greater, so old work-around + code and definitions have been removed, including: + + * + * snprintf and vsnprintf + * llround, llroundf, lround, lroundf, round, roundf + * strtoll and strtoull + * va_copy + * struct timespec + + (DER - 2021/03/22) + + - Add CMake variable HDF5_LIB_INFIX + + This infix is added to all library names after 'hdf5'. + e.g. the infix '_openmpi' results in the library name 'libhdf5_openmpi.so' + This name is used in packages on debian based systems. + (see https://packages.debian.org/jessie/amd64/libhdf5-openmpi-8/filelist) + + (barcode - 2021/03/22) + + - On macOS, Universal Binaries can now be built, allowing native execution on + both Intel and Apple Silicon (ARM) based Macs. 
+ + To do so, set CMAKE_OSX_ARCHITECTURES="x86_64;arm64" + + (SAM - 2021/02/07, github-311) + + - Added a configure-time option to control certain compiler warnings + diagnostics + + A new configure-time option was added that allows some compiler warnings + diagnostics to have the default operation. This is mainly intended for + library developers and currently only works for gcc 10 and above. The + diagnostics flags apply to C, C++ and Fortran compilers and will appear + in "H5 C Flags", H5 C++ Flags" and H5 Fortran Flags, respectively. They + will NOT be exported to h5cc, etc. + + The default is OFF, which will disable the warnings URL and color attributes + for the warnings output. ON will not add the flags and allow default behavior. + + Autotools: --enable-diags + + CMake: HDF5_ENABLE_BUILD_DIAGS + + (ADB - 2021/02/05, HDFFV-11213) + + - CMake option to build the HDF filter plugins project as an external project + + The HDF filter plugins project is a collection of registered compression + filters that can be dynamically loaded when needed to access data stored + in a hdf5 file. This CMake-only option allows the plugins to be built and + distributed with the hdf5 library and tools. Like the options for szip and + zlib, either a tgz file or a git repository can be specified for the source. + + The option was refactored to use the CMake FetchContent process. This allows + more control over the filter targets, but required external project command + options to be moved to a CMake include file, HDF5PluginCache.cmake. Also + enabled the filter examples to be used as tests for operation of the + filter plugins. + + (ADB - 2020/12/10, OESS-98) + + - FreeBSD Autotools configuration now defaults to 'cc' and 'c++' compilers + + On FreeBSD, the autotools defaulted to 'gcc' as the C compiler and did + not process C++ options. Since FreeBSD 10, the default compiler has + been clang (via 'cc'). + + The default compilers have been set to 'cc' for C and 'c++' for C++, + which will pick up clang and clang++ respectively on FreeBSD 10+. + Additionally, clang options are now set correctly for both C and C++ + and g++ options will now be set if that compiler is being used (an + omission from the former functionality). + + (DER - 2020/11/28, HDFFV-11193) + + - Fixed POSIX problems when building w/ gcc on Solaris + + When building on Solaris using gcc, the POSIX symbols were not + being set correctly, which could lead to issues like clock_gettime() + not being found. + + The standard is now set to gnu99 when building with gcc on Solaris, + which allows POSIX things to be #defined and linked correctly. This + differs slightly from the gcc norm, where we set the standard to c99 + and manually set POSIX #define symbols. + + (DER - 2020/11/25, HDFFV-11191) + + - Added a configure-time option to consider certain compiler warnings + as errors + + A new configure-time option was added that converts some compiler warnings + to errors. This is mainly intended for library developers and currently + only works for gcc and clang. The warnings that are considered errors + will appear in the generated libhdf5.settings file. These warnings apply + to C and C++ code and will appear in "H5 C Flags" and H5 C++ Flags", + respectively. They will NOT be exported to h5cc, etc. + + The default is OFF. Building with this option may fail when compiling + on operating systems and with compiler versions not commonly used by + the library developers. 
Compilation may also fail when headers not + under the control of the library developers (e.g., mpi.h, hdfs.h) raise + warnings. + + Autotools: --enable-warnings-as-errors + + CMake: HDF5_ENABLE_WARNINGS_AS_ERRORS + + (DER - 2020/11/23, HDFFV-11189) + + - Autotools and CMake target added to produce doxygen generated documentation + + The default is OFF or disabled. + Autoconf option is '--enable-doxygen' + autotools make target is 'doxygen' and will build all doxygen targets + CMake configure option is 'HDF5_BUILD_DOC'. + CMake target is 'doxygen' for all available doxygen targets + CMake target is 'hdf5lib_doc' for the src subdirectory + + (ADB - 2020/11/03) + + - CMake option to use MSVC naming conventions with MinGW + + HDF5_MSVC_NAMING_CONVENTION option enable to use MSVC naming conventions + when using a MinGW toolchain + + (xan - 2020/10/30) + + - CMake option to statically link gcc libs with MinGW + + HDF5_MINGW_STATIC_GCC_LIBS allows to statically link libg/libstdc++ + with the MinGW toolchain + + (xan - 2020/10/30) + + - CMake option to build the HDF filter plugins project as an external project + + The HDF filter plugins project is a collection of registered compression + filters that can be dynamically loaded when needed to access data stored + in a hdf5 file. This CMake-only option allows the plugins to be built and + distributed with the hdf5 library and tools. Like the options for szip and + zlib, either a tgz file or a git repository can be specified for the source. + + The necessary options are (see the INSTALL_CMake.txt file): + HDF5_ENABLE_PLUGIN_SUPPORT + PLUGIN_TGZ_NAME or PLUGIN_GIT_URL + There are more options necessary for various filters and the plugin project + documents should be referenced. + + (ADB - 2020/09/27, OESS-98) + + - Added CMake option to format source files + + HDF5_ENABLE_FORMATTERS option will enable creation of targets using the + pattern - HDF5_*_SRC_FORMAT - where * corresponds to the source folder + or tool folder. All sources can be formatted by executing the format target; + make format + + (ADB - 2020/08/24) + + - Add file locking configure and CMake options + + HDF5 1.10.0 introduced a file locking scheme, primarily to help + enforce SWMR setup. Formerly, the only user-level control of the scheme + was via the HDF5_USE_FILE_LOCKING environment variable. + + This change introduces configure-time options that control whether + or not file locking will be used and whether or not the library + ignores errors when locking has been disabled on the file system + (useful on some HPC Lustre installations). + + In both the Autotools and CMake, the settings have the effect of changing + the default property list settings (see the H5Pset/get_file_locking() + entry, below). + + The yes/no/best-effort file locking configure setting has also been + added to the libhdf5.settings file. + + Autotools: + + An --enable-file-locking=(yes|no|best-effort) option has been added. + + yes: Use file locking. + no: Do not use file locking. + best-effort: Use file locking and ignore "disabled" errors. + + CMake: + + Two self-explanatory options have been added: + + HDF5_USE_FILE_LOCKING + HDF5_IGNORE_DISABLED_FILE_LOCKS + + Setting both of these to ON is the equivalent to the Autotools' + best-effort setting. 
+ + NOTE: + The precedence order of the various file locking control mechanisms is: + + 1) HDF5_USE_FILE_LOCKING environment variable (highest) + + 2) H5Pset_file_locking() + + 3) configure/CMake options (which set the property list defaults) + + 4) library defaults (currently best-effort) + + (DER - 2020/07/30, HDFFV-11092) + + - CMake option to link the generated Fortran MOD files into the include + directory. + + The Fortran generation of MOD files by a Fortran compile can produce + different binary files between SHARED and STATIC compiles with different + compilers and/or different platforms. Note that it has been found that + different versions of Fortran compilers will produce incompatible MOD + files. Currently, CMake will locate these MOD files in subfolders of + the include directory and add that path to the Fortran library target + in the CMake config file, which can be used by the CMake find library + process. For other build systems using the binary from a CMake install, + a new CMake configuration can be used to copy the pre-chosen version + of the Fortran MOD files into the install include directory. + + The default will depend on the configuration of + BUILD_STATIC_LIBS and BUILD_SHARED_LIBS: + YES YES Default to SHARED + YES NO Default to STATIC + NO YES Default to SHARED + NO NO Default to SHARED + The defaults can be overridden by setting the config option + HDF5_INSTALL_MOD_FORTRAN to one of NO, SHARED, or STATIC + + (ADB - 2020/07/09, HDFFV-11116) + + - CMake option to use AEC (open source SZip) library instead of SZip + + The open source AEC library is a replacement library for SZip. In + order to use it for hdf5 the libaec CMake source was changed to add + "-fPIC" and exclude test files. Autotools does not build the + compression libraries within hdf5 builds. New option USE_LIBAEC is + required to compensate for the different files produced by AEC build. + + (ADB - 2020/04/22, OESS-65) + + - CMake ConfigureChecks.cmake file now uses CHECK_STRUCT_HAS_MEMBER + + Some handcrafted tests in HDFTests.c has been removed and the CMake + CHECK_STRUCT_HAS_MEMBER module has been used. + + (ADB - 2020/03/24, TRILAB-24) + + - Both build systems use same set of warnings flags + + GNU C, C++ and gfortran warnings flags were moved to files in a config + sub-folder named gnu-warnings. Flags that only are available for a specific + version of the compiler are in files named with that version. + Clang C warnings flags were moved to files in a config sub-folder + named clang-warnings. + Intel C, Fortran warnings flags were moved to files in a config sub-folder + named intel-warnings. + + There are flags in named "error-xxx" files with warnings that may + be promoted to errors. Some source files may still need fixes. + + There are also pairs of files named "developer-xxx" and "no-developer-xxx" + that are chosen by the CMake option:HDF5_ENABLE_DEV_WARNINGS or the + configure option:--enable-developer-warnings. + + In addition, CMake no longer applies these warnings for examples. + + (ADB - 2020/03/24, TRILAB-192) + + + Library: + -------- + - Overhauled the Virtual Object Layer (VOL) + + The virtual object layer (VOL) was added in HDF5 1.12.0 but the initial + implementation required API-breaking changes to better support optional + operations and pass-through VOL connectors. The original VOL API is + now considered deprecated and VOL users and connector authors should + target the 1.14 VOL API. 
+ + The specific changes are too extensive to document in a release note, so + VOL users and connector authors should consult the updated VOL connector + author's guide and the 1.12-1.14 VOL migration guide. + + (DER - 2022/12/28) + + - H5VLquery_optional() signature change + + The last parameter of this API call has changed from a pointer to hbool_t + to a pointer to uint64_t. Due to the changes in how optional operations + are handled in the 1.14 VOL API, we cannot make the old API call work + with the new scheme, so there is no API compatibility macro for it. + + (DER - 2022/12/28) + + - H5I_free_t callback signature change + + In order to support asynchronous operations and future IDs, the signature + of the H5I_free_t callback has been modified to take a second 'request' + parameter. Due to the nature of the internal library changes, no API + compatibility macro is available for this change. + + (DER - 2022/12/28) + + - Fix for CVE-2019-8396 + + Malformed HDF5 files may have truncated content which does not match + the expected size. When H5O__pline_decode() attempts to decode these it + may read past the end of the allocated space leading to heap overflows + as bounds checking is incomplete. + + The fix ensures each element is within bounds before reading. + + (2022/11/09 - HDFFV-10712, CVE-2019-8396, GitHub #2209) + + - Removal of memory allocation sanity checks feature + + This feature added heap canaries and statistics tracking for internal + library memory operations. Unfortunately, the heap canaries caused + problems when library memory operations were mixed with standard C + library memory operations (such as in the filter pipeline, where + buffers may have to be reallocated). Since any platform with a C + compiler also usually has much more sophisticated memory sanity + checking tools than the HDF5 library provided (e.g., valgrind), we + have decided to to remove the feature entirely. + + In addition to the configure changes described above, this also removes + the following from the public API: + H5get_alloc_stats() + H5_alloc_stats_t + + (DER - 2022/11/03) + + - Added multi dataset I/O feature + + Added H5Dread_multi, H5Dread_multi_async, H5Dwrite_multi, and + H5Dwrite_multi_async API routines to allow I/O on multiple datasets with a + single API call. Added H5Dread_multi_f and H5Dwrite_multi_f Fortran + wrappers. Updated VOL callbacks for dataset I/O to support multi dataset + I/O. + + (NAF - 2022/10/19) + + - Onion VFD + + The onion VFD allows creating "versioned" HDF5 files. File open/close + operations after initial file creation will add changes to an external + "onion" file (.onion extension by default) instead of the original file. + Each written revision can be opened independently. + + To open a file with the onion VFD, use the H5Pset_fapl_onion() API call + (does not need to be used for the initial creation of the file). The + options for the H5FD_onion_fapl_info_t struct are described in H5FDonion.h. + + The H5FDonion_get_revision_count() API call can be used to query a file + to find out how many revisions have been created. + + (DER - 2022/08/02) + + - Subfiling VFD + + The HDF5 Subfiling VFD is a new MPI-based file driver that allows an + HDF5 application to distribute an HDF5 file across a collection of + "sub-files" in equal-sized data segment "stripes". I/O to the logical + HDF5 file is then directed to the appropriate "sub-file" according to + the Subfiling configuration and a system of I/O concentrators, which + are MPI ranks operating worker threads. 
+ + By allowing a configurable stripe size, number of I/O concentrators and + method for selecting MPI ranks as I/O concentrators, the Subfiling VFD + aims to enable an HDF5 application to find a middle ground between the + single shared file and file-per-process approaches to parallel file I/O + for the particular machine the application is running on. In general, the + goal is to avoid some of the complexity of the file-per-process approach + while also minimizing the locking issues of the single shared file approach + on a parallel file system. + + Also included with the Subfiling VFD is a new h5fuse.sh script which + reads a Subfiling configuration file and then combines the various + sub-files back into a single HDF5 file. By default, the h5fuse.sh script + looks in the current directory for the Subfiling configuration file, + but can also be pointed to the configuration file with a command-line + option. + + The Subfiling VFD can be used by calling H5Pset_fapl_subfiling() on a + File Access Property List and using that FAPL for file operations. Note + that the Subfiling VFD currently has the following limitations: + + * Does not currently support HDF5 collective I/O, other than collective + metadata writes and reads as set by H5Pset_coll_metadata_write() and + H5Pset_all_coll_metadata_ops() + + * The Subfiling VFD should not currently be used with an HDF5 library + that has been built with thread-safety enabled. This can cause deadlocks + when failures occur due to interactions between the VFD's internal + threads and HDF5's global lock. + + (JTH - 2022/07/22) + + - Add a new public function, H5ESget_requests() + + This function allows the user to retrieve request pointers from an event + set. It is intended for use primarily by VOL plugin developers. + + (NAF - 2022/01/11) + + - Adds new file driver-level memory copy operation for + "ctl" callback and updates compact dataset I/O routines + to utilize it + + When accessing an HDF5 file with a file driver that uses + memory allocated in special ways (e.g., without standard + library's `malloc`), a crash could be observed when HDF5 + tries to perform `memcpy` operations on such a memory + region. + + These changes add a new H5FD_FEAT_MEMMANAGE VFD feature + flag, which, if specified as supported by a VFD, will + inform HDF5 that the VFD either uses special memory + management routines or wishes to perform memory management + in a specific way. Therefore, this flag instructs HDF5 to + ask the file driver to perform memory management for + certain operations. + + These changes also introduce a new "ctl" callback + operation identified by the H5FD_CTL__MEM_COPY op code. + This operation simply asks a VFD to perform a memory copy. + The arguments to this operation are passed to the "ctl" + callback's "input" parameter as a pointer to a struct + defined as: + + struct H5FD_ctl_memcpy_args_t { + void * dstbuf; /**< Destination buffer */ + hsize_t dst_off; /**< Offset within destination buffer */ + const void *srcbuf; /**< Source buffer */ + hsize_t src_off; /**< Offset within source buffer */ + size_t len; /**< Length of data to copy from source buffer */ + } H5FD_ctl_memcpy_args_t; + + Further, HDF5's compact dataset I/O routines were + identified as a problematic area that could cause a crash + for VFDs that make use of special memory management. Those + I/O routines were therefore updated to make use of this new + "ctl" callback operation in order to ask the underlying + file driver to correctly handle memory copies. 
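+
+      As an illustration, a VFD that manages its own memory might handle this
+      operation in its "ctl" callback roughly as follows (sketch only;
+      my_vfd_ctl and my_special_memcpy stand in for the driver's own routines;
+      the op code is spelled as in this entry, see the internal header note
+      below about the later single-underscore rename):
+
+          static herr_t
+          my_vfd_ctl(H5FD_t *file, uint64_t op_code, uint64_t flags,
+                     const void *input, void **output)
+          {
+              if (op_code == H5FD_CTL__MEM_COPY) {
+                  const H5FD_ctl_memcpy_args_t *args = (const H5FD_ctl_memcpy_args_t *)input;
+
+                  /* Copy between the driver's specially-managed buffers */
+                  my_special_memcpy((char *)args->dstbuf + args->dst_off,
+                                    (const char *)args->srcbuf + args->src_off,
+                                    args->len);
+                  return 0;
+              }
+              /* ... handle or reject other op codes according to 'flags' ... */
+              return -1;
+          }
+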
+ + (JTH - 2021/09/28) + + - Adds new "ctl" callback to VFD H5FD_class_t structure + with the following prototype: + + herr_t (*ctl)(H5FD_t *file, uint64_t op_code, + uint64_t flags, const void *input, + void **output); + + This newly-added "ctl" callback allows Virtual File + Drivers to intercept and handle arbitrary operations + identified by an operation code. Its parameters are + as follows: + + `file` [in] - A pointer to the file to be operated on + `op_code` [in] - The operation code identifying the + operation to be performed + `flags` [in] - Flags governing the behavior of the + operation performed (see H5FDpublic.h + for a list of valid flags) + `input` [in] - A pointer to arguments passed to the + VFD performing the operation + `output` [out] - A pointer for the receiving VFD to + use for output from the operation + + (JRM - 2021/08/16) + + - Change how the release part of version, in major.minor.release is checked + for compatibility + + The HDF5 library uses a function, H5check_version, to check that + the version defined in the header files, which is used to compile an + application is compatible with the version codified in the library, which + the application loads at runtime. This previously required an exact match + or the library would print a warning, dump the build settings and then + abort or continue. An environment variable controlled the logic. + + Now the function first checks that the library release version, in + major.minor.release, is not older than the version in the headers. + Secondly, if the release version is different, it checks if either + the library version or the header version is in the exception list, in + which case the release part of version, in major.minor.release, must + be exact. An environment variable still controls the logic. + + (ADB - 2021/07/27) + + - gcc warning suppression macros were moved out of H5public.h + + The HDF5 library uses a set of macros to suppress warnings on gcc. + These warnings were originally located in H5public.h so that the + multi VFD (which only uses public headers) could also make use of them + but internal macros should not be publicly exposed like this. + + These macros have now been moved to H5private.h. Pending future multi + VFD refactoring, the macros have been duplicated in H5FDmulti.c to + suppress the format string warnings there. + + (DER - 2021/06/03) + + - H5Gcreate1() now rejects size_hint parameters larger than UINT32_MAX + + The size_hint value is ultimately stored in a uint32_t struct field, + so specifying a value larger than this on a 64-bit machine can cause + undefined behavior including crashing the system. + + The documentation for this API call was also incorrect, stating that + passing a negative value would cause the library to use a default + value. Instead, passing a "negative" value actually passes a very large + value, which is probably not what the user intends and can cause + crashes on 64-bit systems. + + The Doxygen documentation has been updated and passing values larger + than UINT32_MAX for size_hint will now produce a normal HDF5 error. + + (DER - 2021/04/29, HDFFV-11241) + + + - H5Pset_fapl_log() no longer crashes when passed an invalid fapl ID + + When passed an invalid fapl ID, H5Pset_fapl_log() would usually + segfault when attempting to free an uninitialized pointer in the error + handling code. This behavior is more common in release builds or + when the memory sanitization checks were not selected as a build + option. 
+ + The pointer is now correctly initialized and the API call now + produces a normal HDF5 error when fed an invalid fapl ID. + + (DER - 2021/04/28, HDFFV-11240) + + - Fixes a segfault when H5Pset_mdc_log_options() is called multiple times + + The call incorrectly attempts to free an internal copy of the previous + log location string, which causes a segfault. This only happens + when the call is invoked multiple times on the same property list. + On the first call to a given fapl, the log location is set to NULL so + the segfault does not occur. + + The string is now handled properly and the segfault no longer occurs. + + (DER - 2021/04/27, HDFFV-11239) + + - HSYS_GOTO_ERROR now emits the results of GetLastError() on Windows + + HSYS_GOTO_ERROR is an internal macro that is used to produce error + messages when system calls fail. These strings include errno and the + the associated strerror() value, which are not particularly useful + when a Win32 API call fails. + + On Windows, this macro has been updated to include the result of + GetLastError(). When a system call fails on Windows, usually only + one of errno and GetLastError() will be useful, however we emit both + for the user to parse. The Windows error message is not emitted as + it would be awkward to free the FormatMessage() buffer given the + existing HDF5 error framework. Users will have to look up the error + codes in MSDN. + + The format string on Windows has been changed from: + + "%s, errno = %d, error message = '%s'" + + to: + + "%s, errno = %d, error message = '%s', Win32 GetLastError() = %"PRIu32"" + + for those inclined to parse it for error values. + + (DER - 2021/03/21) + + - File locking now works on Windows + + Since version 1.10.0, the HDF5 library has used a file locking scheme + to help enforce one reader at a time accessing an HDF5 file, which can + be helpful when setting up readers and writers to use the single- + writer/multiple-readers (SWMR) access pattern. + + In the past, this was only functional on POSIX systems where flock() or + fcntl() were present. Windows used a no-op stub that always succeeded. + + HDF5 now uses LockFileEx() and UnlockFileEx() to lock the file using the + same scheme as POSIX systems. We lock the entire file when we set up the + locks (by passing DWORDMAX as both size parameters to LockFileEx()). + + (DER - 2021/03/19, HDFFV-10191) + + - H5Epush_ret() now requires a trailing semicolon + + H5Epush_ret() is a function-like macro that has been changed to + contain a `do {} while(0)` loop. Consequently, a trailing semicolon + is now required to end the `while` statement. Previously, a trailing + semi would work, but was not mandatory. This change was made to allow + clang-format to correctly format the source code. + + (SAM - 2021/03/03) + + - Improved performance of H5Sget_select_elem_pointlist + + Modified library to cache the point after the last block of points + retrieved by H5Sget_select_elem_pointlist, so a subsequent call to the + same function to retrieve the next block of points from the list can + proceed immediately without needing to iterate over the point list. + + (NAF - 2021/01/19) + + - Replaced H5E_ATOM with H5E_ID in H5Epubgen.h + + The term "atom" is archaic and not in line with current HDF5 library + terminology, which uses "ID" instead. "Atom" has mostly been purged + from the library internals and this change removes H5E_ATOM from + the H5Epubgen.h (exposed via H5Epublic.h) and replaces it with + H5E_ID. 
+ + (DER - 2020/11/24, HDFFV-11190) + + - Add a new public function H5Ssel_iter_reset + + This function resets a dataspace selection iterator back to an + initial state so that it may be used for iteration once more. + This can be useful when needing to iterate over a selection + multiple times without having to repeatedly create/destroy + a selection iterator for that dataspace selection. + + (JTH - 2020/09/18) + + - Remove HDFS VFD stubs + + The original implementation of the HDFS VFD included non-functional + versions of the following public API calls when the HDFS VFD is + not built as a part of the HDF5 library: + + * H5FD_hdfs_init() + * H5Pget_fapl_hdfs() + * H5Pset_fapl_hdfs() + + They will remain present in HDF5 1.10 and HDF5 1.12 releases + for binary compatibility purposes but have been removed as of 1.14.0. + + Note that this has nothing to do with the real HDFS VFD API calls + that are fully functional when the HDFS VFD is configured and built. + + We simply changed: + + #ifdef LIBHDFS + + #else + + #endif + + to: + + #ifdef LIBHDFS + + #endif + + Which is how the other optional VFDs are handled. + + (DER - 2020/08/27) + + - Add Mirror VFD + + Use TCP/IP sockets to perform write-only (W/O) file I/O on a remote + machine. Must be used in conjunction with the Splitter VFD. + + (JOS - 2020/03/13, TBD) + + - Add Splitter VFD + + Maintain separate R/W and W/O channels for "concurrent" file writes + to two files using a single HDF5 file handle. + + (JOS - 2020/03/13, TBD) + + + Parallel Library: + ----------------- + - Several improvements to parallel compression feature, including: + + * Improved support for collective I/O (for both writes and reads) + + * Significant reduction of memory usage for the feature as a whole + + * Reduction of copying of application data buffers passed to H5Dwrite + + * Addition of support for incremental file space allocation for filtered + datasets created in parallel. Incremental file space allocation is the + default for these types of datasets (early file space allocation is + also still supported), while early file space allocation is still the + default (and only supported at allocation time) for unfiltered datasets + created in parallel. Incremental file space allocation should help with + parallel HDF5 applications that wish to use fill values on filtered + datasets, but would typically avoid doing so since dataset creation in + parallel would often take an excessive amount of time. Since these + datasets previously used early file space allocation, HDF5 would + allocate space for and write fill values to every chunk in the dataset + at creation time, leading to noticeable overhead. Instead, with + incremental file space allocation, allocation of file space for chunks + and writing of fill values to those chunks will be delayed until each + individual chunk is initially written to. 
+ + * Addition of support for HDF5's "don't filter partial edge chunks" flag + (https://portal.hdfgroup.org/display/HDF5/H5P_SET_CHUNK_OPTS) + + * Addition of proper support for HDF5 fill values with the feature + + * Addition of 'H5_HAVE_PARALLEL_FILTERED_WRITES' macro to H5pubconf.h + so HDF5 applications can determine at compile-time whether the feature + is available + + * Addition of simple examples (ph5_filtered_writes.c and + ph5_filtered_writes_no_sel.c) under examples directory to demonstrate + usage of the feature + + * Improved coverage of regression testing for the feature + + (JTH - 2022/2/23) + + + Fortran Library: + ---------------- + - Added pointer based H5Dfill_f API + + Added Fortran H5Dfill_f, which is fully equivalent to the C API. It accepts pointers, + fill value datatype and datatype of dataspace elements. + + (MSB - 2022/10/10, HDFFV-10734.) + + - H5Fget_name_f fixed to handle correctly trailing whitespaces and + newly allocated buffers. + + (MSB - 2021/08/30, github-826,972) + + - Add wrappers for H5Pset/get_file_locking() API calls + + h5pget_file_locking_f() + h5pset_file_locking_f() + + See the configure option discussion for HDFFV-11092 (above) for more + information on the file locking feature and how it's controlled. + + (DER - 2020/07/30, HDFFV-11092) + + + C++ Library: + ------------ + - Added two new constructors to H5::H5File class + + Two new constructors were added to allow opening a file with non-default + access property list. + + - Add wrappers for H5Pset/get_file_locking() API calls + + FileAccPropList::setFileLocking() + FileAccPropList::getFileLocking() + + See the configure option discussion for HDFFV-11092 (above) for more + information on the file locking feature and how it's controlled. + + (DER - 2020/07/30, HDFFV-11092) + + + Java Library: + ------------- + - Added version of H5Rget_name to return the name as a Java string. + + Other functions that get_name process the get_size then get the name + within the JNI implementation. Now H5Rget_name has a H5Rget_name_string. + + (ADB - 2022/07/12) + + - Added reference support to H5A and H5D read write vlen JNI functions. + + Added the implementation to handle VL references as an Array of Lists + of byte arrays. + + The JNI wrappers translate the Array of Lists to/from the hvl_t vlen + structures. The wrappers use the specified datatype arguments for the + List type translation, it is expected that the Java type is correct. + + (ADB - 2022/07/11, HDFFV-11318) + + - H5A and H5D read write vlen JNI functions were incorrect. + + Corrected the vlen function implementations for the basic primitive types. + The VLStrings functions now correctly use the implementation that had been + the VL functions. (VLStrings functions did not have an implementation.) + The new VL functions implementation now expect an Array of Lists between + Java and the JNI wrapper. + + The JNI wrappers translate the Array of Lists to/from the hvl_t vlen + structures. The wrappers use the specified datatype arguments for the + List type translation, it is expected that the Java type is correct. + + (ADB - 2022/07/07, HDFFV-11310) + + - H5A and H5D read write JNI functions had flawed vlen datatype check. + + Adapted tools function for JNI utils file. This reduced multiple calls + to a single check and variable. The variable can then be used to call + the H5Treclaim function. Adjusted existing test and added new test. 
+ + (ADB - 2022/06/22) + + - Replaced HDF5AtomException with HDF5IdException + + Since H5E_ATOM changed to H5E_ID in the C library, the Java exception + that wraps the error category was also renamed. Its functionality + remains unchanged aside from the name. + + (See also the HDFFV-11190 note in the C library section) + + (DER - 2020/11/24, HDFFV-11190) + + - Added new H5S functions. + + H5Sselect_copy, H5Sselect_shape_same, H5Sselect_adjust, + H5Sselect_intersect_block, H5Sselect_project_intersection, + H5Scombine_hyperslab, H5Smodify_select, H5Scombine_select + wrapper functions added. + + (ADB - 2020/10/27, HDFFV-10868) + + - Add wrappers for H5Pset/get_file_locking() API calls + + H5Pset_file_locking() + H5Pget_use_file_locking() + H5Pget_ignore_disabled_file_locking() + + Unlike the C++ and Fortran wrappers, there are separate getters for the + two file locking settings, each of which returns a boolean value. + + See the configure option discussion for HDFFV-11092 (above) for more + information on the file locking feature and how it's controlled. + + (DER - 2020/07/30, HDFFV-11092) + + + Tools: + ------ + - Building h5perf/h5perf_serial in "standalone mode" has been removed + + Building h5perf separately from the library was added circa 2008 + in HDF5 1.6.8. It's unclear what purpose this serves and the current + implementation is currently broken. The existing files require + H5private.h and the symbols we use to determine how the copied + platform-independence scheme should be used come from H5pubconf.h, + which may not match the compiler being used to build standalone h5perf. + + Due to the maintenance overhead and lack of a clear use case, support + for building h5perf and h5perf_serial separately from the HDF5 library + has been removed. + + (DER - 2022/07/15) + + - The perf tool has been removed + + The small `perf` tool didn't really do anything special and the name + conflicts with gnu's perf tool. + + (DER - 2022/07/15, GitHub #1787) + + - 1.10 References in containers were not displayed properly by h5dump. + + Ported 1.10 tools display function to provide ability to inspect and + display 1.10 reference data. + + (ADB - 2022/06/22) + + - h5repack added an optional verbose value for reporting R/W timing. + + In addition to adding timing capture around the read/write calls in + h5repack, added help text to indicate how to show timing for read/write; + -v N, --verbose=N Verbose mode, print object information. + N - is an integer greater than 1, 2 displays read/write timing + (ADB - 2021/11/08) + + - Added a new (unix ONLY) parallel meta tool 'h5dwalk', which utilizes the + mpifileutils (https://hpc.github.io/mpifileutils) open source utility + library to enable parallel execution of other HDF5 tools. + This approach can greatly enhance the serial hdf5 tool performance over large + collections of files by utilizing MPI parallelism to distribute an application + load over many independent MPI ranks and files. + + An introduction to the mpifileutils library and initial 'User Guide' for + the new 'h5dwalk" tool can be found at: + https://github.com/HDFGroup/hdf5doc/tree/master/RFCs/HDF5/tools/parallel_tools + + (RAW - 2021/10/25) + + - Refactored the perform tools and removed depends on test library. + + Moved the perf and h5perf tools from tools/test/perform to + tools/src/h5perf so that they can be installed. This required + that the test library dependency be removed by copying the + needed functions from h5test.c. 
+ The standalone scripts and other perform tools remain in the + tools/test/perform folder. + + (ADB - 2021/08/10) + + - Removed partial long exceptions + + Some of the tools accepted shortened versions of the long options + (ex: --datas instead of --dataset). These were implemented inconsistently, + are difficult to maintain, and occasionally block useful long option + names. These partial long options have been removed from all the tools. + + (DER - 2021/08/03) + + - h5repack added help text for user-defined filters. + + Added help text line that states the valid values of the filter flag + for user-defined filters; + filter_flag: 1 is OPTIONAL or 0 is MANDATORY + + (ADB - 2021/01/14, HDFFV-11099) + + - Added h5delete tool + + Deleting HDF5 storage when using the VOL can be tricky when the VOL + does not create files. The h5delete tool is a simple wrapper around + the H5Fdelete() API call that uses the VOL specified in the + HDF5_VOL_CONNECTOR environment variable to delete a "file". If + the call to H5Fdelete() fails, the tool will attempt to use + the POSIX remove(3) call to remove the file. + + Note that the HDF5 library does currently have support for + H5Fdelete() in the native VOL connector. + + (DER - 2020/12/16) + + - h5repack added options to control how external links are handled. + + Currently h5repack preserves external links and cannot copy and merge + data from the external files. Two options, merge and prune, were added to + control how to merge data from an external link into the resulting file. + --merge Follow external soft link recursively and merge data. + --prune Do not follow external soft links and remove link. + --merge --prune Follow external link, merge data and remove dangling link. + + (ADB - 2020/08/05, HDFFV-9984) + + - h5repack was fixed to repack the reference attributes properly. + The code line that checks if the update of reference inside a compound + datatype is misplaced outside the code block loop that carries out the + check. In consequence, the next attribute that is not the reference + type was repacked again as the reference type and caused the failure of + repacking. The fix is to move the corresponding code line to the correct + code block. + + (KY -2020/02/07, HDFFV-11014) + + + High-Level APIs: + ---------------- + - added set/get for unsigned long long attributes + + The attribute writing high-level API has been expanded to include + public set/get functions for ULL attributes, analogously to the + existing set/get for other types. + + (AF - 2021/09/08) + + + C Packet Table API: + ------------------- + - + + + Internal header file: + --------------------- + - All the #defines named H5FD_CTL__* were renamed to H5FD_CTL_*, i.e. the double underscore was reduced to a single underscore. + + + Documentation: + -------------- + - Doxygen User Guide documentation is available when configured and generated. + The resulting documentation files will be in the share/html subdirectory + of the HDF5 install directory. + + (ADB - 2022/08/09) + + +Support for new platforms, languages and compilers +================================================== + - + + +Bug Fixes since HDF5-1.12.0 release +=================================== + Library + ------- + - Seg fault on file close + + h5debug fails at file close with core dump on a file that has an + illegal file size in its cache image. In H5F_dest(), the library + performs all the closing operations for the file and keeps track of + the error encountered when reading the file cache image. 
+ At the end of the routine, it frees the file's file structure and + returns error. Due to the error return, the file object is not removed + from the ID node table. This eventually causes assertion failure in + H5VL__native_file_close() when the library finally exits and tries to + access that file object in the table for closing. + + The closing routine, H5F_dest(), will not free the file structure if + there is error, keeping a valid file structure in the ID node table. + It will be freed later in H5VL__native_file_close() when the + library exits and terminates the file package. + + (VC - 2022/12/14, HDFFV-11052, CVE-2020-10812) + + - Fix CVE-2018-13867 / GHSA-j8jr-chrh-qfrf + + Validate location (offset) of the accumulated metadata when comparing. + + Initially, the accumulated metadata location is initialized to HADDR_UNDEF + - the highest available address. Bogus input files may provide a location + or size matching this value. Comparing this address against such bogus + values may provide false positives. Thus make sure, the value has been + initialized or fail the comparison early and let other parts of the + code deal with the bogus address/size. + Note: To avoid unnecessary checks, it is assumed that if the 'dirty' + member in the same structure is true the location is valid. + + (EFE - 2022/10/10 GH-2230) + + - Fix CVE-2018-16438 / GHSA-9xmm-cpf8-rgmx + + Make sure info block for external links has at least 3 bytes. + + According to the specification, the information block for external links + contains 1 byte of version/flag information and two 0 terminated strings + for the object linked to and the full path. + Although not very useful, the minimum string length for each (with + terminating 0) would be one byte. + Checking this helps to avoid SEGVs triggered by bogus files. + + (EFE - 2022/10/09 GH-2233) + + - CVE-2021-46244 / GHSA-vrxh-5gxg-rmhm + + Compound datatypes may not have members of size 0 + + A member size of 0 may lead to an FPE later on as reported in + CVE-2021-46244. To avoid this, check for this as soon as the + member is decoded. + + (EFE - 2022/10/05 GEH-2242) + + + - Fix CVE-2021-45830 / GHSA-5h2h-fjjr-x9m2 + + Make H5O__fsinfo_decode() more resilient to out-of-bound reads. + + When decoding a file space info message in H5O__fsinfo_decode() make + sure each element to be decoded is still within the message. Malformed + hdf5 files may have trunkated content which does not match the + expected size. Checking this will prevent attempting to decode + unrelated data and heap overflows. So far, only free space manager + address data was checked before decoding. + + (EFE - 2022/10/05 GH-2228) + + - Fix CVE-2021-46242 / GHSA-x9pw-hh7v-wjpf + + When evicting driver info block, NULL the corresponding entry. + + Since H5C_expunge_entry() called (from H5AC_expunge_entry()) sets the flag + H5C__FLUSH_INVALIDATE_FLAG, the driver info block will be freed. NULLing + the pointer in f->shared->drvinfo will prevent use-after-free when it is + used in other functions (like H5F__dest()) - as other places will check + whether the pointer is initialized before using its value. + + (EFE - 2022/09/29 GH-2254) + + - Fix CVE-2021-45833 / GHSA-x57p-jwp6-4v79 + + Report error if dimensions of chunked storage in data layout < 2 + + For Data Layout Messages version 1 & 2 the specification state + that the value stored in the data field is 1 greater than the + number of dimensions in the dataspace. 
For version 3 this is + not explicitly stated but the implementation suggests it to be + the case. + Thus the set value needs to be at least 2. For dimensionality + < 2 an out-of-bounds access occurs. + + (EFE - 2022/09/28 GH-2240) + + - Fix CVE-2018-14031 / GHSA-2xc7-724c-r36j + + Parent of enum datatype message must have the same size as the + enum datatype message itself. + Functions accessing the enumeration values use the size of the + enumeration datatype to determine the size of each element and + how much data to copy. + Thus the size of the enumeration and its parent need to match. + Check in H5O_dtype_decode_helper() to avoid unpleasant surprises + later. + + (EFE - 2022/09/28 GH-2236) + + - Fix CVE-2018-17439 / GHSA-vcxv-vp43-rch7 + + H5IMget_image_info(): Make sure to not exceed local array size + + Malformed hdf5 files may provide more dimensions than the array dim[] in + H5IMget_image_info() is able to hold. Check number of elements first by calling + H5Sget_simple_extent_dims() with NULL for both 'dims' and 'maxdims' arguments. + This will cause the function to return only the number of dimensions. + The fix addresses a stack overflow on write. + + (EFE - 2022/09/27 HDFFV-10589, GH-2226) + + - Fixed an issue with variable length attributes + + Previously, if a variable length attribute was held open while its file + was opened through another handle, the same attribute was opened through + the second file handle, and the second file and attribute handles were + closed, attempting to write to the attribute through the first handle + would cause an error. + + (NAF - 2022/10/24) + + - Memory leak + + A memory leak was observed with variable-length fill value in + H5O_fill_convert() function in H5Ofill.c. The leak is + manifested by running valgrind on test/set_extent.c. + + Previously, fill->buf is used for datatype conversion + if it is large enough and the variable-length information + is therefore lost. A buffer is now allocated regardless + so that the element in fill->buf can later be reclaimed. + + (VC - 2022/10/10, HDFFV-10840) + + - Fixed an issue with hyperslab selections + + Previously, when combining hyperslab selections, it was possible for the + library to produce an incorrect combined selection. + + (NAF - 2022/09/25) + + - Fixed an issue with attribute type conversion with compound datatypes + + Previously, when performing type conversion for attribute I/O with a + compound datatype, the library would not fill the background buffer with + the contents of the destination, potentially causing data to be lost when + only writing to a subset of the compound fields. + + (NAF - 2022/08/22, GitHub #2016) + + - The offset parameter in H5Dchunk_iter() is now scaled properly + + In earlier HDF5 1.13.x versions, the chunk offset was not scaled by the + chunk dimensions. This offset parameter in the callback now matches + that of H5Dget_chunk_info(). + + (@mkitti - 2022/08/06, GitHub #1419) + + - Converted an assertion on (possibly corrupt) file contents to a normal + error check + + Previously, the library contained an assertion check that a read superblock + doesn't contain a superblock extension message when the superblock + version < 2. When a corrupt HDF5 file is read, this assertion can be triggered + in debug builds of HDF5. In production builds, this situation could cause + either a library error or a crash, depending on the platform. 
+ + (JTH - 2022/07/08, HDFFV-11316/HDFFV-11317) + + - Fixed a metadata cache bug when resizing a pinned/protected cache entry + + When resizing a pinned/protected cache entry, the metadata + cache code previously would wait until after resizing the + entry to attempt to log the newly-dirtied entry. This + caused H5C_resize_entry to mark the entry as dirty and made + H5AC_resize_entry think that it didn't need to add the + newly-dirtied entry to the dirty entries skiplist. + + Thus, a subsequent H5AC__log_moved_entry would think it + needed to allocate a new entry for insertion into the dirty + entry skip list, since the entry didGn't exist on that list. + This caused an assertion failure, as the code to allocate a + new entry assumes that the entry is not dirty. + + (JRM - 2022/02/28) + + - Issue #1436 identified a problem with the H5_VERS_RELEASE check in the + H5check_version function. + + Investigating the original fix, #812, we discovered some inconsistencies + with a new block added to check H5_VERS_RELEASE for incompatibilities. + This new block was not using the new warning text dealing with the + H5_VERS_RELEASE check and would cause the warning to be duplicated. + + By removing the H5_VERS_RELEASE argument in the first check for + H5_VERS_MAJOR and H5_VERS_MINOR, the second check would only check + the H5_VERS_RELEASE for incompatible release versions. This adheres + to the statement that except for the develop branch, all release versions + in a major.minor maintenance branch should be compatible. The prerequisite + is that an application will not use any APIs not present in all release versions. + + (ADB - 2022/02/24, #1438) + + - Unified handling of collective metadata reads to correctly fix old bugs + + Due to MPI-related issues occurring in HDF5 from mismanagement of the + status of collective metadata reads, they were forced to be disabled + during chunked dataset raw data I/O in the HDF5 1.10.5 release. This + wouldn't generally have affected application performance because HDF5 + already disables collective metadata reads during chunk lookup, since + it is generally unlikely that the same chunks will be read by all MPI + ranks in the I/O operation. However, this was only a partial solution + that wasn't granular enough. + + This change now unifies the handling of the file-global flag and the + API context-level flag for collective metadata reads in order to + simplify querying of the true status of collective metadata reads. Thus, + collective metadata reads are once again enabled for chunked dataset + raw data I/O, but manually controlled at places where some processing + occurs on MPI rank 0 only and would cause issues when collective + metadata reads are enabled. + + (JTH - 2021/11/16, HDFFV-10501/HDFFV-10562) + + - Fixed several potential MPI deadlocks in library failure conditions + + In the parallel library, there were several places where MPI rank 0 + could end up skipping past collective MPI operations when some failure + occurs in rank 0-specific processing. This would lead to deadlocks + where rank 0 completes an operation while other ranks wait in the + collective operation. These places have been rewritten to have rank 0 + push an error and try to cleanup after the failure, then continue to + participate in the collective operation to the best of its ability. 
+ + (JTH - 2021/11/09) + + - Fixed an H5Pget_filter_by_id1/2() assert w/ out of range filter IDs + + Both H5Pget_filter_by_id1 and 2 did not range check the filter ID, which + could trip as assert in debug versions of the library. The library now + returns a normal HDF5 error when the filter ID is out of range. + + (DER - 2021/11/23, HDFFV-11286) + + - Fixed an issue with collective metadata reads being permanently disabled + after a dataset chunk lookup operation. This would usually cause a + mismatched MPI_Bcast and MPI_ERR_TRUNCATE issue in the library for + simple cases of H5Dcreate() -> H5Dwrite() -> H5Dcreate(). + + (JTH - 2021/11/08, HDFFV-11090) + + - Fixed cross platform incompatibility of references within variable length + types + + Reference types within variable length types previously could not be + read on a platform with different endianness from where they were + written. Fixed so cross platform portability is restored. + + (NAF - 2021/09/30) + + - Detection of simple data transform function "x" + + In the case of the simple data transform function "x" the (parallel) + library recognizes this is the same as not applying this data transform + function. This improves the I/O performance. In the case of the parallel + library, it also avoids breaking to independent I/O, which makes it + possible to apply a filter when writing or reading data to or from + the HDF5 file. + + (JWSB - 2021/09/13) + + - Fixed an invalid read and memory leak when parsing corrupt file space + info messages + + When the corrupt file from CVE-2020-10810 was parsed by the library, + the code that imports the version 0 file space info object header + message to the version 1 struct could read past the buffer read from + the disk, causing an invalid memory read. Not catching this error would + cause downstream errors that eventually resulted in a previously + allocated buffer to be unfreed when the library shut down. In builds + where the free lists are in use, this could result in an infinite loop + and SIGABRT when the library shuts down. + + We now track the buffer size and raise an error on attempts to read + past the end of it. + + (DER - 2021/08/12, HDFFV-11053) + + + - Fixed CVE-2018-14460 + + The tool h5repack produced a segfault when the rank in dataspace + message was corrupted, causing invalid read while decoding the + dimension sizes. + + The problem was fixed by ensuring that decoding the dimension sizes + and max values will not go beyond the end of the buffer. + + (BMR - 2021/05/12, HDFFV-11223) + + - Fixed CVE-2018-11206 + + The tool h5dump produced a segfault when the size of a fill value + message was corrupted and caused a buffer overflow. + + The problem was fixed by verifying the fill value's size + against the buffer size before attempting to access the buffer. + + (BMR - 2021/03/15, HDFFV-10480) + + - Fixed CVE-2018-14033 (same issue as CVE-2020-10811) + + The tool h5dump produced a segfault when the storage size message + was corrupted and caused a buffer overflow. + + The problem was fixed by verifying the storage size against the + buffer size before attempting to access the buffer. + + (BMR - 2021/03/15, HDFFV-11159/HDFFV-11049) + + - Remove underscores on header file guards + + Header file guards used a variety of underscores at the beginning of the define. + + Removed all leading (some trailing) underscores from header file guards. + + (ADB - 2021/03/03, #361) + + - Fixed a segmentation fault + + A segmentation fault occurred with a Mathworks corrupted file. 
+ + A detection of accessing a null pointer was added to prevent the problem. + + (BMR - 2021/02/19, HDFFV-11150) + + - Fixed issue with MPI communicator and info object not being + copied into new FAPL retrieved from H5F_get_access_plist + + Added logic to copy the MPI communicator and info object into + the output FAPL. MPI communicator is retrieved from the VFD, while + the MPI info object is retrieved from the file's original FAPL. + + (JTH - 2021/02/15, HDFFV-11109) + + - Fixed problems with vlens and refs inside compound using + H5VLget_file_type() + + Modified library to properly ref count H5VL_object_t structs and only + consider file vlen and reference types to be equal if their files are + the same. + + (NAF - 2021/01/22) + + - Fixed CVE-2018-17432 + + The tool h5repack produced a segfault on a corrupted file which had + invalid rank for scalar or NULL datatype. + + The problem was fixed by modifying the dataspace encode and decode + functions to detect and report invalid rank. h5repack now fails + with an error message for the corrupted file. + + (BMR - 2020/10/26, HDFFV-10590) + + - Creation of dataset with optional filter + + When the combination of type, space, etc doesn't work for filter + and the filter is optional, it was supposed to be skipped but it was + not skipped and the creation failed. + + Allowed the creation of the dataset in such a situation. + + (BMR - 2020/08/13, HDFFV-10933) + + - Explicitly declared dlopen to use RTLD_LOCAL + + dlopen documentation states that if neither RTLD_GLOBAL nor + RTLD_LOCAL are specified, then the default behavior is unspecified. + The default on linux is usually RTLD_LOCAL while macos will default + to RTLD_GLOBAL. + + (ADB - 2020/08/12, HDFFV-11127) + + - H5Sset_extent_none() sets the dataspace class to H5S_NO_CLASS which + causes asserts/errors when passed to other dataspace API calls. + + H5S_NO_CLASS is an internal class value that should not have been + exposed via a public API call. + + In debug builds of the library, this can cause assert() function to + trip. In non-debug builds, it will produce normal library errors. + + The new library behavior is for H5Sset_extent_none() to convert + the dataspace into one of type H5S_NULL, which is better handled + by the library and easier for developers to reason about. + + (DER - 2020/07/27, HDFFV-11027) + + - Fixed issues CVE-2018-13870 and CVE-2018-13869 + + When a buffer overflow occurred because a name length was corrupted + and became very large, h5dump crashed on memory access violation. + + A check for reading pass the end of the buffer was added to multiple + locations to prevent the crashes and h5dump now simply fails with an + error message when this error condition occurs. + + (BMR - 2020/07/22, HDFFV-11120 and HDFFV-11121) + + - Fixed the segmentation fault when reading attributes with multiple threads + + It was reported that the reading of attributes with variable length string + datatype will crash with segmentation fault particularly when the number of + threads is high (>16 threads). The problem was due to the file pointer that + was set in the variable length string datatype for the attribute. That file + pointer was already closed when the attribute was accessed. + + The problem was fixed by setting the file pointer to the current opened file pointer + when the attribute was accessed. Similar patch up was done before when reading + dataset with variable length string datatype. 
+ + (VC - 2020/07/13, HDFFV-11080) + + - Fixed CVE-2020-10810 + + The tool h5clear produced a segfault during an error recovery in + the superblock decoding. An internal pointer was reset to prevent + further accessing when it is not assigned with a value. + + (BMR - 2020/06/29, HDFFV-11053) + + - Fixed CVE-2018-17435 + + The tool h52gif produced a segfault when the size of an attribute + message was corrupted and caused a buffer overflow. + + The problem was fixed by verifying the attribute message's size + against the buffer size before accessing the buffer. h52gif was + also fixed to display the failure instead of silently exiting + after the segfault was eliminated. + + (BMR - 2020/06/19, HDFFV-10591) + + + Java Library + ------------ + - Improve variable-length datatype handling in JNI. + + The existing JNI read-write functions could handle variable-length datatypes + that were simple variable-length datatype with an atomic sub-datatype. More + complex combinations could not be handled. Reworked the JNI read-write functions + to recursively inspect datatypes for variable-length sub-datatypes. + + (ADB - 2022/10/12, HDFFV-8701,10375) + + - JNI utility function does not handle new references. + + The JNI utility function for converting reference data to string did + not use the new APIs. In addition to fixing that function, added new + java tests for using the new APIs. + + (ADB - 2021/02/16, HDFFV-11212) + + - The H5FArray.java class, in which virtually the entire execution time + is spent using the HDFNativeData method that converts from an array + of bytes to an array of the destination Java type. + + 1. Convert the entire byte array into a 1-d array of the desired type, + rather than performing 1 conversion per row; + 2. Use the Java Arrays method copyOfRange to grab the section of the + array from (1) that is desired to be inserted into the destination array. + + (PGT,ADB - 2020/12/13, HDFFV-10865) + + + Configuration + ------------- + - Remove Javadoc generation + + The use of doxygen now supersedes the requirement to build javadocs. We do not + have the resources to continue to support two documentation methods and have + chosen doxygen as our standard. + + (ADB - 2022/12/19) + + - Change the default for building the high-level GIF tools + + The gif2h5 and h52gif high-level tools are deprecated and will be removed + in a future release. The default build setting for them has been changed + from enabled to disabled. A user can enable the build of these tools if + needed. + + autotools: --enable-hlgiftools + cmake: HDF5_BUILD_HL_GIF_TOOLS=ON + + Disabling the GIF tools eliminates the following CVEs: + + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 + + (ADB - 2022/12/16) + + - Change the settings of the *pc files to use the correct format + + The pkg-config files generated by CMake uses incorrect syntax for the 'Requires' + settings. Changing the set to use 'lib-name = version' instead 'lib-name-version' + fixes the issue + + (ADB - 2022/12/06 HDFFV-11355) + + - Move MPI libraries link from PRIVATE to PUBLIC + + The install dependencies were not including the need for MPI libraries when + an application or library was built with the C library. Also updated the + CMake target link command to use the newer style MPI::MPI_C link variable. + + (ADB - 2022/10/27) + + - Corrected path searched by CMake find_package command + + The install path for cmake find_package files had been changed to use + "share/cmake" + for all platforms. 
However the trailing "hdf5" directory was not removed. + This "hdf5" additional directory has been removed. + + (ADB - 2021/09/27) + + - Corrected pkg-config compile script + + It was discovered that the position of the "$@" argument for the command + in the compile script may fail on some platforms and configurations. The + position of the "$@"command argument was moved before the pkg-config sub command. + + (ADB - 2021/08/30) + + - Fixed CMake C++ compiler flags + + A recent refactoring of the C++ configure files accidentally removed the + file that executed the enable_language command for C++ needed by the + HDFCXXCompilerFlags.cmake file. Also updated the intel warnings files, + including adding support for windows platforms. + + (ADB - 2021/08/10) + + - Better support for libaec (open-source Szip library) in CMake + + Implemented better support for libaec 1.0.5 (or later) library. This version + of libaec contains improvements for better integration with HDF5. Furthermore, + the variable USE_LIBAEC_STATIC has been introduced to allow to make use of + static version of libaec library. Use libaec_DIR or libaec_ROOT to set + the location in which libaec can be found. + + Be aware, the Szip library of libaec 1.0.4 depends on another library within + libaec library. This dependency is not specified in the current CMake + configuration which means that one can not use the static Szip library of + libaec 1.0.4 when building HDF5. This has been resolved in libaec 1.0.5. + + (JWSB - 2021/06/22) + + - Refactor CMake configure for Fortran + + The Fortran configure tests for KINDs reused a single output file that was + read to form the Integer and Real Kinds defines. However, if config was run + more then once, the CMake completed variable prevented the tests from executing + again and the last value saved in the file was used to create the define. + Creating separate files for each KIND solved the issue. + + In addition the test for H5_PAC_C_MAX_REAL_PRECISION was not pulling in + defines for proper operation and did not define H5_PAC_C_MAX_REAL_PRECISION + correctly for a zero value. This was fixed by supplying the required defines. + In addition it was moved from the Fortran specific HDF5UseFortran.camke file + to the C centric ConfigureChecks.cmake file. + + (ADB - 2021/06/03) + + - Move emscripten flag to compile flags + + The emscripten flag, -O0, was removed from target_link_libraries command + to the correct target_compile_options command. + + (ADB - 2021/04/26 HDFFV-11083) + + - Remove arbitrary warning flag groups from CMake builds + + The arbitrary groups were created to reduce the quantity of warnings being + reported that overwhelmed testing report systems. Considerable work has + been accomplished to reduce the warning count and these arbitrary groups + are no longer needed. + Also the default for all warnings, HDF5_ENABLE_ALL_WARNINGS, is now ON. + + Visual Studio warnings C4100, C4706, and C4127 have been moved to + developer warnings, HDF5_ENABLE_DEV_WARNINGS, and are disabled for normal builds. + + (ADB - 2021/03/22, HDFFV-11228) + + - Reclassify CMake messages, to allow new modes and --log-level option + + CMake message commands have a mode argument. By default, STATUS mode + was chosen for any non-error message. CMake version 3.15 added additional + modes, NOTICE, VERBOSE, DEBUG and TRACE. All message commands with a mode + of STATUS were reviewed and most were reclassified as VERBOSE. The new + mode was protected by a check for a CMake version of at least 3.15. 
If CMake + version 3.17 or above is used, the user can use the command line option + of "--log-level" to further restrict which message commands are displayed. + + (ADB - 2021/01/11, HDFFV-11144) + + - Fixes Autotools determination of the stat struct having an st_blocks field + + A missing parenthesis in an autoconf macro prevented building the test + code used to determine if the stat struct contains the st_blocks field. + Now that the test functions correctly, the H5_HAVE_STAT_ST_BLOCKS #define + found in H5pubconf.h will be defined correctly on both the Autotools and + CMake. This #define is only used in the tests and does not affect the + HDF5 C library. + + (DER - 2021/01/07, HDFFV-11201) + + - Add missing ENV variable line to hdfoptions.cmake file + + Using the build options to use system SZIP/ZLIB libraries need to also + specify the library root directory. Setting the {library}_ROOT ENV + variable was added to the hdfoptions.cmake file. + + (ADB - 2020/10/19 HDFFV-11108) + + + Tools + ----- + - Fix h5repack to only print output when verbose option is selected + + When timing option was added to h5repack, the check for verbose was + incorrectly implemented. + + (ADB - 2022/12/02, GH #2270) + + - Changed how h5dump and h5ls identify long double. + + Long double support is not consistent across platforms. Tools will always + identify long double as 128-bit [little/big]-endian float nn-bit precision. + New test file created for datasets with attributes for float, double and + long double. In addition any unknown integer or float datatype will now + also show the number of bits for precision. + These files are also used in the java tests. + + (ADB - 2021/03/24, HDFFV-11229,HDFFV-11113) + + - Fixed tools argument parsing. + + Tools parsing used the length of the option from the long array to match + the option from the command line. This incorrectly matched a shorter long + name option that happened to be a subset of another long option. + Changed to match whole names. + + (ADB - 2021/01/19, HDFFV-11106) + + - The tools library was updated by standardizing the error stack process. + + General sequence is: + h5tools_setprogname(PROGRAMNAME); + h5tools_setstatus(EXIT_SUCCESS); + h5tools_init(); + ... process the command-line (check for error-stack enable) ... + h5tools_error_report(); + ... (do work) ... + h5diff_exit(ret); + + (ADB - 2020/07/20, HDFFV-11066) + + - h5diff fixed a command line parsing error. + + h5diff would ignore the argument to -d (delta) if it is smaller than DBL_EPSILON. + The macro H5_DBL_ABS_EQUAL was removed and a direct value comparison was used. + + (ADB - 2020/07/20, HDFFV-10897) + + - h5diff added a command line option to ignore attributes. + + h5diff would ignore all objects with a supplied path if the exclude-path argument is used. + Adding the exclude-attribute argument will only exclude attributes, with the supplied path, + from comparison. + + (ADB - 2020/07/20, HDFFV-5935) + + - h5diff added another level to the verbose argument to print filenames. + + Added verbose level 3 that is level 2 plus the filenames. The levels are: + 0 : Identical to '-v' or '--verbose' + 1 : All level 0 information plus one-line attribute status summary + 2 : All level 1 information plus extended attribute status report + 3 : All level 2 information plus file names + + (ADB - 2020/07/20, HDFFV-1005) + + + Performance + ------------- + - + + + Fortran API + ----------- + - h5open_f and h5close_f fixes + * Fixed it so both h5open_f and h5close_f can be called multiple times. 
+ * Fixed an issue with open objects remaining after h5close_f was called. + * Added additional tests. + (MSB, 2022/04/19, HDFFV-11306) + + + High-Level Library + ------------------ + - Fixed HL_test_packet, test for packet table vlen of vlen. + + Incorrect length assignment. + + (ADB - 2021/10/14) + + + Fortran High-Level APIs + ----------------------- + - + + + Documentation + ------------- + - + + + F90 APIs + -------- + - + + + C++ APIs + -------- + - Added DataSet::operator= + + Some compilers complain if the copy constructor is given explicitly + but the assignment operator is implicitly set to default. + + (2021/05/19) + + + Testing + ------- + - Stopped java/test/junit.sh.in installing libs for testing under ${prefix} + + Lib files needed are now copied to a subdirectory in the java/test + directory, and on Macs the loader path for libhdf5.xxxs.so is changed + in the temporary copy of libhdf5_java.dylib. + + (LRK, 2020/07/02, HDFFV-11063) + + +Platforms Tested +=================== + + Linux 5.16.14-200.fc35 GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) + Fedora35 clang version 13.0.0 (Fedora 13.0.0-3.fc35) + (cmake and autotools) + + Linux 5.15.0-1026-aws gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 + #30-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 + Ubuntu 22.04 Ubuntu clang version 14.0.0-1ubuntu1 + (cmake and autotools) + + Linux 5.13.0-1031-aws GNU gcc (GCC) 9.4.0-1ubuntu1 + #35-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 9.4.0-1ubuntu1 + Ubuntu 20.04 clang version 10.0.0-4ubuntu1 + (cmake and autotools) + + Linux 5.3.18-150300-cray_shasta_c cray-mpich/8.3.3 + #1 SMP x86_64 GNU/Linux Cray clang 14.0.2, 15.0.0 + (crusher) GCC 11.2.0, 12.1.0 + (cmake) + + Linux 4.18.0-348.7.1.el8_5 gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4) + #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4) + CentOS8 clang version 12.0.1 (Red Hat 12.0.1) + (cmake and autotools) + + Linux 4.14.0-115.35.1.1chaos openmpi 4.0.5 + #1 SMP aarch64 GNU/Linux GCC 9.3.0 (ARM-build-5) + (stria) GCC 7.2.0 (Spack GCC) + arm/20.1 + arm/22.1 + (cmake) + + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1 + (vortex) GCC 8.3.1 + XL 16.1.1 + (cmake) + + Linux-4.14.0-115.21.2 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 12.0.1, 14.0.5 + (lassen) GCC 8.3.1 + XL 16.1.1.2, 2021,09.22, 2022.08.05 + (cmake) + + Linux-4.12.14-197.99-default cray-mpich/7.7.14 + #1 SMP x86_64 GNU/Linux cce 12.0.3 + (theta) GCC 11.2.0 + llvm 9.0 + Intel 19.1.2 + + Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) + + Linux 3.10.0-1160.24.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) + (jelly/kituo/moohan) Version 4.9.3, Version 5.3.0, Version 6.3.0, + Version 7.2.0, Version 8.3.0, Version 9.1.0 + Intel(R) C (icc), C++ (icpc), Fortran (icc) + compilers: + Version 17.0.0.098 Build 20160721 + GNU C (gcc) and C++ (g++) 4.8.5 compilers + with NAG Fortran Compiler Release 6.1(Tozai) + Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers + with NAG Fortran Compiler Release 6.1(Tozai) + MPICH 3.1.4 compiled with GCC 4.9.3 + MPICH 3.3 compiled with GCC 7.2.0 + OpenMPI 2.1.6 compiled with icc 18.0.1 + OpenMPI 
3.1.3 and 4.0.0 compiled with GCC 7.2.0 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + (autotools and cmake) + + Linux-3.10.0-1160.0.0.1chaos openmpi-4.1.2 + #1 SMP x86_64 GNU/Linux clang 6.0.0, 11.0.1 + (quartz) GCC 7.3.0, 8.1.0 + Intel 19.0.4, 2022.2, oneapi.2022.2 + + Linux-3.10.0-1160.71.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (skybridge) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.66.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux GCC 7.2.0 + (attaway) Intel/19.1 + (cmake) + + Linux-3.10.0-1160.59.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux Intel/19.1 + (chama) (cmake) + + macOS Apple M1 11.6 Apple clang version 12.0.5 (clang-1205.0.22.11) + Darwin 20.6.0 arm64 gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0 + (macmini-m1) Intel icc/icpc/ifort version 2021.3.0 202106092021.3.0 20210609 + + macOS Big Sur 11.3.1 Apple clang version 12.0.5 (clang-1205.0.22.9) + Darwin 20.4.0 x86_64 gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0 + (bigsur-1) Intel icc/icpc/ifort version 2021.2.0 20210228 + + macOS High Sierra 10.13.6 Apple LLVM version 10.0.0 (clang-1000.10.44.4) + 64-bit gfortran GNU Fortran (GCC) 6.3.0 + (bear) Intel icc/icpc/ifort version 19.0.4.233 20190416 + + macOS Sierra 10.12.6 Apple LLVM version 9.0.0 (clang-900.39.2) + 64-bit gfortran GNU Fortran (GCC) 7.4.0 + (kite) Intel icc/icpc/ifort version 17.0.2 + + Mac OS X El Capitan 10.11.6 Apple clang version 7.3.0 from Xcode 7.3 + 64-bit gfortran GNU Fortran (GCC) 5.2.0 + (osx1011test) Intel icc/icpc/ifort version 16.0.2 + + + Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) + #1 SMP x86_64 GNU/Linux compilers: + Centos6 Version 4.4.7 20120313 + (platypus) Version 4.9.3, 5.3.0, 6.2.0 + MPICH 3.1.4 compiled with GCC 4.9.3 + PGI C, Fortran, C++ for 64-bit target on + x86_64; + Version 19.10-0 + + Windows 10 x64 Visual Studio 2015 w/ Intel C/C++/Fortran 18 (cmake) + Visual Studio 2017 w/ Intel C/C++/Fortran 19 (cmake) + Visual Studio 2019 w/ clang 12.0.0 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2019 w/ Intel C/C++/Fortran oneAPI 2022 (cmake) + Visual Studio 2022 w/ clang 15.0.1 + with MSVC-like command-line (C/C++ only - cmake) + Visual Studio 2022 w/ Intel C/C++/Fortran oneAPI 2022 (cmake) + Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake) + + +Known Problems +============== + + ************************************************************ + * _ * + * (_) * + * __ ____ _ _ __ _ __ _ _ __ __ _ * + * \ \ /\ / / _` | '__| '_ \| | '_ \ / _` | * + * \ V V / (_| | | | | | | | | | | (_| | * + * \_/\_/ \__,_|_| |_| |_|_|_| |_|\__, | * + * __/ | * + * |___/ * + * * + * Please refrain from running any program (including * + * HDF5 tests) which uses the subfiling VFD on Perlmutter * + * at the National Energy Research Scientific Computing * + * Center, NERSC. * + * Doing so may cause a system disruption due to subfiling * + * crashing Lustre. The system's Lustre bug is expected * + * to be resolved by 2023. * + * * + ************************************************************ + + There is a bug in OpenMPI 4.1.0-4.1.4 that can result in incorrect + results from MPI I/O requests unless one of the following parameters + is passed to mpirun: + + --mca io ^ompio + + --mca fbtl_posix_read_data_sieving 0 + + This bug has been fixed in later versions of OpenMPI. 
+ + Further discussion can be found here: + + https://www.hdfgroup.org/2022/11/workarounds-for-openmpi-bug-exposed-by-make-check-in-hdf5-1-13-3/ + + When using the subfiling feature with OpenMPI it is often necessary to + increase the maximum number of threads: + + --mca common_pami_max_threads 4096 + + There is a bug in MPICH 4.0.0-4.0.3 where using device=ch4:ofi (the default) + can cause failures in the testphdf5 test program. Using ch4:ucx or ch3 + allows the test to pass. The bug appears to be fixed in the upcoming 4.1 + release. + + These MPI implementation bugs may also be present in implementations derived + from OpenMPI or MPICH. The workarounds listed above may need to be adjusted + to match the derived implementation, or in some cases, there may be no + workaround. + + The accum test fails on MacOS 12.6.2 (Monterey) with clang 14.0.0. The + reason for this failure and its impact are unknown. + + The onion test has failures on Windows when built using Intel OneAPI + 2022.3. The cause of these failures is under investigation. + + CMake files do not behave correctly with paths containing spaces. + Do not use spaces in paths because the required escaping for handling spaces + results in very complex and fragile build files. + ADB - 2019/05/07 + + At present, metadata cache images may not be generated by parallel + applications. Parallel applications can read files with metadata cache + images, but since this is a collective operation, a deadlock is possible + if one or more processes do not participate. + + CPP ptable test fails on both VS2017 and VS2019 with Intel compiler, JIRA + issue: HDFFV-10628. This test will pass with VS2015 with Intel compiler. + + The subsetting option in ph5diff currently will fail and should be avoided. + The subsetting option works correctly in serial h5diff. + + Several tests currently fail on certain platforms: + MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms. + + MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with + cray-mpich on theta and with XL compilers on ppc64le platforms. + + MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on cori and theta. + + Known problems in previous releases can be found in the HISTORY*.txt files + in the HDF5 source. Please report any new problems found to + help@hdfgroup.org. + + +CMake vs. Autotools installations +================================= +While both build systems produce similar results, there are differences. +Each system produces the same set of folders on linux (only CMake works +on standard Windows); bin, include, lib and share. Autotools places the +COPYING and RELEASE.txt file in the root folder, CMake places them in +the share folder. + +The bin folder contains the tools and the build scripts. Additionally, CMake +creates dynamic versions of the tools with the suffix "-shared". Autotools +installs one set of tools depending on the "--enable-shared" configuration +option. + build scripts + ------------- + Autotools: h5c++, h5cc, h5fc + CMake: h5c++, h5cc, h5hlc++, h5hlcc + +The include folder holds the header files and the fortran mod files. CMake +places the fortran mod files into separate shared and static subfolders, +while Autotools places one set of mod files into the include folder. Because +CMake produces a tools library, the header files for tools will appear in +the include folder. + +The lib folder contains the library files, and CMake adds the pkgconfig +subfolder with the hdf5*.pc files used by the bin/build scripts created by +the CMake build. 
CMake separates the C interface code from the fortran code by +creating C-stub libraries for each Fortran library. In addition, only CMake +installs the tools library. The names of the szip libraries are different +between the build systems. + +The share folder will have the most differences because CMake builds include +a number of CMake specific files for support of CMake's find_package and support +for the HDF5 Examples CMake project. + +The issues with the gif tool are: + HDFFV-10592 CVE-2018-17433 + HDFFV-10593 CVE-2018-17436 + HDFFV-11048 CVE-2020-10809 +These CVE issues have not yet been addressed and are avoided by not building +the gif tool by default. Enable building the High-Level tools with these options: + autotools: --enable-hltools + cmake: HDF5_BUILD_HL_TOOLS=ON diff --git a/release_docs/HISTORY-1_16.txt b/release_docs/HISTORY-1_16.txt new file mode 100644 index 00000000000..1841a57c1ab --- /dev/null +++ b/release_docs/HISTORY-1_16.txt @@ -0,0 +1,9 @@ +HDF5 History +============ + +This file contains development history of the HDF5 1.16 branch + +01. Release Information for hdf5-1.16.0 + +[Search on the string '%%%%' for section breaks of each release.] + diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt index baedbe1124d..1013e6c8ac9 100644 --- a/release_docs/RELEASE.txt +++ b/release_docs/RELEASE.txt @@ -36,7 +36,7 @@ CONTENTS - New Features - Support for new platforms and languages -- Bug Fixes since HDF5-1.14.0 +- Bug Fixes since HDF5-1.14.5 - Platforms Tested - Known Problems - CMake vs. Autotools installations @@ -47,12 +47,6 @@ New Features Configuration: ------------- - - Added signed Windows msi binary and signed Apple dmg binary files. - - The release process now provides signed Windows and Apple installation - binaries in addition to the debian and rpm installation binaries. Also - these installer files are no longer compressed into packaged archives. - - Added configuration option for internal threading/concurrency support: CMake: HDF5_ENABLE_THREADS (ON/OFF) (Default: ON) @@ -65,399 +59,9 @@ New Features disable the 'threadsafe' option, but not vice versa. The 'threads' option must be on to enable the subfiling VFD. - - Moved examples to the HDF5Examples folder in the source tree. - - Moved the C++ and Fortran examples from the examples folder to the HDF5Examples - folder and renamed to TUTR, tutorial. This is referenced from the LearnBasics - doxygen page. - - - Added support for using zlib-ng package as the zlib library: - - CMake: HDF5_USE_ZLIB_NG - Autotools: --enable-zlibng - - Added the option HDF5_USE_ZLIB_NG to allow the replacement of the - default ZLib package by the zlib-ng package as a built-in compression library. - - - Disable CMake UNITY_BUILD for hdf5 - - CMake added a target property, UNITY_BUILD, that when set to true, the target - source files will be combined into batches for faster compilation. By default, - the setting is OFF, but could be enabled by a project that includes HDF5 as a subproject. - - HDF5 has disabled this feature by setting the property to OFF in the HDFMacros.cmake file. - - - Removed "function/code stack" debugging configuration option: - - CMake: HDF5_ENABLE_CODESTACK - Autotools: --enable-codestack - - This was used to debug memory leaks internal to the library, but has been - broken for >1.5 years and is now easily replaced with third-party tools - (e.g. libbacktrace: https://github.com/ianlancetaylor/libbacktrace) on an - as-needed basis when debugging an issue. 
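      For example, an application that cares about the HDF5_ENABLE_THREADS /
      threadsafe configuration options described earlier in this section can
      confirm at run time whether the library it is linked against was built
      with the thread-safe API lock. This is only a sketch; the
      H5is_library_threadsafe() call used here is a long-standing public
      routine and is not new functionality in this release.

          #include <stdio.h>
          #include "hdf5.h"

          /* Sketch: report whether the linked HDF5 library was built
           * with the thread-safe API lock. */
          int main(void)
          {
              hbool_t is_ts = 0;

              if (H5is_library_threadsafe(&is_ts) < 0)
                  return 1;

              printf("HDF5 thread safety: %s\n", is_ts ? "enabled" : "disabled");
              return 0;
          }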
- - - Added configure options for enabling/disabling non-standard programming - language features - - * Added a new configuration option that allows enabling or disabling of - support for features that are extensions to programming languages, such - as support for the _Float16 datatype: - - CMake: HDF5_ENABLE_NONSTANDARD_FEATURES (ON/OFF) (Default: ON) - Autotools: --enable-nonstandard-features (yes/no) (Default: yes) - - When this option is enabled, configure time checks are still performed - to ensure that a feature can be used properly, but these checks may not - be sufficient when compiler support for a feature is incomplete or broken, - resulting in library build failures. When set to OFF/no, this option - provides a way to disable support for all non-standard features to avoid - these issues. Individual features can still be re-enabled with their - respective configuration options. - - * Added a new configuration option that allows enabling or disabling of - support for the _Float16 C datatype: - - CMake: HDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16 (ON/OFF) (Default: ON) - Autotools: --enable-nonstandard-feature-float16 (yes/no) (Default: yes) - - While support for the _Float16 C datatype can generally be detected and - used properly, some compilers have incomplete support for the datatype - and will pass configure time checks while still failing to build HDF5. - This option provides a way to disable support for the _Float16 datatype - when the compiler doesn't have the proper support for it. - - - Deprecate bin/cmakehdf5 script - - With the improvements made in CMake since version 3.23 and the addition - of CMake preset files, this script is no longer necessary. - - See INSTALL_CMake.txt file, Section X: Using CMakePresets.json for compiling - - - Overhauled LFS support checks - - In 2024, we can assume that Large File Support (LFS) exists on all - systems we support, though it may require flags to enable it, - particularly when building 32-bit binaries. The HDF5 source does - not use any of the 64-bit specific API calls (e.g., ftello64) - or explicit 64-bit offsets via off64_t. - - Autotools - - * We now use AC_SYS_LARGEFILE to determine how to support LFS. We - previously used a custom m4 script for this. - - CMake - - * The HDF_ENABLE_LARGE_FILE option (advanced) has been removed - * We no longer run a test program to determine if LFS works, which - will help with cross-compiling - * On Linux we now unilaterally set -D_LARGEFILE_SOURCE and - -D_FILE_OFFSET_BITS=64, regardless of 32/64 bit system. CMake - doesn't offer a nice equivalent to AC_SYS_LARGEFILE and since - those options do nothing on 64-bit systems, this seems safe and - covers all our bases. We don't set -D_LARGEFILE64_SOURCE since - we don't use any of the POSIX 64-bit specific API calls like - ftello64, as noted above. - * We didn't test for LFS support on non-Linux platforms. We've added - comments for how LFS should probably be supported on AIX and Solaris, - which seem to be alive, though uncommon. PRs would be appreciated if - anyone wishes to test this. - - This overhaul also fixes GitHub #2395, which points out that the LFS flags - used when building with CMake differ based on whether CMake has been - run before. The LFS check program that caused this problem no longer exists. - - - The CMake HDF5_ENABLE_DEBUG_H5B option has been removed - - This enabled some additional version-1 B-tree checks. These have been - removed so the option is no longer necessary. - - This option was CMake-only and marked as advanced. 
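      As an illustration of the non-standard _Float16 support controlled by
      the HDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16 option described earlier in
      this section, application code can guard its use of half-precision data
      on the H5_HAVE__FLOAT16 feature macro and the H5T_NATIVE_FLOAT16 /
      H5T_IEEE_F16LE datatypes described later in these notes. This is a
      sketch only; the dataset name and dimensions are made up for
      illustration and error checking is omitted.

          #include "hdf5.h"

          /* Sketch: write a small 1-D dataset as half-precision when the
           * library was configured with _Float16 support, falling back to
           * single precision otherwise. */
          static void write_values(hid_t file_id)
          {
              hsize_t dims[1] = {4};
              hid_t   space_id = H5Screate_simple(1, dims, NULL);
              hid_t   dset_id;

          #ifdef H5_HAVE__FLOAT16
              _Float16 buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
              dset_id = H5Dcreate2(file_id, "values", H5T_IEEE_F16LE, space_id,
                                   H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
              H5Dwrite(dset_id, H5T_NATIVE_FLOAT16, H5S_ALL, H5S_ALL,
                       H5P_DEFAULT, buf);
          #else
              float buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
              dset_id = H5Dcreate2(file_id, "values", H5T_IEEE_F32LE, space_id,
                                   H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
              H5Dwrite(dset_id, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
                       H5P_DEFAULT, buf);
          #endif

              H5Dclose(dset_id);
              H5Sclose(space_id);
          }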
- - - New option for building with static CRT in Windows - - The following option has been added: - HDF5_BUILD_STATIC_CRT_LIBS "Build With Static Windows CRT Libraries" OFF - Because our minimum CMake is 3.18, the macro to change runtime flags no longer - works as CMake changed the default behavior in CMake 3.15. - - Fixes GitHub issue #3984 - - - Added support for the new MSVC preprocessor - - Microsoft added support for a new, standards-conformant preprocessor - to MSVC, which can be enabled with the /Zc:preprocessor option. This - preprocessor would trip over our HDopen() variadic function-like - macro, which uses a feature that only works with the legacy preprocessor. - - ifdefs have been added that select the correct HDopen() form and - allow building HDF5 with the /Zc:preprocessor option. - - The HDopen() macro is located in an internal header file and only - affects building the HDF5 library from source. - - Fixes GitHub #2515 - - - Renamed HDF5_ENABLE_USING_MEMCHECKER to HDF5_USING_ANALYSIS_TOOL - - The HDF5_USING_ANALYSIS_TOOL is used to indicate to test macros that - an analysis tool is being used and that the tests should not use - the runTest.cmake macros and it's variations. The analysis tools, - like valgrind, test the macro code instead of the program under test. - - HDF5_ENABLE_USING_MEMCHECKER is still used for controlling the HDF5 - define, H5_USING_MEMCHECKER. - - - New option for building and naming tools in CMake - - The following option has been added: - HDF5_BUILD_STATIC_TOOLS "Build Static Tools Not Shared Tools" OFF - - The default will build shared tools unless BUILD_SHARED_LIBS = OFF. - Tools will no longer have "-shared" as only one set of tools will be created. - - - Incorporated HDF5 examples repository into HDF5 library. - - The HDF5Examples folder is equivalent to the hdf5-examples repository. - This enables building and testing the examples - during the library build process or after the library has been installed. - Previously, the hdf5-examples archives were downloaded - for packaging with the library. Now the examples can be built - and tested without a packaged install of the library. - - However, to maintain the ability to use the HDF5Examples with an installed - library, it is necessary to map the option names used by the library - to those used by the examples. The typical pattern is: - = - HDF_BUILD_FORTRAN = ${HDF5_BUILD_FORTRAN} - - - Added new option for CMake to mark tests as SKIPPED. - - HDF5_DISABLE_TESTS_REGEX is a REGEX string that will be checked with - test names and if there is a match then that test's property will be - set to DISABLED. HDF5_DISABLE_TESTS_REGEX can be initialized on the - command line: "-DHDF5_DISABLE_TESTS_REGEX:STRING=" - See CMake documentation for regex-specification. - - - Added defaults to CMake for long double conversion checks - - HDF5 performs a couple of checks at build time to see if long double - values can be converted correctly (IBM's Power architecture uses a - special format for long doubles). These checks were performed using - TRY_RUN, which is a problem when cross-compiling. - - These checks now use default values appropriate for most non-Power - systems when cross-compiling. The cache values can be pre-set if - necessary, which will preempt both the TRY_RUN and the default. 
- - Affected values: - H5_LDOUBLE_TO_LONG_SPECIAL (default no) - H5_LONG_TO_LDOUBLE_SPECIAL (default no) - H5_LDOUBLE_TO_LLONG_ACCURATE (default yes) - H5_LLONG_TO_LDOUBLE_CORRECT (default yes) - H5_DISABLE_SOME_LDOUBLE_CONV (default no) - - Fixes GitHub #3585 - - - Improved support for Intel oneAPI - - * Separates the old 'classic' Intel compiler settings and warnings - from the oneAPI settings - * Uses `-check nouninit` in debug builds to avoid false positives - when building H5_buildiface with `-check all` - * Both Autotools and CMake - - - Added new options for CMake and Autotools to control the Doxygen - warnings as errors setting. - - * HDF5_ENABLE_DOXY_WARNINGS: ON/OFF (Default: ON) - * --enable-doxygen-errors: enable/disable (Default: enable) - - The default will fail compile if the doxygen parsing generates warnings. - The option can be disabled if certain versions of doxygen have parsing - issues. i.e. 1.9.5, 1.9.8. - - Addresses GitHub issue #3398 - - - Added support for AOCC and classic Flang w/ the Autotools - - * Adds a config/clang-fflags options file to support Flang - * Corrects missing "-Wl," from linker options in the libtool wrappers - when using Flang, the MPI Fortran compiler wrappers, and building - the shared library. This would often result in unrecognized options - like -soname. - * Enable -nomp w/ Flang to avoid linking to the OpenMPI library. - - CMake can build the parallel, shared library w/ Fortran using AOCC - and Flang, so no changes were needed for that build system. - - Fixes GitHub issues #3439, #1588, #366, #280 - - - Converted the build of libaec and zlib to use FETCH_CONTENT with CMake. - - Using the CMake FetchContent module, the external filters can populate - content at configure time via any method supported by the ExternalProject - module. Whereas ExternalProject_Add() downloads at build time, the - FetchContent module makes content available immediately, allowing the - configure step to use the content in commands like add_subdirectory(), - include() or file() operations. - - Removed HDF options for using FETCH_CONTENT explicitly: - BUILD_SZIP_WITH_FETCHCONTENT:BOOL - BUILD_ZLIB_WITH_FETCHCONTENT:BOOL - - - Thread-safety + static library disabled on Windows w/ CMake - - The thread-safety feature requires hooks in DllMain(), which is only - present in the shared library. - - We previously just warned about this, but now any CMake configuration - that tries to build thread-safety and the static library will fail. - This cannot be overridden with ALLOW_UNSUPPORTED. - - Fixes GitHub issue #3613 - - - Autotools builds now build the szip filter by default when an appropriate - library is found - - Since libaec is prevalent and BSD-licensed for both encoding and - decoding, we build the szip filter by default now. - - Both autotools and CMake build systems will process the szip filter the same as - the zlib filter is processed. - - - Removed CMake cross-compiling variables - - * HDF5_USE_PREGEN - * HDF5_BATCH_H5DETECT - - These were used to work around H5detect and H5make_libsettings and - are no longer required. - - - Running H5make_libsettings is no longer required for cross-compiling - - The functionality of H5make_libsettings is now handled via template files, - so H5make_libsettings has been removed. - - - Running H5detect is no longer required for cross-compiling - - The functionality of H5detect is now exercised at library startup, - so H5detect has been removed. 
- - - Updated HDF5 API tests CMake code to support VOL connectors - - * Implemented support for fetching, building and testing HDF5 - VOL connectors during the library build process and documented - the feature under doc/cmake-vols-fetchcontent.md - - * Implemented the HDF5_TEST_API_INSTALL option that enables - installation of the HDF5 API tests on the system - - - Added new CMake options for building and running HDF5 API tests - (Experimental) - - HDF5 API tests are an experimental feature, primarily targeted - toward HDF5 VOL connector authors, that is currently being developed. - These tests exercise the HDF5 API and are being integrated back - into the HDF5 library from the HDF5 VOL tests repository - (https://github.com/HDFGroup/vol-tests). To support this feature, - the following new options have been added to CMake: - - * HDF5_TEST_API: ON/OFF (Default: OFF) - - Controls whether the HDF5 API tests will be built. These tests - will only be run during testing of HDF5 if the HDF5_TEST_SERIAL - (for serial tests) and HDF5_TEST_PARALLEL (for parallel tests) - options are enabled. - - * HDF5_TEST_API_INSTALL: ON/OFF (Default: OFF) - - Controls whether the HDF5 API test executables will be installed - on the system alongside the HDF5 library. This option is currently - not functional. - - * HDF5_TEST_API_ENABLE_ASYNC: ON/OFF (Default: OFF) - - Controls whether the HDF5 Async API tests will be built. These - tests will only be run if the VOL connector used supports Async - operations. - - * HDF5_TEST_API_ENABLE_DRIVER: ON/OFF (Default: OFF) - - Controls whether to build the HDF5 API test driver program. This - test driver program is useful for VOL connectors that use a - client/server model where the server needs to be up and running - before the VOL connector can function. This option is currently - not functional. - - * HDF5_TEST_API_SERVER: String (Default: "") - - Used to specify a path to the server executable that the test - driver program should execute. - - - Added support for CMake presets file. - - CMake supports two main files, CMakePresets.json and CMakeUserPresets.json, - that allow users to specify common configure options and share them with others. - HDF added a CMakePresets.json file of a typical configuration and support - file, config/cmake-presets/hidden-presets.json. - Also added a section to INSTALL_CMake.txt with very basic explanation of the - process to use CMakePresets. - - - Deprecated and removed old SZIP library in favor of LIBAEC library - - LIBAEC library has been used in HDF5 binaries as the szip library of choice - for a few years. We are removing the options for using the old SZIP library. - - Also removed the config/cmake/FindSZIP.cmake file. - - - Enabled instrumentation of the library by default in CMake for parallel - debug builds - - HDF5 can be configured to instrument portions of the parallel library to - aid in debugging. Autotools builds of HDF5 turn this capability on by - default for parallel debug builds and off by default for other build types. - CMake has been updated to match this behavior. - - - Added new option to build libaec and zlib inline with CMake. - - Using the CMake FetchContent module, the external filters can populate - content at configure time via any method supported by the ExternalProject - module. 
Whereas ExternalProject_Add() downloads at build time, the - FetchContent module makes content available immediately, allowing the - configure step to use the content in commands like add_subdirectory(), - include() or file() operations. - - The HDF options (and defaults) for using this are: - BUILD_SZIP_WITH_FETCHCONTENT:BOOL=OFF - LIBAEC_USE_LOCALCONTENT:BOOL=OFF - BUILD_ZLIB_WITH_FETCHCONTENT:BOOL=OFF - ZLIB_USE_LOCALCONTENT:BOOL=OFF - - The CMake variables to control the path and file names: - LIBAEC_TGZ_ORIGPATH:STRING - LIBAEC_TGZ_ORIGNAME:STRING - ZLIB_TGZ_ORIGPATH:STRING - ZLIB_TGZ_ORIGNAME:STRING - - See the CMakeFilters.cmake and config/cmake/cacheinit.cmake files for usage. - - - Added the CMake variable HDF5_ENABLE_ROS3_VFD to the HDF5 CMake config - file hdf5-config.cmake. This allows to easily detect if the library - has been built with or without read-only S3 functionality. Library: -------- - - Added new routines for interacting with error stacks: H5Epause_stack, - H5Eresume_stack, and H5Eis_paused. These routines can be used to - indicate that errors from a call to an HDF5 routine should not be - pushed on to an error stack. Primarily targeted toward 3rd-party - developers of Virtual File Drivirs (VFDs) and Virtual Object Layer (VOL) - connectors, these routines allow developers to perform "speculative" - operations (such as trying to open a file or object) without requiring - that the error stack be cleared after a speculative operation fails. - - H5Pset_external() now uses HDoff_t, which is always a 64-bit type The H5Pset_external() call took an off_t parameter in HDF5 1.14.x and @@ -474,302 +78,16 @@ New Features Fixes GitHub issue #3506 - - Relaxed behavior of H5Pset_page_buffer_size() when opening files - - This API call sets the size of a file's page buffer cache. This call - was extremely strict about matching its parameters to the file strategy - and page size used to create the file, requiring a separate open of the - file to obtain these parameters. - - These requirements have been relaxed when using the fapl to open - a previously-created file: - - * When opening a file that does not use the H5F_FSPACE_STRATEGY_PAGE - strategy, the setting is ignored and the file will be opened, but - without a page buffer cache. This was previously an error. - - * When opening a file that has a page size larger than the desired - page buffer cache size, the page buffer cache size will be increased - to the file's page size. This was previously an error. - - The behavior when creating a file using H5Pset_page_buffer_size() is - unchanged. - - Fixes GitHub issue #3382 - - - Added support for _Float16 16-bit half-precision floating-point datatype - - Support for the _Float16 C datatype has been added on platforms where: - - - The _Float16 datatype and its associated macros (FLT16_MIN, FLT16_MAX, - FLT16_EPSILON, etc.) are available - - A simple test program that converts between the _Float16 datatype and - other datatypes with casts can be successfully compiled and run at - configure time. Some compilers appear to be buggy or feature-incomplete - in this regard and will generate calls to compiler-internal functions - for converting between the _Float16 datatype and other datatypes, but - will not link these functions into the build, resulting in build - failures. - - The following new macros have been added: - - H5_HAVE__FLOAT16 - This macro is defined in H5pubconf.h and will have - the value 1 if support for the _Float16 datatype is - available. It will not be defined otherwise. 
- - H5_SIZEOF__FLOAT16 - This macro is defined in H5pubconf.h and will have - a value corresponding to the size of the _Float16 - datatype, as computed by sizeof(). It will have the - value 0 if support for the _Float16 datatype is not - available. - - H5_HAVE_FABSF16 - This macro is defined in H5pubconf.h and will have the - value 1 if the fabsf16 function is available for use. - - H5_LDOUBLE_TO_FLOAT16_CORRECT - This macro is defined in H5pubconf.h and - will have the value 1 if the platform can - correctly convert long double values to - _Float16. Some compilers have issues with - this. - - H5T_NATIVE_FLOAT16 - This macro maps to the ID of an HDF5 datatype representing - the native C _Float16 datatype for the platform. If - support for the _Float16 datatype is not available, the - macro will map to H5I_INVALID_HID and should not be used. - - H5T_IEEE_F16BE - This macro maps to the ID of an HDF5 datatype representing - a big-endian IEEE 754 16-bit floating-point datatype. This - datatype is available regardless of whether _Float16 support - is available or not. - - H5T_IEEE_F16LE - This macro maps to the ID of an HDF5 datatype representing - a little-endian IEEE 754 16-bit floating-point datatype. - This datatype is available regardless of whether _Float16 - support is available or not. - - The following new hard datatype conversion paths have been added, but - will only be used when _Float16 support is available: - - H5T_NATIVE_SCHAR <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_UCHAR <-> H5T_NATIVE_FLOAT16 - H5T_NATIVE_SHORT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_USHORT <-> H5T_NATIVE_FLOAT16 - H5T_NATIVE_INT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_UINT <-> H5T_NATIVE_FLOAT16 - H5T_NATIVE_LONG <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_ULONG <-> H5T_NATIVE_FLOAT16 - H5T_NATIVE_LLONG <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_ULLONG <-> H5T_NATIVE_FLOAT16 - H5T_NATIVE_FLOAT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_DOUBLE <-> H5T_NATIVE_FLOAT16 - H5T_NATIVE_LDOUBLE <-> H5T_NATIVE_FLOAT16 - - The H5T_NATIVE_LDOUBLE -> H5T_NATIVE_FLOAT16 hard conversion path will only - be available and used if H5_LDOUBLE_TO_FLOAT16_CORRECT has a value of 1. Otherwise, - the conversion will be emulated in software by the library. - - Note that in the absence of any compiler flags for architecture-specific - tuning, the generated code for datatype conversions with the _Float16 type - may perform conversions by first promoting the type to float. Use of - architecture-specific tuning compiler flags may instead allow for the - generation of specialized instructions, such as AVX512-FP16 instructions, - if available. - - - Made several improvements to the datatype conversion code - - * The datatype conversion code was refactored to use pointers to - H5T_t datatype structures internally rather than IDs wrapping - the pointers to those structures. These IDs are needed if an - application-registered conversion function or conversion exception - function are involved during the conversion process. For simplicity, - the conversion code simply passed these IDs down and let the internal - code unwrap the IDs as necessary when needing to access the wrapped - H5T_t structures. However, this could cause a significant amount of - repeated ID lookups for compound datatypes and other container-like - datatypes. The code now passes down pointers to the datatype - structures and only creates IDs to wrap those pointers as necessary. 
- Quick testing showed an average ~3x to ~10x improvement in performance - of conversions on container-like datatypes, depending on the - complexity of the datatype. - - * A conversion "context" structure was added to hold information about - the current conversion being performed. This allows conversions on - container-like datatypes to be optimized better by skipping certain - portions of the conversion process that remain relatively constant - when multiple elements of the container-like datatype are being - converted. - - * After refactoring the datatype conversion code to use pointers - internally rather than IDs, several copies of datatypes that were - made by higher levels of the library were able to be removed. The - internal IDs that were previously registered to wrap those copied - datatypes were also able to be removed. - - - Implemented optimized support for vector I/O in the Subfiling VFD - - Previously, the Subfiling VFD would handle vector I/O requests by - breaking them down into individual I/O requests, one for each entry - in the I/O vectors provided. This could result in poor I/O performance - for features in HDF5 that utilize vector I/O, such as parallel I/O - to filtered datasets. The Subfiling VFD now properly handles vector - I/O requests in their entirety, resulting in fewer I/O calls, improved - vector I/O performance and improved vector I/O memory efficiency. - - - Added a simple cache to the read-only S3 (ros3) VFD - - The read-only S3 VFD now caches the first N bytes of a file stored - in S3 to avoid a lot of small I/O operations when opening files. - This cache is per-file and created when the file is opened. - - N is currently 16 MiB or the size of the file, whichever is smaller. - - Addresses GitHub issue #3381 - - - Added new API function H5Pget_actual_selection_io_mode() - - This function allows the user to determine if the library performed - selection I/O, vector I/O, or scalar (legacy) I/O during the last HDF5 - operation performed with the provided DXPL. - - - Added support for in-place type conversion in most cases - - In-place type conversion allows the library to perform type conversion - without an intermediate type conversion buffer. This can improve - performance by allowing I/O in a single operation over the entire - selection instead of being limited by the size of the intermediate buffer. - Implemented for I/O on contiguous and chunked datasets when the selection - is contiguous in memory and when the memory datatype is not smaller than - the file datatype. - - - Changed selection I/O to be on by default when using the MPIO file driver - - - Added support for selection I/O in the MPIO file driver - - Previously, only vector I/O operations were supported. Support for - selection I/O should improve performance and reduce memory uses in some - cases. - - - Changed the error handling for a not found path in the find plugin process. - - While attempting to load a plugin the HDF5 library will fail if one of the - directories in the plugin paths does not exist, even if there are more paths - to check. Instead of exiting the function with an error, just logged the error - and continue processing the list of paths to check. - - - Implemented support for temporary security credentials for the Read-Only - S3 (ROS3) file driver. - - When using temporary security credentials, one also needs to specify a - session/security token next to the access key id and secret access key. - This token can be specified by the new API function H5Pset_fapl_ros3_token(). 
- The API function H5Pget_fapl_ros3_token() can be used to retrieve - the currently set token. - - - Added a Subfiling VFD configuration file prefix environment variable - - The Subfiling VFD now checks for values set in a new environment - variable "H5FD_SUBFILING_CONFIG_FILE_PREFIX" to determine if the - application has specified a pathname prefix to apply to the file - path for its configuration file. For example, this can be useful - for cases where the application wishes to write subfiles to a - machine's node-local storage while placing the subfiling configuration - file on a file system readable by all machine nodes. - - - Added H5Pset_selection_io(), H5Pget_selection_io(), and - H5Pget_no_selection_io_cause() API functions to manage the selection I/O - feature. This can be used to enable collective I/O with type conversion, - or it can be used with custom VFDs that support vector or selection I/O. - - - Added H5Pset_modify_write_buf() and H5Pget_modify_write_buf() API - functions to allow the library to modify the contents of write buffers, in - order to avoid malloc/memcpy. Currently only used for type conversion - with selection I/O. - Parallel Library: ----------------- - - Added optimized support for the parallel compression feature when - using the multi-dataset I/O API routines collectively - - Previously, calling H5Dwrite_multi/H5Dread_multi collectively in parallel - with a list containing one or more filtered datasets would cause HDF5 to - break out of the optimized multi-dataset I/O mode and instead perform I/O - by looping over each dataset in the I/O request. The library has now been - updated to perform I/O in a more optimized manner in this case by first - performing I/O on all the filtered datasets at once and then performing - I/O on all the unfiltered datasets at once. - - - Changed H5Pset_evict_on_close so that it can be called with a parallel - build of HDF5 - - Previously, H5Pset_evict_on_close would always fail when called from a - parallel build of HDF5, stating that the feature is not supported with - parallel HDF5. This failure would occur even if a parallel build of HDF5 - was used with a serial HDF5 application. H5Pset_evict_on_close can now - be called regardless of the library build type and the library will - instead fail during H5Fcreate/H5Fopen if the "evict on close" property - has been set to true and the file is being opened for parallel access - with more than 1 MPI process. + - Fortran Library: ---------------- + - - - Add Fortran H5R APIs: - h5rcreate_attr_f, h5rcreate_object_f, h5rcreate_region_f, - h5ropen_attr_f, h5ropen_object_f, h5ropen_region_f, - h5rget_file_name_f, h5rget_attr_name_f, h5rget_obj_name_f, - h5rcopy_f, h5requal_f, h5rdestroy_f, h5rget_type_f - - - Added Fortran H5E APIs: - h5eregister_class_f, h5eunregister_class_f, h5ecreate_msg_f, h5eclose_msg_f - h5eget_msg_f, h5epush_f, h5eget_num_f, h5ewalk_f, h5eget_class_name_f, - h5eappend_stack_f, h5eget_current_stack_f, h5eset_current_stack_f, h5ecreate_stack_f, - h5eclose_stack_f, h5epop_f, h5eprint_f (C h5eprint v2 signature) - - - Added API support for Fortran MPI_F08 module definitions: - Adds support for MPI's MPI_F08 module datatypes: type(MPI_COMM) and type(MPI_INFO) for HDF5 APIs: - H5PSET_FAPL_MPIO_F, H5PGET_FAPL_MPIO_F, H5PSET_MPI_PARAMS_F, H5PGET_MPI_PARAMS_F - Ref. 
#3951 - - - Added Fortran APIs: - H5FGET_INTENT_F, H5SSEL_ITER_CREATE_F, H5SSEL_ITER_GET_SEQ_LIST_F, - H5SSEL_ITER_CLOSE_F, H5S_mp_H5SSEL_ITER_RESET_F - - - Added Fortran Parameters: - H5S_SEL_ITER_GET_SEQ_LIST_SORTED_F, H5S_SEL_ITER_SHARE_WITH_DATASPACE_F - - - Added Fortran Parameters: - H5S_BLOCK_F and H5S_PLIST_F - - - The configuration definitions file, H5config_f.inc, is now installed - and the HDF5 version number has been added to it. - - - Added Fortran APIs: - h5fdelete_f - - - Added Fortran APIs: - h5vlnative_addr_to_token_f and h5vlnative_token_to_address_f - - - Fixed an uninitialized error return value for hdferr - to return the error state of the h5aopen_by_idx_f API. - - - Added h5pget_vol_cap_flags_f and related Fortran VOL - capability definitions. - - - Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added. - - - Added Fortran APIs: - h5pset_selection_io_f, h5pget_selection_io_f, - h5pget_actual_selection_io_mode_f, - h5pset_modify_write_buf_f, h5pget_modify_write_buf_f - - - Added Fortran APIs: - h5get_free_list_sizes_f, h5dwrite_chunk_f, h5dread_chunk_f, - h5fget_info_f, h5lvisit_f, h5lvisit_by_name_f, - h5pget_no_selection_io_cause_f, h5pget_mpio_no_collective_cause_f, - h5sselect_shape_same_f, h5sselect_intersect_block_f, - h5pget_file_space_page_size_f, h5pset_file_space_page_size_f, - h5pget_file_space_strategy_f, h5pset_file_space_strategy_f - - - Removed "-commons" linking option on Darwin, as COMMON and EQUIVALENCE - are no longer used in the Fortran source. - - Fixes GitHub issue #3571 C++ Library: ------------ @@ -783,27 +101,12 @@ New Features Tools: ------ - - Add doxygen files for the tools - - Implement the tools usage text as pages in doxygen. - - - Add option to adjust the page buffer size in tools - - The page buffer cache size for a file can now be adjusted using the - --page-buffer-size=N - option in the h5repack, h5diff, h5dump, h5ls, and h5stat tools. This - will call the H5Pset_page_buffer_size() API function with the specified - size in bytes. - - - Allow h5repack to reserve space for a user block without a file - - This is useful for users who want to reserve space - in the file for future use without requiring a file to copy. + - High-Level APIs: ---------------- - - Added Fortran HL API: h5doappend_f + - C Packet Table API: @@ -818,1066 +121,23 @@ New Features Documentation: -------------- - - Documented that leaving HDF5 threads running at termination is unsafe - - Added doc/threadsafety-warning.md as a warning that threads which use HDF5 - resources must be closed before either process exit or library close. - If HDF5 threads are alive during either of these operations, their resources - will not be cleaned up properly and undefined behavior is possible. - - This document also includes a discussion on potential ways to mitigate this issue. - + - Support for new platforms, languages and compilers ================================================== - -Bug Fixes since HDF5-1.14.0 release +Bug Fixes since HDF5-1.14.5 release =================================== Library ------- + - - - Fixed a memory leak in H5F__accum_write() - - The memory was allocated in H5F__accum_write() and was to be freed in - H5F__accum_reset() during the closing process but a failure occurred just - before the deallocation, leaving the memory un-freed. The problem is - now fixed. 
- - Fixes GitHub #4585 - - - Fixed an incorrect returned value by H5LTfind_dataset() - - H5LTfind_dataset() returned true for non-existing datasets because it only - compared up to the length of the searched string, such as "Day" vs "DayNight". - Applied the user's patch to correct this behavior. - - Fixes GitHub #4780 - - - Fixed a segfault by H5Gmove2, extending to H5Lcopy and H5Lmove - - A user's application segfaulted when it passed in an invalid location ID - to H5Gmove2. The src and dst location IDs must be either a file or a group - ID. The fix was also applied to H5Lcopy and H5Lmove. Now, all these - three functions will fail if either the src or dst location ID is not a file - or a group ID. - - Fixes GitHub #4737 - - - Fixed a segfault by H5Lget_info() - - A user's program generated a segfault when the ID passed into H5Lget_info() - was a datatype ID. This was caused by non-VOL functions being used internally - where VOL functions should have been. This correction was extended to many - other functions to prevent potential issue in the future. - - Fixes GitHub #4730 - - - Fixed a segfault by H5Fget_intent(), extending to several other functions - - A user's program generated a segfault when the ID passed into H5Fget_intent() - was not a file ID. In addition to H5Fget_intent(), a number of APIs also failed - to detect an incorrect ID being passed in, which can potentially cause various - failures, including segfault. The affected functions are listed below and now - properly detect incorrect ID parameters: - - H5Fget_intent() - H5Fget_fileno() - H5Fget_freespace() - H5Fget_create_plist() - H5Fget_access_plist() - H5Fget_vfd_handle() - H5Dvlen_get_buf_size() - H5Fget_mdc_config() - H5Fset_mdc_config() - H5Freset_mdc_hit_rate_stats() - - Fixes GitHub #4656 and GitHub #4662 - - - Fixed a bug with large external datasets - - When performing a large I/O on an external dataset, the library would only - issue a single read or write system call. This could cause errors or cause - the data to be incorrect. These calls do not guarantee that they will - process the entire I/O request, and may need to be called multiple times - to complete the I/O, advancing the buffer and reducing the size by the - amount actually processed by read or write each time. Implemented this - algorithm for external datasets in both the read and write cases. - - Fixes GitHub #4216 - Fixes h5py GitHub #2394 - - - Fixed a bug in the Subfiling VFD that could cause a buffer over-read - and memory allocation failures - - When performing vector I/O with the Subfiling VFD, making use of the - vector I/O size extension functionality could cause the VFD to read - past the end of the "I/O sizes" array that is passed in. When an entry - in the "I/O sizes" array has the value 0 and that entry is at an array - index greater than 0, this signifies that the value in the preceding - array entry should be used for the rest of the I/O vectors, effectively - extending the last valid I/O size across the remaining entries. This - allows an application to save a bit on memory by passing in a smaller - "I/O sizes" array. The Subfiling VFD didn't implement a check for this - functionality in the portion of the code that generates I/O vectors, - causing it to read past the end of the "I/O sizes" array when it was - shorter than expected. This could also result in memory allocation - failures, as the nearby memory allocations are based off the values - read from that array, which could be uninitialized. 
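      To make the "I/O sizes" extension rule above concrete, the helper below
      is purely illustrative (it is not an HDF5 API); it expands a shortened
      sizes array according to the convention just described, where a 0 entry
      at an index greater than 0 means "reuse the previous size for every
      remaining vector entry".

          #include <stddef.h>

          /* Sketch: expand a possibly-shortened "I/O sizes" array.  Once a
           * 0 entry is seen at an index > 0, stop reading sizes[] (it may be
           * shorter than count) and repeat the last valid size. */
          static void expand_io_sizes(size_t count, const size_t sizes[],
                                      size_t out[])
          {
              size_t last   = 0;  /* size currently in effect */
              int    extend = 0;  /* nonzero once the 0 terminator is seen */

              for (size_t i = 0; i < count; i++) {
                  if (!extend) {
                      if (i > 0 && sizes[i] == 0)
                          extend = 1;
                      else
                          last = sizes[i];
                  }
                  out[i] = last;
              }
          }

      With count = 8 and sizes = {1048576, 0}, for example, all eight vector
      entries are treated as 1 MiB and the caller only needs a two-element
      sizes array.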
- - - Fixed H5Rget_attr_name to return the length of the attribute's name - without the null terminator - - H5Rget_file_name and H5Rget_obj_name both return the name's length - without the null terminator. H5Rget_attr_name now behaves consistently - with the other two APIs. Going forward, all the get character string - APIs in HDF5 will be modified/written in this manner regarding the - length of a character string. - - Fixes GitHub #4447 - - - Fixed heap-buffer-overflow in h5dump - - h5dump aborted when provided with a malformed input file. The was because - the buffer size for checksum was smaller than H5_SIZEOF_CHKSUM, causing - an overflow while calculating the offset to the checksum in the buffer. - A check was added so H5F_get_checksums would fail appropriately in all - of its occurrences. - - Fixes GitHub #4434 - - - Fixed library to allow usage of page buffering feature for serial file - access with parallel builds of HDF5 - - When HDF5 is built with parallel support enabled, the library would previously - disallow any usage of page buffering, even if a file was not opened with - parallel access. The library now allows usage of page buffering for serial - file access with parallel builds of HDF5. Usage of page buffering is still - disabled for any form of parallel file access, even if only 1 MPI process - is used. - - - Fixed a leak of datatype IDs created internally during datatype conversion - - Fixed an issue where the library could leak IDs that it creates internally - for compound datatype members during datatype conversion. When the library's - table of datatype conversion functions is modified (such as when a new - conversion function is registered with the library from within an application), - the compound datatype conversion function has to recalculate data that it - has cached. When recalculating that data, the library was registering new - IDs for each of the members of the source and destination compound datatypes - involved in the conversion process and was overwriting the old cached IDs - without first closing them. This would result in use-after-free issues due - to multiple IDs pointing to the same internal H5T_t structure, as well as - crashes due to the library not gracefully handling partially initialized or - partially freed datatypes on library termination. - - Fixes h5py GitHub #2419 - - - Fixed function H5Requal actually to compare the reference pointers - - Fixed an issue with H5Requal always returning true because the - function was only comparing the ref2_ptr to itself. - - - Fixed infinite loop closing library issue when h5dump with a user provided test file - - The library's metadata cache calls the "get_final_load_size" client callback - to find out the actual size of the object header. As the size obtained - exceeds the file's EOA, it throws an error but the object header structure - allocated through the client callback is not freed hence causing the - issue described. - - (1) Free the structure allocated in the object header client callback after - saving the needed information in udata. (2) Deserialize the object header - prefix in the object header's "deserialize" callback regardless. - - Fixes GitHub #3790 - - - Fixed many (future) CVE issues - - A partner organization corrected many potential security issues, which - were fixed and reported to us before submission to MITRE. These do - not have formal CVE issues assigned to them yet, so the numbers assigned - here are just placeholders. 
We will update the HDF5 1.14 CVE list (link - below) when official MITRE CVE tracking numbers are assigned. - - These CVE issues are generally of the same form as other reported HDF5 - CVE issues, and rely on the library failing while attempting to read - a malformed file. Most of them cause the library to segfault and will - probably be assigned "medium (~5/10)" scores by NIST, like the other - HDF5 CVE issues. - - The issues that were reported to us have all been fixed in this release, - so HDF5 will continue to have no unfixed public CVE issues. - - NOTE: HDF5 versions earlier than 1.14.4 should be considered vulnerable - to these issues and users should upgrade to 1.14.4 as soon as - possible. Note that it's possible to build the 1.14 library with - HDF5 1.8, 1.10, etc. API bindings for people who wish to enjoy - the benefits of a more secure library but don't want to upgrade - to the latest API. We will not be bringing the CVE fixes to earlier - versions of the library (they are no longer supported). - - LIST OF CVE ISSUES FIXED IN THIS RELEASE: - - * CVE-2024-0116-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5D__scatter_mem resulting in causing denial of service or potential - code execution - - * CVE-2024-0112-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5S__point_deserialize resulting in the corruption of the - instruction pointer and causing denial of service or potential code - execution - - * CVE-2024-0111-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5T__conv_struct_opt resulting in causing denial of service or - potential code execution - - * CVE-2023-1208-002 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5O__mtime_new_encode resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1208-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5O__layout_encode resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1207-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5O__dtype_encode_helper causing denial of service or potential - code execution - - * CVE-2023-1205-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5VM_array_fill resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1202-002 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5T__get_native_type resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1202-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5T__ref_mem_setnull resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1130-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5T_copy_reopen resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1125-001 - HDF5 versions <= 1.14.3 contain a heap buffer overflow in - H5Z__nbit_decompress_one_byte caused by the earlier use of an - initialized pointer. 
This may result in Denial of Service or - potential code execution - - * CVE-2023-1114-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5HG_read resulting in the corruption of the instruction pointer - and causing denial of service or potential code execution - - * CVE-2023-1113-002 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5F_addr_decode_len resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1113-001 - HDF5 versions <= 1.14.3 contain a heap buffer overflow caused by - the unsafe use of strdup in H5MM_xstrdup, resulting in denial of - service or potential code execution - - * CVE-2023-1108-001 - HDF5 versions <= 1.14.3 contain a out-of-bounds read operation in - H5FL_arr_malloc resulting in denial of service or potential code - execution - - * CVE-2023-1104-004 - HDF5 versions <= 1.14.3 contain a out-of-bounds read operation in - H5T_close_real resulting in denial of service or potential code - execution - - * CVE-2023-1104-003 - HDF5 library versions <=1.14.3 contain a heap buffer overflow flaw - in the function H5HL__fl_deserialize resulting in denial of service - or potential code execution - - * CVE-2023-1104-002 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5HL__fl_deserialize resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1104-001 - HDF5 library versions <=1.14.3 contains a stack overflow in the - function H5E_printf_stack resulting in denial of service or - potential code execution - - * CVE-2023-1023-001 - HDF5 library versions <=1.14.3 heap buffer overflow in - H5VM_memcpyvv which may result in denial of service or code - execution - - * CVE-2023-1019-001 - HDF5 library versions <=1.14.3 contain a stack buffer overflow in - H5VM_memcpyvv resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1018-001 - HDF5 library versions <=1.14.3 contain a memory corruption in - H5A__close resulting in the corruption of the instruction pointer - and causing denial of service or potential code execution - - * CVE-2023-1017-002 - HDF5 library versions <=1.14.3 may use an uninitialized value - H5A__attr_release_table resulting in denial of service - - * CVE-2023-1017-001 - HDF5 library versions <=1.14.3 may attempt to dereference - uninitialized values in h5tools_str_sprint, which will lead to - denial of service - - * CVE-2023-1013-004 - HDF5 versions <= 1.13.3 contain a stack buffer overflow in - H5HG_read resulting in denial of service or potential code - execution - - * CVE-2023-1013-003 - HDF5 library versions <=1.14.3 contain a buffer overrun in - H5Z__filter_fletcher32 resulting in the corruption of the - instruction pointer and causing denial of service or potential - code execution - - * CVE-2023-1013-002 - HDF5 library versions <=1.14.3 contain a buffer overrun in - H5O__linfo_decode resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1013-001 - HDF5 library versions <=1.14.3 contain a buffer overrun in - H5Z__filter_scaleoffset resulting in the corruption of the - instruction pointer and causing denial of service or potential - code execution - - * CVE-2023-1012-001 - HDF5 library versions <=1.14.3 contain a stack buffer overflow in - H5R__decode_heap resulting in the corruption of the instruction - 
pointer and causing denial of service or potential code execution - - * CVE-2023-1010-001 - HDF5 library versions <=1.14.3 contain a stack buffer overflow in - H5FL_arr_malloc resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1009-001 - HDF5 library versions <=1.14.3 contain a stack buffer overflow in - H5FL_arr_malloc resulting in the corruption of the instruction - pointer and causing denial of service or potential code execution - - * CVE-2023-1006-004 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5A__attr_release_table resulting in the corruption of the - instruction pointer and causing denial of service or potential code - execution - - * CVE-2023-1006-003 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5T__bit_find resulting in the corruption of the instruction pointer - and causing denial of service or potential code execution. - - * CVE-2023-1006-002 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5HG_read resulting in the corruption of the instruction pointer - and causing denial of service or potential code execution - - * CVE-2023-1006-001 - HDF5 library versions <=1.14.3 contain a heap buffer overflow in - H5HG__cache_heap_deserialize resulting in the corruption of the - instruction pointer and causing denial of service or potential code - execution - - FULL OFFICIAL HDF5 CVE list (from mitre.org): - - https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=HDF5 - - 1.14.x CVE tracking list: - - https://github.com/HDFGroup/hdf5/blob/hdf5_1_14/CVE_list_1_14.md - - HDF5 CVE regression test suite (includes proof-of-concept files): - - https://github.com/HDFGroup/cve_hdf5 - - - Fixed a divide-by-zero issue when a corrupt file sets the page size to 0 - - If a corrupt file sets the page buffer size in the superblock to zero, - the library could attempt to divide by zero when allocating space in - the file. The library now checks for valid page buffer sizes when - reading the superblock message. - - Fixes oss-fuzz issue 58762 - - - Fixed a bug when using array datatypes with certain parent types - - Array datatype conversion would never use a background buffer, even if the - array's parent type (what the array is an array of) required a background - buffer for conversion. This resulted in crashes in some cases when using - an array of compound, variable length, or reference datatypes. Array types - now use a background buffer if needed by the parent type. - - - Fixed potential buffer read overflows in H5PB_read - - H5PB_read previously did not account for the fact that the size of the - read it's performing could overflow the page buffer pointer, depending - on the calculated offset for the read. This has been fixed by adjusting - the size of the read if it's determined that it would overflow the page. - - - Fixed CVE-2017-17507 - - This CVE was previously declared fixed, but later testing with a static - build of HDF5 showed that it was not fixed. - - When parsing a malformed (fuzzed) compound type containing variable-length - string members, the library could produce a segmentation fault, crashing - the library. - - This was fixed after GitHub PR #4234 - - Fixes GitHub issue #3446 - - - Fixed a cache assert with very large metadata objects - - If the library tries to load a metadata object that is above a - certain size, this would trip an assert in debug builds. 
This could - happen if you create a very large number of links in an old-style - group that uses local heaps. - - There is no need for this assert. The library's metadata cache - can handle large objects. The assert has been removed. - - Fixes GitHub #3762 - - - Fixed an issue with the Subfiling VFD and multiple opens of a - file - - An issue with the way the Subfiling VFD handles multiple opens - of the same file caused the file structures for the extra opens - to occasionally get mapped to an incorrect subfiling context - object. The VFD now correctly maps the file structures for - additional opens of an already open file to the same context - object. - - - Fixed a bug that causes the library to incorrectly identify - the endian-ness of 16-bit and smaller C floating-point datatypes - - When detecting the endian-ness of an in-memory C floating-point - datatype, the library previously always assumed that the type - was at least 32 bits in size. This resulted in invalid memory - accesses and would usually cause the library to identify the - datatype as having an endian-ness of H5T_ORDER_VAX. This has - now been fixed. - - - Fixed a bug that causes an invalid memory access issue when - converting 16-bit floating-point values to integers with the - library's software conversion function - - The H5T__conv_f_i function previously always assumed that - floating-point values were at least 32 bits in size and would - access invalid memory when attempting to convert 16-bit - floating-point values to integers. To fix this, parts of the - H5T__conv_f_i function had to be rewritten, which also resulted - in a significant speedup when converting floating-point values - to integers where the library does not have a hard conversion - path. This is the case for any floating-point values with a - datatype not represented by H5T_NATIVE_FLOAT16 (if _Float16 is - supported), H5T_NATIVE_FLOAT, H5T_NATIVE_DOUBLE or - H5T_NATIVE_LDOUBLE. - - - Fixed a bug that can cause incorrect data when overflows occur - while converting integer values to floating-point values with - the library's software conversion function - - The H5T__conv_i_f function had a bug which previously caused it - to return incorrect data when an overflow occurs and an application's - conversion exception callback function decides not to handle the - overflow. Rather than return positive infinity, the library would - return truncated data. This has now been fixed. - - - Corrected H5Soffset_simple() when offset is NULL - - The reference manual states that the offset parameter of H5Soffset_simple() - can be set to NULL to reset the offset of a simple dataspace to 0. This - has never been true, and passing NULL was regarded as an error. - - The library will now accept NULL for the offset parameter and will - correctly set the offset to zero. - - Fixes HDFFV-9299 - - - Fixed an issue where the Subfiling VFD's context object cache could - grow too large - - The Subfiling VFD keeps a cache of its internal context objects to - speed up access to a context object for a particular file, as well - as access to that object across multiple opens of the same file. - However, opening a large amount of files with the Subfiling VFD over - the course of an application's lifetime could cause this cache to grow - too large and result in the application running out of available MPI - communicator objects. On file close, the Subfiling VFD now simply - evicts context objects out of its cache and frees them. 
It is assumed - that multiple opens of a file will be a less common use case for the - Subfiling VFD, but this can be revisited if it proves to be an issue - for performance. - - - Fixed error when overwriting certain nested variable length types - - Previously, when using a datatype that included a variable length type - within a compound or array within another variable length type, and - overwriting data with a shorter (top level) variable length sequence, an - error could occur. This has been fixed. - - - Take user block into account in H5Dchunk_iter() and H5Dget_chunk_info() - - The address reported by the following functions did not correctly - take the user block into account: - - * H5Dchunk_iter() <-- addr passed to callback - * H5Dget_chunk_info() <-- addr parameter - * H5Dget_chunk_info_by_coord() <-- addr parameter - - This means that these functions reported logical HDF5 file addresses, - which would only be equal to the physical addresses when there is no - user block prepended to the HDF5 file. This is unfortunate, as the - primary use of these functions is to get physical addresses in order - to directly access the chunks. - - The listed functions now correctly take the user block into account, - so they will emit physical addresses that can be used to directly - access the chunks. - - Fixes #3003 - - - Fixed asserts raised by large values of H5Pset_est_link_info() parameters - - If large values for est_num_entries and/or est_name_len were passed - to H5Pset_est_link_info(), the library would attempt to create an - object header NIL message to reserve enough space to hold the links in - compact form (i.e., concatenated), which could exceed allowable object - header message size limits and trip asserts in the library. - - This bug only occurred when using the HDF5 1.8 file format or later and - required the product of the two values to be ~64k more than the size - of any links written to the group, which would cause the library to - write out a too-large NIL spacer message to reserve the space for the - unwritten links. - - The library now inspects the phase change values to see if the dataset - is likely to be compact and checks the size to ensure any NIL spacer - messages won't be larger than the library allows. - - Fixes GitHub #1632 - - - Fixed a bug where H5Tset_fields does not account for any offset - set for a floating-point datatype when determining if values set - for spos, epos, esize, mpos and msize make sense for the datatype - - Previously, H5Tset_fields did not take datatype offsets into account - when determining if the values set make sense for the datatype. - This would cause the function to fail when the precision for a - datatype is correctly set such that the offset bits are not included. - This has now been fixed. - - - Fixed H5Fget_access_plist so that it returns the file locking - settings for a file - - When H5Fget_access_plist (and the internal H5F_get_access_plist) - is called on a file, the returned File Access Property List has - the library's default file locking settings rather than any - settings set for the file. This causes two problems: - - - Opening an HDF5 file through an external link using H5Gopen, - H5Dopen, etc. with H5P_DEFAULT for the Dataset/Group/etc. - Access Property List will cause the external file to be opened - with the library's default file locking settings rather than - inheriting them from the parent file. 
This can be surprising - when a file is opened with file locking disabled, but its - external files are opened with file locking enabled. - - - An application cannot make use of the H5Pset_elink_fapl - function to match file locking settings between an external - file and its parent file without knowing the correct setting - ahead of time, as calling H5Fget_access_plist on the parent - file will not return the correct settings. - - This has been fixed by copying a file's file locking settings - into the newly-created File Access Property List in H5F_get_access_plist. - - This fix partially addresses GitHub issue #4011 - - - Memory usage growth issue - - Starting with the HDF5 1.12.1 release, an issue (GitHub issue #1256) - was observed where running a simple program that has a loop of opening - a file, reading from an object with a variable-length datatype and - then closing the file would result in the process fairly quickly - running out of memory. Upon further investigation, it was determined - that this memory was being kept around in the library's datatype - conversion pathway cache that is used to speed up datatype conversions - which are repeatedly used within an HDF5 application's lifecycle. For - conversions involving variable-length or reference datatypes, each of - these cached pathway entries keeps a reference to its associated file - for later use. Since the file was being closed and reopened on each - loop iteration, and since the library compares for equality between - instances of opened files (rather than equality of the actual files) - when determining if it can reuse a cached conversion pathway, it was - determining that no cached conversion pathways could be reused and was - creating a new cache entry on each loop iteration during I/O. This - would lead to constant growth of that cache and the memory it consumed, - as well as constant growth of the memory consumed by each cached entry - for the reference to its associated file. - - To fix this issue, the library now removes any cached datatype - conversion path entries for variable-length or reference datatypes - associated with a particular file when that file is closed. - - Fixes GitHub #1256 - - - Suppressed floating-point exceptions in H5T init code - - The floating-point datatype initialization code in H5Tinit_float.c - could raise FE_INVALID exceptions while munging bits and performing - comparisons that might involve NaN. This was not a problem when the - initialization code was executed in H5detect at compile time (prior - to 1.14.3), but now that the code is executed at library startup - (1.14.3+), these exceptions can be caught by user code, as is the - default in the NAG Fortran compiler. - - Starting in 1.14.4, we now suppress floating-point exceptions while - initializing the floating-point types and clear FE_INVALID before - restoring the original environment. - - Fixes GitHub #3831 - - - Fixed a file handle leak in the core VFD - - When opening a file with the core VFD and a file image, if the file - already exists, the file check would leak the POSIX file handle. - - Fixes GitHub issue #635 - - - Fixed some issues with chunk index metadata not getting read - collectively when collective metadata reads are enabled - - When looking up dataset chunks during I/O, the parallel library - temporarily disables collective metadata reads since it's generally - unlikely that the application will read the same chunks from all - MPI ranks. 
Leaving collective metadata reads enabled during - chunk lookups can lead to hangs or other bad behavior depending - on the chunk indexing structure used for the dataset in question. - However, due to the way that dataset chunk index metadata was - previously loaded in a deferred manner, this could mean that - the metadata for the main chunk index structure or its - accompanying pieces of metadata (e.g., fixed array data blocks) - could end up being read independently if these chunk lookup - operations are the first chunk index-related operation that - occurs on a dataset. This behavior is generally observed when - opening a dataset for which the metadata isn't in the metadata - cache yet and then immediately performing I/O on that dataset. - This behavior is not generally observed when creating a dataset - and then performing I/O on it, as the relevant metadata will - usually be in the metadata cache as a side effect of creating - the chunk index structures during dataset creation. - - This issue has been fixed by adding callbacks to the different - chunk indexing structure classes that allow more explicit control - over when chunk index metadata gets loaded. When collective - metadata reads are enabled, the necessary index metadata will now - get loaded collectively by all MPI ranks at the start of dataset - I/O to ensure that the ranks don't unintentionally read this - metadata independently further on. These changes fix collective - loading of the main chunk index structure, as well as v2 B-tree - root nodes, extensible array index blocks and fixed array data - blocks. There are still pieces of metadata that cannot currently - be loaded collectively, however, such as extensible array data - blocks, data block pages and super blocks, as well as fixed array - data block pages. These pieces of metadata are not necessarily - read in by all MPI ranks since this depends on which chunks the - ranks have selected in the dataset. Therefore, reading of these - pieces of metadata remains an independent operation. - - - Fixed potential hangs in parallel library during collective I/O with - independent metadata writes - - When performing collective parallel writes to a dataset where metadata - writes are requested as (or left as the default setting of) independent, - hangs could potentially occur during metadata cache sync points. This - was due to incorrect management of the internal state tracking whether - an I/O operation should be collective or not, causing the library to - attempt collective writes of metadata when they were meant to be - independent writes. During the metadata cache sync points, if the number - of cache entries being flushed was a multiple of the number of MPI ranks - in the MPI communicator used to access the HDF5 file, an equal amount of - collective MPI I/O calls were made and the dataset write call would be - successful. However, when the number of cache entries being flushed was - NOT a multiple of the number of MPI ranks, the ranks with more entries - than others would get stuck in an MPI_File_set_view call, while other - ranks would get stuck in a post-write MPI_Barrier call. This issue has - been fixed by correctly switching to independent I/O temporarily when - writing metadata independently during collective dataset I/O. - - - Dropped support for MPI-2 - - The MPI-2 supporting artifacts have been removed due to the cessation - of MPI-2 maintenance and testing since version HDF5 1.12. 
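      For reference, the collective metadata read and write behavior discussed
      in the parallel entries above is opt-in and is configured on the file
      access property list. The following is a minimal sketch only (the file
      name is a placeholder and error checking is omitted); it is not taken
      from the fixes themselves:

          #include <mpi.h>
          #include "hdf5.h"

          int main(int argc, char **argv)
          {
              MPI_Init(&argc, &argv);

              /* File access property list using the MPI-IO driver */
              hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
              H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

              /* Issue metadata read operations collectively from all ranks */
              H5Pset_all_coll_metadata_ops(fapl, 1);

              /* Perform metadata writes collectively at cache flush points */
              H5Pset_coll_metadata_write(fapl, 1);

              /* "example.h5" is a placeholder name for this sketch */
              hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

              /* ... dataset creation and I/O ... */

              H5Fclose(file);
              H5Pclose(fapl);
              MPI_Finalize();
              return 0;
          }

      Passing 1 (TRUE) enables the collective behavior; the default of 0
      leaves metadata operations independent.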
- - - Fixed a bug with the way the Subfiling VFD assigns I/O concentrators - - During a file open operation, the Subfiling VFD determines the topology - of the application and uses that to select a subset of MPI ranks that - I/O will be forwarded to, called I/O concentrators. The code for this - had previously assumed that the parallel job launcher application (e.g., - mpirun, srun, etc.) would distribute MPI ranks sequentially among a node - until all processors on that node have been assigned before going on to - the next node. When the launcher application mapped MPI ranks to nodes - in a different fashion, such as round-robin, this could cause the Subfiling - VFD to incorrectly map MPI ranks as I/O concentrators, leading to missing - subfiles. - - - Fixed performance regression with some compound type conversions - - In-place type conversion was introduced for most use cases in 1.14.2. - While being able to use the read buffer for type conversion potentially - improves performance by performing the entire I/O at once, it also - disables the optimized compound type conversion used when the destination - is a subset of the source. Disabled in-place type conversion when using - this optimized conversion and there is no benefit in terms of the I/O - size. - - - Fixed an assertion in a previous fix for CVE-2016-4332 - - An assert could fail when processing corrupt files that have invalid - shared message flags (as in CVE-2016-4332). - - The assert statement in question has been replaced with pointer checks - that don't raise errors. Since the function is in cleanup code, we do - our best to close and free things, even when presented with partially - initialized structs. - - Fixes CVE-2016-4332 and HDFFV-9950 (confirmed via the cve_hdf5 repo) - - - Fixed a file space allocation bug in the parallel library for chunked - datasets - - With the addition of support for incremental file space allocation for - chunked datasets with filters applied to them that are created/accessed - in parallel, a bug was introduced to the library's parallel file space - allocation code. This could cause file space to not be allocated correctly - for datasets without filters applied to them that are created with serial - file access and later opened with parallel file access. In turn, this could - cause parallel writes to those datasets to place incorrect data in the file. - - - Fixed an assertion failure in Parallel HDF5 when a file can't be created - due to an invalid library version bounds setting - - An assertion failure could occur in H5MF_settle_raw_data_fsm when a file - can't be created with Parallel HDF5 due to specifying the use of a paged, - persistent file free space manager - (H5Pset_file_space_strategy(..., H5F_FSPACE_STRATEGY_PAGE, 1, ...)) with - an invalid library version bounds combination - (H5Pset_libver_bounds(..., H5F_LIBVER_EARLIEST, H5F_LIBVER_V18)). This - has now been fixed. - - - Fixed bugs in selection I/O - - Previously, the library could fail in some cases when performing selection - I/O with type conversion. - - - Fixed CVE-2018-13867 - - A corrupt file containing an invalid local heap datablock address - could trigger an assert failure when the metadata cache attempted - to load the datablock from storage. - - The local heap now verifies that the datablock address is valid - when the local heap header information is parsed. - - - Fixed CVE-2018-11202 - - A malformed file could result in chunk index memory leaks. 
Under most -        conditions (i.e., when the --enable-using-memchecker option is NOT -        used), this would result in a small memory leak and an infinite loop -        and abort when shutting down the library. The infinite loop would be -        due to the "free list" package not being able to clear its resources -        so the library couldn't shut down. When the "using a memory checker" -        option is used, the free lists are disabled so there is just a memory -        leak with no abort on library shutdown. - -        The chunk index resources are now correctly cleaned up when reading -        misparsed files and valgrind confirms no memory leaks. - -      - Fixed an issue where an assert statement was converted to an -        incorrect error check statement - -        An assert statement in the library dealing with undefined dataset data -        fill values was converted to an improper error check that would always -        trigger when a dataset's fill value was set to NULL (undefined). This -        has now been fixed. - -      - Fixed an assertion failure when attempting to use the Subfiling IOC -        VFD directly - -        The Subfiling feature makes use of two Virtual File Drivers, the -        Subfiling VFD and the IOC (I/O Concentrator) VFD. The two VFDs are -        intended to be stacked together such that the Subfiling VFD sits -        "on top" of the IOC VFD and routes I/O requests through it; using the -        IOC VFD alone is currently unsupported. The IOC VFD has been fixed so -        that an error message is displayed in this situation rather than causing -        an assertion failure. - -      - Fixed a potential bug when copying empty enum datatypes - -        Copying an empty enum datatype (including implicitly, as when an enum -        is a part of a compound datatype) would fail in an assert in debug -        mode and could fail in release mode depending on how the platform -        handles undefined behavior regarding size 0 memory allocations and -        using memcpy with a NULL src pointer. - -        The library is now more careful about using memory operations when -        copying empty enum datatypes and will not error or raise an assert. - -      - Added an AAPL check to H5Acreate - -        A check was added to H5Acreate to ensure that a failure is correctly -        returned when an invalid Attribute Access Property List is passed -        in to the function. The HDF5 API tests were failing for certain -        build types due to this condition not being checked previously. - -      - Fixed a bug in H5Ocopy that could generate invalid HDF5 files - -        H5Ocopy was missing a check to determine whether the new object's -        object header version is greater than version 1. Without this check, -        copying of objects with object headers that are smaller than a -        certain size would cause H5Ocopy to create an object header for the -        new object that has a gap in the header data. According to the -        HDF5 File Format Specification, this is not allowed for version -        1 of the object header format. - -        Fixes GitHub issue #2653 - -      - Fixed H5Pget_vol_cap_flags and H5Pget_vol_id to accept H5P_DEFAULT - -        H5Pget_vol_cap_flags and H5Pget_vol_id were updated to correctly -        accept H5P_DEFAULT for the 'plist_id' FAPL parameter. Previously, -        they would fail if provided with H5P_DEFAULT as the FAPL. - -      - Fixed ROS3 VFD anonymous credential usage with h5dump and h5ls - -        ROS3 VFD anonymous credential functionality became broken in h5dump -        and h5ls in the HDF5 1.14.0 release with the added support for VFD -        plugins, which changed the way that the tools handled setting of -        credential information that the VFD uses.
The tools could be -        provided the command-line option of "--s3-cred=(,,)" as a workaround -        for anonymous credential usage, but the documentation for this -        option stated that anonymous credentials could be used by simply -        omitting the option. The latter functionality has been restored. - -        Fixes GitHub issue #2406 - -      - Fixed memory leaks when processing malformed object header continuation messages - -        Malformed object header continuation messages can result in a too-small -        buffer being passed to the decode function, which could lead to reading -        past the end of the buffer. Additionally, errors in processing these -        malformed messages can lead to allocated memory not being cleaned up. - -        This fix adds bounds checking and cleanup code to the object header -        continuation message processing. - -        Fixes GitHub issue #2604 - -      - Fixed memory leaks, aborts, and overflows in H5O EFL decode - -        The external file list code could call assert(), read past buffer -        boundaries, and not properly clean up resources when parsing malformed -        external data files messages. - -        This fix cleans up allocated memory, adds buffer bounds checks, and -        converts asserts to HDF5 error checking. - -        Fixes GitHub issue #2605 - -      - Fixed potential heap buffer overflow in decoding of link info message - -        Buffer overflow detection was added when decoding the version, index -        flags, link creation order value, and the next three addresses. These -        checks remove the potential invalid reads of any of these -        values that could be triggered by a malformed file. - -        Fixes GitHub issue #2603 - -      - Memory leak - -        A memory leak was detected when running h5dump with "pov". The memory was allocated -        via H5FL__malloc() in hdf5/src/H5FL.c. - -        The fuzzed file "pov" was an HDF5 file containing an illegal continuation message. -        When deserializing the object header chunks for the file, memory is allocated for the -        array of continuation messages (cont_msg_info->msgs) in the continuation message info struct. -        As an error is encountered while loading the illegal message, the memory allocated for -        cont_msg_info->msgs needs to be freed. - -        Fixes GitHub issue #2599 - -      - Fixed memory leaks that could occur when reading a dataset from a -        malformed file - -        When attempting to read layout, pline, and efl information for a -        dataset, memory leaks could occur if attempting to read pline/efl -        information threw an error, because the memory that was -        allocated for pline and efl was not properly cleaned up on error. - -        Fixes GitHub issue #2602 - -      - Fixed potential heap buffer overrun in group info header decoding from malformed file - -        H5O__ginfo_decode could sometimes read past allocated memory when parsing a -        group info message from the header of a malformed file. - -        It now checks buffer size before each read to properly throw an error in these cases. - -        Fixes GitHub issue #2601 - -      - Fixed potential buffer overrun issues in some object header decode routines - -        Several checks were added to H5O__layout_decode and H5O__sdspace_decode to -        ensure that memory buffers don't get overrun when decoding buffers read from -        a (possibly corrupted) HDF5 file. - -      - Fixed a heap buffer overflow that occurs when reading from -        a dataset with a compact layout within a malformed HDF5 file - -        During opening of a dataset that has a compact layout, the -        library allocates a buffer that stores the dataset's raw data.
- The dataset's object header that gets written to the file - contains information about how large of a buffer the library - should allocate. If this object header is malformed such that - it causes the library to allocate a buffer that is too small - to hold the dataset's raw data, future I/O to the dataset can - result in heap buffer overflows. To fix this issue, an extra - check is now performed for compact datasets to ensure that - the size of the allocated buffer matches the expected size - of the dataset's raw data (as calculated from the dataset's - dataspace and datatype information). If the two sizes do not - match, opening of the dataset will fail. - - Fixes GitHub issue #2606 - - - Fixed a memory corruption issue that can occur when reading - from a dataset using a hyperslab selection in the file - dataspace and a point selection in the memory dataspace - - When reading from a dataset using a hyperslab selection in - the dataset's file dataspace and a point selection in the - dataset's memory dataspace where the file dataspace's "rank" - is greater than the memory dataspace's "rank", memory corruption - could occur due to an incorrect number of selection points - being copied when projecting the point selection onto the - hyperslab selection's dataspace. - - - Fixed issues in the Subfiling VFD when using the SELECT_IOC_EVERY_NTH_RANK - or SELECT_IOC_TOTAL I/O concentrator selection strategies - - Multiple bugs involving these I/O concentrator selection strategies - were fixed, including: - - * A bug that caused the selection strategy to be altered when - criteria for the strategy was specified in the - H5FD_SUBFILING_IOC_SELECTION_CRITERIA environment variable as - a single value, rather than in the old and undocumented - 'integer:integer' format - * Two bugs which caused a request for 'N' I/O concentrators to - result in 'N - 1' I/O concentrators being assigned, which also - lead to issues if only 1 I/O concentrator was requested - - Also added a regression test for these two I/O concentrator selection - strategies to prevent future issues. - - - Fix CVE-2021-37501 / GHSA-rfgw-5vq3-wrjf - - Check for overflow when calculating on-disk attribute data size. - - A bogus hdf5 file may contain dataspace messages with sizes - which lead to the on-disk data sizes to exceed what is addressable. - When calculating the size, make sure, the multiplication does not - overflow. - The test case was crafted in a way that the overflow caused the - size to be 0. - - Fixes GitHub #2458 - - - Fixed an issue with collective metadata writes of global heap data - - New test failures in parallel netCDF started occurring with debug - builds of HDF5 due to an assertion failure and this was reported in - GitHub issue #2433. The assertion failure began happening after the - collective metadata write pathway in the library was updated to use - vector I/O so that parallel-enabled HDF5 Virtual File Drivers (other - than the existing MPI I/O VFD) can support collective metadata writes. - - The assertion failure was fixed by updating collective metadata writes - to treat global heap metadata as raw data, as done elsewhere in the - library. - - Fixes GitHub issue #2433 - - - Fixed buffer overflow error in image decoding function. - - The error occurred in the function for decoding address from the specified - buffer, which is called many times from the function responsible for image - decoding. 
The length of the buffer is known in the image decoding function, - but no checks are produced, so the buffer overflow can occur in many places, - including callee functions for address decoding. - - The error was fixed by inserting corresponding checks for buffer overflow. - - Fixes GitHub issue #2432 - - - Reading a H5std_string (std::string) via a C++ DataSet previously - truncated the string at the first null byte as if reading a C string. - Fixed length datasets are now read into H5std_string as a fixed length - string of the appropriate size. Variable length datasets will still be - truncated at the first null byte. - - Fixes Github issue #3034 - - - Fixed write buffer overflow in H5O__alloc_chunk - - The overflow was found by OSS-Fuzz https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=58658 - - - Fixed a segfault when using a user-defined conversion function between compound datatypes - - During type info initialization for compound datatype conversion, the library checked if the - datatypes are subsets of one another in order to perform special conversion handling. - This check uses information that is only defined if a library conversion function is in use. - The library now skips this check for user-defined conversion functions. - - Fixes Github issue #3840 Java Library ------------ - - Fixed switch case 'L' block missing a break statement. - - The HDF5Array.arrayify method is missing a break statement in the case 'L': section - which causes it to fall through and throw an HDF5JavaException when attempting to - read an Array[Array[Long]]. - - The error was fixed by inserting a break statement at the end of the case 'L': sections. - - Fixes GitHub issue #3056 + - Configuration @@ -1888,230 +148,10 @@ Bug Fixes since HDF5-1.14.0 release Fixes GitHub issue #4811 - - Fixed usage issue with FindZLIB.cmake module - - When building HDF5 with CMake and relying on the FindZLIB.cmake module, - the Find module would correctly find the ZLIB library but not set an OUTPUT_NAME - on the target. Also, the target returned, ZLIB::ZLIB, was not in the ZLIB_LIBRARIES - variable. This caused issues when requesting the OUTPUT_NAME of the target in - the pkg-config settings. - - Similar to HDF5_USE_LIBAEC_STATIC, "Find static AEC library", option, we added - a new option, HDF5_USE_ZLIB_STATIC, "Find static zlib library". These options - allow a user to specify whether to use a static or shared version of the compression - library in a find_package call. - - - Corrected usage of FetchContent in the HDFLibMacros.cmake file. - - CMake version 3.30 changed the behavior of the FetchContent module to deprecate - the use of FetchContent_Populate() in favor of FetchContent_MakeAvailable(). Therefore, - the copying of HDF specialized CMakeLists.txt files to the dependent project's source - was implemented in the FetchContent_Declare() call. - - - Fixed/reverted an Autotools configure hack that causes problems on MacOS - - A sed line in configure.ac was added in the past to paper over some - problems with older versions of the Autotools that would add incorrect - linker flags. This used the -i option in a way that caused silent - errors on MacOS that did not break the build. - - The original fix for this problem (in 1.14.4) removed the sed line - entirely, but it turns out that the sed cleanup is still necessary - on some systems, where empty -l options will be added to the libtool - script. - - This sed line has been restored and reworked to not use -i. 
- - Fixes GitHub issues #3843 and #4448 - - - Fixed a list index out of range issue in the runTest.cmake file - - Fixed an issue in config/cmake/runTest.cmake where the CMake logic - would try to access an invalid list index if the number of lines in - a test's output and reference files don't match - - - Fix Autotools -Werror cleanup - - The Autotools temporarily scrub -Werror(=whatever) from CFLAGS, etc. - so configure checks don't trip over warnings generated by configure - check programs. The sed line originally only scrubbed -Werror but not - -Werror=something, which would cause errors when the '=something' was - left behind in CFLAGS. - - The sed line has been updated to handle -Werror=something lines. - - Fixes one issue raised in #3872 - - - Changed default of 'Error on HDF5 doxygen warnings' DOXYGEN_WARN_AS_ERROR option. - - The default setting of DOXYGEN_WARN_AS_ERROR to 'FAIL_ON_WARNINGS' has been changed - to 'NO'. It was decided that the setting was too aggressive and should be a user choice. - The github actions and scripts have been updated to reflect this. - - * HDF5_ENABLE_DOXY_WARNINGS: ON/OFF (Default: OFF) - * --enable-doxygen-errors: enable/disable (Default: disable) - - - Fixed an issue where the h5tools_test_utils test program was being - installed on the system for Autotools builds of HDF5 - - The h5tools_test_utils test program was mistakenly added to bin_PROGRAMS - in its Makefile.am configuration file, causing the executable to be - installed on the system. The executable is now added to noinst_PROGRAMS - instead and will no longer be installed on the system for Autotools builds - of HDF5. The CMake configuration code already avoids installing the - executable on the system. - - - Fixed a configuration issue that prevented building of the Subfiling VFD on macOS - - Checks were added to the CMake and Autotools code to verify that CLOCK_MONOTONIC_COARSE, - PTHREAD_MUTEX_ADAPTIVE_NP and pthread_condattr_setclock() are available before attempting - to use them in Subfiling VFD-related utility code. Without these checks, attempting - to build the Subfiling VFD on macOS would fail. - - - Fixes the ordering of INCLUDES when building with CMake - - Include directories in the source or build tree should come before other - directories to prioritize headers in the sources over installed ones. - - Fixes GitHub #1027 - - - The accum test now passes on macOS 12+ (Monterey) w/ CMake - - Due to changes in the way macOS handles LD_LIBRARY_PATH, the accum test - started failing on macOS 12+ when building with CMake. CMake has been - updated to set DYLD_LIBRARY_PATH on macOS and the test now passes. - - Fixes GitHub #2994, #2261, and #1289 - - - Changed the default settings used by CMake for the GZIP filter - - The default for the option HDF5_ENABLE_Z_LIB_SUPPORT was OFF. Now the default is ON. - This was done to match the defaults used by the autotools configure.ac. - In addition, the CMake message level for not finding a suitable filter library was - changed from FATAL_ERROR (which would halt the build process) to WARNING (which - will print a message to stderr). Associated files and documentation were changed to match. - - In addition, the default settings in the config/cmake/cacheinit.cmake file were changed to - allow CMake to disable building the filters if the tgz file could not be found. The option - to allow CMake to download the file from the original Github location requires setting - the ZLIB_USE_LOCALCONTENT option to OFF for gzip. 
And setting the LIBAEC_USE_LOCALCONTENT -      option to OFF for libaec (szip). - -      Fixes GitHub issue #2926 - -    - Fixed syntax of generator expressions used by CMake - -      Adding quotes around the generator expression allows CMake to -      correctly parse the expression. Generator expressions are typically -      parsed after command arguments. If a generator expression contains -      spaces, new lines, semicolons or other characters that may be -      interpreted as command argument separators, the whole expression -      should be surrounded by quotes when passed to a command. Failure to -      do so may result in the expression being split and it may no longer -      be recognized as a generator expression. - -      Fixes GitHub issue #2906 - -    - Fixed improper include of Subfiling VFD build directory - -      With the release of the Subfiling Virtual File Driver feature, compiler -      flags were added to the Autotools build's CPPFLAGS and AM_CPPFLAGS -      variables to always include the Subfiling VFD source code directory, -      regardless of whether the VFD is enabled and built or not. These flags -      are needed because the header files for the VFD contain macros that are -      assumed to always be available, such as H5FD_SUBFILING_NAME, so the -      header files are unconditionally included in the HDF5 library. However, -      these flags are only needed when building HDF5, so they belong in the -      H5_CPPFLAGS variable instead. Inclusion in the CPPFLAGS and AM_CPPFLAGS -      variables would export these flags to the h5cc and h5c++ wrapper scripts, -      as well as the libhdf5.settings file, which would break builds of software -      that use HDF5 and try to use or parse information out of these files after -      deleting temporary HDF5 build directories. - -      Fixes GitHub issue #2621 - -    - Corrected the CMake-generated pkg-config file - -      The pkg-config file generated by CMake had the order and placement of the -      libraries wrong. Also added support for debug library names. - -      Changed the order of Libs.private libraries so that dependencies come after -      dependents. Did not move the compression libraries into Requires.private -      because there was not a way to determine if the compression libraries had -      supported pkg-config files. We still recommend that the CMake config file method -      be used for building projects with CMake. - -      Fixes GitHub issues #1546 and #2259 - -    - Force lowercase Fortran module file names - -      The Cray Fortran compiler uses uppercase Fortran module file names, which -      caused CMake installs to fail. A compiler option was added to use lowercase -      instead. Tools ----- - -    - Fixed several issues in ph5diff - -      The parallel logic for the ph5diff tool inside the shared h5diff code was -      refactored and cleaned up to fix several issues with the ph5diff tool.
This - fixed: - - - several concurrency issues in ph5diff that can result in interleaved - output - - an issue where output can sometimes be dropped when it ends up in - ph5diff's output overflow file - - an issue where MPI_Init was called after HDF5 had been initialized, - preventing the library from setting up an MPI communicator attribute - to perform library cleanup on MPI_Finalize - - - Renamed h5fuse.sh to h5fuse - - Addresses Discussion #3791 - - - Fixed an issue with unmatched MPI messages in ph5diff - - The "manager" MPI rank in ph5diff was unintentionally sending "program end" - messages to its workers twice, leading to an error from MPICH similar to the - following: - - Abort(810645519) on node 1 (rank 1 in comm 0): Fatal error in internal_Finalize: Other MPI error, error stack: - internal_Finalize(50)...........: MPI_Finalize failed - MPII_Finalize(394)..............: - MPIR_Comm_delete_internal(1224).: Communicator (handle=44000000) being freed has 1 unmatched message(s) - MPIR_Comm_release_always(1250)..: - MPIR_finalize_builtin_comms(154): - - - Fixed an issue in h5repack for variable-length typed datasets - - When repacking datasets into a new file, h5repack tries to determine whether - it can use H5Ocopy to copy each dataset into the new file, or if it needs to - manually re-create the dataset, then read data from the old dataset and write - it to the new dataset. H5repack was previously using H5Ocopy for datasets with - variable-length datatypes, but this can be problematic if the global heap - addresses involved do not match exactly between the old and new files. These - addresses could change for a variety of reasons, such as the command-line options - provided to h5repack, how h5repack allocate space in the repacked file, etc. - Since H5Ocopy does not currently perform any translation when these addresses - change, datasets that were repacked with H5Ocopy could become unreadable in the - new file. H5repack has been fixed to repack variable-length typed datasets without - using H5Ocopy to ensure that the new datasets always have the correct global heap - addresses. - - - Names of objects with square brackets will have trouble without the - special argument, --no-compact-subset, on the h5dump command line. - - h5diff did not have this option and now it has been added. - - Fixes GitHub issue #2682 - - - In the tools traverse function - an error in either visit call - will bypass the cleanup of the local data variables. - - Replaced the H5TOOLS_GOTO_ERROR with just H5TOOLS_ERROR. - - Fixes GitHub issue #2598 + - Performance @@ -2121,25 +161,12 @@ Bug Fixes since HDF5-1.14.0 release Fortran API ----------- - - Fixed: HDF5 fails to compile with -Werror=lto-type-mismatch - - Removed the use of the offending C stub wrapper. - - Fixes GitHub issue #3987 + - High-Level Library ------------------ - - Fixed a memory leak in H5LTopen_file_image with H5LT_FILE_IMAGE_DONT_COPY flag - - When the H5LT_FILE_IMAGE_DONT_COPY flag is passed to H5LTopen_file_image, the - internally-allocated udata structure gets leaked as the core file driver doesn't - have a way to determine when or if it needs to call the "udata_free" callback. - This has been fixed by freeing the udata structure when the "image_free" callback - gets made during file close, where the file is holding the last reference to the - udata structure. 
- - Fixes GitHub issue #827 + - Fortran High-Level APIs @@ -2164,17 +191,6 @@ Bug Fixes since HDF5-1.14.0 release Testing ------- - - Fixed a bug in the dt_arith test when H5_WANT_DCONV_EXCEPTION is not - defined - - The dt_arith test program's test_particular_fp_integer sub-test tries - to ensure that the library correctly raises a datatype conversion - exception when converting a floating-point value to an integer overflows. - However, this test would run even when H5_WANT_DCONV_EXCEPTION isn't - defined, causing the test to fail due to the library not raising - datatype conversion exceptions. This has now been fixed by not running - the test when H5_WANT_DCONV_EXCEPTION is not defined. - - Disabled running of MPI Atomicity tests for OpenMPI major versions < 5 Support for MPI atomicity operations is not implemented for major @@ -2184,142 +200,106 @@ Bug Fixes since HDF5-1.14.0 release skip running the atomicity tests if the major version of OpenMPI is < 5. - - Fixed a testing failure in testphdf5 on Cray machines - - On some Cray machines, what appears to be a bug in Cray MPICH was causing - calls to H5Fis_accessible to create a 0-byte file with strange Unix - permissions. This was causing an H5Fdelete file deletion test in the - testphdf5 program to fail due to a just-deleted HDF5 file appearing to - still be accessible on the file system. The issue in Cray MPICH has been - worked around for the time being by resetting the MPI_Info object on the - File Access Property List used to MPI_INFO_NULL before passing it to the - H5Fis_accessible call. - - - A bug was fixed in the HDF5 API test random datatype generation code - - A bug in the random datatype generation code could cause test failures - when trying to generate an enumeration datatype that has duplicated - name/value pairs in it. This has now been fixed. - - - A bug was fixed in the HDF5 API test VOL connector registration checking code - - The HDF5 API test code checks to see if the VOL connector specified by the - HDF5_VOL_CONNECTOR environment variable (if any) is registered with the library - before attempting to run tests with it so that testing can be skipped and an - error can be returned when a VOL connector fails to register successfully. - Previously, this code didn't account for VOL connectors that specify extra - configuration information in the HDF5_VOL_CONNECTOR environment variable and - would incorrectly report that the specified VOL connector isn't registered - due to including the configuration information as part of the VOL connector - name being checked for registration status. This has now been fixed. - - - Fixed Fortran 2003 test with gfortran-v13, optimization levels O2,O3 - - Fixes failing Fortran 2003 test with gfortran, optimization level O2,O3 - with -fdefault-real-16. Fixes GH #2928. - Platforms Tested =================== - - HDF5 supports the latest macOS versions, including the current and two - preceding releases. As new major macOS versions become available, HDF5 - will discontinue support for the oldest version and add the latest - version to its list of compatible systems, along with the previous two - releases. - - Linux 5.16.14-200.fc35 GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) - #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) - Fedora35 clang version 13.0.0 (Fedora 13.0.0-3.fc35) + - HDF5 is tested with the two latest macOS versions that are available + on github runners. 
As new major macOS versions become available, HDF5 + will discontinue support for the older version and add the new latest + version to its list of compatible systems, along with the previous + version. + + Linux 6.8.0-1010-aws GNU gcc, gfortran, g++ + #10-Ubuntu SMP 2024 x86_64 (Ubuntu 13.2.0-23ubuntu4) 13.2.0 + GNU/Linux Ubuntu 24.04 Ubuntu clang version 18.1.3 (1ubuntu1) + Intel(R) oneAPI DPC++/C++ Compiler 2024.2.0 + ifx (IFX) 2024.2.0 20240602 (cmake and autotools) - Linux 5.19.0-1027-aws GNU gcc (GCC) 11.3.0-1ubuntu1 - #36-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 11.3.0-1ubuntu1 - Ubuntu 22.04 Intel oneAPI DPC++/C++ Compiler, IFX 2023.1.0 - Ubuntu clang version 14.0.0-1ubuntu1 + Linux 6.5.0-1018-aws GNU gcc, gfortran, g++ + #18-Ubuntu SMP x86_64 GNU/Linux (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 + Ubuntu 22.04 Ubuntu clang version 14.0.0-1ubuntu1 + Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 + ifx (IFX) 2024.0.2 20231213 (cmake and autotools) - Linux 5.15.0-1037-aws GNU gcc (GCC) 9.4.0-1ubuntu1 - #36-Ubuntu SMP x86_64 GNU/Linux GNU Fortran (GCC) 9.4.0-1ubuntu1 - Ubuntu 20.04 Intel oneAPI DPC++/C++ Compiler, IFX 2023.1.0 - Ubuntu clang version 10.0.0-4ubuntu1 + Linux 5.14.21-cray_shasta_c cray-mpich/8.1.28 + #1 SMP x86_64 GNU/Linux cce/15.0.0 + (frontier) gcc/13.2 + (cmake) + + Linux 5.14.0-427.24.1.el9_4 GNU gcc, gfortran, g++ (Red Hat 11.4.1-3) + #1 SMP x86_64 GNU/Linux clang version 17.0.6 + Rocky 9 Intel(R) oneAPI DPC++/C++ Compiler 2024.2.0 + ifx (IFX) 2024.2.0 (cmake and autotools) - Linux 5.14.21-cray_shasta_c cray-mpich/8.1.25 - #1 SMP x86_64 GNU/Linux cce 15.0.1 - (perlmutter) GCC 12.2.0 - intel-oneapi/2023.1.0 - nvidia/22.7 + Linux-4.18.0-553.16.1.1toss.t4 openmpi/4.1.2 + #1 SMP x86_64 GNU/Linux clang 14.0.6 + (corona, dane) GCC 12.1.1 + Intel(R) oneAPI DPC++/C++ Compiler 2023.2.1 + ifx (IFX) 2023.2.1 + + Linux-4.18.0-553.5.1.1toss.t4 openmpi/4.1/4.1.6 + #1 SMP x86_64 GNU/Linux clang 16.0.6 + (eclipse) GCC 12.3.0 + Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 + ifx (IFX) 2024.0.2 (cmake) - Linux 5.14.21-cray_shasta_c cray-mpich/8.1.23 - #1 SMP x86_64 GNU/Linux cce 15.0.1 - (crusher) GCC 12.2.0 + Linux 4.14.0-115.35.1.3chaos spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 17.0.6 + (vortex) GCC 12.2.1 + nvhpc 24.1 + XL 2023.06.28 (cmake) - Linux-4.14.0-115.21.2 spectrum-mpi/rolling-release - #1 SMP ppc64le GNU/Linux clang 12.0.1, 14.0.5 + Linux-4.14.0-115.35.1 spectrum-mpi/rolling-release + #1 SMP ppc64le GNU/Linux clang 14.0.5, 15.0.6 (lassen) GCC 8.3.1 - XL 16.1.1.2, 2021,09.22, 2022.08.05 + XL 2021.09.22, 2022.08.05 (cmake) - Linux-4.12.14-197.99-default cray-mpich/7.7.14 - #1 SMP x86_64 GNU/Linux cce 12.0.3 - (theta) GCC 11.2.0 - llvm 9.0 - Intel 19.1.2 - Linux 3.10.0-1160.36.2.el7.ppc64 gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) #1 SMP ppc64be GNU/Linux g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) Power8 (echidna) GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) - IBM XL C for Linux, V13.1 - IBM XL Fortran for Linux, V15.1 - Linux 3.10.0-1160.24.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) + Linux 3.10.0-1160.80.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++) #1 SMP x86_64 GNU/Linux compilers: Centos7 Version 4.8.5 20150623 (Red Hat 4.8.5-4) - (jelly/kituo/moohan) Version 4.9.3, Version 5.3.0, Version 6.3.0, - Version 7.2.0, Version 8.3.0, Version 9.1.0 - Version 10.2.0 + (jelly/kituo/moohan) Version 4.9.3, Version 7.2.0, Version 8.3.0, + Version 9.1.0, Version 10.2.0 Intel(R) C (icc), C++ (icpc), Fortran (icc) compilers: Version 17.0.0.098 Build 
20160721 GNU C (gcc) and C++ (g++) 4.8.5 compilers - with NAG Fortran Compiler Release 6.1(Tozai) + with NAG Fortran Compiler Release 7.1(Hanzomon) Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers - with NAG Fortran Compiler Release 6.1(Tozai) + with NAG Fortran Compiler Release 7.1(Hanzomon) + MPICH 3.1.4 compiled with GCC 4.9.3 MPICH 3.3 compiled with GCC 7.2.0 - MPICH 4.0.3 compiled with GCC 7.2.0 - OpenMPI 3.1.3 compiled with GCC 7.2.0 - OpenMPI 4.1.2 compiled with GCC 9.1.0 + OpenMPI 3.1.3 compiled with GCC 7.2.0 and 4.1.2 + compiled with GCC 9.1.0 PGI C, Fortran, C++ for 64-bit target on x86_64; - Version 19.10-0 - NVIDIA C, Fortran, C++ for 64-bit target on - x86_64; - Version 22.5-0 + Versions 18.4.0 and 19.10-0 + NVIDIA nvc, nvfortran and nvc++ version 22.5-0 (autotools and cmake) - Linux-3.10.0-1160.0.0.1chaos openmpi-4.1.2 - #1 SMP x86_64 GNU/Linux clang 6.0.0, 11.0.1 - (quartz) GCC 7.3.0, 8.1.0 - Intel 19.0.4, 2022.2, oneapi.2022.2 - - macOS Apple M1 11.6 Apple clang version 12.0.5 (clang-1205.0.22.11) - Darwin 20.6.0 arm64 gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0 - (macmini-m1) Intel icc/icpc/ifort version 2021.3.0 202106092021.3.0 20210609 - - macOS Big Sur 11.3.1 Apple clang version 12.0.5 (clang-1205.0.22.9) - Darwin 20.4.0 x86_64 gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0 - (bigsur-1) Intel icc/icpc/ifort version 2021.2.0 20210228 - macOS High Sierra 10.13.6 Apple LLVM version 10.0.0 (clang-1000.10.44.4) - 64-bit gfortran GNU Fortran (GCC) 6.3.0 - (bear) Intel icc/icpc/ifort version 19.0.4.233 20190416 + Linux-3.10.0-1160.119.1.1chaos openmpi/4.1.4 + #1 SMP x86_64 GNU/Linux clang 16.0.6 + (skybridge) Intel(R) oneAPI DPC++/C++ Compiler 2023.2.0 + ifx (IFX) 2023.2.0 + (cmake) - Mac OS X El Capitan 10.11.6 Apple clang version 7.3.0 from Xcode 7.3 - 64-bit gfortran GNU Fortran (GCC) 5.2.0 - (osx1011test) Intel icc/icpc/ifort version 16.0.2 + Linux-3.10.0-1160.90.1.1chaos openmpi/4.1 + #1 SMP x86_64 GNU/Linux clang 16.0.6 + (attaway) GCC 12.1.0 + Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 + ifx (IFX) 2024.0.2 + (cmake) Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++) #1 SMP x86_64 GNU/Linux compilers: @@ -2333,9 +313,9 @@ Platforms Tested Windows 10 x64 Visual Studio 2019 w/ clang 12.0.0 with MSVC-like command-line (C/C++ only - cmake) Visual Studio 2019 w/ Intel (C/C++ only - cmake) - Visual Studio 2022 w/ clang 15.0.1 + Visual Studio 2022 w/ clang 17.0.3 with MSVC-like command-line (C/C++ only - cmake) - Visual Studio 2022 w/ Intel C/C++/Fortran oneAPI 2023 (cmake) + Visual Studio 2022 w/ Intel C/C++ oneAPI 2023 (cmake) Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake)