fixed potential memory leak, deleted unnecessary semicolon #34
Pull Request Summary:

This pull request addresses a potential memory leak in the `load_model` function by ensuring that the dynamically allocated memory for `model` is properly deallocated before returning `NULL` in case of a model loading failure. Additionally, it includes changes to the `convert-hf-to-ggml.py` script to improve its readability and consistency.

Changes Made:
- In `load_model`, used the `delete` operator to release the dynamically allocated memory before returning `NULL` in case of a model loading failure.
- Cleaned up the `convert-hf-to-ggml.py` script to enhance code readability and consistency.

Explanation:
The `load_model` function previously allocated memory for `model` using the `new` operator but didn't release it in case the model loading failed, potentially causing a memory leak. To address this issue, I added a `delete model;` statement before returning `NULL` when the loading fails.

In the `convert-hf-to-ggml.py` script, I removed a few semicolons that were not required for proper code execution. This change enhances the code's readability and maintains consistency with Python's coding style.
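The leak-fix pattern described above can be sketched as follows. This is a hedged illustration, not the actual `load_model` from the repository: the `Model` struct, the file path parameter, and the failure condition are hypothetical stand-ins for whatever the real function uses.

```cpp
#include <cstdio>

struct Model {
    // stand-in for the real model state (weights, vocab, etc.)
    int n_params = 0;
};

// Sketch of the fix: if loading fails partway through, release the
// already-allocated model before returning, instead of just returning.
Model* load_model(const char* path) {
    Model* model = new Model();

    std::FILE* f = std::fopen(path, "rb");
    if (!f) {
        std::fprintf(stderr, "load_model: failed to open '%s'\n", path);
        delete model;     // the fix: free the allocation on the error path
        return nullptr;   // previously returned here without freeing -> leak
    }

    // ... read weights into *model here ...

    std::fclose(f);
    return model;         // caller owns the model and must delete it
}
```

In modern C++ the same guarantee falls out automatically from `std::unique_ptr<Model>`, which frees the object on every early return; the explicit `delete` shown here matches the raw-pointer style the PR describes.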