fixed potential memory leak, deleted unnecessary semicolon #34

Open · wants to merge 1 commit into main
Conversation

mendax0110

Pull Request Summary:

This pull request addresses a potential memory leak in the load_model function by ensuring that the dynamically allocated memory for model is properly deallocated before returning NULL when model loading fails. It also includes changes to the convert-hf-to-ggml.py script to improve its readability and consistency.

Changes Made:

  • Added memory deallocation for model before returning NULL when model loading fails.
  • Used the delete operator to release the dynamically allocated memory.
  • Removed a few unnecessary semicolons from the convert-hf-to-ggml.py script to improve readability and consistency.

Explanation:

The load_model function previously allocated memory for model with the new operator but did not release it when model loading failed, potentially causing a memory leak. To address this, a delete model; statement was added before returning NULL on the failure path.
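
A minimal sketch of the shape of the fix, assuming a load_model that allocates the model with new and returns NULL on failure. The struct and helper names here (gpt_model, model_load_weights) are illustrative stand-ins, not the exact code from this repository:

```cpp
#include <cstdio>

// Illustrative types and names; the real ones in the repository may differ.
struct gpt_model {
    // ... weights, hyperparameters, etc.
};

// Stand-in for the real weight-loading step (assumed to return false on failure).
static bool model_load_weights(gpt_model * /*model*/, const char * path) {
    FILE * f = std::fopen(path, "rb");
    if (!f) return false;
    std::fclose(f);
    return true;
}

gpt_model * load_model(const char * path) {
    gpt_model * model = new gpt_model();

    if (!model_load_weights(model, path)) {
        // Previously the function returned NULL here without freeing 'model',
        // leaking the allocation on every failed load.
        delete model;
        return NULL;
    }

    return model;
}
```

A smart pointer such as std::unique_ptr would avoid the manual delete entirely, but a one-line delete on the failure path keeps the diff minimal against the existing new/delete style of the function.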

In the convert-hf-to-ggml.py script, a few trailing semicolons were removed. Statement-ending semicolons are legal but unnecessary in Python, so removing them improves readability and keeps the script consistent with conventional Python style.
