Database reader does not work for mmdb files larger than 2gb #154
Unfortunately, I don't think there is a quick fix to this. If I recall correctly, it is also a limitation of Java's memory mapping. We would probably need to store an array of …
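The "array of buffers" idea can be sketched roughly as follows. This is a hypothetical illustration, not the library's actual internals: a long-indexed address space is backed by an array of int-indexed `ByteBuffer` chunks, with the high bits of a position selecting the chunk and the low bits selecting the offset within it.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch, not the library's internals: back a long-indexed
// address space with an array of int-indexed ByteBuffers.
public class ChunkedBuffer {
    private final ByteBuffer[] chunks;
    private final int chunkShift;   // chunk size is 1 << chunkShift bytes
    private final long chunkMask;

    public ChunkedBuffer(long capacity, int chunkShift) {
        this.chunkShift = chunkShift;
        long chunkSize = 1L << chunkShift;
        this.chunkMask = chunkSize - 1;
        int n = (int) ((capacity + chunkSize - 1) >>> chunkShift);
        chunks = new ByteBuffer[n];
        for (int i = 0; i < n; i++) {
            long remaining = capacity - ((long) i << chunkShift);
            chunks[i] = ByteBuffer.allocate((int) Math.min(chunkSize, remaining));
        }
    }

    // A long position splits into a chunk index (high bits) and an
    // int offset within that chunk (low bits).
    public byte get(long position) {
        return chunks[(int) (position >>> chunkShift)]
                .get((int) (position & chunkMask));
    }

    public void put(long position, byte b) {
        chunks[(int) (position >>> chunkShift)]
                .put((int) (position & chunkMask), b);
    }

    public static void main(String[] args) {
        // 100-byte capacity split into 16-byte chunks, so position 42
        // lands in chunk 2 at offset 10.
        ChunkedBuffer buf = new ChunkedBuffer(100, 4);
        buf.put(42, (byte) 7);
        System.out.println(buf.get(42)); // prints 7
    }
}
```

A real implementation would use memory-mapped file regions rather than heap allocations, and a power-of-two chunk size keeps the index arithmetic to a shift and a mask.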
Any chance you would consider changing the internals to account for this limitation? Understandably, this might not be something that can be fixed with a snap of the fingers, but just how large a refactor are we talking about here?
Given that all the MaxMind databases are well under this limit, it seems unlikely that we would implement this ourselves in the near future. The change will be at least somewhat invasive and I suspect it will harm the performance for smaller databases. We would consider merging a PR to address the problem if the impact on existing users was minimal. In terms of how large of a refactor this would be, I suspect you would need to modify …
Alright, thank you for the prompt answer. I don't think I currently have the resources to fix it myself, given my general lack of knowledge of the library's internals. If you are concerned about performance, it would probably make sense either to have two different memory-handling implementations or a whole separate reader class.
I created a PR to fix this: #222
For both of you, may I ask what databases you are using that are so large? I believe MaxMind's largest publicly distributed database is under 400 MB in size. Although it seems we will need to fix this eventually, we are not ready to accept a PR that significantly increases the public API or the complexity of the code to do it. I still believe it should be doable with a relatively moderate change that doesn't increase the public API at all and where almost all of the additional complexity is limited to a class that wraps multiple …
The largest single database in my use case was around 1.5 GB, provided by ipinfo. However, it is possible to merge multiple databases together using a tool like mmdbmeld, increasing the size further. The two main reasons I see for using a merged database are: …
@shawjef3, it is a relatively big change and increases the public API significantly, e.g., the new interfaces. It is also not the easiest for the user to use, as they need to provide their own …
Okay. I give up. |
I edited my comment. I meant to ask about … I have a patch for #229.
I apologize that this has been frustrating. In general, I would recommend talking through proposed changes before making pull requests, especially large ones. You are also free to maintain a fork of this repo with any changes you deem appropriate. There is no reason there has to be only one MMDB reader for Java; many languages have several competing implementations with different priorities. As for the underlying issue here, I would like to see it fixed, but it has not been a priority for us, as all of our databases are so far from the limit. The fix I have in mind would not add any additional dependencies, would not require any changes from the user to use a larger database, and would not materially impact performance for databases under 2 GB. Under the hood, I suspect it would provide two different …
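An approach along those lines might look like the following sketch. All of the names here are hypothetical and none of this reflects the actual codebase: a single internal abstraction, with the implementation chosen by file size, so that databases under 2 GB keep the existing single-buffer fast path.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical sketch; none of these names come from the actual library.
interface NodeBuffer {
    byte get(long position);
}

// Fast path: one int-indexed buffer, as today.
final class SingleBuffer implements NodeBuffer {
    private final ByteBuffer buf;
    SingleBuffer(ByteBuffer buf) { this.buf = buf; }
    public byte get(long position) { return buf.get((int) position); }
}

// Slow path: an array of buffers for files over 2 GiB.
final class MultiBuffer implements NodeBuffer {
    private static final int SHIFT = 30;             // 1 GiB chunks
    private static final long MASK = (1L << SHIFT) - 1;
    private final ByteBuffer[] chunks;
    MultiBuffer(ByteBuffer[] chunks) { this.chunks = chunks; }
    public byte get(long position) {
        return chunks[(int) (position >>> SHIFT)].get((int) (position & MASK));
    }
}

final class Buffers {
    // The caller never sees which implementation it got, so the
    // public API does not grow.
    static NodeBuffer mapped(FileChannel ch) throws IOException {
        if (ch.size() <= Integer.MAX_VALUE) {
            return new SingleBuffer(ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size()));
        }
        // A full implementation would map the file in fixed-size regions.
        throw new UnsupportedOperationException("large-file mapping omitted from sketch");
    }
}
```

Because the choice is made once at open time behind a package-private interface, the extra indirection and complexity stay out of the read path for small databases.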
Trying to create a new reader (with or without a cache) for a 2.5 GB database:
Results in the following exception:
A quick search of the error message reveals that, as a long-standing design constraint, ByteBuffer is indexed with an int, so its size is limited to 2 GB.
Is there any workaround or fix for this, or is the reader simply unusable with a larger database?
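The failure can be reproduced without a large database file, because the JDK validates the requested size before touching the file: `FileChannel.map` accepts a `long` size but returns an int-indexed `MappedByteBuffer`, so it rejects anything over `Integer.MAX_VALUE` up front. A minimal sketch of that JDK behavior (the temp file is a stand-in; this is not the reader's own code):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class MapLimitDemo {
    public static void main(String[] args) throws IOException {
        // Tiny temp file standing in for the real 2.5 GB database.
        File tmp = File.createTempFile("demo", ".mmdb");
        tmp.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "r");
             FileChannel ch = raf.getChannel()) {
            // The size parameter is a long, but the resulting
            // MappedByteBuffer is indexed by int, so the JDK rejects
            // sizes over Integer.MAX_VALUE before doing any I/O.
            ch.map(FileChannel.MapMode.READ_ONLY, 0, 2_500_000_000L);
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```

This is why the limit bites at reader construction time rather than during lookups: the whole file is mapped in one call, and that single mapping can never exceed 2 GiB.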