Ensure thread safety for memory mappings. #6509
Conversation
Do we have some indication that the lack of locking during attach/detach is problematic? For example, stack traces from crashes involving any of these methods?
src/realm/alloc_slab.cpp
Outdated
REALM_ASSERT(m_mappings.size());
m_data = m_mappings[0].primary_mapping.get_addr();
realm::util::encryption_read_barrier(m_mappings[0].primary_mapping, 0, sizeof(Header));
{
Locking here should not be needed. The slab allocator is just being attached, and no other thread should have a reference to it at this point.
@finnschiermer I think the assumption has changed, or there is some other problem elsewhere, since …
@nicola-cab Thanks for highlighting this for me. Very interesting.
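For context on the exchange above, here is a minimal sketch of the pattern the opening brace in the diff presumably introduces: only dereferencing entries of `m_mappings` while a mutex is held. The simplified types and the mutex name `m_mapping_mutex` are assumptions for illustration, not the actual realm-core declarations.

```cpp
#include <mutex>
#include <vector>

// Simplified stand-ins for the real SlabAlloc internals; the names and
// shapes here are illustrative assumptions only.
struct Mapping {
    char* addr = nullptr;
    char* get_addr() const { return addr; }
};

struct MapEntry {
    Mapping primary_mapping;
};

class SlabAllocSketch {
public:
    // Mirrors the guarded scope in the diff above: the mapping entry is
    // only read while the mutex is held, so a concurrent remap cannot
    // invalidate it mid-read.
    char* header_addr()
    {
        std::lock_guard<std::mutex> lock(m_mapping_mutex); // hypothetical mutex
        if (m_mappings.empty())
            return nullptr;
        return m_mappings[0].primary_mapping.get_addr();
    }

private:
    std::vector<MapEntry> m_mappings;
    std::mutex m_mapping_mutex; // hypothetical name
};
```

The trade-off debated above: during attach, no other thread should yet hold a reference to the allocator, which would make the lock redundant on that path, while it could still be required on paths reachable after attach.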
LGTM
Although this PR protects the mappings vector while reading the last committed format version, it does not explain the other issues we have been seeing around the slab allocator. For now, I will keep those issues open and keep digging.
What, How & Why?
It seems we are not always thread safe while accessing `m_mappings` inside the slab allocator. This could have caused #6508, but it could also be the cause of #6378 and #6510.
Unfortunately, this is just a guess, since I haven't found a way to reproduce the issues yet.
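To make the suspected race concrete, below is a hedged, self-contained sketch of the interleaving that a mutex around the mappings vector would rule out. The `grow` path, the reader, and the flat global state are simplified assumptions, not the actual realm-core code paths.

```cpp
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

struct MapEntry {
    std::size_t size = 0;
};

std::vector<MapEntry> mappings; // stands in for m_mappings
std::mutex mapping_mutex;       // the guard this PR introduces

// Reader: e.g. reading the last committed format version.
std::size_t read_first_size()
{
    std::lock_guard<std::mutex> lock(mapping_mutex);
    return mappings.empty() ? 0 : mappings[0].size;
}

// Writer: e.g. remapping when the file grows. Without the lock, the
// push_back may reallocate the vector's backing storage while a reader
// is still indexing into it, which is undefined behavior and would
// plausibly surface as sporadic crashes like those referenced above.
void grow()
{
    std::lock_guard<std::mutex> lock(mapping_mutex);
    mappings.push_back(MapEntry{4096});
}

int main()
{
    mappings.push_back(MapEntry{4096});
    std::thread writer([] {
        for (int i = 0; i < 1000; ++i)
            grow();
    });
    std::thread reader([] {
        for (int i = 0; i < 1000; ++i)
            static_cast<void>(read_first_size());
    });
    writer.join();
    reader.join();
}
```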
☑️ ToDos