Fix a lock order inversion in tests (#6666)
The cycle was DaemonThread::m_running_on_change_mutex =>
RealmCoordinator::m_realm_mutex => SyncManager::m_mutex =>
RealmCoordinator::s_coordinator_mutex =>
DaemonThread::m_running_on_change_mutex, and it happened because
DaemonThread::remove() was called inside RealmCoordinator::clear_cache()
while holding s_coordinator_mutex. Fortunately we don't actually need to do that.

As the cycle required RealmCoordinator::clear_all_caches(), this was only
applicable to tests.
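
The fix described above follows a common pattern for breaking lock-order cycles: while holding the global registry lock, only gather strong references; do the per-object work (which may acquire other locks) after the global lock is released. A minimal standalone sketch of that pattern — the names `Coordinator`, `s_registry`, and `close_realms` are illustrative, not Realm's actual API:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

struct Coordinator {
    std::mutex m_realm_mutex;
    int closed_count = 0;

    void close_realms() {
        // Acquires m_realm_mutex (and, in the real code, possibly further
        // locks). Running this while the global registry lock is held is
        // what creates a lock-order cycle.
        std::lock_guard<std::mutex> lock(m_realm_mutex);
        ++closed_count;
    }
};

std::mutex s_registry_mutex;
std::unordered_map<std::string, std::weak_ptr<Coordinator>> s_registry;

void clear_registry() {
    // Phase 1: under the global lock, only collect strong references.
    std::vector<std::shared_ptr<Coordinator>> coordinators;
    {
        std::lock_guard<std::mutex> lock(s_registry_mutex);
        for (auto& entry : s_registry) {
            if (auto coordinator = entry.second.lock())
                coordinators.push_back(coordinator);
        }
        s_registry.clear();
    }

    // Phase 2: with the global lock released, do the heavy per-object work.
    for (auto& coordinator : coordinators)
        coordinator->close_realms();
}
```

Because no thread ever holds `s_registry_mutex` and a per-coordinator mutex at the same time, the global lock can no longer participate in a cycle with them.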
tgoyne authored May 25, 2023
1 parent 61da27e commit 5bc5f3b
Showing 2 changed files with 15 additions and 15 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -20,6 +20,7 @@
 
 ### Internals
 * Simplify the implementation of query expression nodes which have a btree leaf cache.
+* Fix a lock order inversion hit by object store tests running on linux. The cycle required test-specific code and so is not applicable to non-tests.
 
 ----------------------------------------------

29 changes: 14 additions & 15 deletions src/realm/object-store/impl/realm_coordinator.cpp
@@ -644,24 +644,26 @@ void RealmCoordinator::unregister_realm(Realm* realm)
     }
 }
 
-// Thread-safety analsys doesn't reasonably handle calling functions on different
+// Thread-safety analysis doesn't reasonably handle calling functions on different
 // instances of this type
 void RealmCoordinator::clear_cache() NO_THREAD_SAFETY_ANALYSIS
 {
-    std::vector<std::shared_ptr<Realm>> realms_to_close;
     std::vector<std::shared_ptr<RealmCoordinator>> coordinators;
     {
         std::lock_guard<std::mutex> lock(s_coordinator_mutex);
-
         for (auto& weak_coordinator : s_coordinators_per_path) {
-            auto coordinator = weak_coordinator.second.lock();
-            if (!coordinator) {
-                continue;
+            if (auto coordinator = weak_coordinator.second.lock()) {
+                coordinators.push_back(coordinator);
             }
-            coordinators.push_back(coordinator);
+        }
+        s_coordinators_per_path.clear();
+    }
 
-            coordinator->m_notifier = nullptr;
+    for (auto& coordinator : coordinators) {
+        coordinator->m_notifier = nullptr;
 
+        std::vector<std::shared_ptr<Realm>> realms_to_close;
+        {
             // Gather a list of all of the realms which will be removed
             util::CheckedLockGuard lock(coordinator->m_realm_mutex);
             for (auto& weak_realm_notifier : coordinator->m_weak_realm_notifiers) {
@@ -671,14 +673,11 @@ void RealmCoordinator::clear_cache() NO_THREAD_SAFETY_ANALYSIS
             }
         }
-
-        s_coordinators_per_path.clear();
+        // Close all of the previously cached Realms. This can't be done while
+        // locks are held as it may try to re-lock them.
+        for (auto& realm : realms_to_close)
+            realm->close();
     }
-    coordinators.clear();
 
-    // Close all of the previously cached Realms. This can't be done while
-    // s_coordinator_mutex is held as it may try to re-lock it.
-    for (auto& realm : realms_to_close)
-        realm->close();
 }
 
 void RealmCoordinator::clear_all_caches()
