Grow: do nothing but remap. This means we'll continue to nuke old items until we cycle around. I think this is fine and avoids the very expensive data movement costs.
Shrink (after checking that we actually wrote past the new limit): stream through the entire ring, starting at the head, copying entries to a new ring. Entries larger than 4096 bytes need their data moved to their new ring IDs. The new head is just zero since the items are added in reverse order.
Create hard links of all the files into a new data.tmp/ to avoid conflicts.
Rename the existing data directory to data.old/.
Rename the new ring to overwrite the existing one.
Delete data.old with fuc.
This is crash safe because we never modify the original data and only rename the existing data directory upon completion, which means that after a hard crash, recovery can simply finish this operation. Otherwise, on recovery we discard all the data and start over.
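The shrink-and-swap sequence above can be sketched as follows. This is a minimal illustration, not the project's implementation: the function name, the in-memory `entries` list, and the directory layout are all hypothetical, and `shutil.rmtree` stands in for `fuc` in the final deletion step.

```python
import os
import shutil

def shrink_ring(data_dir: str, entries: list[bytes], new_limit: int) -> list[bytes]:
    """Compact a ring to at most `new_limit` items, then swap the on-disk
    directory crash-safely. `entries` is ordered starting at the head, so
    the new head is simply index 0. (Hypothetical layout: large values
    live as files inside `data_dir`.)
    """
    # 1. Stream through the ring starting at the head, keeping what fits.
    new_ring = entries[:new_limit]

    parent = os.path.dirname(os.path.abspath(data_dir)) or "."
    tmp_dir = os.path.join(parent, "data.tmp")
    old_dir = os.path.join(parent, "data.old")

    # 2. Hard-link every file into data.tmp/ so the original data is
    #    never modified; a crash here loses nothing.
    os.mkdir(tmp_dir)
    for name in os.listdir(data_dir):
        os.link(os.path.join(data_dir, name), os.path.join(tmp_dir, name))

    # 3. Move the live directory aside, then promote the new one.
    #    (rename(2) can't overwrite a non-empty directory, hence the
    #    intermediate data.old/ step.)
    os.rename(data_dir, old_dir)
    os.rename(tmp_dir, data_dir)

    # 4. Delete the old tree (the issue uses fuc for this).
    shutil.rmtree(old_dir)
    return new_ring
```

After step 3, the data directory is fully valid, so recovery after a crash only ever needs to delete a leftover `data.tmp/` or `data.old/` to finish the operation.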
I take it back: I'm going to implement garbage collection as a check that runs every time the bucket free list changes instead of on a timer. The threshold amount of garbage to remove shouldn't be configurable, because it would have the same problems as hashmap load factors (nobody knows what they're doing).
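A rough sketch of that trigger, assuming the details: the `Bucket` class, the callback name, and the fixed 0.5 ratio are all hypothetical, chosen only to show GC firing from free-list changes rather than a timer, with a hard-coded threshold.

```python
# Fixed, non-configurable ratio, analogous to a hashmap load factor.
GARBAGE_RATIO = 0.5

class Bucket:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.free_list: list[int] = []
        self.gc_runs = 0

    def on_free_list_change(self, slot: int) -> None:
        """Called every time a slot is returned to the free list; checks
        the garbage ratio inline instead of waiting for a timer."""
        self.free_list.append(slot)
        if len(self.free_list) / self.capacity >= GARBAGE_RATIO:
            self.collect()

    def collect(self) -> None:
        # A real collector would rewrite the ring; here we just reclaim
        # the freed slots to keep the sketch self-contained.
        self.free_list.clear()
        self.gc_runs += 1
```

The advantage over a timer is that GC cost scales with churn: a bucket whose free list never changes is never scanned.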
Add a way to change the maximum number of items that can be in each ring. Should probably be a default and then an override by ring ID.
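The default-plus-override lookup could look something like this; the class and method names are made up for illustration, as is the default value.

```python
DEFAULT_MAX_ITEMS = 1024  # hypothetical global default

class RingConfig:
    """Global default for max items per ring, with per-ring-ID overrides."""

    def __init__(self, default: int = DEFAULT_MAX_ITEMS):
        self.default = default
        self.overrides: dict[int, int] = {}

    def set_max_items(self, ring_id: int, max_items: int) -> None:
        self.overrides[ring_id] = max_items

    def max_items(self, ring_id: int) -> int:
        # Fall back to the default for rings without an override.
        return self.overrides.get(ring_id, self.default)
```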
Copied from #3:
Changing the max number of items:
Create hard links of all the files into a new data.tmp/ to avoid conflicts.
Rename the existing data directory to data.old/.