Optimise peak memory usage [4/4]
Summary:
**Why?**
Running analysis for multiple targets quickly increases the memory usage of the buck2 daemon. Because of that, the Glean indexing [bxl script](https://www.internalfb.com/code/fbsource//fbcode/glean/facebook/lang/clang/index.bxl) OOMs Sandcastle machines in some cases. The common issue is a very high peak memory usage with many allocations that are [not released immediately](https://fb.workplace.com/groups/starlark/permalink/1312921066063385/).

This change avoids creating a new array in the DFS traversal: `nodes_to_visit[::stride]` makes a fresh copy of the list on every iteration of the outer loop. Instead, the same array is traversed in place by index, in reverse order when `stride == -1` (see the sketch after the diff below).

Reviewed By: podtserkovskiy

Differential Revision: D68555015

fbshipit-source-id: f3c1ceb257638e21c0d6ba4b65b7491b665c635f
iamirzhan authored and facebook-github-bot committed Jan 25, 2025
1 parent c405efe commit 3dc8a60
Showing 1 changed file with 3 additions and 1 deletion.
prelude/utils/graph_utils.bzl: 3 additions & 1 deletion

@@ -237,7 +237,9 @@ def depth_first_traversal_by(
             fail("Expected node {} in graph nodes".format(node_formatter(node)))
         nodes_to_visit = get_nodes_to_traverse_func(node)
         if nodes_to_visit:
-            for node in nodes_to_visit[::stride]:
+            range_traversal = range(len(nodes_to_visit) - 1, -1, -1) if stride == -1 else range(len(nodes_to_visit))
+            for i in range_traversal:
+                node = nodes_to_visit[i]
                 if node not in visited:
                     visited[node] = None
                     stack.append(node)
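
For illustration, here is a minimal Python sketch of the before/after pattern. It is a simplified standalone DFS, not the actual `depth_first_traversal_by` from the prelude; the `graph` dict and function names are assumptions for the example. Slicing with `[::stride]` materializes a full copy of the children list for every node popped off the stack, while iterating over an index `range` walks the existing list in place, so no extra lists contribute to peak memory.

```python
# Minimal sketch (not the prelude implementation): compares the old slicing
# pattern with the new index-range pattern from the diff above.

def dfs_with_slice(graph, root, stride=-1):
    # Old pattern: nodes_to_visit[::stride] allocates a reversed copy of the
    # children list for every node that is popped off the stack.
    visited, stack = {}, [root]
    while stack:
        node = stack.pop()
        for child in graph.get(node, [])[::stride]:
            if child not in visited:
                visited[child] = None
                stack.append(child)
    return list(visited)

def dfs_with_index_range(graph, root, stride=-1):
    # New pattern: iterate the same list by index, in reverse when stride == -1,
    # so no per-node copy is created during the traversal.
    visited, stack = {}, [root]
    while stack:
        node = stack.pop()
        children = graph.get(node, [])
        if children:
            order = range(len(children) - 1, -1, -1) if stride == -1 else range(len(children))
            for i in order:
                child = children[i]
                if child not in visited:
                    visited[child] = None
                    stack.append(child)
    return list(visited)

# Hypothetical toy graph: both traversals visit nodes in the same order.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert dfs_with_slice(graph, "a") == dfs_with_index_range(graph, "a")
```

Because Starlark allocations in the long-running buck2 daemon are not released immediately, dropping the per-node copy directly lowers the traversal's peak memory.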
