Consider switch to Chairmarks.jl #35
Chairmarks.jl (discourse thread here) from @LilithHafner is much faster than BenchmarkTools.jl. With BenchmarkTools.jl, it can definitely take a while to run an entire benchmark suite, especially if you need benchmark tuning, so I am very curious about whether we could try to switch AirspeedVelocity.jl to use this!

It would be really nice to get ultra-fast benchmarks for an entire package's suite.

cc @Zentrik @Krastanov in case of interest.
I haven't played around with Chairmarks too much, but setting gctrial to false should reduce start-up time for BenchmarkTools by ~300 ms.
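For reference, a minimal sketch of what that looks like; gctrial is a standard BenchmarkTools parameter accepted by run, and the suite contents here are purely illustrative:

```julia
using BenchmarkTools

SUITE = BenchmarkGroup()
SUITE["rand"] = @benchmarkable rand(1:100, 100_000)

# gctrial=true (the default) runs gc() before each benchmark's trial;
# gctrial=false skips that pass, trading a little accuracy for start-up time.
results = run(SUITE; gctrial=false)
```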
Cross-posting thread on Chairmarks.jl for translating benchmarks: LilithHafner/Chairmarks.jl#70
Interesting. Would this hurt accuracy at all, though? Some benchmarks can take ~20 minutes to run with BenchmarkTools, so 300 ms might not save much.
I suspect it might, but if the benchmark is 20 minutes long that seems unlikely to matter. Chairmarks essentially runs with gctrial=false anyway.
I don't know how well BenchmarkTools and Chairmarks perform on 20-minute workloads, but at that point aren't you only running your benchmark function once? I wouldn't think either would have much overhead, and you could use …
Just a guess here: perhaps that 20-minute run includes multiple benchmarks, each of which has its own overhead. If, on the other hand, it is actually a single 20-minute workload...
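If it really is a single long workload, here is a sketch of pinning both tools to a single timed execution; evals and samples are standard options of @btime and @be, and the two-second sleep is just a stand-in:

```julia
using BenchmarkTools, Chairmarks

long_workload() = sleep(2)  # stand-in for a ~20-minute function

# One timed run each. Note BenchmarkTools still performs a warmup execution,
# while Chairmarks reports "without a warmup" when it takes a single sample.
@btime long_workload() evals=1 samples=1
@be long_workload() evals=1 samples=1
```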
Comparing Chairmarks and BenchmarkTools:

```julia
julia> @time @time sleep(10)
 10.011087 seconds (4 allocations: 112 bytes)
 10.019674 seconds (444 allocations: 19.547 KiB, 0.08% compilation time)

julia> @time @b sleep(10)
[ Info: Loading Chairmarks ...
 10.037857 seconds (114.04 k allocations: 5.883 MiB, 0.23% compilation time)
10.011 s (4 allocs: 112 bytes, without a warmup)

julia> @time @btime sleep(10)
[ Info: Loading BenchmarkTools ...
  10.011 s (4 allocations: 112 bytes)
 30.654176 seconds (1.90 M allocations: 96.328 MiB, 1.14% gc time, 0.92% compilation time)

julia> @time @btime sleep(10) evals=1
  10.011 s (4 allocations: 112 bytes)
 20.205827 seconds (32.59 k allocations: 1.615 MiB, 0.82% gc time, 0.07% compilation time)
```
In theory, all this would take is a package extension on Chairmarks, which removes …
IMO switching to Chairmarks (and dropping support for BenchmarkTools) would be bad. Supporting Chairmarks would be lovely, though.
@MilesCranmer, what properties does AirspeedVelocity currently require SUITE to have? E.g. perhaps something around …? I'm asking because I'd like to figure out how to create a fully functional, compatible object using Chairmarks.
It just expects that SUITE is a BenchmarkGroup; see AirspeedVelocity.jl/src/Utils.jl, lines 137 to 235 at commit 493a0e0.
Would this be compatible?

```julia
julia> struct Runnable{F}
           f::F
       end

julia> Base.run(r::Runnable; kwargs...) = r.f()

julia> BenchmarkTools.tune!(::Runnable; kwargs...) = nothing

julia> macro chairmarks_benchmarkable(args...)
           :(Runnable(() -> @be $(args...)))
       end
@chairmarks_benchmarkable (macro with 1 method)

julia> SUITE = BenchmarkGroup()
0-element BenchmarkTools.BenchmarkGroup:
  tags: []

julia> SUITE["rand"] = @chairmarks_benchmarkable rand()
Runnable{var"#29#31"}(var"#29#31"())

julia> SUITE["sleep"] = @chairmarks_benchmarkable sleep(.01)
Runnable{var"#33#35"}(var"#33#35"())

julia> SUITE["rand BenchmarkTools"] = @benchmarkable rand()
Benchmark(evals=1, seconds=5.0, samples=10000)

julia> tune!(SUITE)
3-element BenchmarkTools.BenchmarkGroup:
  tags: []
  "sleep" => Runnable{var"#33#35"}(var"#33#35"())
  "rand BenchmarkTools" => Benchmark(evals=1000, seconds=5.0, samples=10000)
  "rand" => Runnable{var"#29#31"}(var"#29#31"())

julia> run(SUITE);
```
No, it hits …
I suspect that the leaves of the …
Huzzah! I got this working. With this benchmark.jl:

```julia
using BenchmarkTools, Chairmarks

struct Runnable{F}
    f::F
end
Base.run(r::Runnable; kwargs...) = r.f()
BenchmarkTools.tune!(::Runnable; kwargs...) = nothing
macro chairmarks_benchmarkable(args...)
    :(Runnable(() -> @be $(args...)))
end

SUITE = BenchmarkGroup()
SUITE["BenchmarkTools"] = @benchmarkable rand(1:100, 100000)
SUITE["Chairmarks"] = @chairmarks_benchmarkable rand(1:100, 100000)
```

and this diff to AirspeedVelocity.jl:

```diff
diff --git a/src/Utils.jl b/src/Utils.jl
index 8a812d1..70a2e3f 100644
--- a/src/Utils.jl
+++ b/src/Utils.jl
@@ -486,6 +486,14 @@ end
 function _flatten_results!(d::OrderedDict, results::Dict{String,Any}, prefix)
     if "times" in keys(results)
         d[prefix] = compute_summary_statistics(results)
+    elseif "samples" in keys(results) # This branch allows for Chairmarks.jl compatibility
+        samples = results["samples"]
+        results′ = Dict(
+            "times" => 1e9getindex.(samples, "time"),
+            "memory" => mean(getindex.(samples, "bytes")),
+            "allocs" => mean(getindex.(samples, "allocs"))
+        )
+        d[prefix] = compute_summary_statistics(results′)
     elseif "data" in keys(results)
         for (key, value) in results["data"]
             next_prefix = if length(prefix) == 0
```

I got these results.
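To spell out what the new elseif branch does, a standalone sketch of the same mapping with hypothetical sample values; Chairmarks serializes each sample's time in seconds, while the "times" field is in nanoseconds, hence the 1e9 factor:

```julia
using Statistics: mean

# Hypothetical serialized Chairmarks samples (not real measurements):
samples = [
    Dict("time" => 1.2e-3, "bytes" => 781344.0, "allocs" => 2.0),
    Dict("time" => 1.1e-3, "bytes" => 781344.0, "allocs" => 2.0),
]

# Same mapping as the diff: seconds -> nanoseconds, plus mean memory/allocs.
results′ = Dict(
    "times"  => 1e9 .* getindex.(samples, "time"),
    "memory" => mean(getindex.(samples, "bytes")),
    "allocs" => mean(getindex.(samples, "allocs")),
)
```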
Nice!!
How achievable do you think it would be to have a …?
Very achievable. This is already possible with the flag … However, I don't think we need to do that. In an ideal world, it would be possible to use whatever backend you want without even needing a command-line flag. When we talk about a backend, there are two parts I think of: (1) the tool that runs individual benchmarks, and (2) the aggregator that organizes benchmarks into a suite.
BenchmarkTools provides 1 & 2. Chairmarks provides an alternative for 1; ChairmarksForAirspeedVelocity provides 1 from Chairmarks and re-exports BenchmarkGroup to provide 2. Implementing #73 well should enable support for any backend in the first category. BenchmarkTools.BenchmarkGroup is a reasonable aggregator and right now there are no alternatives that come to mind, so supporting alternative backends for aggregators doesn't seem pressing for now.
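To make the two-part split concrete, a minimal benchmark.jl sketch where part 2 (the aggregator) stays BenchmarkTools.BenchmarkGroup and part 1 (the runner) is Chairmarks, reusing the Runnable wrapper from earlier in this thread; this is the thread's workaround pattern, not ChairmarksForAirspeedVelocity's actual API:

```julia
using BenchmarkTools, Chairmarks

# Part 1: a Chairmarks-backed runner that the BenchmarkGroup machinery can drive.
struct Runnable{F}
    f::F
end
Base.run(r::Runnable; kwargs...) = r.f()
BenchmarkTools.tune!(::Runnable; kwargs...) = nothing

# Part 2: the aggregator is still BenchmarkTools' BenchmarkGroup.
SUITE = BenchmarkGroup()
SUITE["sum"] = Runnable(() -> @be sum(rand(1000)))
```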