Summarize bench::mark results.
# S3 method for bench_mark
summary(object, filter_gc = TRUE, relative = FALSE, time_unit = NULL, ...)
object - A bench_mark object to summarize.
filter_gc - If TRUE (the default), remove iterations that contained at least one garbage collection before summarizing.
relative - If TRUE, all summary columns are computed relative to the minimum value of each column.
time_unit - If NULL, times are reported in human-readable units chosen per value; otherwise a fixed time unit to use, e.g. "ms" or "s".
... - Additional arguments ignored.
A tibble with additional summary columns. The following summary columns are computed:
expression (bench_expr) - The deparsed expression that was evaluated
(or its name if one was provided).
min (bench_time) - The minimum execution time.
median (bench_time) - The sample median of execution time.
itr/sec (double) - The estimated number of executions performed per second.
mem_alloc (bench_bytes) - Total amount of memory allocated by R while
running the expression. Memory allocated outside the R heap, e.g. by
malloc() or new directly, is not tracked, so take care to avoid
misinterpreting the results if running code that may do this.
gc/sec (double) - The number of garbage collections per second.
n_itr (integer) - Total number of iterations after filtering
garbage collections (if filter_gc == TRUE).
n_gc (double) - Total number of garbage collections performed over all
iterations. This is a pseudo-measure of the pressure on the garbage collector;
if it varies greatly between two alternatives, the one with fewer
collections will generally cause fewer allocations in real usage.
total_time (bench_time) - The total time to perform the benchmarks.
result (list) - A list column of the object(s) returned by each evaluated expression.
memory (list) - A list column with results from Rprofmem().
time (list) - A list column of bench_time vectors for each evaluated expression.
gc (list) - A list column of tibbles containing the level of
garbage collection (0-2, columns) for each iteration (rows).
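The list columns above can be inspected directly on the returned tibble. A minimal sketch (the benchmarked expression is only an illustration):

```r
library(bench)

# Run a tiny benchmark; mark() returns a bench_mark tibble
res <- bench::mark(sqrt(1:100))

res$time[[1]]  # bench_time vector of individual run times
res$gc[[1]]    # tibble of gc levels (0-2) for each iteration
```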
If filter_gc == TRUE (the default), runs that contain a garbage
collection will be removed before summarizing. This is most useful for fast
expressions, where the majority of runs do not contain a gc. Call
summary(filter_gc = FALSE) if you would like to compute summaries that
include these runs, e.g. for expressions with many allocations where all or
most runs contain a gc.
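The time_unit argument can be used to force a fixed unit rather than the automatically chosen one. A minimal sketch (assuming bench is installed; "ms" is one accepted unit):

```r
library(bench)

res <- bench::mark(runif(1e4))

# Report min, median, and total_time in milliseconds
summary(res, time_unit = "ms")
```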
dat <- data.frame(x = runif(10000, 1, 1000), y = runif(10000, 1, 1000))

# `bench::mark()` implicitly calls summary() automatically
results <- bench::mark(
  dat[dat$x > 500, ],
  dat[which(dat$x > 500), ],
  subset(dat, x > 500))

# However you can also do so explicitly to filter gc differently.
summary(results, filter_gc = FALSE)
#> # A tibble: 3 x 13
#>   expression                  min median `itr/sec` mem_alloc `gc/sec` n_itr
#>   <bch:expr>                <bch> <bch:>     <dbl> <bch:byt>    <dbl> <int>
#> 1 dat[dat$x > 500, ]        334µs  355µs     2284.     375KB     40.0  1142
#> 2 dat[which(dat$x > 500), ] 257µs  272µs     3057.     258KB     34.0  1529
#> 3 subset(dat, x > 500)      448µs  480µs     1440.     493KB     33.2   738
#> # … with 6 more variables: n_gc <dbl>, total_time <bch:tm>, result <list>,
#> #   memory <list>, time <list>, gc <list>

# Or output relative times
summary(results, relative = TRUE)
#> # A tibble: 3 x 13
#>   expression                  min median `itr/sec` mem_alloc `gc/sec` n_itr
#>   <bch:expr>                <dbl>  <dbl>     <dbl>     <dbl>    <dbl> <int>
#> 1 dat[dat$x > 500, ]         1.30   1.30      1.35      1.45     1.23  1122
#> 2 dat[which(dat$x > 500), ]  1      1         1.74      1        1     1512
#> 3 subset(dat, x > 500)       1.75   1.76      1         1.91     1.20   721
#> # … with 6 more variables: n_gc <dbl>, total_time <bch:tm>, result <list>,
#> #   memory <list>, time <list>, gc <list>