raco parbench: Run parallel benchmarks
1 Installation
1.1 Requirements
2 Quick Start
2.1 Example Output
3 Command-Line Reference
3.1 Mode Options
3.2 Output Options
3.3 Benchmark Configuration
3.4 Information Options
3.5 Specifying Benchmarks
4 Benchmark Suites
4.1 MPL Benchmarks (27)
4.1.1 Graph Algorithms
4.1.2 Sorting
4.1.3 Numeric
4.1.4 Text Processing
4.1.5 Other
4.2 Shootout Benchmarks (6)
4.3 Racket Benchmarks (3)
5 Running Individual Benchmarks
5.1 Common Benchmark Options
6 Log Format
7 Analysis Tools
7.1 Summarizing Results
7.2 Generating Plots
7.3 HTML Dashboard
8 Configuration Files
9 Repository Structure
10 License
Version: 9.0

raco parbench: Run parallel benchmarks

Sam Tobin-Hochstadt

A comprehensive benchmarking suite for evaluating parallel performance in Racket.

The raco parbench command runs parallel benchmarks and reports timing results. It includes 36 benchmarks across three suites:

  • MPL (27 benchmarks) — Graph algorithms, sorting, numeric computations, and text processing

  • Shootout (6 benchmarks) — Classic language benchmark game workloads

  • Racket (3 benchmarks) — Native Racket benchmarks

The package provides both sequential and parallel implementations of each benchmark, allowing you to measure parallel speedup and scalability across different core counts.

1 Installation

Install the package using raco pkg:

raco pkg install parbench

Or link from a local clone:

git clone https://github.com/example/parbench
cd parbench
raco pkg install --link .

1.1 Requirements

  • Racket 9.0 or later

  • 4+ CPU cores recommended

  • Linux or macOS (Windows via WSL)

2 Quick Start

After installation, use raco parbench to run benchmarks:

raco parbench fib              # Run single benchmark
raco parbench mpl              # Run MPL suite (27 benchmarks)
raco parbench shootout         # Run Shootout suite (6 benchmarks)
raco parbench racket           # Run Racket suite (3 benchmarks)
raco parbench --quick          # Quick smoke test
raco parbench -v fib           # Verbose output
raco parbench --save fib       # Save log files
raco parbench --html fib       # Save logs + HTML report

By default, raco parbench runs quietly and prints a summary table without saving files. Use --save to save log files or --html to also generate HTML reports.

Alternatively, you can run the ./bench script directly from the repository root with identical arguments.

2.1 Example Output

$ raco parbench --quick fib
Parbench (quick mode)

Running mpl benchmarks...
  fib

========================================
  Results Summary
========================================

                              seq                  1 workers               4 workers
Benchmark               mean/median/min         mean/median/min         mean/median/min
--------------------------------------------------------------------------------------------
fib                     785.0/788.0/764         798.0/797.0/795         216.0/217.0/209
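
Reading the table: the speedup at a given worker count is the sequential mean divided by the parallel mean. A quick check against the numbers above, as a small Racket sketch:

```racket
#lang racket
;; Speedup = sequential mean / parallel mean, using the example
;; table above: 785.0 ms sequential vs. 216.0 ms at 4 workers.
(define (speedup seq-ms par-ms)
  (/ seq-ms par-ms))

(speedup 785.0 216.0)  ; ≈ 3.63 on 4 workers
```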

3 Command-Line Reference

The raco parbench command accepts the following options:

3.1 Mode Options

  • --quick or -q Quick smoke test mode. Runs 3 iterations with limited core counts (1 and 4).

  • --verbose or -v Show detailed per-benchmark output during execution.

3.2 Output Options

  • --save or -s Save results to log files in the output directory. Creates a timestamped subdirectory under "results/".

  • --html Generate an HTML visualization report. Implies --save.

  • --output dir or -o dir Set the output base directory. Default: "./results"

  • --update dir or -u dir Add results to an existing run directory. Implies --save.

3.3 Benchmark Configuration

  • --iterations n or -i n Number of timed iterations per benchmark. Default: 10.

  • --work factor or -w factor Scale problem sizes by the given factor. Use 0.1 for 10% of normal size, 0.001 for very quick smoke tests.

  • --cores counts or -c counts Specify worker counts. Accepts comma-separated values ("1,4,8") or ranges ("1-8").
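
These options compose. For example, a run that saves logs, performs 5 iterations at half the normal problem size, and sweeps several worker counts might look like this (an illustrative invocation built only from the flags documented above):

```shell
# Illustrative: 5 iterations, 50% problem size, workers 1/2/4/8,
# results saved under ./results.
raco parbench --save -i 5 -w 0.5 -c 1,2,4,8 fib histogram
```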

3.4 Information Options

  • --list or -l List all available benchmarks and exit.

  • --dry-run or -n Show commands that would be executed without actually running them.

  • --help or -h Show help message and exit.

3.5 Specifying Benchmarks

You can specify what to run as positional arguments:

  • Suite names: mpl, shootout, racket, or all

  • Individual benchmarks: fib, histogram, binary-trees, etc.

  • Multiple benchmarks: raco parbench fib histogram bfs

  • Default (no arguments): Runs all benchmarks

4 Benchmark Suites

4.1 MPL Benchmarks (27)

The MPL benchmarks are parallel Racket implementations of workloads from the MPL benchmark suite, covering graph algorithms, sorting, numeric kernels, and text processing.

4.1.1 Graph Algorithms

  • bfs Breadth-first search

  • mis Maximal independent set

  • msf Minimum spanning forest

  • connectivity Graph connectivity

  • triangle-count Triangle counting

  • centrality Betweenness centrality

  • convex-hull Convex hull computation

4.1.2 Sorting

  • integer-sort Parallel integer sorting

  • merge-sort Parallel merge sort

  • samplesort Sample sort algorithm

  • suffix-array Suffix array construction

4.1.3 Numeric

  • histogram Parallel histogram computation

  • primes Prime number sieve

  • fib Parallel Fibonacci

  • nqueens N-Queens solver

  • mcss Maximum contiguous subsequence sum

  • subset-sum Subset sum problem

  • bignum-add Big number addition

4.1.4 Text Processing

  • tokens Tokenization

  • word-count Word frequency counting

  • grep Parallel pattern matching

  • dedup Deduplication

  • palindrome Palindrome detection

  • parens Parentheses matching

4.1.5 Other

  • flatten Parallel list flattening

  • collect Parallel collection

  • shuffle Parallel shuffle

4.2 Shootout Benchmarks (6)

Classic benchmarks from the Computer Language Benchmarks Game:

  • binary-trees Binary tree allocation and traversal

  • spectral-norm Eigenvalue approximation

  • fannkuch-redux Pancake flipping permutations

  • mandelbrot Mandelbrot set generation

  • k-nucleotide Nucleotide frequency counting

  • regex-dna DNA pattern matching

4.3 Racket Benchmarks (3)

Native Racket benchmarks:

  • bmbench Boyer-Moore majority voting algorithm

  • richards Richards device scheduler simulation

  • rows1b Synthetic row processing workload

5 Running Individual Benchmarks

Each benchmark can be run directly using racket -l:

racket -l parbench/benchmarks/mpl/fib -- --n 42 --threshold 30 --workers 4 --repeat 5
racket -l parbench/benchmarks/mpl/histogram -- --n 200000000 --workers 8 --log results/hist.sexp
racket -l parbench/benchmarks/mpl/bfs -- --n 8000000 --graph-type grid --workers 4
racket -l parbench/benchmarks/shootout/binary-trees -- --n 18 --workers 8 --repeat 10
racket -l parbench/benchmarks/shootout/mandelbrot -- --n 4000 --workers 8
racket -l parbench/benchmarks/racket/bmbench -- --n 1000000 --workers 4 --repeat 10
racket -l parbench/benchmarks/racket/richards -- --iterations 100 --workers 8

The -- separator tells racket that subsequent arguments should be passed to the benchmark module rather than interpreted as Racket flags.

5.1 Common Benchmark Options

All individual benchmarks support these options:

  • --workers n Number of parallel workers

  • --repeat n Number of timed iterations

  • --log file Write S-expression results to file

  • --skip-sequential Skip the sequential baseline run

Each benchmark may have additional benchmark-specific options (e.g., --n for problem size, --threshold for parallelism cutoff).
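
For instance, a direct run that skips the sequential baseline and logs its results might look like the following (the merge-sort module path follows the pattern shown above and is illustrative):

```shell
# Illustrative: parallel-only run of merge-sort with logging.
racket -l parbench/benchmarks/mpl/merge-sort -- \
  --workers 8 --repeat 5 \
  --log results/merge-sort.sexp --skip-sequential
```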

6 Log Format

Benchmark results are recorded as S-expressions with the following structure:

(benchmark
  (name histogram)
  (variant parallel)
  (iteration 1)
  (repeat 10)
  (metrics (cpu-ms 520) (real-ms 515) (gc-ms 12))
  (params (n 200000000) (workers 8))
  (metadata (timestamp 1758661801) (racket-version "9.0"))
  (status ok))

  • name Benchmark identifier

  • variant Either 'sequential or 'parallel

  • iteration Current iteration number (1-based)

  • repeat Total number of iterations

  • metrics Timing data:
    • cpu-ms CPU time in milliseconds

    • real-ms Wall-clock time in milliseconds

    • gc-ms Garbage collection time in milliseconds

  • params Benchmark parameters (problem size, worker count, etc.)

  • metadata Run metadata (timestamp, Racket version)

  • status Either 'ok or 'error
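
Because the log is plain S-expressions, it can be post-processed with Racket's read. A minimal sketch (assuming the file holds a sequence of benchmark records in the format shown above):

```racket
#lang racket
;; Sketch: print name, variant, and wall-clock time for each
;; successful record in a parbench log file.

(define (field rec key)                 ; look up (key val ...) in a record
  (assq key (cdr rec)))

(define (real-ms rec)
  (cadr (assq 'real-ms (cdr (field rec 'metrics)))))

(define (summarize path)
  (with-input-from-file path
    (lambda ()
      (for ([rec (in-port read)]
            #:when (eq? (cadr (field rec 'status)) 'ok))
        (printf "~a (~a): ~a ms\n"
                (cadr (field rec 'name))
                (cadr (field rec 'variant))
                (real-ms rec))))))
```

Calling (summarize "results/hist.sexp") would print one line per ok record.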

7 Analysis Tools

The package includes tools for analyzing and visualizing benchmark results.

7.1 Summarizing Results

Compute statistics (mean, standard deviation, min, max) from log files:

racket -l parbench/benchmarks/tools/summarize-results -- results/*.sexp

7.2 Generating Plots

Create PNG plots from benchmark results:

racket -l parbench/benchmarks/tools/plot-results -- \
  --input results/*.sexp \
  --metric real \
  --output plots/benchmark.png

7.3 HTML Dashboard

Generate an interactive HTML visualization dashboard:

racket -l parbench/benchmarks/tools/visualize -- \
  --log-dir results \
  --output dashboard.html

8 Configuration Files

Pre-defined configurations are available in "benchmarks/config/":

File              Purpose
----------------  ----------------------------------------
"quick.sexp"      Fast smoke tests (small sizes, 1 repeat)
"standard.sexp"   Typical benchmarking (moderate sizes)
"stress.sexp"     Large problems (comprehensive evaluation)

Use with the suite runner:

racket -l parbench/benchmarks/run-suite -- --suite all --config benchmarks/config/quick.sexp

9 Repository Structure

parbench/
+-- bench                    # Unified benchmark runner script
+-- raco-parbench.rkt        # Raco command wrapper
+-- info.rkt                 # Package metadata
+-- README.md                # Project overview
+-- BENCHMARKS.md            # CLI reference
+-- benchmarks/
|   +-- common/              # Shared CLI, logging infrastructure
|   +-- mpl/                 # MPL parallel algorithms (27)
|   +-- shootout/            # Shootout benchmarks (6)
|   +-- racket/              # Racket benchmarks (3)
|   +-- tools/               # Analysis and visualization
|   +-- config/              # Configuration files
+-- tests/                   # RackUnit test suite
+-- scribblings/             # This documentation

10 License

Apache 2.0 or MIT, at your option.