Rename the machines variable/filters to a tags variable/filter #264

Merged 1 commit on Sep 6, 2024
Rename the machines variable to a tags variable
This generalizes the use case and avoids confusion with the hopefully soon-to-be-added support for machine-specific configuration.

In the end, the feature was already more generally useful than to split benchmark jobs up by machines.

Signed-off-by: Stefan Marr <git@stefan-marr.de>
smarr committed Sep 4, 2024
commit 4f48ec0de7f4bab55d9db034d85a51af093597ff
32 changes: 16 additions & 16 deletions docs/config.md
@@ -641,10 +641,10 @@ As well as:
 - `input_sizes`
 - `cores`
 - `variable_values`
-- `machines`
+- `tags`
 
 Run configurations are generated from the cross product of all `input_sizes`,
-`cores`, `variable_values`, and `machines` for a benchmark.
+`cores`, `variable_values`, and `tags` for a benchmark.
 
 ## Benchmark
 
@@ -702,7 +702,7 @@ way to adjust the amount of computation performed.
 in form of a sequence literal: `[small, large]`.
 
 Run configurations are generated from the cross product of all `input_sizes`,
-`cores`, `variable_values` and `machines` for a benchmark.
+`cores`, `variable_values` and `tags` for a benchmark.
 The specific input size can be used, e.g., in the command as in the example below.
 
 Example:
@@ -728,7 +728,7 @@ In practice, it can be used more flexibly and as just another variable that can
 any list of strings.
 
 Run configurations are generated from the cross product of all `input_sizes`,
-`cores`, `variable_values`, and `machines` for a benchmark.
+`cores`, `variable_values`, and `tags` for a benchmark.
 The specific core setting can be used, e.g., in the command as in the example below.
 
 Example:
@@ -750,7 +750,7 @@ Another dimension by which the benchmark execution can be varied.
 It takes a list of strings, or arbitrary values really.
 
 Run configurations are generated from the cross product of all `input_sizes`,
-`cores`, `variable_values`, and `machines` for a benchmark.
+`cores`, `variable_values`, and `tags` for a benchmark.
 The specific variable value can be used, e.g., in the command as in the example below.
 
 Example:
@@ -769,27 +769,27 @@ benchmark_suites:
 
 ---
 
-**machines:**
+**tags:**
 
 A dimension by which the benchmark execution can be varied.
-It takes a list of strings, or arbitrary values really.
-The typical use case is to name one or more machines on which the benchmark
-is to be executed.
+It takes a list of strings.
+One use case is to split benchmarks into different groups, e.g., by machine
+or fast and slow, which one can then filter for with `rebench ... t:tag`.
 
 Run configurations are generated from the cross product of all `input_sizes`,
-`cores`, `variable_values`, and `machines` for a benchmark.
-The specific machine can be used, e.g., in the command, or often more useful
+`cores`, `variable_values`, and `tags` for a benchmark.
+The specific tag can be used, e.g., in the command, or often more useful
 as a filter when running `rebench`.
 
 Example:
 
 ```yaml
 benchmark_suites:
     ExampleSuite:
-        command: Harness %(machine)s
+        command: Harness %(tag)s
         benchmarks:
             - Benchmark2:
-                machines:
+                tags:
                     - machine1
                     - machine2
 ```
@@ -798,7 +798,7 @@ Example filter command line, which would execute only the benchmarks
 tagged with `machine1`:
 
 ```shell
-rebench rebench.conf m:machine1
+rebench rebench.conf t:machine1
 ```
 
 ---
@@ -953,7 +953,7 @@ executors:
 **run details and variables:**
 
 An executor can additionally use the keys for [run details](#runs) and [variables](#benchmark)
-(`input_sizes`, `cores`, `variable_values`, `machines`).
+(`input_sizes`, `cores`, `variable_values`, `tags`).
 
 ## Experiments
 
@@ -1077,7 +1077,7 @@ experiments:
 **run details and variables:**
 
 An experiment can additionally use the keys for [run details](#runs) and
-[variables](#benchmark) (`input_sizes`, `cores`, `variable_values`, `machines`).
+[variables](#benchmark) (`input_sizes`, `cores`, `variable_values`, `tags`).
 Note, this is possible on the main experiment, but also separately for each
 of the defined executions.
 
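The cross-product rule repeated throughout docs/config.md can be sketched in a few lines. This is an illustrative model of the documented semantics only; `generate_run_configs` and the dict keys are hypothetical names, not ReBench's API:

```python
from itertools import product

# Illustrative model of the documented semantics: one run configuration per
# element of the cross product of input_sizes, cores, variable_values, and tags.
def generate_run_configs(input_sizes, cores, variable_values, tags):
    return [
        {"input_size": i, "cores": c, "var_value": v, "tag": t}
        for i, c, v, t in product(input_sizes, cores, variable_values, tags)
    ]

configs = generate_run_configs(["small", "large"], [1, 2], ["val1"], ["machine1", "machine2"])
print(len(configs))  # 2 * 2 * 1 * 2 = 8
```

This also shows why a `tags` list with two entries doubles the number of runs unless a `t:` filter narrows them down.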
2 changes: 1 addition & 1 deletion docs/usage.md
@@ -17,7 +17,7 @@ Argument:
   e:$  filter experiments to only include the named executor, example: e:EXEC1 e:EXEC3
   s:$  filter experiments to only include the named suite and possibly benchmark
        example: s:Suite1 s:*:Bench3
-  m:$  filter experiments to only include the named machines, example: m:machine1 m:machine2
+  t:$  filter experiments to only include the given tags, example: t:machine1 t:tagFoo
   ...
 ```

20 changes: 10 additions & 10 deletions rebench/configurator.py
@@ -69,21 +69,21 @@ def matches(self, bench):
         return bench.name == self._benchmark_name
 
 
-class _MachineFilter(object):
+class _TagFilter(object):
 
-    def __init__(self, machine):
-        self._machine = machine
+    def __init__(self, tag):
+        self._tag = tag
 
-    def matches(self, machine):
-        return machine == self._machine
+    def matches(self, tag):
+        return tag == self._tag
 
 
 class _RunFilter(object):
 
     def __init__(self, run_filters):
         self._executor_filters = []
         self._suite_filters = []
-        self._machine_filters = []
+        self._tag_filters = []
 
         if not run_filters:
             return
@@ -96,17 +96,17 @@ def __init__(self, run_filters):
                 self._suite_filters.append(_SuiteFilter(parts[1]))
             elif parts[0] == "s" and len(parts) == 3:
                 self._suite_filters.append(_BenchmarkFilter(parts[1], parts[2]))
-            elif parts[0] == "m" and len(parts) == 2:
-                self._machine_filters.append(_MachineFilter(parts[1]))
+            elif parts[0] == "t" and len(parts) == 2:
+                self._tag_filters.append(_TagFilter(parts[1]))
             else:
                 raise RuntimeError("Unknown filter expression: " + run_filter)
 
     def applies_to_bench(self, bench):
         return (self._match(self._executor_filters, bench) and
                 self._match(self._suite_filters, bench))
 
-    def applies_to_machine(self, machine):
-        return self._match(self._machine_filters, machine)
+    def applies_to_tag(self, tag):
+        return self._match(self._tag_filters, tag)
 
     @staticmethod
     def _match(filters, bench):
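The renamed classes can be exercised in isolation. The following sketch reproduces just the `t:` branch of `_RunFilter`'s parsing; the standalone `parse_tag_filters` helper is added here for illustration and does not exist in ReBench:

```python
class _TagFilter(object):
    # Matches a run's tag against the tag given on the command line.
    def __init__(self, tag):
        self._tag = tag

    def matches(self, tag):
        return tag == self._tag


def parse_tag_filters(run_filters):
    # Mirrors the `elif parts[0] == "t"` branch from the diff above;
    # the e: and s: cases are omitted for brevity.
    tag_filters = []
    for run_filter in run_filters:
        parts = run_filter.split(":")
        if parts[0] == "t" and len(parts) == 2:
            tag_filters.append(_TagFilter(parts[1]))
        else:
            raise RuntimeError("Unknown filter expression: " + run_filter)
    return tag_filters


filters = parse_tag_filters(["t:machine1", "t:machine2"])
print([f.matches("machine1") for f in filters])  # [True, False]
```

With `or` semantics across filters of one group, a run is kept if any of its tag filters matches, which is what `_RunFilter._match` implements.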
8 changes: 4 additions & 4 deletions rebench/model/exp_variables.py
@@ -26,15 +26,15 @@ def compile(cls, config, defaults):
         input_sizes = config.get('input_sizes', defaults.input_sizes)
         cores = config.get('cores', defaults.cores)
         variable_values = config.get('variable_values', defaults.variable_values)
-        machines = config.get('machines', defaults.machines)
-        return ExpVariables(input_sizes, cores, variable_values, machines)
+        tags = config.get('tags', defaults.tags)
+        return ExpVariables(input_sizes, cores, variable_values, tags)
 
     @classmethod
     def empty(cls):
         return ExpVariables([''], [1], [''], [None])
 
-    def __init__(self, input_sizes, cores, variable_values, machines):
+    def __init__(self, input_sizes, cores, variable_values, tags):
         self.input_sizes = input_sizes
         self.cores = cores
         self.variable_values = variable_values
-        self.machines = machines
+        self.tags = tags
6 changes: 3 additions & 3 deletions rebench/model/experiment.py
@@ -81,11 +81,11 @@ def _compile_runs(self, configurator):
             for cores in variables.cores:
                 for input_size in variables.input_sizes:
                     for var_val in variables.variable_values:
-                        for machine in variables.machines:
-                            if not configurator.run_filter.applies_to_machine(machine):
+                        for tag in variables.tags:
+                            if not configurator.run_filter.applies_to_tag(tag):
                                 continue
                             run = self._data_store.create_run_id(
-                                bench, cores, input_size, var_val, machine)
+                                bench, cores, input_size, var_val, tag)
                             bench.add_run(run)
                             runs.add(run)
                             run.add_reporting(self._reporting)
20 changes: 10 additions & 10 deletions rebench/model/run_id.py
@@ -29,12 +29,12 @@
 
 class RunId(object):
 
-    def __init__(self, benchmark, cores, input_size, var_value, machine):
+    def __init__(self, benchmark, cores, input_size, var_value, tag):
         self.benchmark = benchmark
         self.cores = cores
         self.input_size = input_size
         self.var_value = var_value
-        self.machine = machine
+        self.tag = tag
 
         self._reporters = set()
         self._persistence = set()
@@ -115,8 +115,8 @@ def var_value_as_str(self):
         return '' if self.var_value is None else str(self.var_value)
 
     @property
-    def machine_as_str(self):
-        return '' if self.machine is None else str(self.machine)
+    def tag_as_str(self):
+        return '' if self.tag is None else str(self.tag)
 
     @property
     def location(self):
@@ -249,7 +249,7 @@ def __hash__(self):
     def as_simple_string(self):
         return "%s %s %s %s %s" % (
             self.benchmark.as_simple_string(),
-            self.cores, self.input_size, self.var_value, self.machine)
+            self.cores, self.input_size, self.var_value, self.tag)
 
     def _expand_vars(self, string):
         try:
@@ -264,7 +264,7 @@ def _expand_vars(self, string):
                 'invocation': '%(invocation)s',
                 'suite': self.benchmark.suite.name,
                 'variable': self.var_value_as_str,
-                'machine': self.machine_as_str,
+                'tag': self.tag_as_str,
                 'warmup': self.benchmark.run_details.warmup}
         except ValueError as err:
             self._report_format_issue_and_exit(string, err)
@@ -352,7 +352,7 @@ def as_str_list(self):
         result.append(self.cores_as_str)
         result.append(self.input_size_as_str)
         result.append(self.var_value_as_str)
-        result.append(self.machine_as_str)
+        result.append(self.tag_as_str)
 
         return result
 
@@ -363,7 +363,7 @@ def as_dict(self):
             'cores': self.cores,
             'inputSize': self.input_size,
             'varValue': self.var_value,
-            'machine': self.machine,
+            'tag': self.tag,
             'extraArgs': extra_args if extra_args is None else str(extra_args),
             'cmdline': self.cmdline(),
             'location': self.location
@@ -378,7 +378,7 @@ def from_str_list(cls, data_store, str_list):
     @classmethod
     def get_column_headers(cls):
         benchmark_headers = Benchmark.get_column_headers()
-        return benchmark_headers + ["cores", "inputSize", "varValue", "machine"]
+        return benchmark_headers + ["cores", "inputSize", "varValue", "tag"]
 
     def __str__(self):
         return "RunId(%s, %s, %s, %s, %s, %s, %d)" % (
@@ -387,5 +387,5 @@ def __str__(self):
             self.benchmark.extra_args,
             self.input_size or '',
             self.var_value or '',
-            self.machine or '',
+            self.tag or '',
             self.benchmark.run_details.warmup or 0)
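The `%(tag)s` placeholder seen in `_expand_vars` relies on Python's percent-formatting with a mapping. Below is a minimal sketch of that mechanism covering only the `tag` key; `expand_command` is a hypothetical stand-in, and ReBench's real mapping supplies many more keys (`benchmark`, `cores`, `suite`, ...):

```python
def expand_command(template, tag):
    # Mirrors tag_as_str above: a missing tag expands to the empty string.
    tag_as_str = '' if tag is None else str(tag)
    return template % {'tag': tag_as_str}

print(expand_command("Harness %(tag)s", "machine1"))  # Harness machine1
print(repr(expand_command("Harness %(tag)s", None)))  # 'Harness '
```

This is why `command: Harness %(tag)s` in the docs example picks up the run's tag value at execution time.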
4 changes: 2 additions & 2 deletions rebench/persistence.py
@@ -79,15 +79,15 @@ def get(self, filename, configurator, action):
         self._files[filename] = p
         return self._files[filename]
 
-    def create_run_id(self, benchmark, cores, input_size, var_value, machine):
+    def create_run_id(self, benchmark, cores, input_size, var_value, tag):
         if isinstance(cores, str) and cores.isdigit():
             cores = int(cores)
         if input_size == '':
             input_size = None
         if var_value == '':
             var_value = None
 
-        run = RunId(benchmark, cores, input_size, var_value, machine)
+        run = RunId(benchmark, cores, input_size, var_value, tag)
         if run in self._run_ids:
             return self._run_ids[run]
         else:
2 changes: 1 addition & 1 deletion rebench/rebench-schema.yml
@@ -134,7 +134,7 @@ schema;variables:
     # default: [''] # that's the semantics, but pykwalify does not support it
     sequence:
       - type: scalar
-  machines:
+  tags:
     type: seq
     desc: Another dimension by which the benchmark execution can be varied.
     # default: [''] # that's the semantics, but pykwalify does not support it
8 changes: 4 additions & 4 deletions rebench/rebench.py
@@ -54,7 +54,7 @@ def __init__(self):
         self.ui = UI()
 
     def shell_options(self):
-        usage = """%(prog)s [options] <config> [exp_name] [e:$]* [s:$]* [m:$]*
+        usage = """%(prog)s [options] <config> [exp_name] [e:$]* [s:$]* [t:$]*
 
 Argument:
   config  required argument, file containing the experiment to be executed
@@ -70,7 +70,7 @@ def shell_options(self):
   Note, filters are combined with `or` semantics in the same group,
   i.e., executor or suite, and at least one filter needs to match per group.
   The suite name can also be given as * to match all possible suites.
-  m:$  filter experiments to only include the named machines, example: m:machine1 m:machine2
+  t:$  filter experiments to only include the given tag, example: t:machine1 t:tagFoo
 """
 
         parser = ArgumentParser(
@@ -224,10 +224,10 @@ def determine_exp_name_and_filters(filters):
         exp_name = filters[0] if filters and (
             not filters[0].startswith("e:") and
             not filters[0].startswith("s:") and
-            not filters[0].startswith("m:")) else None
+            not filters[0].startswith("t:")) else None
         exp_filter = [f for f in filters if (f.startswith("e:") or
                                              f.startswith("s:") or
-                                             f.startswith("m:"))]
+                                             f.startswith("t:"))]
         return exp_name, exp_filter
 
     def _report_completion(self):
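Because `determine_exp_name_and_filters` is a small pure function, its post-rename behavior is easy to check standalone. This snippet copies the logic from the diff above into a free function:

```python
def determine_exp_name_and_filters(filters):
    # The first argument names an experiment only if it is not a filter expression.
    exp_name = filters[0] if filters and (
        not filters[0].startswith("e:") and
        not filters[0].startswith("s:") and
        not filters[0].startswith("t:")) else None
    exp_filter = [f for f in filters if (f.startswith("e:") or
                                         f.startswith("s:") or
                                         f.startswith("t:"))]
    return exp_name, exp_filter

print(determine_exp_name_and_filters(["Exp1", "t:machine1"]))
# ('Exp1', ['t:machine1'])
```

Note that after this change a leading `m:...` argument would be treated as an experiment name rather than a filter, which is the intended break with the old syntax.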
2 changes: 1 addition & 1 deletion rebench/reporter.py
@@ -60,7 +60,7 @@ class TextReporter(Reporter):
     def __init__(self):
         super(TextReporter, self).__init__()
         self.expected_columns = ['Benchmark', 'Executor', 'Suite', 'Extra', 'Core', 'Size', 'Var',
-                                 'Machine', '#Samples', 'Mean (ms)']
+                                 'Tag', '#Samples', 'Mean (ms)']
 
     @staticmethod
     def _path_to_string(path):
12 changes: 6 additions & 6 deletions rebench/tests/configurator_test.py
@@ -113,24 +113,24 @@ def test_only_running_bench1_or_bench2_and_test_runner2(self):
         runs = cnf.get_runs()
         self.assertEqual(2 * 2, len(runs))
 
-    def test_machine_filter_m1(self):
-        filter_args = ['m:machine1']
+    def test_tag_filter_m1(self):
+        filter_args = ['t:machine1']
         cnf = Configurator(load_config(self._path + '/test.conf'), DataStore(self.ui),
                            self.ui, run_filter=filter_args)
 
         runs = cnf.get_runs()
         self.assertEqual(24, len(runs))
 
-    def test_machine_filter_m2(self):
-        filter_args = ['m:machine2']
+    def test_tag_filter_m2(self):
+        filter_args = ['t:machine2']
         cnf = Configurator(load_config(self._path + '/test.conf'), DataStore(self.ui),
                            self.ui, run_filter=filter_args)
 
         runs = cnf.get_runs()
         self.assertEqual(14, len(runs))
 
-    def test_machine_filter_m1_and_m2(self):
-        filter_args = ['m:machine1', 'm:machine2']
+    def test_tag_filter_m1_and_m2(self):
+        filter_args = ['t:machine1', 't:machine2']
         cnf = Configurator(load_config(self._path + '/test.conf'), DataStore(self.ui),
                            self.ui, run_filter=filter_args)
 
4 changes: 2 additions & 2 deletions rebench/tests/persistency_test.py
@@ -289,8 +289,8 @@ def _assert_run_id_structure(self, run_id, run_id_obj):
         self.assertEqual(run_id['varValue'], run_id_obj.var_value)
         self.assertIsNone(run_id['varValue'])
 
-        self.assertEqual(run_id['machine'], run_id_obj.machine)
-        self.assertIsNone(run_id['machine'])
+        self.assertEqual(run_id['tag'], run_id_obj.tag)
+        self.assertIsNone(run_id['tag'])
 
         self.assertEqual(run_id['location'], run_id_obj.location)
         self.assertEqual(run_id['inputSize'], run_id_obj.input_size)
2 changes: 1 addition & 1 deletion rebench/tests/reporter_test.py
@@ -68,7 +68,7 @@ def test_text_reporter_summary_table(self):
         self.assertEqual(38, len(sorted_rows))
         self.assertEqual(len(reporter.expected_columns) - 1, len(sorted_rows[0]))
         self.assertEqual(['Benchmark', 'Executor', 'Suite', 'Extra', 'Core', 'Size', 'Var',
-                          'Machine', 'Mean (ms)'], used_cols)
+                          'Tag', 'Mean (ms)'], used_cols)
         self.assertEqual('#Samples', summary[0][0])
         self.assertEqual(0, summary[0][1])

4 changes: 2 additions & 2 deletions rebench/tests/test.conf
@@ -40,7 +40,7 @@ benchmark_suites:
         variable_values: # this is an other dimension, over which the runs need to be varied
             - val1
             - val2
-        machines:
+        tags:
             - machine1
     TestSuite2:
         gauge_adapter: TestExecutor
@@ -51,7 +51,7 @@ benchmark_suites:
             - Bench1:
                 extra_args: "%(cores)s 3000"
             - Bench2
-        machines:
+        tags:
            - machine2
     TestBrokenCommandFormatSuite:
         gauge_adapter: TestExecutor