Rename the machines variable/filters to a tags variable/filter #264

Merged: 1 commit, Sep 6, 2024
32 changes: 16 additions & 16 deletions docs/config.md
@@ -641,10 +641,10 @@ As well as:
- `input_sizes`
- `cores`
- `variable_values`
- `machines`
- `tags`

Run configurations are generated from the cross product of all `input_sizes`,
`cores`, `variable_values`, and `machines` for a benchmark.
`cores`, `variable_values`, and `tags` for a benchmark.
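The cross-product semantics described above can be illustrated with a minimal sketch (not ReBench's actual implementation); the concrete values are hypothetical:

```python
# Run configurations are the cross product of the four dimensions.
from itertools import product

input_sizes = ["small", "large"]   # hypothetical example values
cores = [1, 2]
variable_values = ["default"]
tags = ["machine1", "machine2"]

runs = list(product(input_sizes, cores, variable_values, tags))
print(len(runs))  # 2 * 2 * 1 * 2 = 8 run configurations
```

Each tuple in `runs` corresponds to one run configuration for the benchmark.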

## Benchmark

@@ -702,7 +702,7 @@ way to adjust the amount of computation performed.
in form of a sequence literal: `[small, large]`.

Run configurations are generated from the cross product of all `input_sizes`,
`cores`, `variable_values` and `machines` for a benchmark.
`cores`, `variable_values` and `tags` for a benchmark.
The specific input size can be used, e.g., in the command as in the example below.

Example:
@@ -728,7 +728,7 @@ In practice, it can be used more flexibly and as just another variable that can
any list of strings.

Run configurations are generated from the cross product of all `input_sizes`,
`cores`, `variable_values`, and `machines` for a benchmark.
`cores`, `variable_values`, and `tags` for a benchmark.
The specific core setting can be used, e.g., in the command as in the example below.

Example:
@@ -750,7 +750,7 @@ Another dimension by which the benchmark execution can be varied.
It takes a list of strings, or arbitrary values really.

Run configurations are generated from the cross product of all `input_sizes`,
`cores`, `variable_values`, and `machines` for a benchmark.
`cores`, `variable_values`, and `tags` for a benchmark.
The specific variable value can be used, e.g., in the command as in the example below.

Example:
@@ -769,27 +769,27 @@ benchmark_suites:

---

**machines:**
**tags:**

A dimension by which the benchmark execution can be varied.
It takes a list of strings, or arbitrary values really.
The typical use case is to name one or more machines on which the benchmark
is to be executed.
It takes a list of strings.
One use case is to split benchmarks into different groups, e.g., by machine
or fast and slow, which one can then filter for with `rebench ... t:tag`.

Run configurations are generated from the cross product of all `input_sizes`,
`cores`, `variable_values`, and `machines` for a benchmark.
The specific machine can be used, e.g., in the command, or often more useful
`cores`, `variable_values`, and `tags` for a benchmark.
The specific tag can be used, e.g., in the command, or often more useful
as a filter when running `rebench`.

Example:

```yaml
benchmark_suites:
ExampleSuite:
command: Harness %(machine)s
command: Harness %(tag)s
benchmarks:
- Benchmark2:
machines:
tags:
- machine1
- machine2
```
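The `%(tag)s` placeholder in the command above is filled in via Python's %-style mapping substitution (as in `RunId._expand_vars`); a minimal sketch:

```python
# The command template's %(tag)s placeholder is expanded with the
# run's specific tag value using %-style mapping substitution.
command_template = "Harness %(tag)s"
command = command_template % {"tag": "machine1"}
print(command)  # Harness machine1
```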
@@ -798,7 +798,7 @@ Example filter command line, which would execute only the benchmarks
tagged with `machine1`:

```shell
rebench rebench.conf m:machine1
rebench rebench.conf t:machine1
```

---
@@ -953,7 +953,7 @@ executors:
**run details and variables:**

An executor can additionally use the keys for [run details](#runs) and [variables](#benchmark)
(`input_sizes`, `cores`, `variable_values`, `machines`).
(`input_sizes`, `cores`, `variable_values`, `tags`).

## Experiments

@@ -1077,7 +1077,7 @@ experiments:
**run details and variables:**

An experiment can additionally use the keys for [run details](#runs) and
[variables](#benchmark) (`input_sizes`, `cores`, `variable_values`, `machines`).
[variables](#benchmark) (`input_sizes`, `cores`, `variable_values`, `tags`).
Note, this is possible on the main experiment, but also separately for each
of the defined executions.

2 changes: 1 addition & 1 deletion docs/usage.md
@@ -17,7 +17,7 @@ Argument:
e:$ filter experiments to only include the named executor, example: e:EXEC1 e:EXEC3
s:$ filter experiments to only include the named suite and possibly benchmark
example: s:Suite1 s:*:Bench3
m:$ filter experiments to only include the named machines, example: m:machine1 m:machine2
t:$ filter experiments to only include the given tags, example: t:machine1 t:tagFoo
...
```
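A simplified sketch of how such filter expressions are dispatched, modeled on the `_RunFilter` parsing in `rebench/configurator.py` (the helper name `classify` is hypothetical):

```python
# Filter expressions split on ":"; the prefix selects the filter kind.
def classify(run_filter):
    parts = run_filter.split(":")
    if parts[0] == "e" and len(parts) == 2:
        return ("executor", parts[1])          # e:EXEC1
    if parts[0] == "s" and len(parts) == 2:
        return ("suite", parts[1])             # s:Suite1
    if parts[0] == "s" and len(parts) == 3:
        return ("benchmark", parts[1], parts[2])  # s:Suite1:Bench3
    if parts[0] == "t" and len(parts) == 2:
        return ("tag", parts[1])               # t:machine1
    raise RuntimeError("Unknown filter expression: " + run_filter)

print(classify("t:machine1"))  # ('tag', 'machine1')
```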

20 changes: 10 additions & 10 deletions rebench/configurator.py
@@ -69,21 +69,21 @@ def matches(self, bench):
return bench.name == self._benchmark_name


class _MachineFilter(object):
class _TagFilter(object):

def __init__(self, machine):
self._machine = machine
def __init__(self, tag):
self._tag = tag

def matches(self, machine):
return machine == self._machine
def matches(self, tag):
return tag == self._tag


class _RunFilter(object):

def __init__(self, run_filters):
self._executor_filters = []
self._suite_filters = []
self._machine_filters = []
self._tag_filters = []

if not run_filters:
return
@@ -96,17 +96,17 @@ def __init__(self, run_filters):
self._suite_filters.append(_SuiteFilter(parts[1]))
elif parts[0] == "s" and len(parts) == 3:
self._suite_filters.append(_BenchmarkFilter(parts[1], parts[2]))
elif parts[0] == "m" and len(parts) == 2:
self._machine_filters.append(_MachineFilter(parts[1]))
elif parts[0] == "t" and len(parts) == 2:
self._tag_filters.append(_TagFilter(parts[1]))
else:
raise RuntimeError("Unknown filter expression: " + run_filter)

def applies_to_bench(self, bench):
return (self._match(self._executor_filters, bench) and
self._match(self._suite_filters, bench))

def applies_to_machine(self, machine):
return self._match(self._machine_filters, machine)
def applies_to_tag(self, tag):
return self._match(self._tag_filters, tag)

@staticmethod
def _match(filters, bench):
8 changes: 4 additions & 4 deletions rebench/model/exp_variables.py
@@ -26,15 +26,15 @@ def compile(cls, config, defaults):
input_sizes = config.get('input_sizes', defaults.input_sizes)
cores = config.get('cores', defaults.cores)
variable_values = config.get('variable_values', defaults.variable_values)
machines = config.get('machines', defaults.machines)
return ExpVariables(input_sizes, cores, variable_values, machines)
tags = config.get('tags', defaults.tags)
return ExpVariables(input_sizes, cores, variable_values, tags)

@classmethod
def empty(cls):
return ExpVariables([''], [1], [''], [None])

def __init__(self, input_sizes, cores, variable_values, machines):
def __init__(self, input_sizes, cores, variable_values, tags):
self.input_sizes = input_sizes
self.cores = cores
self.variable_values = variable_values
self.machines = machines
self.tags = tags
6 changes: 3 additions & 3 deletions rebench/model/experiment.py
@@ -81,11 +81,11 @@ def _compile_runs(self, configurator):
for cores in variables.cores:
for input_size in variables.input_sizes:
for var_val in variables.variable_values:
for machine in variables.machines:
if not configurator.run_filter.applies_to_machine(machine):
for tag in variables.tags:
if not configurator.run_filter.applies_to_tag(tag):
continue
run = self._data_store.create_run_id(
bench, cores, input_size, var_val, machine)
bench, cores, input_size, var_val, tag)
bench.add_run(run)
runs.add(run)
run.add_reporting(self._reporting)
20 changes: 10 additions & 10 deletions rebench/model/run_id.py
@@ -29,12 +29,12 @@

class RunId(object):

def __init__(self, benchmark, cores, input_size, var_value, machine):
def __init__(self, benchmark, cores, input_size, var_value, tag):
self.benchmark = benchmark
self.cores = cores
self.input_size = input_size
self.var_value = var_value
self.machine = machine
self.tag = tag

self._reporters = set()
self._persistence = set()
@@ -115,8 +115,8 @@ def var_value_as_str(self):
return '' if self.var_value is None else str(self.var_value)

@property
def machine_as_str(self):
return '' if self.machine is None else str(self.machine)
def tag_as_str(self):
return '' if self.tag is None else str(self.tag)

@property
def location(self):
@@ -249,7 +249,7 @@ def __hash__(self):
def as_simple_string(self):
return "%s %s %s %s %s" % (
self.benchmark.as_simple_string(),
self.cores, self.input_size, self.var_value, self.machine)
self.cores, self.input_size, self.var_value, self.tag)

def _expand_vars(self, string):
try:
@@ -264,7 +264,7 @@ def _expand_vars(self, string):
'invocation': '%(invocation)s',
'suite': self.benchmark.suite.name,
'variable': self.var_value_as_str,
'machine': self.machine_as_str,
'tag': self.tag_as_str,
'warmup': self.benchmark.run_details.warmup}
except ValueError as err:
self._report_format_issue_and_exit(string, err)
@@ -352,7 +352,7 @@ def as_str_list(self):
result.append(self.cores_as_str)
result.append(self.input_size_as_str)
result.append(self.var_value_as_str)
result.append(self.machine_as_str)
result.append(self.tag_as_str)

return result

@@ -363,7 +363,7 @@ def as_dict(self):
'cores': self.cores,
'inputSize': self.input_size,
'varValue': self.var_value,
'machine': self.machine,
'tag': self.tag,
'extraArgs': extra_args if extra_args is None else str(extra_args),
'cmdline': self.cmdline(),
'location': self.location
@@ -378,7 +378,7 @@ def from_str_list(cls, data_store, str_list):
@classmethod
def get_column_headers(cls):
benchmark_headers = Benchmark.get_column_headers()
return benchmark_headers + ["cores", "inputSize", "varValue", "machine"]
return benchmark_headers + ["cores", "inputSize", "varValue", "tag"]

def __str__(self):
return "RunId(%s, %s, %s, %s, %s, %s, %d)" % (
@@ -387,5 +387,5 @@ def __str__(self):
self.benchmark.extra_args,
self.input_size or '',
self.var_value or '',
self.machine or '',
self.tag or '',
self.benchmark.run_details.warmup or 0)
4 changes: 2 additions & 2 deletions rebench/persistence.py
@@ -79,15 +79,15 @@ def get(self, filename, configurator, action):
self._files[filename] = p
return self._files[filename]

def create_run_id(self, benchmark, cores, input_size, var_value, machine):
def create_run_id(self, benchmark, cores, input_size, var_value, tag):
if isinstance(cores, str) and cores.isdigit():
cores = int(cores)
if input_size == '':
input_size = None
if var_value == '':
var_value = None

run = RunId(benchmark, cores, input_size, var_value, machine)
run = RunId(benchmark, cores, input_size, var_value, tag)
if run in self._run_ids:
return self._run_ids[run]
else:
2 changes: 1 addition & 1 deletion rebench/rebench-schema.yml
@@ -134,7 +134,7 @@ schema;variables:
# default: [''] # that's the semantics, but pykwalify does not support it
sequence:
- type: scalar
machines:
tags:
type: seq
desc: Another dimension by which the benchmark execution can be varied.
# default: [''] # that's the semantics, but pykwalify does not support it
8 changes: 4 additions & 4 deletions rebench/rebench.py
@@ -54,7 +54,7 @@ def __init__(self):
self.ui = UI()

def shell_options(self):
usage = """%(prog)s [options] <config> [exp_name] [e:$]* [s:$]* [m:$]*
usage = """%(prog)s [options] <config> [exp_name] [e:$]* [s:$]* [t:$]*

Argument:
config required argument, file containing the experiment to be executed
@@ -70,7 +70,7 @@ def shell_options(self):
Note, filters are combined with `or` semantics in the same group,
i.e., executor or suite, and at least one filter needs to match per group.
The suite name can also be given as * to match all possible suites.
m:$ filter experiments to only include the named machines, example: m:machine1 m:machine2
t:$ filter experiments to only include the given tag, example: t:machine1 t:tagFoo
"""

parser = ArgumentParser(
@@ -224,10 +224,10 @@ def determine_exp_name_and_filters(filters):
exp_name = filters[0] if filters and (
not filters[0].startswith("e:") and
not filters[0].startswith("s:") and
not filters[0].startswith("m:")) else None
not filters[0].startswith("t:")) else None
exp_filter = [f for f in filters if (f.startswith("e:") or
f.startswith("s:") or
f.startswith("m:"))]
f.startswith("t:"))]
return exp_name, exp_filter

def _report_completion(self):
2 changes: 1 addition & 1 deletion rebench/reporter.py
@@ -60,7 +60,7 @@ class TextReporter(Reporter):
def __init__(self):
super(TextReporter, self).__init__()
self.expected_columns = ['Benchmark', 'Executor', 'Suite', 'Extra', 'Core', 'Size', 'Var',
'Machine', '#Samples', 'Mean (ms)']
'Tag', '#Samples', 'Mean (ms)']

@staticmethod
def _path_to_string(path):
12 changes: 6 additions & 6 deletions rebench/tests/configurator_test.py
@@ -113,24 +113,24 @@ def test_only_running_bench1_or_bench2_and_test_runner2(self):
runs = cnf.get_runs()
self.assertEqual(2 * 2, len(runs))

def test_machine_filter_m1(self):
filter_args = ['m:machine1']
def test_tag_filter_m1(self):
filter_args = ['t:machine1']
cnf = Configurator(load_config(self._path + '/test.conf'), DataStore(self.ui),
self.ui, run_filter=filter_args)

runs = cnf.get_runs()
self.assertEqual(24, len(runs))

def test_machine_filter_m2(self):
filter_args = ['m:machine2']
def test_tag_filter_m2(self):
filter_args = ['t:machine2']
cnf = Configurator(load_config(self._path + '/test.conf'), DataStore(self.ui),
self.ui, run_filter=filter_args)

runs = cnf.get_runs()
self.assertEqual(14, len(runs))

def test_machine_filter_m1_and_m2(self):
filter_args = ['m:machine1', 'm:machine2']
def test_tag_filter_m1_and_m2(self):
filter_args = ['t:machine1', 't:machine2']
cnf = Configurator(load_config(self._path + '/test.conf'), DataStore(self.ui),
self.ui, run_filter=filter_args)

4 changes: 2 additions & 2 deletions rebench/tests/persistency_test.py
@@ -289,8 +289,8 @@ def _assert_run_id_structure(self, run_id, run_id_obj):
self.assertEqual(run_id['varValue'], run_id_obj.var_value)
self.assertIsNone(run_id['varValue'])

self.assertEqual(run_id['machine'], run_id_obj.machine)
self.assertIsNone(run_id['machine'])
self.assertEqual(run_id['tag'], run_id_obj.tag)
self.assertIsNone(run_id['tag'])

self.assertEqual(run_id['location'], run_id_obj.location)
self.assertEqual(run_id['inputSize'], run_id_obj.input_size)
2 changes: 1 addition & 1 deletion rebench/tests/reporter_test.py
@@ -68,7 +68,7 @@ def test_text_reporter_summary_table(self):
self.assertEqual(38, len(sorted_rows))
self.assertEqual(len(reporter.expected_columns) - 1, len(sorted_rows[0]))
self.assertEqual(['Benchmark', 'Executor', 'Suite', 'Extra', 'Core', 'Size', 'Var',
'Machine', 'Mean (ms)'], used_cols)
'Tag', 'Mean (ms)'], used_cols)
self.assertEqual('#Samples', summary[0][0])
self.assertEqual(0, summary[0][1])
