
Commit: Implement test runner (#3)
* Update readme

* Implement run(-test).sh files

* Add Dockerfile

* Remove asdf installation

* Sort test results in run.sh

* Remove the need to sort result contents in run-tests.sh

* Add tests/**/Scarb.lock & target to gitignore

* Add all-fail tests

* Update all-fail + empty-file (not yet working)

* Fix error case in run.sh

* Move sorting to run-tests.sh

* Fix comparison in run-tests.sh

* Regenerate empty-file's expected_results.json

* Generate partial-fail's expected_results.json

* Generate success's & syntax-error's expected_results.json

* Include relative file path in the error message

* Update Dockerfile to make run-in-docker work

* Improve 'done' msgs in run-x.sh scripts

* Switch Docker (and scarb install) to alpine

* Remove solution_dir prefix after reading errors up to 'could not compile'

* Add msg if results match

* Remove echo of run-tests done

* Refactor

---------

Co-authored-by: Nenad <[email protected]>
Nenad Misić and Nenad authored Jul 22, 2024
1 parent 09591da commit a2833b9
Showing 27 changed files with 386 additions and 65 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -27,7 +27,7 @@ jobs:
context: .
push: false
load: true
tags: exercism/test-runner
tags: exercism/cairo-test-runner
cache-from: type=gha
cache-to: type=gha,mode=max

2 changes: 2 additions & 0 deletions .gitignore
@@ -1 +1,3 @@
/tests/**/*/results.json
/tests/**/Scarb.lock
/tests/**/target
26 changes: 24 additions & 2 deletions Dockerfile
@@ -1,8 +1,30 @@
FROM alpine:3.18
ARG REPO=alpine
ARG IMAGE=3.18
FROM ${REPO}:${IMAGE} AS builder

ARG VERSION=v2.6.5
ARG RELEASE=scarb-${VERSION}-x86_64-unknown-linux-musl

RUN apk add --no-cache curl

RUN mkdir opt/test-runner
RUN mkdir opt/test-runner/bin
WORKDIR /tmp
ADD https://github.com/software-mansion/scarb/releases/download/${VERSION}/${RELEASE}.tar.gz .
RUN tar -xf ${RELEASE}.tar.gz \
&& rm -rf /tmp/${RELEASE}/doc \
&& mv /tmp/${RELEASE} /opt/test-runner/bin/scarb

FROM ${REPO}:${IMAGE} AS runner

# install packages required to run the tests
RUN apk add --no-cache jq coreutils
# hadolint ignore=DL3018
RUN apk add --no-cache jq

COPY --from=builder /opt/test-runner/bin/scarb /opt/test-runner/bin/scarb
ENV PATH=$PATH:/opt/test-runner/bin/scarb/bin

WORKDIR /opt/test-runner
COPY . .
# Initialize a scarb cache
ENTRYPOINT ["/opt/test-runner/bin/run.sh"]
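The new Dockerfile uses a two-stage build: a `builder` stage downloads and unpacks Scarb, and the final `runner` stage copies in only the unpacked toolchain, so download tooling and tarballs never reach the shipped image. The pattern in isolation looks roughly like this (a generic sketch with a placeholder URL, not this repo's exact file):

```dockerfile
# Stage 1: fetch and unpack a prebuilt tool in a throwaway stage.
FROM alpine:3.18 AS builder
ADD https://example.com/tool.tar.gz /tmp/tool.tar.gz
RUN tar -xf /tmp/tool.tar.gz -C /opt

# Stage 2: start fresh and copy only the artifact; the tarball and any
# build-time packages from the builder stage are left behind.
FROM alpine:3.18 AS runner
COPY --from=builder /opt/tool /opt/tool
ENV PATH=$PATH:/opt/tool/bin
```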
23 changes: 2 additions & 21 deletions README.md
@@ -1,25 +1,6 @@
# Exercism Test Runner Template
# Exercism Cairo Test Runner

This repository is a [template repository](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-template-repository) for creating [test runners][test-runners] for [Exercism][exercism] tracks.

## Using the Test Runner Template

1. Ensure that your track has not already implemented a test runner. If there is, there will be a `https://github.com/exercism/<track>-test-runner` repository (i.e. if your track's slug is `python`, the test runner repo would be `https://github.com/exercism/python-test-runner`)
2. Follow [GitHub's documentation](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-from-a-template) for creating a repository from a template repository
- Name your new repository based on your language track's slug (i.e. if your track is for Python, your test runner repo name is `python-test-runner`)
3. Remove this [Exercism Test Runner Template](#exercism-test-runner-template) section from the `README.md` file
4. Replace `TRACK_NAME_HERE` with your track's name in the `README.md` file
5. Replace any occurances of `exercism/test-runner` with `exercism/<track>-test-runner` (e.g. `exercism/python-test-runner`)
6. Build the test runner, conforming to the [Test Runner interface specification](https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md).
- Update the files to match your track's needs. At the very least, you'll need to update `bin/run.sh`, `Dockerfile` and the test solutions in the `tests` directory
- Tip: look for `TODO:` comments to point you towards code that need updating
- Tip: look for `OPTIONAL:` comments to point you towards code that _could_ be useful

Once you're happy with your test runner, [open an issue on the exercism/exercism](https://github.com/exercism/exercism/issues/new?assignees=&labels=&template=new-test-runner.md&title=%5BNew+Test+Runner%5D+) to request an official test runner repository for your track.

# Exercism TRACK_NAME_HERE Test Runner

The Docker image to automatically run tests on TRACK_NAME_HERE solutions submitted to [Exercism].
The Docker image to automatically run tests on Cairo solutions submitted to [Exercism].

## Run the test runner

4 changes: 2 additions & 2 deletions bin/run-in-docker.sh
@@ -33,7 +33,7 @@ output_dir=$(realpath "${3%/}")
mkdir -p "${output_dir}"

# Build the Docker image
docker build --rm -t exercism/test-runner .
docker build --rm -t exercism/cairo-test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
@@ -43,4 +43,4 @@ docker run \
--mount type=bind,src="${solution_dir}",dst=/solution \
--mount type=bind,src="${output_dir}",dst=/output \
--mount type=tmpfs,dst=/tmp \
exercism/test-runner "${slug}" /solution /output
exercism/cairo-test-runner "${slug}" /solution /output
4 changes: 2 additions & 2 deletions bin/run-tests-in-docker.sh
@@ -16,7 +16,7 @@
set -e

# Build the Docker image
docker build --rm -t exercism/test-runner .
docker build --rm -t exercism/cairo-test-runner .

# Run the Docker image using the settings mimicking the production environment
docker run \
@@ -28,4 +28,4 @@ docker run \
--volume "${PWD}/bin/run-tests.sh:/opt/test-runner/bin/run-tests.sh" \
--workdir /opt/test-runner \
--entrypoint /opt/test-runner/bin/run-tests.sh \
exercism/test-runner
exercism/cairo-test-runner
31 changes: 21 additions & 10 deletions bin/run-tests.sh
@@ -1,7 +1,7 @@
#!/usr/bin/env sh

# Synopsis:
# Test the test runner by running it against a predefined set of solutions
# Test the test runner by running it against a predefined set of solutions
# with an expected output.

# Output:
@@ -12,26 +12,37 @@
# ./bin/run-tests.sh

exit_code=0
# Copy the tests dir to a temp dir, because in the container the user lacks
# permissions to write to the tests dir.
tmp_dir='/tmp/exercism-cairo-test-runner'
rm -rf "${tmp_dir}"
mkdir -p "${tmp_dir}"
cp -r tests/* "${tmp_dir}"

# Iterate over all test directories
for test_dir in tests/*; do
for test_dir in "${tmp_dir}"/*; do
test_dir_name=$(basename "${test_dir}")
test_dir_path=$(realpath "${test_dir}")
results_file_path="${test_dir_path}/results.json"
expected_results_file_path="${test_dir_path}/expected_results.json"

bin/run.sh "${test_dir_name}" "${test_dir_path}" "${test_dir_path}"

# OPTIONAL: Normalize the results file
# If the results.json file contains information that changes between
# different test runs (e.g. timing information or paths), you should normalize
# the results file to allow the diff comparison below to work as expected
for file in "$results_file_path" "$expected_results_file_path"; do
# We sort both the '.message' values in results.json and expected_results.json files
tmp_file=$(mktemp -p "$test_dir/")
sorted_message=$(cat $file | jq -r '.message' >"$tmp_file" && sort "$tmp_file")
jq --arg msg "$sorted_message" '.message = $msg' "$file" >"$tmp_file" && mv "$tmp_file" "$file"
done

file="results.json"
expected_file="expected_${file}"
echo "${test_dir_name}: comparing ${file} to ${expected_file}"
echo "$test_dir_name: comparing $(basename "${results_file_path}") to $(basename "${expected_results_file_path}")"

if ! diff "${test_dir_path}/${file}" "${test_dir_path}/${expected_file}"; then
if ! diff "$results_file_path" "$expected_results_file_path"; then
exit_code=1
else
echo "$test_dir_name: results match"
fi
done

rm -rf "${tmp_dir}"
exit ${exit_code}
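The sorting step above keeps the comparison stable across runs, since test failures can be reported in any order. A minimal sketch of the idea using plain coreutils (the sample messages are hypothetical, not real scarb output):

```shell
# Hypothetical failure messages as they might appear in a results.json
# "message" field; their order varies between test runs.
message='leap::b - Panicked.
leap::a - Panicked.'

# Sorting the lines makes the later diff against expected_results.json
# insensitive to reporting order.
sorted_message=$(printf '%s\n' "$message" | sort)
printf '%s\n' "$sorted_message"
```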
48 changes: 26 additions & 22 deletions bin/run.sh
@@ -31,30 +31,34 @@ mkdir -p "${output_dir}"

echo "${slug}: testing..."

# Run the tests for the provided implementation file and redirect stdout and
# stderr to capture it
test_output=$(false)
# TODO: substitute "false" with the actual command to run the test:
# test_output=$(command_to_run_tests 2>&1)
start_dir="$(pwd)"
cd "${solution_dir}" || exit 1

# Write the results.json file based on the exit code of the command that was
# Run the tests for the provided implementation file and redirect stdout and stderr to capture it.
# We also redirect the global cache from some default global directory to the output directory.
test_output=$(scarb --global-cache-dir "$output_dir/.cache" cairo-test --include-ignored 2>&1)
exit_code=$?

cd "${start_dir}" || exit 1

# Write the results.json file based on the exit code of the command that was
# just executed that tested the implementation file
if [ $? -eq 0 ]; then
jq -n '{version: 1, status: "pass"}' > ${results_file}
if [ ${exit_code} -eq 0 ]; then
jq -n '{version: 1, status: "pass"}' >"${results_file}"
else
# OPTIONAL: Sanitize the output
# In some cases, the test output might be overly verbose, in which case stripping
# the unneeded information can be very helpful to the student
# sanitized_test_output=$(printf "${test_output}" | sed -n '/Test results:/,$p')

# OPTIONAL: Manually add colors to the output to help scanning the output for errors
# If the test output does not contain colors to help identify failing (or passing)
# tests, it can be helpful to manually add colors to the output
# colorized_test_output=$(echo "${test_output}" \
# | GREP_COLOR='01;31' grep --color=always -E -e '^(ERROR:.*|.*failed)$|$' \
# | GREP_COLOR='01;32' grep --color=always -E -e '^.*passed$|$')

jq -n --arg output "${test_output}" '{version: 1, status: "fail", message: $output}' > ${results_file}
# Sanitize the output
test_output_inline=$(printf '%s' "$test_output")

# Try to distinguish between failing tests and errors
if echo "$test_output_inline" | grep -q "error:"; then
status="error"
sanitized_test_output=$(echo "$test_output_inline" | sed '/Compiling.*$/d' | sed -n -e '/error: could not compile/q;p' | sed "s@$solution_dir@@g")
else
status="fail"
sanitized_test_output=$(echo "$test_output_inline" | awk '/failures:/{y=1;next}y' | sed -n -e '/Error: test result/q;p' | sed -r 's/ //g')
fi

jq -n --arg output "${sanitized_test_output}" --arg status "${status}" '{version: 1, status: $status, message: $output}' >"${results_file}"
fi

echo "${slug}: done"
echo "$slug: generated $results_file"
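The awk/sed pipeline above trims the raw test output down to just the failure lines. A hedged sketch of that extraction on a fabricated sample (the sample text only imitates the shape of real `scarb cairo-test` output):

```shell
# Fabricated test output imitating the shape of a failing run.
test_output='running 2 tests
test leap::a ... fail
failures:
   leap::a - Panicked with "assertion failed".
Error: test result: FAILED.'

# Keep only the lines after 'failures:' and before 'Error: test result',
# then strip leading indentation from each surviving line.
sanitized=$(printf '%s\n' "$test_output" \
  | awk '/failures:/{y=1;next}y' \
  | sed -n -e '/Error: test result/q;p' \
  | sed 's/^[[:space:]]*//')
printf '%s\n' "$sanitized"
```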
4 changes: 4 additions & 0 deletions tests/all-fail/Scarb.toml
@@ -0,0 +1,4 @@
[package]
name = "leap"
version = "0.1.0"
edition = "2023_11"
2 changes: 1 addition & 1 deletion tests/all-fail/expected_results.json
@@ -1,5 +1,5 @@
{
"version": 1,
"status": "fail",
"message": "TODO: replace with correct output"
"message": "leap_leap::leap::year_divisible_by_400_is_leap_year - Panicked with \"assertion failed: `is_leap_year(2000)`.\".\nleap_leap::leap::year_divisible_by_4_and_5_is_still_a_leap_year - Panicked with \"assertion failed: `is_leap_year(1960)`.\".\nleap_leap::leap::year_divisible_by_4_not_divisible_by_100_in_leap_year - Panicked with \"assertion failed: `is_leap_year(1996)`.\".\nleap_leap::leap::year_divisible_by_2_not_divisible_by_4_in_common_year - Panicked with \"assertion failed: `!is_leap_year(1970)`.\".\nleap_leap::leap::year_divisible_by_400_but_not_by_125_is_still_a_leap_year - Panicked with \"assertion failed: `is_leap_year(2400)`.\".\nleap_leap::leap::year_not_divisible_by_4_in_common_year - Panicked with \"assertion failed: `!is_leap_year(2015)`.\".\nleap_leap::leap::year_divisible_by_200_not_divisible_by_400_in_common_year - Panicked with \"assertion failed: `!is_leap_year(1800)`.\".\nleap_leap::leap::year_divisible_by_100_not_divisible_by_400_in_common_year - Panicked with \"assertion failed: `!is_leap_year(2100)`.\".\nleap_leap::leap::year_divisible_by_100_but_not_by_3_is_still_not_a_leap_year - Panicked with \"assertion failed: `!is_leap_year(1900)`.\"."
}
3 changes: 3 additions & 0 deletions tests/all-fail/src/lib.cairo
@@ -0,0 +1,3 @@
pub fn is_leap_year(year: u64) -> bool {
!(year % 4 == 0 && (year % 100 != 0 || year % 400 == 0))
}
54 changes: 54 additions & 0 deletions tests/all-fail/tests/leap.cairo
@@ -0,0 +1,54 @@
use leap::is_leap_year;

#[test]
fn year_not_divisible_by_4_in_common_year() {
assert!(!is_leap_year(2015));
}

#[test]
#[ignore]
fn year_divisible_by_2_not_divisible_by_4_in_common_year() {
assert!(!is_leap_year(1970));
}

#[test]
#[ignore]
fn year_divisible_by_4_not_divisible_by_100_in_leap_year() {
assert!(is_leap_year(1996));
}

#[test]
#[ignore]
fn year_divisible_by_4_and_5_is_still_a_leap_year() {
assert!(is_leap_year(1960));
}

#[test]
#[ignore]
fn year_divisible_by_100_not_divisible_by_400_in_common_year() {
assert!(!is_leap_year(2100));
}

#[test]
#[ignore]
fn year_divisible_by_100_but_not_by_3_is_still_not_a_leap_year() {
assert!(!is_leap_year(1900));
}

#[test]
#[ignore]
fn year_divisible_by_400_is_leap_year() {
assert!(is_leap_year(2000));
}

#[test]
#[ignore]
fn year_divisible_by_400_but_not_by_125_is_still_a_leap_year() {
assert!(is_leap_year(2400));
}

#[test]
#[ignore]
fn year_divisible_by_200_not_divisible_by_400_in_common_year() {
assert!(!is_leap_year(1800));
}
4 changes: 4 additions & 0 deletions tests/empty-file/Scarb.toml
@@ -0,0 +1,4 @@
[package]
name = "leap"
version = "0.1.0"
edition = "2023_11"
4 changes: 2 additions & 2 deletions tests/empty-file/expected_results.json
@@ -1,5 +1,5 @@
{
"version": 1,
"status": "fail",
"message": "TODO: replace with correct output"
"status": "error",
"message": "error: Identifier not found.\n --> /tests/leap.cairo:1:11\nuse leap::is_leap_year;\n ^**********^"
}
1 change: 1 addition & 0 deletions tests/empty-file/src/lib.cairo
@@ -0,0 +1 @@

54 changes: 54 additions & 0 deletions tests/empty-file/tests/leap.cairo
@@ -0,0 +1,54 @@
use leap::is_leap_year;

#[test]
fn year_not_divisible_by_4_in_common_year() {
assert!(!is_leap_year(2015));
}

#[test]
#[ignore]
fn year_divisible_by_2_not_divisible_by_4_in_common_year() {
assert!(!is_leap_year(1970));
}

#[test]
#[ignore]
fn year_divisible_by_4_not_divisible_by_100_in_leap_year() {
assert!(is_leap_year(1996));
}

#[test]
#[ignore]
fn year_divisible_by_4_and_5_is_still_a_leap_year() {
assert!(is_leap_year(1960));
}

#[test]
#[ignore]
fn year_divisible_by_100_not_divisible_by_400_in_common_year() {
assert!(!is_leap_year(2100));
}

#[test]
#[ignore]
fn year_divisible_by_100_but_not_by_3_is_still_not_a_leap_year() {
assert!(!is_leap_year(1900));
}

#[test]
#[ignore]
fn year_divisible_by_400_is_leap_year() {
assert!(is_leap_year(2000));
}

#[test]
#[ignore]
fn year_divisible_by_400_but_not_by_125_is_still_a_leap_year() {
assert!(is_leap_year(2400));
}

#[test]
#[ignore]
fn year_divisible_by_200_not_divisible_by_400_in_common_year() {
assert!(!is_leap_year(1800));
}
4 changes: 4 additions & 0 deletions tests/partial-fail/Scarb.toml
@@ -0,0 +1,4 @@
[package]
name = "leap"
version = "0.1.0"
edition = "2023_11"
2 changes: 1 addition & 1 deletion tests/partial-fail/expected_results.json
@@ -1,5 +1,5 @@
{
"version": 1,
"status": "fail",
"message": "TODO: replace with correct output"
"message": "leap_leap::leap::year_divisible_by_100_but_not_by_3_is_still_not_a_leap_year - Panicked with \"assertion failed: `!is_leap_year(1900)`.\".\nleap_leap::leap::year_divisible_by_100_not_divisible_by_400_in_common_year - Panicked with \"assertion failed: `!is_leap_year(2100)`.\".\nleap_leap::leap::year_divisible_by_200_not_divisible_by_400_in_common_year - Panicked with \"assertion failed: `!is_leap_year(1800)`.\"."
}
3 changes: 3 additions & 0 deletions tests/partial-fail/src/lib.cairo
@@ -0,0 +1,3 @@
pub fn is_leap_year(year: u64) -> bool {
year % 4 == 0 && (year % 101 != 0 || year % 400 == 0)
}
