
introduce opcode_extension to the structure of Instruction. #1933

Open
ohad-nir-starkware wants to merge 1 commit into base: main

Conversation

@ohad-nir-starkware (Collaborator) commented Feb 4, 2025

Introduce opcode_extension to the structure of Instruction

Description

In preparation for adding new opcodes to Stwo, we introduce a new field named opcode_extension to the Instruction structure.
That field is an enum that will, in the future, denote which of the new opcodes an instruction uses.
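
A rough sketch of the shape of this change (a sketch only; the enum variant name and the trimmed-down Instruction fields below are illustrative assumptions, not the PR's exact definitions):

```rust
/// Which opcode set an instruction belongs to. Only the existing set is
/// represented here; future variants would name the new Stwo opcodes.
/// (Variant name is an assumption for illustration.)
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum OpcodeExtension {
    Stone,
}

/// Heavily trimmed stand-in for the VM's Instruction struct, showing only
/// where the new field would sit next to the existing ones.
struct Instruction {
    off0: isize,
    off1: isize,
    off2: isize,
    opcode_extension: OpcodeExtension,
}

fn main() {
    let instr = Instruction {
        off0: 0,
        off1: 1,
        off2: 2,
        opcode_extension: OpcodeExtension::Stone,
    };
    assert_eq!(instr.opcode_extension, OpcodeExtension::Stone);
}
```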

Checklist

  • Linked to GitHub issue
  • Unit tests added
  • Integration tests added
  • This change requires new documentation.
    • Documentation has been added/updated.
    • CHANGELOG has been updated.

This change is Reviewable


github-actions bot commented Feb 4, 2025

Hyper Threading Benchmark results




hyperfine -r 2 -n "hyper_threading_main threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_main' -n "hyper_threading_pr threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 1
  Time (mean ± σ):     32.205 s ±  0.048 s    [User: 31.393 s, System: 0.809 s]
  Range (min … max):   32.171 s … 32.239 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 1
  Time (mean ± σ):     32.554 s ±  0.846 s    [User: 31.757 s, System: 0.794 s]
  Range (min … max):   31.956 s … 33.152 s    2 runs
 
Summary
  hyper_threading_main threads: 1 ran
    1.01 ± 0.03 times faster than hyper_threading_pr threads: 1




hyperfine -r 2 -n "hyper_threading_main threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_main' -n "hyper_threading_pr threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 2
  Time (mean ± σ):     18.068 s ±  0.215 s    [User: 31.502 s, System: 0.821 s]
  Range (min … max):   17.916 s … 18.220 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 2
  Time (mean ± σ):     17.757 s ±  0.014 s    [User: 31.078 s, System: 0.799 s]
  Range (min … max):   17.747 s … 17.767 s    2 runs
 
Summary
  hyper_threading_pr threads: 2 ran
    1.02 ± 0.01 times faster than hyper_threading_main threads: 2




hyperfine -r 2 -n "hyper_threading_main threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_main' -n "hyper_threading_pr threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 4
  Time (mean ± σ):     12.458 s ±  0.175 s    [User: 43.884 s, System: 0.965 s]
  Range (min … max):   12.334 s … 12.582 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 4
  Time (mean ± σ):     11.920 s ±  0.318 s    [User: 44.558 s, System: 0.952 s]
  Range (min … max):   11.696 s … 12.145 s    2 runs
 
Summary
  hyper_threading_pr threads: 4 ran
    1.05 ± 0.03 times faster than hyper_threading_main threads: 4




hyperfine -r 2 -n "hyper_threading_main threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_main' -n "hyper_threading_pr threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 6
  Time (mean ± σ):     12.248 s ±  0.140 s    [User: 44.052 s, System: 0.981 s]
  Range (min … max):   12.150 s … 12.347 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 6
  Time (mean ± σ):     12.155 s ±  0.054 s    [User: 43.869 s, System: 0.917 s]
  Range (min … max):   12.117 s … 12.194 s    2 runs
 
Summary
  hyper_threading_pr threads: 6 ran
    1.01 ± 0.01 times faster than hyper_threading_main threads: 6




hyperfine -r 2 -n "hyper_threading_main threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_main' -n "hyper_threading_pr threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 8
  Time (mean ± σ):     11.947 s ±  0.208 s    [User: 44.459 s, System: 1.029 s]
  Range (min … max):   11.800 s … 12.094 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 8
  Time (mean ± σ):     11.939 s ±  0.347 s    [User: 44.666 s, System: 0.986 s]
  Range (min … max):   11.694 s … 12.185 s    2 runs
 
Summary
  hyper_threading_pr threads: 8 ran
    1.00 ± 0.03 times faster than hyper_threading_main threads: 8




hyperfine -r 2 -n "hyper_threading_main threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_main' -n "hyper_threading_pr threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 16
  Time (mean ± σ):     11.901 s ±  0.026 s    [User: 44.713 s, System: 1.071 s]
  Range (min … max):   11.883 s … 11.920 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 16
  Time (mean ± σ):     12.107 s ±  0.009 s    [User: 44.624 s, System: 1.145 s]
  Range (min … max):   12.101 s … 12.113 s    2 runs
 
Summary
  hyper_threading_main threads: 16 ran
    1.02 ± 0.00 times faster than hyper_threading_pr threads: 16



github-actions bot commented Feb 4, 2025

Benchmark Results for unmodified programs 🚀

Command Mean [s] Min [s] Max [s] Relative
base big_factorial 2.537 ± 0.060 2.471 2.670 1.01 ± 0.03
head big_factorial 2.524 ± 0.049 2.469 2.614 1.00
Command Mean [s] Min [s] Max [s] Relative
base big_fibonacci 2.485 ± 0.043 2.414 2.571 1.01 ± 0.03
head big_fibonacci 2.465 ± 0.046 2.413 2.561 1.00
Command Mean [s] Min [s] Max [s] Relative
base blake2s_integration_benchmark 9.438 ± 0.465 9.064 10.739 1.03 ± 0.05
head blake2s_integration_benchmark 9.138 ± 0.129 9.009 9.414 1.00
Command Mean [s] Min [s] Max [s] Relative
base compare_arrays_200000 2.584 ± 0.027 2.543 2.623 1.00
head compare_arrays_200000 2.631 ± 0.054 2.580 2.733 1.02 ± 0.02
Command Mean [s] Min [s] Max [s] Relative
base dict_integration_benchmark 1.702 ± 0.024 1.673 1.746 1.00 ± 0.02
head dict_integration_benchmark 1.699 ± 0.018 1.681 1.734 1.00
Command Mean [s] Min [s] Max [s] Relative
base field_arithmetic_get_square_benchmark 1.444 ± 0.016 1.419 1.471 1.00 ± 0.02
head field_arithmetic_get_square_benchmark 1.443 ± 0.029 1.415 1.516 1.00
Command Mean [s] Min [s] Max [s] Relative
base integration_builtins 9.443 ± 0.265 9.176 10.126 1.02 ± 0.03
head integration_builtins 9.270 ± 0.139 9.086 9.595 1.00
Command Mean [s] Min [s] Max [s] Relative
base keccak_integration_benchmark 9.580 ± 0.164 9.384 9.896 1.00
head keccak_integration_benchmark 9.659 ± 0.333 9.370 10.508 1.01 ± 0.04
Command Mean [s] Min [s] Max [s] Relative
base linear_search 2.611 ± 0.071 2.543 2.753 1.00 ± 0.03
head linear_search 2.603 ± 0.037 2.552 2.647 1.00
Command Mean [s] Min [s] Max [s] Relative
base math_cmp_and_pow_integration_benchmark 1.746 ± 0.017 1.728 1.779 1.00
head math_cmp_and_pow_integration_benchmark 1.756 ± 0.013 1.738 1.775 1.01 ± 0.01
Command Mean [s] Min [s] Max [s] Relative
base math_integration_benchmark 1.689 ± 0.014 1.673 1.703 1.00
head math_integration_benchmark 1.713 ± 0.019 1.685 1.753 1.01 ± 0.01
Command Mean [s] Min [s] Max [s] Relative
base memory_integration_benchmark 1.448 ± 0.023 1.426 1.487 1.00 ± 0.02
head memory_integration_benchmark 1.443 ± 0.017 1.423 1.463 1.00
Command Mean [s] Min [s] Max [s] Relative
base operations_with_data_structures_benchmarks 1.809 ± 0.014 1.792 1.827 1.00
head operations_with_data_structures_benchmarks 1.822 ± 0.006 1.807 1.830 1.01 ± 0.01
Command Mean [ms] Min [ms] Max [ms] Relative
base pedersen 590.7 ± 6.6 582.0 604.9 1.00
head pedersen 593.2 ± 7.1 583.5 604.1 1.00 ± 0.02
Command Mean [ms] Min [ms] Max [ms] Relative
base poseidon_integration_benchmark 705.6 ± 3.5 698.4 711.3 1.00
head poseidon_integration_benchmark 718.1 ± 7.5 712.2 738.4 1.02 ± 0.01
Command Mean [s] Min [s] Max [s] Relative
base secp_integration_benchmark 2.070 ± 0.016 2.049 2.099 1.00
head secp_integration_benchmark 2.113 ± 0.028 2.088 2.175 1.02 ± 0.02
Command Mean [ms] Min [ms] Max [ms] Relative
base set_integration_benchmark 720.8 ± 11.2 703.1 738.4 1.00
head set_integration_benchmark 731.4 ± 11.8 709.7 746.6 1.01 ± 0.02
Command Mean [s] Min [s] Max [s] Relative
base uint256_integration_benchmark 5.096 ± 0.080 5.029 5.283 1.00 ± 0.03
head uint256_integration_benchmark 5.095 ± 0.103 5.013 5.364 1.00


codecov bot commented Feb 4, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.36%. Comparing base (df12864) to head (dfd3f66).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1933   +/-   ##
=======================================
  Coverage   96.35%   96.36%           
=======================================
  Files         102      102           
  Lines       41095    41173   +78     
=======================================
+ Hits        39599    39677   +78     
  Misses       1496     1496           


@DavidLevitGurevich left a comment

Reviewed 4 of 6 files at r1, all commit messages.
Reviewable status: 4 of 6 files reviewed, 1 unresolved discussion (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @ohad-nir-starkware, @Oppen, @pefontana, and @YairVaknin-starkware)


vm/src/vm/decoding/decoder.rs line 9 at r1 (raw file):

// opcode_extension|  opcode|ap_update|pc_update|res_logic|op1_src|op0_reg|dst_reg
//               15|14 13 12|    11 10|  9  8  7|     6  5|4  3  2|      1|      0

I suggest ... 15 and then it is clear that it's all the elements.

@DavidLevitGurevich left a comment

Reviewable status: 4 of 6 files reviewed, 1 unresolved discussion (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @ohad-nir-starkware, @Oppen, @pefontana, and @YairVaknin-starkware)


vm/src/vm/decoding/decoder.rs line 9 at r1 (raw file):

Previously, DavidLevitGurevich wrote…

I suggest ... 15 and then it is clear that it's all the elements.

  • all the elements to the left

@ohad-nir-starkware (Collaborator, Author) left a comment

Reviewable status: 4 of 6 files reviewed, 1 unresolved discussion (waiting on @DavidLevitGurevich, @fmoletta, @gabrielbosio, @igaray, @juanbono, @Oppen, @pefontana, and @YairVaknin-starkware)


vm/src/vm/decoding/decoder.rs line 9 at r1 (raw file):

Previously, DavidLevitGurevich wrote…
  • all the elements to the left

Done.

@DavidLevitGurevich left a comment

Reviewed 2 of 6 files at r1, 1 of 1 files at r2, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @Oppen, @pefontana, and @YairVaknin-starkware)

@DavidLevitGurevich left a comment

:lgtm:

but wait for @YairVaknin-starkware

Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @Oppen, @pefontana, and @YairVaknin-starkware)

@Stavbe left a comment

:lgtm:

Reviewed 2 of 6 files at r1, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @Oppen, @pefontana, and @YairVaknin-starkware)

@JulianGCalderon (Contributor) left a comment

Hi @ohad-nir-starkware! I left you some small comments.

  // Grab offsets and convert them from little endian format.
  let off0 = decode_offset(encoded_instr >> OFF0_OFF & OFFX_MASK);
  let off1 = decode_offset(encoded_instr >> OFF1_OFF & OFFX_MASK);
  let off2 = decode_offset(encoded_instr >> OFF2_OFF & OFFX_MASK);

  // Grab flags
- let flags = encoded_instr >> FLAGS_OFFSET;
+ let flags = (encoded_instr >> FLAGS_OFFSET) & FLAGS_MASK;
Contributor:

As FLAGS_MASK == 0x7FFF, we would be ignoring the highest bit and allowing it to be 1, bypassing the check at lines 101-109, right? Is this the required behaviour?

Collaborator Author:

I think so.
@Stavbe is that right?

@JulianGCalderon (Contributor) commented Feb 10, 2025

Aah, my bad. It seems that the variable used for the checks at lines 101-109 is encoded_instr, not flags, so this isn't an issue.
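
To make the point concrete, a standalone sketch (FLAGS_OFFSET, FLAGS_MASK, and the bit-63 check are assumptions mirroring the quoted code, not the actual decoder):

```rust
fn main() {
    // Assumed constants, mirroring the quoted decoder code.
    const FLAGS_OFFSET: u64 = 48;
    const FLAGS_MASK: u64 = 0x7FFF;

    // An encoding whose highest bit (bit 63) is set.
    let encoded_instr: u64 = 1u64 << 63;

    // Masking with 0x7FFF drops bit 15 of the flags word, i.e. instruction bit 63...
    let flags = (encoded_instr >> FLAGS_OFFSET) & FLAGS_MASK;
    assert_eq!(flags, 0);

    // ...but a validity check performed on encoded_instr itself still sees that bit.
    assert_ne!(encoded_instr >> 63, 0);
}
```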

Collaborator:

The flags variable is only used to grab individual flags in the next lines (45-51). Bitmasking flags with FLAGS_MASK adds an extra operation that does not introduce a logical change in the code. While this is not a big hit to code readability, I suggest keeping this PR as small as possible by removing the change on this line and on line 31.

Collaborator Author:

Done.

Resolved discussions: vm/src/vm/decoding/decoder.rs (outdated), vm/src/vm/errors/vm_errors.rs
@ohad-nir-starkware (Collaborator, Author) left a comment

Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @JulianGCalderon, @Oppen, @pefontana, @Stavbe, and @YairVaknin-starkware)

@YairVaknin-starkware (Collaborator) left a comment

Reviewed 1 of 6 files at r1, all commit messages.
Reviewable status: 4 of 6 files reviewed, 5 unresolved discussions (waiting on @DavidLevitGurevich, @fmoletta, @gabrielbosio, @igaray, @juanbono, @JulianGCalderon, @Oppen, @pefontana, and @Stavbe)


vm/src/vm/decoding/decoder.rs line 11 at r3 (raw file):

//           ... 15|14 13 12|    11 10|  9  8  7|     6  5|4  3  2|      1|      0

/// Decodes an instruction. The encoding is little endian, so flags go from bit 63 to 48.

Expand a bit upon the bits reserved for opcode_extension.

Code quote:

Decodes an instruction. The encoding is little endian, so flags go from bit 63 to 48.
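
For orientation, a hedged sketch of the layout the quoted comments describe (constant names and the exact bit handling are assumptions, not the decoder's real code):

```rust
fn main() {
    // Assumed 64-bit encoding, read little endian:
    //   bits  0..48  three 16-bit offsets (off_dst, off_op0, off_op1)
    //   bits 48..63  flag fields (dst_reg, op0_reg, op1_src, res_logic,
    //                pc_update, ap_update, opcode)
    //   bit  63      opcode_extension (0 selects the existing opcode set)
    const FLAGS_OFFSET: u64 = 48;
    const OPCODE_EXTENSION_OFF: u64 = 15; // position within the flags word

    let encoded_instr: u64 = 1u64 << 63; // only the opcode_extension bit set

    let flags = encoded_instr >> FLAGS_OFFSET;
    let opcode_extension_num = flags >> OPCODE_EXTENSION_OFF;

    // A non-zero value would select one of the future opcode extensions
    // (or be rejected until such opcodes exist).
    assert_eq!(opcode_extension_num, 1);
}
```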

vm/src/vm/decoding/decoder.rs line 106 at r3 (raw file):

            return Err(VirtualMachineError::InvalidOpcodeExtension(
                opcode_extension_num,
            ))

Please add a test that covers decoding an instruction that returns the new vm error variant.

Code quote:

            return Err(VirtualMachineError::InvalidOpcodeExtension(
                opcode_extension_num,
            ))
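
A minimal sketch of such a test, as it might sit in the decoder's test module (the decode_instruction call, the trigger encoding, and the use of bit 63 are assumptions; only the error variant name comes from the quote above):

```rust
#[test]
fn decode_invalid_opcode_extension_returns_error() {
    // Assumption: bit 63 selects the opcode extension, and any value other
    // than the currently supported one should be rejected during decoding.
    let result = decode_instruction(1u64 << 63);

    assert!(matches!(
        result,
        Err(VirtualMachineError::InvalidOpcodeExtension(_))
    ));
}
```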

@YairVaknin-starkware (Collaborator) left a comment

Reviewed 3 of 6 files at r1, 2 of 2 files at r3.
Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @fmoletta, @gabrielbosio, @igaray, @juanbono, @JulianGCalderon, @ohad-nir-starkware, @Oppen, @pefontana, and @Stavbe)

@DavidLevitGurevich left a comment

Reviewed 2 of 2 files at r3, all commit messages.
Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @fmoletta, @igaray, @juanbono, @JulianGCalderon, @ohad-nir-starkware, @Oppen, @pefontana, and @Stavbe)

@ohad-nir-starkware (Collaborator, Author) left a comment

Reviewable status: all files reviewed, 5 unresolved discussions (waiting on @fmoletta, @igaray, @juanbono, @JulianGCalderon, @Oppen, @pefontana, @Stavbe, and @YairVaknin-starkware)


vm/src/vm/decoding/decoder.rs line 11 at r3 (raw file):

Previously, YairVaknin-starkware wrote…

Expand a bit upon the bits reserved for opcode_extension.

Done.


vm/src/vm/decoding/decoder.rs line 106 at r3 (raw file):

Previously, YairVaknin-starkware wrote…

Please add a test that covers decoding an instruction that returns the new vm error variant.

Done.

