
RFC: Switch 32 bit int HAL pins to 64 bit, rather than having both types in parallel. #3286

Open
andypugh opened this issue Jan 15, 2025 · 10 comments

Comments

@andypugh (Collaborator)

See also: #2331

A long time ago HAL float pins were single-precision. These were switched to double, even though at the time many CPUs were 32 bit.

Much more recently, now that most CPUs are 64-bit, 64-bit signed and unsigned HAL pins were added, along with the various conversion functions.

I think we should simply change the width of integer HAL pins rather than retain two parallel types.

As of now, LinuxCNC 2.9 does not have u64 and s64 HAL pins. We can fairly easily make a clean switch at this point in development without affecting any configs running the stable version.

As things stand almost no HAL components have 64-bit inputs or outputs, and this includes the rawcounts pins of the encoders and stepgens, which is one place where 32-bit ints may well be inadequate.

Rather than replace some instances of HAL component pins with 64-bit equivalents (which would cause incompatibilities in HAL files), I propose switching all integer HAL pins to the 64-bit equivalent simply by redefining the integer types. This way there need be no changes to existing HAL files.
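As a rough sketch of what that redefinition could look like at the C level (the current definitions and exact spellings are assumed here, not copied from the real hal.h):

```c
/* Sketch only: one literal reading of "redefining the integer types".
 * The present definitions are assumed to be plain fixed-width typedefs;
 * the real hal.h may spell them differently. */
#include <stdint.h>

typedef int64_t  hal_s32_t;   /* previously (assumed) int32_t  */
typedef uint64_t hal_u32_t;   /* previously (assumed) uint32_t */
```

Since HAL files refer to pins by name rather than by width, nothing in an existing config would need editing under this reading; the discussion below is about what it does to the C code and the drivers.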

Some changes to certain HAL components would follow. For example, "bitwise" would gain a number of new pins, but I would not anticipate that causing any functional changes in existing HAL configs.

Similarly, "LUT5" would gain one more input and should probably be renamed to LUT6. This name change can be handled by the update_ini script; for a given config and input setup the operation would be unaffected.

@BsAtHome (Contributor)

A conversion is possible, although it will potentially break drivers. There will also be a lot of work, because the current type names carry a bit-size suffix. Redefining hal_s32_t to be a 64-bit type is not a good option; you would need to change all the type names to, for example, hal_si_t and hal_ui_t. And this is not something you can s/x/y/ your way out of; it will be manual labour.
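To make the naming point concrete, a sketch of the size-neutral names suggested above (only the names hal_si_t and hal_ui_t come from this comment; the definitions are an assumption):

```c
/* Size-neutral integer pin/parameter types. Every existing occurrence of
 * hal_s32_t / hal_u32_t in components and drivers would have to be
 * reviewed and renamed by hand, because a blind search-and-replace cannot
 * tell a HAL pin value from a genuine 32-bit hardware register. */
#include <stdint.h>

typedef int64_t  hal_si_t;   /* signed integer value   */
typedef uint64_t hal_ui_t;   /* unsigned integer value */
```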

Changing all HAL 32-bit types into 64-bit types may have severe implications for drivers that use them for hardware interaction. The size must be exact when, for example, addressing registers. This may prove to be the largest problem: you need to look at every single driver's code, find out what it is doing and see how to change it. It will break things.
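A toy illustration of that point, using a hypothetical device rather than any real LinuxCNC driver:

```c
/* Hypothetical memory-mapped peripheral: the register window is exactly
 * 32 bits wide, so the driver relies on the shared HAL type having
 * exactly that width. */
#include <stdint.h>

typedef uint32_t hal_u32_t;                  /* current (assumed) definition */

void write_speed_register(volatile uint32_t *reg, const hal_u32_t *pin)
{
    /* Fine today: one 32-bit store, the value always fits. */
    *reg = *pin;
}

/* If hal_u32_t quietly became 64 bits, the assignment above would still
 * compile (now with silent truncation), but a driver that instead does
 *     memcpy((void *)reg, pin, sizeof(*pin));
 * or casts the pin pointer to the register type would write 8 bytes into
 * a 4-byte register window. Finding those spots means reading every
 * driver, which is the "largest problem" referred to above. */
```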

On a side note, a massive change like this will require updating the HAL version ID to make sure that no "old" connections through shared memory can be made.

On the speed of 32 vs. 64 bits: there is a difference. Not directly in the CPU, but support logic like the cache does feel the pressure; the larger types require more cache lines and transfers. That said, the current layout of the structures is mostly not well thought through, with alignment gaps between the fields.
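The alignment remark can be illustrated with a small example (invented structs, not the actual HAL data structures):

```c
/* Mixing 1-byte and 8-byte fields leaves padding holes, and moving from
 * 4-byte to 8-byte integer values makes each entry take up more of a
 * 64-byte cache line. Sizes below are what a typical x86-64 ABI gives. */
#include <stdint.h>
#include <stdio.h>

struct pin_mixed_order {        /* typically 32 bytes                    */
    uint8_t type;               /* 1 byte + 7 bytes padding              */
    double  value_f;            /* 8 bytes, needs 8-byte alignment       */
    uint8_t dir;                /* 1 byte + 7 bytes padding              */
    int64_t value_s;            /* 8 bytes                               */
};

struct pin_sorted_by_size {     /* typically 24 bytes                    */
    double  value_f;
    int64_t value_s;
    uint8_t type;
    uint8_t dir;                /* + 6 bytes of tail padding             */
};

int main(void)
{
    printf("mixed: %zu bytes, sorted: %zu bytes\n",
           sizeof(struct pin_mixed_order), sizeof(struct pin_sorted_by_size));
    return 0;
}
```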

@andypugh (Collaborator, Author)

Indeed, I had always anticipated it being a bit of a job, and was never considering redefining hal_u32 to be 64 bits.

There would have to be some leeway in halcompile. The docs suggest "signed" and "unsigned" as the type specifiers, but u32 and s32 have been allowed and should still work for users with custom HAL components. (Or we could actually error out with a helpful suggestion to switch to "signed" and "unsigned", as halcompiling is a one-time thing.)
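For context, a toy .comp file showing the documented spelling (the component itself is invented; the grammar is the usual halcompile one, quoted from memory, so treat it as a sketch):

```
component intpass "Toy example only: pass an integer pin through unchanged";
pin in signed value_in "documented spelling; an older .comp might say s32 here";
pin out signed value_out;
function _;
license "GPL";
;;
FUNCTION(_) { value_out = value_in; }
```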

@rmu75 (Contributor) commented Jan 16, 2025

A similar argument could be made to get rid of all integer types and just use double. A double can represent integers exactly up to 2^53 IIRC, ~9e15, which is more than 32 bits and enough to encode ±9000 km with nanometer resolution.
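A quick standalone check of the "exactly up to 2^53" claim (plain C, nothing HAL-specific):

```c
/* 2^53 = 9007199254740992 is the last point at which every integer is
 * exactly representable in a double; 2^53 + 1 rounds back down to 2^53.
 * 9e15 nm is 9e6 m, i.e. roughly +/-9000 km at nanometre resolution. */
#include <stdio.h>

int main(void)
{
    double limit = 9007199254740992.0;        /* 2^53                     */
    printf("%.0f\n", limit);                  /* 9007199254740992         */
    printf("%.0f\n", limit + 1.0);            /* 9007199254740992 again   */
    printf("%d\n", limit + 1.0 == limit);     /* 1: the increment is lost */
    return 0;
}
```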

@BsAtHome (Contributor)

I don't think that using all floats will do performance any good, regardless of resolution.

And, FWIW, using doubles indiscriminately in calculations will reduce the accuracy of the value held in the double to approximately log10(sqrt(2^53)) digits, i.e. about 8 decimal digits, due to error accumulation and propagation.

@rmu75 (Contributor) commented Jan 16, 2025

I don't think you need to lose accuracy if you stay within whole numbers with magnitude smaller than 2^53. In principle for positive integers those should also be valid (denormalised) doubles. I agree that it probably doesn't make much sense on a PC where there is plenty of RAM.

@BsAtHome (Contributor)

Denormals are slow and will most likely get transformed into normals at some point by some operation, and then you are off again. And this is not JavaScript, where (cough) nobody really cares (cough). So, indeed, it doesn't make sense to make everything a double.

@andypugh (Collaborator, Author)

Representing bitmasks (for example in LUT5) using floats seems like a bad plan to me. There are several HAL components that work at the bit level.

bin2gray, bitwise, bitslice, demux, gray2bin, ilowpass (debatable), logic, lut5, match8, matrix_kb, mesa_uart (debatable), mux_generic, select8 (debatable), weighted_sum.

@rmu75 (Contributor) commented Jan 16, 2025

NaNs could be used to encode a type annotation; then everything could be connected to everything, with the price of having to use some kind of access function.
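For readers who have not met the trick: this is essentially the "NaN boxing" used by some script interpreters. A rough sketch, with all tags, masks and names invented for illustration, and not a concrete proposal for HAL:

```c
/* A quiet NaN has all exponent bits set plus the quiet bit, leaving
 * roughly 51 mantissa bits free. A small type tag and a 32-bit payload
 * can be hidden there, so every pin could be stored as a double while
 * still carrying a typed integer value. */
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define QNAN_BITS    UINT64_C(0x7FF8000000000000)  /* exponent + quiet bit */
#define TAG_SHIFT    48
#define TAG_S32      UINT64_C(1)   /* tags start at 1, so a bare qNaN is never read as boxed */
#define PAYLOAD_MASK UINT64_C(0x00000000FFFFFFFF)

static double box_s32(int32_t v)
{
    uint64_t bits = QNAN_BITS | (TAG_S32 << TAG_SHIFT) | (uint64_t)(uint32_t)v;
    double d;
    memcpy(&d, &bits, sizeof d);   /* bit-cast without aliasing trouble */
    return d;
}

static bool is_boxed_s32(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    return (bits & QNAN_BITS) == QNAN_BITS &&
           ((bits >> TAG_SHIFT) & 0x7) == TAG_S32;
}

static int32_t unbox_s32(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    return (int32_t)(uint32_t)(bits & PAYLOAD_MASK);
}
```

The masking and memcpy on every read and write is exactly the access-function cost raised in the next comment.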

@BsAtHome (Contributor)

... with the price of having to use some kind of access function.

You do realize that this is running in RT? Adding complex access functions is a Bad Idea™.

@rmu75 (Contributor) commented Jan 16, 2025

That "complex access function" would amount to bitwise AND resp. OR operations. If the goal is to get rid of conversion functions and other complexity this would be doable. But any change along this line will be too intrusive.

Re: the original proposal, it should be possible (on little-endian architectures, which Intel and ARM are) to access a 64-bit int through a pointer to a 32-bit int and get the truncated value (modulo 2^32). So maybe 64-bit integers could be used everywhere storage-wise, and the connect function could be changed to allow connecting 32-bit and 64-bit ints?
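A quick demonstration of that aliasing observation (standalone C; memcpy is used so the example stays within strict-aliasing rules, which a real implementation would also have to respect):

```c
/* On a little-endian machine the low 32 bits of a 64-bit integer sit at
 * its lowest address, so reading only those 4 bytes yields the value
 * modulo 2^32. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int64_t  wide = INT64_C(0x123456789A);  /* 78187493530                     */
    uint32_t low;

    memcpy(&low, &wide, sizeof low);        /* lowest 4 bytes on little endian */
    printf("%u\n", (unsigned)low);          /* 878082202 == 0x3456789A         */
    return 0;
}
```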
