
Add Array sizes up to 5000 (some of which are required for NTRU-Prime) #79

Closed
wants to merge 5 commits

Conversation

oddcoder

I would kindly ask you to wait and not merge this until I am 100% sure I will not need more sizes (i.e., when the NTRU patch is ready).
Best

src/sizes.rs Outdated
@oddcoder oddcoder force-pushed the ntru branch 2 times, most recently from 51addcb to cb2b5f8 Compare June 27, 2024 21:45
@oddcoder oddcoder force-pushed the ntru branch 2 times, most recently from 222ccef to d1eda9c Compare June 30, 2024 20:50
Ahmed added 5 commits July 4, 2024 12:28
This is better practice in general

Signed-off-by: Ahmed <>
Signed-off-by: Ahmed <>
First, modules are disabled for some reason and it is not clear why.
Second, we replaced .map(Clone::clone) with .cloned().
Last, we removed an unnecessary .into_iter().

Signed-off-by: Ahmed <>
@oddcoder oddcoder changed the title WIP: Add Array sizes required for NTRU-Prime Add Array sizes up to 5000 (some of which are required for NTRU-Prime) Jul 6, 2024
@oddcoder
Author

oddcoder commented Jul 6, 2024

@tarcieri I propose something slightly different from the original PR: I modified the patch to include numbers up to 5000.
I tried adding the numbers needed for NTRU by hand, but there are too many of them and it is painful to do manually.

This version takes 9 seconds to compile on my laptop, which is not that bad. If someone later complains and needs faster compile times, we could split the ranges into smaller feature flags.

One thing that might require a closer look is this line; I am not sure why it is there.

Last but not least, I am not sure which tests are failing. This is the view I see:
[screenshot of the failing checks]

If you have no objection, I would consider this patch ready to merge.

@@ -0,0 +1,26 @@
start = 1025
Member


Please don't use non-Rust languages for codegen

Author


Noted! I will try to rework something with proc macros.

@tarcieri
Member

tarcieri commented Jul 7, 2024

This version takes 9 seconds to compile on my laptop which is not that bad

@oddcoder that seems really slow to me. What's the relative change in compilation time as a percentage?

@oddcoder
Author

oddcoder commented Jul 7, 2024

➜  hybrid-array git:(ntru) time cargo build
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.85s
cargo build  0.40s user 0.09s system 53% cpu 0.926 total
➜  hybrid-array git:(ntru) time cargo build --features=extra-sizes
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 12.74s
cargo build --features=extra-sizes  11.87s user 0.41s system 96% cpu 12.770 total

Percentage-wise it is bad, almost 13x. However, we are talking about a few seconds, so waiting for the build shouldn't be too painful.

The problem is that for NTRU I need 18 constants between U653 and U2067, and then I will need variants of them like C + 1, 2*C - 1, and a bunch of others, so it doesn't scale well to keep adding them one by one. If you can accept the slowdown, I can try to do it with proc macros. We can even split them into ranges (1000-1999), (2000-2999), and so on, but then NTRU will need all the ranges.
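As an aside on those derived variants: typenum can already express values like C + 1 and 2*C - 1 at the type level, but hybrid-array still needs an ArraySize impl for each concrete value, which is why the generated ranges are needed at all. Below is a minimal sketch, assuming only the typenum crate and using U761 (one of the NTRU Prime parameter sizes) as the base size; it is illustrative only, not code from this PR.

```rust
// Illustrative sketch only (not code from this PR): typenum's op! macro
// derives type-level sizes like p + 1 and 2*p - 1 from a base size alias.
// hybrid-array would still need an `ArraySize` impl for each concrete value.
use typenum::{op, Unsigned, U1, U2, U761};

type P = U761;                      // base size (sntrup761's p)
type PPlus1 = op!(P + U1);          // p + 1  = 762
type TwoPMinus1 = op!(U2 * P - U1); // 2*p - 1 = 1521

fn main() {
    assert_eq!(PPlus1::USIZE, 762);
    assert_eq!(TwoPMinus1::USIZE, 1521);
}
```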

@tarcieri
Member

@oddcoder could you try adding only the sizes you need for NTRU?

If that's too onerous, we can consider feature-gating various sizes in batches
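
For concreteness, here is a rough sketch of what batched gating could look like; the batch feature names and module layout are purely illustrative (only the extra-sizes feature used above actually exists in this PR):

```rust
// Hypothetical sketch (not code from this PR): each batch of generated
// aliases and ArraySize impls sits behind its own Cargo feature, declared
// in Cargo.toml as e.g. `extra-sizes-1000-1999 = []`.
#[cfg(feature = "extra-sizes-1000-1999")]
mod sizes_1000_1999 {
    // generated `uint!` aliases and `ArraySize` impls for U1000..=U1999
}

#[cfg(feature = "extra-sizes-2000-2999")]
mod sizes_2000_2999 {
    // generated `uint!` aliases and `ArraySize` impls for U2000..=U2999
}

// An umbrella feature (e.g. `extra-sizes`) could then enable every batch,
// while NTRU-Prime users turn on only the ranges they actually need.
```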

@tarcieri
Member

Closing this, as it causes too much of an adverse effect on compile times.

#81 might be nice for pluggable sizes, but for now I would suggest submitting another PR with exactly the sizes you need.

@tarcieri tarcieri closed this Aug 18, 2024
@oddcoder
Author

@tarcieri apologies for the long delay, I had some busy weeks. I prototyped a solution with feature flags, but it is a bit too intrusive, so instead I will split it into smaller PRs that can each be reviewed separately. The idea is to use proc macros to improve flexibility, and once we have proc macros, some of the existing macros can be done more nicely. If you wouldn't mind, I can send smaller PRs improving some of this (like moving uint! and friends to a proc macro), and then this patch will become natural all of a sudden.

@tarcieri
Member

I would prefer to avoid proc macros in codegen, and just check in the generated files, so long as they have Rust code to generate them.

But also, I would like to get a stable release of hybrid-array out soon, as it's blocking literally every release of every other crate our project maintains, so the "pencils down" moment is going to come very soon.

@oddcoder
Author

So my idea for proc macros comes down to a few things:
1- things like pub type U1040 = uint!(0 0 0 0 1 0 0 0 0 0 1); look a bit error prone; pub type U1040 = uint!(1040); feels more natural (a rough sketch of this follows below).
2- impl_array_sizes_with_import and impl_array_sizes are almost a derive.
3- generating ranges of uint! + impl_array_sizes can also be done by a proc macro.

If you can reconsider this, I can send a series of patches very quickly, since I already have a prototype; I just need to test and split things into meaningful commits. If not, I can rework the Python script into something in Rust, but that feels a bit hacky.
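
For what it's worth, a rough sketch of idea (1) follows. It is purely hypothetical: it assumes a separate proc-macro crate depending on syn and quote (which is not what this PR contains, and which would pull in new dependencies), and it expands a decimal literal directly into typenum's UInt/UTerm encoding rather than reusing the existing bit-list uint! macro.

```rust
// Hypothetical proc macro (not part of this PR): expands `uint!(1040)` into
// the nested typenum `UInt`/`UTerm` type that encodes 1040 in binary.
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, LitInt};

#[proc_macro]
pub fn uint(input: TokenStream) -> TokenStream {
    let lit = parse_macro_input!(input as LitInt);
    let value: u64 = lit.base10_parse().expect("expected an unsigned integer literal");

    // Append bits MSB-first; each append wraps the previous type, so the
    // most significant bit ends up next to `UTerm` and the least significant
    // bit lands on the outermost `UInt`, matching typenum's representation.
    let mut ty = quote!(::typenum::UTerm);
    if value > 0 {
        for shift in (0..64 - value.leading_zeros()).rev() {
            let bit = if (value >> shift) & 1 == 1 {
                quote!(::typenum::B1)
            } else {
                quote!(::typenum::B0)
            };
            ty = quote!(::typenum::UInt<#ty, #bit>);
        }
    }

    quote!(#ty).into()
}
```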

@tarcieri
Member

tarcieri commented Aug 18, 2024

Compile time, simplicity, and maintaining our current status of having zero hard dependencies matter a lot more than ergonomics for the implementation of hybrid-array.
