feat: inclusion proof program #115
Conversation
// Commit the two hashes
sphinx_zkvm::io::commit(&old_sync_committee_hash.hash());
// Commit the signer hash, the current and next sync committee hashes, and the block height
sphinx_zkvm::io::commit(update.attested_header().beacon().slot());
Does update.attested_header().beacon().slot() correspond to a block height?
Yes, though I have now changed it to the finalized one.
sphinx_zkvm::io::commit(&old_sync_committee_hash.hash());
// Commit the signer hash, the current and next sync committee hashes, and the block height
sphinx_zkvm::io::commit(update.attested_header().beacon().slot());
sphinx_zkvm::io::commit(&signer_sync_committee_hash.hash());
Is my understanding correct that, according to the explanation from @huitseeker:
* signer_sync_committee_hash corresponds to H_29;
* updated_sync_committee_hash corresponds to H_30;
* next_sync_committee_hash corresponds to H_31?
Yes!
Looks great! My main question is the last one, about committing additional EIP1186 data in the inclusion program.
sphinx_zkvm::io::commit(
    eip1186_proof
        .key_hash()
        .expect("EIP1186Proof::key_hash: could not get key hash")
        .as_ref(),
);
Should we commit additional data related to the EIP1186 proof? Specifically, I'm thinking of the value associated with each specific key/storage slot/address.
+1. We do commit the Merkle tree key/value in the Aptos case.
I've updated the program to have all the keys/values as outputs. I have removed what I had set previously to uniquely identify the proof, as it is redundant.
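To illustrate the resolution above, here is a minimal, self-contained sketch of how the key/value pairs could be flattened into one deterministic byte string before being passed to sphinx_zkvm::io::commit. The helper name and the length-prefixed layout are illustrative assumptions, not the PR's actual encoding.

```rust
// Hypothetical helper: flatten EIP1186 key/value pairs into a single
// self-delimiting byte string suitable for committing as public output.
// (Illustrative layout: u32 pair count, then length-prefixed key/value.)
fn serialize_key_value_outputs(pairs: &[(Vec<u8>, Vec<u8>)]) -> Vec<u8> {
    let mut out = Vec::new();
    // Commit the number of pairs first so the output is unambiguous.
    out.extend_from_slice(&(pairs.len() as u32).to_le_bytes());
    for (key, value) in pairs {
        out.extend_from_slice(&(key.len() as u32).to_le_bytes());
        out.extend_from_slice(key);
        out.extend_from_slice(&(value.len() as u32).to_le_bytes());
        out.extend_from_slice(value);
    }
    out
}

fn main() {
    let pairs = vec![(vec![0xaa], vec![0x01, 0x02])];
    let bytes = serialize_key_value_outputs(&pairs);
    // 4 (count) + 4 + 1 (key) + 4 + 2 (value) = 15 bytes
    assert_eq!(bytes.len(), 15);
}
```

Length-prefixing each field avoids ambiguity when keys or values have variable sizes, which matters because the committed bytes are what the verifier checks against.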
Co-authored-by: wwared <[email protected]>
Overall looks good to me, I've left a few comments inline.
ethereum/core/src/types/utils.rs
for (i, element) in list.iter().enumerate() {
    if i < list.len() - 1 {
        element_offset += element.len();
        bytes.extend_from_slice(&(element_offset as u32).to_le_bytes());
    }
Nit: I think you can avoid the enumerate:
let mut element_offset = (list.len() * OFFSET_BYTE_LENGTH) as u32;
for element in list {
bytes.extend_from_slice(&element_offset.to_le_bytes());
element_offset += element.len() as u32;
serialized_elements.extend_from_slice(element);
}
Makes sense!
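For context, the reviewer's suggestion can be made into a complete, runnable function. This is a sketch of SSZ-style variable-size list serialization: a table of little-endian u32 offsets followed by the concatenated elements. OFFSET_BYTE_LENGTH appears in the PR; the function name is illustrative.

```rust
// Sketch of the suggested offset-table serialization: each element's
// starting position (relative to the start of the output) is written
// first, then all element bytes are appended.
const OFFSET_BYTE_LENGTH: usize = 4;

fn serialize_list(list: &[Vec<u8>]) -> Vec<u8> {
    let mut bytes = Vec::new();
    let mut serialized_elements = Vec::new();
    // The first element starts right after the offset table.
    let mut element_offset = (list.len() * OFFSET_BYTE_LENGTH) as u32;
    for element in list {
        bytes.extend_from_slice(&element_offset.to_le_bytes());
        element_offset += element.len() as u32;
        serialized_elements.extend_from_slice(element);
    }
    bytes.extend_from_slice(&serialized_elements);
    bytes
}

fn main() {
    let list = vec![vec![1u8, 2], vec![3u8]];
    let bytes = serialize_list(&list);
    // Offset table: 8 and 10 as little-endian u32, then payload [1, 2, 3].
    assert_eq!(bytes, vec![8, 0, 0, 0, 10, 0, 0, 0, 1, 2, 3]);
}
```

Tracking the running offset directly, as suggested, removes both the enumerate and the off-by-one-prone i < list.len() - 1 branch.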
ethereum/core/src/types/utils.rs
loop {
    if cursor == first_offset {
        break;
    }
Could this be replaced by:
while cursor < first_offset {
...
}
ethereum/core/src/types/utils.rs
    })?
    .to_vec(),
);
} else if !bytes.is_empty() {
Could we negate this and front-load this condition?
if bytes.len() <= OFFSET_BYTE_LENGTH {
if bytes.is_empty() {
return Ok(list_bytes);
} else {
return Err(TypesError::UnderLength {
structure: "StorageProof".into(),
minimum: OFFSET_BYTE_LENGTH,
actual: bytes.len(),
});
}
}
Yes, and it should improve readability 😄
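Combining both review suggestions (the front-loaded guard and while in place of loop/break), the deserialization side could look like this minimal sketch. The function name and the String error type are illustrative; the PR uses TypesError.

```rust
// Sketch of parsing an offset-table list back into its elements, with
// the under-length guard front-loaded and a `while` loop over the
// offset table, as suggested in the review.
const OFFSET_BYTE_LENGTH: usize = 4;

fn deserialize_list(bytes: &[u8]) -> Result<Vec<Vec<u8>>, String> {
    // Front-loaded guard: empty input is an empty list, anything
    // shorter than one offset is malformed.
    if bytes.len() <= OFFSET_BYTE_LENGTH {
        if bytes.is_empty() {
            return Ok(vec![]);
        }
        return Err(format!("under length: {} bytes", bytes.len()));
    }
    // The first offset tells us where the offset table ends.
    let first_offset =
        u32::from_le_bytes(bytes[0..4].try_into().unwrap()) as usize;
    let mut offsets = Vec::new();
    let mut cursor = 0;
    while cursor < first_offset {
        let off = u32::from_le_bytes(
            bytes[cursor..cursor + 4].try_into().unwrap(),
        ) as usize;
        offsets.push(off);
        cursor += OFFSET_BYTE_LENGTH;
    }
    // Slice out each element between consecutive offsets.
    let mut list = Vec::new();
    for (i, &start) in offsets.iter().enumerate() {
        let end = offsets.get(i + 1).copied().unwrap_or(bytes.len());
        list.push(bytes[start..end].to_vec());
    }
    Ok(list)
}

fn main() {
    let bytes = vec![8, 0, 0, 0, 10, 0, 0, 0, 1, 2, 3];
    assert_eq!(
        deserialize_list(&bytes).unwrap(),
        vec![vec![1u8, 2], vec![3u8]]
    );
}
```

The while condition makes the loop's termination explicit at a glance, and the early returns keep the happy path unindented.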
}

#[test]
#[ignore = "This test is too slow for CI"]
Could we open an issue to run this on a nightly basis? /cc @samuelburnham
Created #117
Looks great on my end
LGTM
* feat: inclusion program
* refactor: output correct committee hash + finalized slot
* fix: fix tests after transition to finalized block height
* refactor: apply suggestion (Co-authored-by: wwared <[email protected]>)
* refactor: address reviews

Co-authored-by: wwared <[email protected]>
This PR creates a storage inclusion program for our Ethereum Light Client. It comes with a benchmark and tests.
Tests
The unit tests can be found under ethereum/light-client/src/proofs/inclusion.rs. A basic PR will trigger an execution test, while the tests for SNARK and STARK proving have to be triggered manually because of their computational cost.
Benchmark
The benchmark for this program can be found in ethereum/light-client/benches/inclusion.rs. This benchmark can run either the STARK or the SNARK proof based on the environment variable MODE=<STARK | SNARK>.
The current number for this inclusion proof is:
The benchmark can be run with:
MODE=SNARK SHARD_BATCH_SIZE=0 SHARD_SIZE=4194304 RUSTFLAGS="-C target-cpu=native -C opt-level=3" cargo +nightly-2024-05-31 bench --bench inclusion
Changelog
* StorageInclusionProver that handles everything around this program: execution, STARK/SNARK proving and verification
Related issues
Closes #70