
Allocate builtin instructions budget with its actual cost #170

Open. Wants to merge 3 commits into base: main.

Conversation

@tao-stones (Contributor)

No description provided.

@buffalojoec (Contributor) left a comment

The motivation makes sense to me, but does this proposal take into account builtin programs that CPI to other builtin programs? The address lookup table program, for example, consumes default CUs of 750, but a few instructions (create & extend) will CPI to the system program.

It might be difficult and/or brittle to hard-code all default CUs for builtin programs including via CPI. If we were going to go this route, I'd advocate for moving away from blanket CU usage across all instructions, and instead configuring a CU value per-instruction, which might make this benchmarking process a little safer.

@tao-stones (Contributor, Author)

> The motivation makes sense to me, but does this proposal take into account builtin programs that CPI to other builtin programs? The address lookup table program, for example, consumes default CUs of 750, but a few instructions (create & extend) will CPI to the system program.
>
> It might be difficult and/or brittle to hard-code all default CUs for builtin programs including via CPI. If we were going to go this route, I'd advocate for moving away from blanket CU usage across all instructions, and instead configuring a CU value per-instruction, which might make this benchmarking process a little safer.

Excellent point. This proposal calls for: "A builtin instruction that might call other instructions (CPI) would fail without explicitly requesting more CUs." (in the Detailed Design section, Example 2).

The budget was moved from "per instruction" to "per transaction"; it might be a good idea to revisit that. Another possible option for handling "builtin programs that CPI" is the second one in Alternatives Considered, but asking users to explicitly request a CU limit seems the most straightforward for now.

@tao-stones force-pushed the builtin-instruction-cost-and-budget branch from 0fcf4fe to c961aed on August 28, 2024 14:59
@tao-stones force-pushed the builtin-instruction-cost-and-budget branch from c961aed to 2406ddb on August 28, 2024 16:43
proposals/simd-0170-builtin-instruction-cost-and-budget.md (outdated review thread):

> complexity and could introduce additional corner cases, potentially leading to
> more issues.
>
> - Another alternative would be to treat builtin instructions the same as other
(Contributor)

There are already plans underway to move the builtins to BPF.

At that point it'd effectively be the same as this alternative, since builtins would be treated the same as other user programs, with a possible exception for compute-budget, which is a configuration rather than an instruction (imo).

Thus we will need some way to address the over-reserving described here. There have been discussions on ramping down the default ix CUs over time; what is the issue with taking such an approach now, if it is eventually the path we have to take?

@tao-stones (Author)

Not sure moving builtins to BPF solves the particular issue (mentioned above) this proposal targets in the immediate term, because we'd still need to allocate a "proper" budget for each builtin, or for any instruction in that scenario.

But a lower default CU/ix would help: a lowered per-instruction default would ease over-reserving. What the new "default" should be is arbitrary; in the extreme case, if it were lowered to 0, we'd be in the ideal world where every transaction has to set (in other words, configure) its CU limit.

(Contributor)

Right, but I think there's simplicity in saying "all programs are treated the same" vs "builtins are treated differently AND these specific builtin instructions are treated even more differently than that".

@tao-stones (Author)

Absolutely. To get to "all programs are treated the same" in terms of allocating VM budget, we need to shift to a 100% request-based mode. More tools need to be built to help devs request CU limits easily and accurately, and the default CU/ix should be lowered while fewer than 100% of transactions request a CU limit. This proposal serves as a stopgap before all that lands.

> stage and SVM would simplify the code logic and make reasoning more
> straightforward.
>
> ## Alternatives Considered
(Contributor)

If we were to take this approach of using static costs per builtin program, an alternative is to just fix the implementation of CU usage in execution.

If our cost-model says that builtin program Z always uses X CUs, then that should be what is actually used by the execution, regardless of what it does internally, including CPI.

@tao-stones (Author)

> If we were to take this approach of using static costs per builtin program, an alternative is to just fix the implementation of CU usage in execution.

Acknowledging that several alternatives exist, this proposal focuses on the specific issue: "If the VM must consume builtin.default_compute_units, then it should allocate exactly that amount for the builtin, rather than a fixed 200,000 units." This approach addresses block overpacking in the short term, while longer-term solutions are still being explored.

> If our cost-model says that builtin program Z always uses X CUs,

To clarify, it's the cost model being told that builtin program Z always uses X CUs.

> then that should be what is actually used by the execution, regardless of what it does internally, including CPI.

Yeah, that'd be ideal. I'm not 100% sure how and when this will happen, though. Happy to discuss more.

(Contributor)

(agave implementation specific)
I believe we'd need to modify the declare_process_instruction macro so that we can somehow have a call that doesn't consume CUs and one that does, either through some flag or a separate call.
Calls from inside our builtins would then need to use the variant that does not consume.
We'd also need to make sure that actual user code cannot call into the non-consuming version.

(Contributor)

@apfitzge you can just provide zero and it becomes a no-op.
https://github.com/anza-xyz/agave/blob/a72f981370c3f566fc1becf024f3178da041547a/program-runtime/src/invoke_context.rs#L71-L76

A good standard would be to follow the ZK proof program's instructions (shared previously by @tao-stones). We would just need to sort out how to represent the constants to get some kind of builtin-program dictionary of CUs per instruction.

(Contributor)

But then I think it'd always consume 0, since it is a macro defining a function, not a function itself.

(Contributor)

> But then I think it'd always consume 0, since it is a macro defining a function, not a function itself.

Whoops, sorry, I don't think I was specific enough to describe what I meant. Yeah, the generated code consumes zero if you do that, but the salient bit from the ZK program I was intending to share was where each instruction does a manual consumption.

https://github.com/anza-xyz/agave/blob/e207c6e0eaf8e1657fbfaff07da05ca6a7928349/programs/zk-token-proof/src/lib.rs#L205-L208

Globally, default CU consumption is zero, but each instruction consumes its own individual value.
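A minimal sketch of that pattern (the context type, method, and CU value here are simplified stand-ins, not agave's actual API):

```rust
// Illustrative sketch of "global default is zero, each instruction
// consumes its own individual value". InvokeContext and consume_checked
// are simplified stand-ins for the real runtime types.

struct InvokeContext {
    remaining: u64,
}

impl InvokeContext {
    // Checked consumption: errors instead of underflowing the budget.
    fn consume_checked(&mut self, amount: u64) -> Result<(), String> {
        if amount > self.remaining {
            return Err("compute budget exceeded".to_string());
        }
        self.remaining -= amount;
        Ok(())
    }
}

// Hypothetical per-instruction cost constant.
const VERIFY_PROOF_COMPUTE_UNITS: u64 = 6_000;

fn process_verify_proof(ctx: &mut InvokeContext) -> Result<(), String> {
    // The program-level default consumption is zero; the instruction
    // charges its own cost up front.
    ctx.consume_checked(VERIFY_PROOF_COMPUTE_UNITS)?;
    // ... proof verification logic would go here ...
    Ok(())
}
```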

@ptaffet-jump (Contributor)

Overall looks pretty good. I agree with Andrew's comment:

> If our cost-model says that builtin program Z always uses X CUs, then that should be what is actually used by the execution, regardless of what it does internally, including CPI.

As for the UX strangeness that this causes, I'd propose one of the following two:

  1. Expose a CPI-inclusive number to the cost tracker and a CPI-exclusive number to the VM. This may have to be per-instruction then.
  2. Make CPIs from native programs not consume CUs.

I'd be okay with either, though there's some complexity involved with per-instruction costs (suppose you distinguish instructions by their first byte; then what if the instruction data is empty or not one of the known bytes? Do you throw the transaction out?).
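Option 1 could be sketched as a per-instruction cost entry carrying both numbers (field names and CU values below are hypothetical):

```rust
// Hypothetical per-instruction cost entry for option 1: the cost tracker
// reserves the CPI-inclusive number, while the VM meters only the
// CPI-exclusive number for the builtin's own frame.

struct BuiltinIxCost {
    cpi_inclusive_cus: u64, // reserved in the block by the cost tracker
    cpi_exclusive_cus: u64, // consumed by the VM for this frame alone
}

// Illustrative values: a lookup-table "create" costing 750 CUs itself,
// plus three system-program CPIs at 150 CUs each.
const ALT_CREATE_COST: BuiltinIxCost = BuiltinIxCost {
    cpi_inclusive_cus: 750 + 3 * 150,
    cpi_exclusive_cus: 750,
};
```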

@apfitzge (Contributor)

apfitzge commented Sep 4, 2024

Of @ptaffet-jump's two suggestions, I don't see how the per-instruction one would feasibly work well, especially given the last case he mentioned: what if the ix variant is invalid?

For option 2, if @tao-stones agrees it is a reasonable approach, I think we'll need to bring in someone from the agave VM team to comment on how difficult it would be, and also how unsafe: we definitely do not want to create a bug where user txs can CPI into native programs for free. Only CPIs from native programs should be free.

@tao-stones (Contributor, Author)

Thanks for all the helpful inputs. It looks like the primary issue is handling builtins that make CPIs without introducing a confusing or inconsistent user experience. The potential solutions are converging too: @ptaffet-jump's option 1 is similar to @buffalojoec's pseudo-code, and his option 2 is in line with @apfitzge's suggestion.

I am inclined toward the first option, which avoids introducing special cases into the VM and instead focuses on making builtin programs more transparent about their compute requirements, with most of the logic implemented within the builtin-default-costs crate:

  1. Changes to builtin programs:
  • Expose DEFAULT_COMPUTE_UNIT per instruction (instead of per program, as currently), similar to the ZK proof program mentioned above.
  • Expose a CPI instruction array. Additionally, builtin programs should expose an array of the instructions they invoke via CPI. For example, the create_address_lookup_table instruction, which makes three CPIs to the system program, would expose [system_ix, system_ix, system_ix]. Others might expose an empty array.
  • This makes builtin programs more transparent about what they do.
  2. Changes to the builtin-default-costs crate:
  • Dictionary of instruction costs and CPIs: maintain a static dictionary with the structure <instruction, {ix_default_compute_units, cpi_list}> to store the default compute units and associated CPI instructions for each builtin instruction.
  • Helper function for CU calculation that computes the appropriate number of compute units to allocate per instruction based on the dictionary data; pseudo:

```rust
fn get_cu_for_allocation(ix: &Instruction) -> Result<u64> {
    let entry = get_dictionary_entry(ix)?;
    let mut allocation_size = entry.value.ix_default_compute_units;
    for cpi_ix in entry.value.cpi_list {
        allocation_size += get_cu_for_allocation(cpi_ix)?;
    }
    Ok(allocation_size)
}
```

  3. Call-site implementation:
  • Instruction type lookup: at the call site, such as within the compute budget or cost model, determine the type of builtin instruction. If the instruction type cannot be determined, return Err(invalid_instruction_data_error).
  • CU allocation calculation: if the instruction type is valid, use the function provided by the builtin-default-costs crate to calculate the correct amount of compute units to allocate for that instruction.
  • A transaction's program_id_index is checked with the above process only once; the result is cached for reuse.
  4. No changes to the VM.
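To make points 1 and 2 concrete, here is a self-contained sketch of the dictionary plus the recursive helper; the instruction names and costs are toy values, not the real builtins':

```rust
use std::collections::HashMap;

// Toy sketch of the proposed builtin-default-costs dictionary and the
// recursive allocation helper. Instruction ids, costs, and CPI lists
// are illustrative only.

struct CostEntry {
    ix_default_compute_units: u64,
    cpi_list: Vec<&'static str>, // instructions this one invokes via CPI
}

fn get_cu_for_allocation(
    dict: &HashMap<&'static str, CostEntry>,
    ix: &str,
) -> Result<u64, String> {
    let entry = dict
        .get(ix)
        .ok_or_else(|| format!("unknown builtin instruction: {ix}"))?;
    let mut allocation = entry.ix_default_compute_units;
    // Add the allocation for every instruction invoked via CPI, recursively.
    for cpi_ix in &entry.cpi_list {
        allocation += get_cu_for_allocation(dict, cpi_ix)?;
    }
    Ok(allocation)
}

fn toy_dictionary() -> HashMap<&'static str, CostEntry> {
    let mut dict = HashMap::new();
    dict.insert(
        "system_transfer",
        CostEntry { ix_default_compute_units: 150, cpi_list: vec![] },
    );
    // e.g. a lookup-table create that CPIs the system program three times
    dict.insert(
        "alt_create",
        CostEntry {
            ix_default_compute_units: 750,
            cpi_list: vec!["system_transfer", "system_transfer", "system_transfer"],
        },
    );
    dict
}
```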

wdyt?

@apfitzge (Contributor)

apfitzge commented Sep 4, 2024

All sounds reasonable except the error handling here:

> Instruction Type Lookup: at the call-site, such as within the compute budget or cost model, determine the type of builtin instruction. If the instruction type cannot be determined, return Err(invalid_instruction_data_error).

Dropping these on invalid ix data would be an attack vector. It seems they should just have some "fallback" program cost, which represents the cost to deserialize and match on the ix data enum variant, and then let the tx error out at runtime.
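The fallback idea could look roughly like this; the discriminator bytes and CU values are made up for illustration:

```rust
// Illustrative fallback cost: the cost of deserializing/matching on the
// instruction data enum variant. Real builtins use 4-byte discriminators;
// a single byte is used here just to keep the toy example short.
const FALLBACK_COMPUTE_UNITS: u64 = 150;

fn cu_for_instruction_data(data: &[u8]) -> u64 {
    match data.first() {
        Some(&0) => 750,   // known variant A (illustrative cost)
        Some(&1) => 1_200, // known variant B (illustrative cost)
        // Empty or unknown data still gets a cost, so the transaction is
        // packed and fails at runtime instead of being dropped up front.
        _ => FALLBACK_COMPUTE_UNITS,
    }
}
```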

@tao-stones (Contributor, Author)

> Dropping these on invalid ix data would be an attack vector.

It just returns an error at an early stage of the processing pipeline (before execution, as compute-budget currently does); leaders can decide to pack these transactions and charge the fee, or drop them. If leaders can't do this yet, we can probably keep the current "per program default cost" as a fallback.

@apfitzge (Contributor)

apfitzge commented Sep 4, 2024

> > Dropping these on invalid ix data would be an attack vector.
>
> It just returns an error at an early stage of the processing pipeline (before execution, as compute-budget currently does); leaders can decide to pack these transactions and charge the fee, or drop them. If leaders can't do this yet, we can probably keep the current "per program default cost" as a fallback.

We cannot do that right now. The code would be the same as what we've already implemented for #82; it is effectively done on our side, but that SIMD has not been agreed upon yet.

@buffalojoec (Contributor)

> I am inclined toward the first option, which avoids introducing special cases into the VM and instead focuses on making builtin programs more transparent about their compute requirements

Yeah, I think this is the right motivation and approach IMO.

> Changes to builtin programs:
> ...
> Expose CPI instruction Array. Additionally, builtin programs should expose an array of instructions they invoke via CPIs. For example, create_address_lookup_table instruction that makes three CPIs to the system program would expose [system_ix, system_ix, system_ix]. Others might expose an empty array.
> ...

Unfortunately, this isn't as straightforward to represent in an array like this. Programs may not always CPI each time they're invoked. Consider an instruction that may CPI once, may CPI twice, or may not CPI at all, depending on some account state or input data.

For this reason, I think we should gear the pattern(s) toward using the maximum CUs possible by an instruction. In the above example, the instruction would define MAX_CUS_WITH_CPI (or whatever) as the worst case, i.e. 2 CPIs.

> I'd be okay with either, though there's some complexity involved with per-instruction costs (suppose you distinguish instructions by their first byte; then what if the instruction data is empty or not one of the known bytes? Do you throw the transaction out?).

We also probably need to enforce standards for builtin instructions. Right now, they're all 4-byte (u32) instruction discriminators. The CU definitions should be required to map to these discriminators. On the Agave side, we can just make this a trait for builtin instructions.
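One way such a trait could look; the trait name, method, and CU values are hypothetical, not an existing agave interface:

```rust
// Hypothetical trait tying worst-case CU declarations to the 4-byte (u32)
// instruction discriminator of a builtin program.
trait BuiltinInstructionCosts {
    // Worst-case CUs for a discriminator (CPIs included), or None if the
    // discriminator is unknown.
    fn max_compute_units(discriminator: u32) -> Option<u64>;
}

struct AddressLookupTableCosts;

impl BuiltinInstructionCosts for AddressLookupTableCosts {
    fn max_compute_units(discriminator: u32) -> Option<u64> {
        // Discriminator-to-cost mapping; values are illustrative only.
        match discriminator {
            0 => Some(750 + 3 * 150), // create: worst case includes 3 system CPIs
            2 => Some(750 + 150),     // extend: worst case includes 1 system CPI
            _ => None,
        }
    }
}

// Discriminators are assumed to be the first 4 little-endian bytes of
// the instruction data.
fn discriminator(data: &[u8]) -> Option<u32> {
    data.get(..4)
        .map(|b| u32::from_le_bytes([b[0], b[1], b[2], b[3]]))
}
```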

A few more suggestions from my side for contributors' QoL:

  • This isn't something we do now, but we should explicitly forbid CPIs from builtin programs to BPF programs. Should be posted somewhere obvious and maybe even included in this SIMD (if relevant)?
  • I suggest some test suite requirement for all builtins that tests their CU declarations against the proposed runtime change. This way, if someone defines CUs wrong, the runtime should error on budget exceeded in their test.

What do you guys think?

@tao-stones (Contributor, Author)

> Unfortunately, this isn't as straightforward to represent in an array like this. Programs may not always CPI each time they're invoked. Consider an instruction that may CPI once, may CPI twice, or may not CPI at all, depending on some account state or input data.
>
> For this reason, I think we should gear the pattern(s) toward using the maximum CUs possible by an instruction. In the above example, the instruction would define MAX_CUS_WITH_CPI (or whatever) as the worst case, i.e. 2 CPIs.

Thanks for bringing this up. I was assuming builtin instructions have a rather fixed CPI schema; I wasn't aware there are instances that vary dynamically based on account state. I only know that "create lookup table" always CPIs "system" 3 times, and "extend lookup table" CPIs "system" once. Most likely I'm not up to date with builtins; if there are more dynamic scenarios, then MAX_CUS_WITH_CPI is a good idea to me.

> A few more suggestions from my side for contributors' QoL:

A great list of TODOs! To add to it:

  • Forbid builtins from nested CPIs (a builtin CPIs to another builtin that CPIs to yet another builtin); to extend that, is it possible to limit builtins to only static, top-level CPIs?
  • Is it possible to add static assertions, or tests, to ensure that a newly created builtin program or instruction complies with all these builtin rules and is included in the "dictionary"?
    (Maybe this all belongs in a separate SIMD.)

@buffalojoec (Contributor)

> Thanks for bringing this up. I was assuming builtin instructions have a rather fixed CPI schema; I wasn't aware there are instances that vary dynamically based on account state. I only know that "create lookup table" always CPIs "system" 3 times, and "extend lookup table" CPIs "system" once.

We could go through and profile all of the processors to make sure they're fixed, but we'd also have to impose this constraint on any new instructions/processors. Considering your last bullet (below), it might also be harder to programmatically enforce.

> Is it possible to add static assertions, or tests, to ensure that a newly created builtin program or instruction complies with all these builtin rules and is included in the "dictionary"?
> (Maybe this all belongs in a separate SIMD.)

Yeah, I think some kind of interface (trait for Agave) for builtins and a testing standard (check instruction stack height for example) can accomplish this.

IMO we probably don't need a separate SIMD, we can introduce the constraints in this one, and mention that all builtins are already compliant as-is. Since the introduction of these constraints doesn't inherently change anything about the current protocol, I lean toward not requiring they be proposed in a new SIMD.

@tao-stones (Contributor, Author)

> We could go through and profile all of the processors to make sure they're fixed, but we'd also have to impose this constraint on any new instructions/processors.

Yeah, I take it back; such a constraint is unnecessarily restrictive. Making builtin programs expose worst-case CUs, as you suggested, is better.

If there are no other objections, I'll include the updated option one in the proposal.

@tao-stones (Contributor, Author)

> IMO we probably don't need a separate SIMD, we can introduce the constraints in this one, and mention that all builtins are already compliant as-is. Since the introduction of these constraints doesn't inherently change anything about the current protocol, I lean toward not requiring they be proposed in a new SIMD.

For the sake of documentation, the constraints all current and future builtins should comply with, and the testing standard they must follow, deserve their own SIMD. That would work better for multiple clients too. (Plus I am not the right person to draft these rules for builtins 😄)

@tao-stones force-pushed the builtin-instruction-cost-and-budget branch from ce6fd2f to 22594b6 on September 6, 2024 23:51