
Review maximum instruction trace length #33863

Closed
febo opened this issue Oct 25, 2023 · 14 comments
Labels: community (Community contribution)

Comments

@febo

febo commented Oct 25, 2023

Problem

Before PR #27938, the maximum number of instructions was bounded only by the compute budget. Currently, there is an explicit (and arbitrary) limit of 64 instructions. This is limiting some use cases, since they hit the maximum number of instructions before running out of compute budget – which suggests that the limit of 64 might not be ideal.

Proposed Solution

Evaluate the impact of increasing the number of instructions (ComputeBudget::max_instruction_trace_length). It seems more desirable to fail when the compute budget limit is reached rather than when an instruction count is exceeded. While it is understandable that some maximum limit needs to exist, it is not clear whether this limit can be raised above 64 – even if raising it would simply mean transactions fail on the compute budget limit instead.
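
For context, a minimal self-contained sketch of the behaviour being discussed is below. Apart from ComputeBudget::max_instruction_trace_length, the name quoted above, all types, fields, and numbers here are placeholders rather than the actual runtime code: every executed instruction (top-level or CPI) is recorded in a per-transaction trace, and the transaction fails as soon as the trace would exceed the cap, no matter how many compute units remain.

```rust
// Illustrative sketch only; not the Solana Labs / Agave implementation.

#[derive(Debug)]
enum TraceError {
    MaxInstructionTraceLengthExceeded,
}

struct ComputeBudget {
    /// Maximum number of instruction trace entries (top-level + CPI).
    max_instruction_trace_length: usize,
}

struct InvokeContext {
    budget: ComputeBudget,
    /// One entry per executed instruction, kept for the whole transaction.
    instruction_trace: Vec<String>,
}

impl InvokeContext {
    fn record_instruction(&mut self, program_id: &str) -> Result<(), TraceError> {
        if self.instruction_trace.len() >= self.budget.max_instruction_trace_length {
            // Fails here even if plenty of compute units remain.
            return Err(TraceError::MaxInstructionTraceLengthExceeded);
        }
        self.instruction_trace.push(program_id.to_string());
        Ok(())
    }
}

fn main() {
    let mut ctx = InvokeContext {
        budget: ComputeBudget { max_instruction_trace_length: 64 },
        instruction_trace: Vec::new(),
    };
    for i in 0..70 {
        if let Err(e) = ctx.record_instruction("SomeProgram") {
            println!("instruction {i} rejected: {e:?}"); // rejected at i = 64
            break;
        }
    }
}
```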

febo added the community (Community contribution) label Oct 25, 2023
@ripatel-fd
Contributor

@febo In order to better understand the requirements of the impacted transactions, could you post an example trace exceeding 64 from before the activation of #27938?

@Lichtso
Contributor

Lichtso commented Oct 25, 2023

I updated the problem description in #27938 to describe the reason why the limit was introduced.
This limit is not about computational resources but peak memory allocation. Thus we can't just double the limit as that would double the maximum memory allocation in the program runtime.

@febo
Author

febo commented Oct 25, 2023

This limit is not about computational resources but peak memory allocation. Thus we can't just double the limit as that would double the maximum memory allocation in the program runtime.

Totally understand that there is a trade-off. But I wonder how the number 64 was determined. Having a "hard" limit of 64 gives you a predefined maximum memory allocation, but is it necessary to enforce that exact number of instructions? For example, if a "simple" instruction uses less than the upper-bound allocation per instruction (maximum_memory_allocation / 64), you could have more than 64 of these "simple" instructions and still be under the maximum memory allocation.

In other words, can a maximum memory allocation limit exist without having to cap the number of instructions? If a transaction exceeds this limit, then it will fail – similar to what happens when you exceed the compute budget.
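
As a thought experiment only (not a description of how the runtime works), the two policies can be sketched like this; the byte figures and names below are invented placeholders. A count cap rejects a trace purely on its length, while a memory-budget cap would reject it based on the total allocation its entries require, so many small instructions could coexist under the same peak-memory bound.

```rust
// Illustrative sketch: fixed instruction-count cap vs. a hypothetical
// memory-budget cap. All byte figures are made up for the example.

const MAX_TRACE_LEN: usize = 64;          // current-style count cap
const MAX_TRACE_BYTES: usize = 64 * 4096; // hypothetical memory budget

struct TracedInstruction {
    /// Bytes this entry actually needs (accounts, data, metadata, ...).
    recorded_bytes: usize,
}

/// Current behaviour: fail strictly on the number of instructions.
fn exceeds_count_cap(trace: &[TracedInstruction]) -> bool {
    trace.len() > MAX_TRACE_LEN
}

/// Alternative behaviour: fail on total memory, so many small instructions
/// can coexist as long as the overall allocation stays bounded.
fn exceeds_memory_cap(trace: &[TracedInstruction]) -> bool {
    trace.iter().map(|i| i.recorded_bytes).sum::<usize>() > MAX_TRACE_BYTES
}

fn main() {
    // 100 "simple" instructions of 1 KiB each: rejected by the count cap,
    // accepted by the hypothetical memory cap.
    let trace: Vec<TracedInstruction> =
        (0..100).map(|_| TracedInstruction { recorded_bytes: 1024 }).collect();
    println!("count cap exceeded:  {}", exceeds_count_cap(&trace));
    println!("memory cap exceeded: {}", exceeds_memory_cap(&trace));
}
```

In practice this would trade the fixed, up-front allocation that a hard count makes possible for per-transaction accounting, which is part of the trade-off pointed out above.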

@febo
Author

febo commented Oct 25, 2023

@ripatel-fd I don't have a first-hand example, unfortunately. But my understanding is that it was a program doing a swap between ~4 pNFTs. The discussion started on this thread: https://twitter.com/FoxyDev42/status/1717012044480696595

@ripatel-fd
Contributor

Totally understand that there is a trade-off. But I wonder how the number 64 was determined.

@febo The description in #27938 shows a sample of ca. 500 million user transactions. All of them had a trace of length 61 or less; 99.999% had a trace of length 49 or less.

@febo
Author

febo commented Oct 27, 2023

@ripatel-fd That is one way to define a limit, but it does not seem to look for an optimal value, which is the issue flagged here. Why 64 and not 65, 66, etc.? Also, 64 seems very close to the currently observed maximum of 61, so if there is an opportunity, this could be set to 70, 80, or ideally a maximum value that presents a good/optimal trade-off between maximizing the number of instructions and minimizing the impact on the maximum memory allocation requirement.

@ripatel-fd
Contributor

ripatel-fd commented Oct 28, 2023

@ripatel-fd That is one way to define a limit, but it does not seem to look for an optimal value, which is the issue flagged here. Why 64 and not 65, 66, etc.? Also, 64 seems very close to the currently observed maximum of 61, so if there is an opportunity, this could be set to 70, 80, or ideally a maximum value that presents a good/optimal trade-off between maximizing the number of instructions and minimizing the impact on the maximum memory allocation requirement.

@febo The occurrence of such transactions is extremely rare. The additional memory requirements induced by raising this limit are therefore not economically justified. Changing it also requires a protocol upgrade/hard fork. I can only speculate why exactly this limit was set, but if anything, it should be even lower. Regardless, this limit will be removed entirely in program runtime v2, as cross-program function calls will be rearchitected.

This discussion probably belongs here regardless: https://github.com/solana-foundation/solana-improvement-documents

@vvvvq

vvvvq commented Jan 20, 2024

Metaplex's NFT standards require programs minting NFTs to make large numbers of CPIs, which means you quickly run into this limitation when trying to mint more than one NFT (with full metadata, collection, etc.) in a transaction. Currently it takes 16 CPIs to mint an NFT with all of these things set up. This is a real use case, as the limit prevents contracts from minting multiple NFTs in a single transaction (you can do at most four if you're doing nothing else). NFTs without any Metaplex accounts take 6 CPIs to create.
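
(For reference, assuming each of the quoted CPIs occupies one slot in the instruction trace and the transaction contains little else: at 16 CPIs per full mint, 64 / 16 = 4 mints fit under the cap, matching the "at most four" figure above, while at 6 CPIs per bare mint roughly 64 / 6 ≈ 10 would fit.)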

While there are certainly reasons why this limit currently exists, there are definite use cases here and a clear UX benefit, since ideally these products could be offered in a single transaction.

@ripatel-fd
Contributor

Metaplex's NFT standards require programs minting NFTs to make large numbers of CPIs

This is a problem with Metaplex and should be fixed on their end.

NFTs without any Metaplex accounts take 6 CPIs to create.

This sounds like terrible program design. There is no reason for an account creation to require 6 CPIs.

@blockiosaurus

The CPIs aren't avoidable. They're small calls to standard spl-token manipulations.

@stegaBOB
Contributor

stegaBOB commented Mar 20, 2024

Calling create idempotent on the associated token program is 5 instructions on its own. We're running into issues around this as well. We can't just fix the associated token program, and for open-source programs that necessarily can't make breaking changes, it can't be fixed either. Could there not have been some sort of additional fee for instructions past 64? The devex overhead of having to statically determine how many separate transactions you now need to send is just painful to work with. Even if it only rarely breaks users, it makes building on Solana that much worse.
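
To make the devex point above concrete, here is a hypothetical client-side sketch (not an existing API) of the bookkeeping being described: estimate how many trace entries each logical action costs, then greedily pack actions into batches that stay under the cap, sending one transaction per batch. The per-action costs are the figures quoted in this thread and are assumptions, not measurements.

```rust
// Hypothetical client-side sketch of "statically determine how many
// transactions you need"; estimate_trace_len() values are assumptions.

const MAX_INSTRUCTION_TRACE_LENGTH: usize = 64;

#[derive(Clone, Copy, Debug)]
enum Action {
    CreateAtaIdempotent, // quoted above as ~5 trace entries on its own
    MintFullNft,         // quoted above as ~16 CPIs
}

fn estimate_trace_len(action: Action) -> usize {
    match action {
        Action::CreateAtaIdempotent => 5,
        Action::MintFullNft => 16,
    }
}

/// Greedily pack actions into batches whose estimated trace length stays
/// within the cap; each batch would become one transaction.
fn pack_into_transactions(actions: &[Action]) -> Vec<Vec<Action>> {
    let mut batches: Vec<Vec<Action>> = Vec::new();
    let mut current: Vec<Action> = Vec::new();
    let mut used = 0usize;
    for &action in actions {
        let cost = estimate_trace_len(action);
        if used + cost > MAX_INSTRUCTION_TRACE_LENGTH && !current.is_empty() {
            batches.push(std::mem::take(&mut current));
            used = 0;
        }
        current.push(action);
        used += cost;
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    // Create an ATA then mint five full NFTs: 5 + 5 * 16 = 85 trace entries.
    let mut actions = vec![Action::CreateAtaIdempotent];
    actions.extend(std::iter::repeat(Action::MintFullNft).take(5));
    let batches = pack_into_transactions(&actions);
    println!("this flow needs {} transactions", batches.len()); // prints 2
}
```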

@ripatel-fd
Contributor

@blockiosaurus @stegaBOB Can we move this to the forum? Since this is a consensus change, it would be more appropriate to discuss it there. All new consensus changes now have to go through proposals: https://forum.solana.com/ https://github.com/solana-foundation/solana-improvement-documents
The Solana Foundation monitors these SIMD channels and will bug core developers to implement proposals once they are accepted.

cc @t-nelson, I suggest we close this issue since it doesn't relate to a bug or feature request in the Solana Labs client (but rather the protocol itself)

@t-nelson
Contributor

thanks! not sure how it remained open post-anza-migration, tbh 🤔
