Conversation
> ### Option 2: Speculative — `checkpoint_proposal`-Gated
>
> The next proposer starts building (locally, **not broadcasting**) as soon as the `checkpoint_proposal` is received. Blocks are only **broadcast after attestations arrive**.
---
Would the next proposer validate the previous proposer's checkpoint (verify the proofs, simulate, check for duplicate nullifiers, check fee juice balances)? If so, the time saved by this Option 2 is lessened vs Option 1, since the next proposer will be performing a lot of the time-consuming computation that will happen as part of the attestation process.
At the very least, the next proposer will need to personally simulate all txs of the previous checkpoint (even if not taking time to verify the proofs), so that they can begin simulating tree insertions for their own checkpoint.
Edit:

> B builds silently for ~2s before attestations arrive
Oh, maybe you're already assuming this, actually. "2s" suggests your only saving is the time for the committee to send attestations back to the previous proposer.
---
I'll combine the two sections; they're not different enough to warrant separate sections.
---
I'm making the assumption that the next proposer will be validating blocks as they arrive. They shouldn't need to verify all proofs etc.; the ideal situation is that the node is at the tip by the time the checkpoint proposal arrives (or at least just slightly behind).
docs/build-ahead/dd.md (outdated)
> ## L1 Submission Handoff
>
> The predecessor attempts L1 submission during their slot as normal. At the **slot boundary**, the predecessor stops trying. After the slot boundary, anyone can submit the predecessor's checkpoint to L1 — the current `ProposeLib.sol` already allows any address to submit. The incentive for the next proposer to submit is indirect: B needs A's checkpoint on L1 to make B's own blocks valid.
---
> the current `ProposeLib.sol` already allows any address to submit
>
> The incentive for the next proposer to submit is indirect
It could be that there isn't sufficient "indirect incentive":

- If the "predecessor" gets the block reward, how does the submitter (be it the next proposer or someone else) get reimbursed for their L1 ETH submission costs?
- If the ETH cost of the next proposer submitting both the previous checkpoint proposal and their own proposal[1] exceeds the block reward, there might not be sufficient incentive.
[1] A prudent proposer would want to submit their own proposal so as to reduce the likelihood of getting slashed, presumably.
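To make the cost/benefit concern concrete, here is a back-of-the-envelope sketch. All constants (gas per submission, gas price, block reward) are purely illustrative assumptions, not values from the design:

```typescript
// Illustrative assumptions only — none of these constants come from the design doc.
const GAS_PER_SUBMISSION = 500_000n;    // assumed L1 gas for one checkpoint submission
const GAS_PRICE_WEI = 30n * 10n ** 9n;  // assumed 30 gwei gas price
const BLOCK_REWARD_WEI = 10n ** 16n;    // assumed 0.01 ETH block reward

// A prudent next proposer submits both the predecessor's checkpoint and their own.
const submitterCostWei = 2n * GAS_PER_SUBMISSION * GAS_PRICE_WEI;

// If the predecessor keeps the block reward, the submitter's net is pure cost.
const netWithoutReimbursement = -submitterCostWei;

// Even if the reward were redirected to the late submitter, the net can still be negative.
const netWithReward = BLOCK_REWARD_WEI - submitterCostWei;

console.log({ submitterCostWei, netWithoutReimbursement, netWithReward });
```

With these made-up numbers the submitter is underwater either way, which is exactly the "insufficient indirect incentive" scenario described above.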
---
I agree here; I've moved this section to optional at the end. In the original discussions we were planning to either slash the person who missed the checkpoint submission and/or give most of their reward to the submitter (if the submission window is kept open for longer to maintain liveness).
docs/build-ahead/dd.md (outdated)
> ## L1 Submission Handoff
>
> The predecessor attempts L1 submission during their slot as normal. At the **slot boundary**, the predecessor stops trying. After the slot boundary, anyone can submit the predecessor's checkpoint to L1 — the current `ProposeLib.sol` already allows any address to submit. The incentive for the next proposer to submit is indirect: B needs A's checkpoint on L1 to make B's own blocks valid.
---
Presumably Rollup.sol is already well-designed enough to revert early in the case of a duplicated submission, to save the subsequent submitter(s) unnecessary gas?
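A minimal sketch of the early-revert behavior being asked about, with assumed names (not the actual `Rollup.sol` interface): a cheap monotonicity check on the checkpoint number rejects duplicate or stale submissions before any expensive verification runs, so later submitters lose only a small amount of gas.

```typescript
// Sketch with assumed names — not the real Rollup.sol / ProposeLib.sol code.
interface RollupState {
  lastCheckpointNumber: bigint; // highest checkpoint number already submitted
}

function propose(state: RollupState, checkpointNumber: bigint): void {
  // Cheapest check first: a duplicate (or stale) submission reverts before
  // attestation/proof verification burns the submitter's gas.
  if (checkpointNumber <= state.lastCheckpointNumber) {
    throw new Error("Rollup__CheckpointAlreadySubmitted"); // hypothetical error name
  }
  // ...expensive validation (attestations, proofs) would follow here...
  state.lastCheckpointNumber = checkpointNumber;
}
```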
docs/build-ahead/dd.md (outdated)
> The current `ProposeLib.sol` already allows **any address** to submit a checkpoint — there is no proposer validation on the submitter. This design preserves that property.
>
> The only L1 contract change required is **slot validation**: today the contract requires `slot == currentSlot`. For Build Ahead, the contract must also accept `slot == currentSlot - 1` after the slot boundary has elapsed. This allows a checkpoint from slot N to be submitted during slot N+1's window.
---
Something to discuss:
It could be that any protocol changes which aim at a post-alpha release should go through the (not yet finalised) AZIP process. An L1 change is such a "protocol change".
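For reference, the slot-validation relaxation described in the quoted text could be sketched as follows (assumed names, not the actual `ProposeLib.sol` logic):

```typescript
// Sketch: accept the current slot as today, plus the immediately previous
// slot — whose boundary has, by definition, already elapsed once currentSlot
// has advanced past it. Names are assumed, not from ProposeLib.sol.
function isSlotValid(proposalSlot: bigint, currentSlot: bigint): boolean {
  if (proposalSlot === currentSlot) return true;      // existing rule
  if (proposalSlot === currentSlot - 1n) return true; // Build Ahead late-submission window
  return false;
}
```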
> The next proposer builds from the **last confirmed state** (the most recent L1-confirmed tip). This is equivalent to the current system's behavior when a proposer is offline, but triggered earlier within the slot rather than waiting for the slot boundary.
>
> The exact timeout value is TBD and will be tuned via testing. It should be long enough to avoid false positives (slow proposers) but short enough to recover meaningful build time.
---
What happens, slashing-wise, in the following scenario:

1. The previous proposer proposes a checkpoint.
2. The next proposer begins building block 1 of their checkpoint. After 6s, they are done building that block. Q1: Do they broadcast this single block to their peers, or do they wait until the end of their checkpoint to broadcast it?

"Multiple blocks per checkpoint" doesn't make much sense if the latter, so presume they broadcast this block after 6s.

So the next proposer has now proposed a block before the previous checkpoint has been submitted to L1 (and, under Option 2 above, before the previous block has been attested to).
Now suppose:
- With Option 1 above:
- The prev checkpoint is not submitted to L1, or it doesn't reach L1 in time.
- With Option 2 above:
- The prev checkpoint is not attested; or it is not submitted to L1; or it doesn't reach L1 in time.
Q2: Can/should the next proposer be slashed for technically proposing a block that builds on a checkpoint that never got proposed to L1?
---
They should only broadcast once they've received attestations. You should not be slashed for building on a block which has attestations, even if it never makes it to L1 in a checkpoint.
docs/build-ahead/dd.md (outdated)
> 3. **Coordinated upgrade required.** The L1 contract changes are a hard fork. All validators must upgrade simultaneously — likely at an epoch boundary.
>
> 4. **Remaining dead zone.** Even with Build Ahead, 10-12s of dead zone remains per slot (the checkpoint finalization overhead: re-execution + assembly + P2P round-trip). This is inherent to the attestation-based protocol and cannot be eliminated without changing the trust model (e.g., optimistic attestation).
---
Can't the attestors re-execute block 1 of a checkpoint as soon as they see it -- 6s into the checkpoint -- rather than only commencing re-execution of the blocks of the checkpoint after the final block in the checkpoint has been proposed?
Wouldn't this then reduce the "re-execution" component of the dead zone to "re-execution of the final block of the checkpoint", which is ~10x faster?
---
Yeah, I'll delete these lines — they're not correct.
docs/build-ahead/dd.md (outdated)
> # Open Questions
>
> - **Reward economics:** Rewards are distributed at **proof arrival time**, not at checkpoint submission time. This means the submitter of the L1 checkpoint doesn't automatically get the reward — the proof associates the reward with the original proposer. The incentive for B to submit A's checkpoint is indirect (B needs A's checkpoint on L1 to make B's own blocks valid). The exact reward mechanism and incentive alignment needs formal analysis but is deferred from this design.
---
Nice. This covers one of my earlier comments. Perhaps this analysis shouldn't be deferred, since it's important to the feasibility of this design?
---
Yeah, it is. I'll reword to say that we must address this before proceeding.
---
Or we can work without L1 changes.
> KPIs: we are trying to reduce **user-perceived latency** (time from TX submission to "proposed chain" visibility) by up to 12s and increase **effective chain throughput** from 8 blocks per slot to 10 blocks per slot (+25% improvement).
>
> The solution: allow the next slot's proposer to begin building blocks as soon as the predecessor's checkpoint data appears on P2P — before it lands on L1. The next proposer builds and broadcasts blocks during what is currently dead time, and anyone can submit the predecessor's checkpoint to L1 (the next proposer is incentivized to do so because their own blocks depend on it). All blocks B builds during the overlap go into B's checkpoint for B's slot.
---
Is it not possible to stagger the slots entirely?
So, proposer A builds checkpoint 10 in slot 9 and publishes to L1 in slot 10. Meanwhile, proposer B builds checkpoint 11 in slot 10 (having seen checkpoint 10 on p2p towards the end of slot 9) and publishes in slot 11.
I guess everyone needs to agree to only attest to a checkpoint for slot N that was broadcast in slot N - 1. Otherwise the proposer for slot N could just keep building all the way through slot N. But that should be OK.
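The staggered schedule proposed here can be stated as a tiny sketch (slot arithmetic only; the names are mine, not the design's):

```typescript
// Sketch of the proposed stagger: the proposer of checkpoint N builds during
// slot N-1 (after seeing checkpoint N-1 on p2p) and publishes to L1 in slot N.
function staggeredSchedule(checkpoint: bigint): { buildSlot: bigint; submitSlot: bigint } {
  return {
    buildSlot: checkpoint - 1n, // build while the previous proposer submits
    submitSlot: checkpoint,     // publish to L1 one slot later
  };
}
// Implied committee rule: only attest to checkpoint N if it was broadcast
// during slot N-1, so a proposer cannot keep building through slot N.
```

For the example in the comment: proposer B's checkpoint 11 gets `buildSlot` 10 and `submitSlot` 11.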