Implement IngesterAffinity broadcast #6152

nadav-govari wants to merge 7 commits into nadav/feature-node-based-routing from
Conversation
```rust
pub const INGESTER_PRIMARY_SHARDS_PREFIX: &str = "ingester.primary_shards:";

/// Key used in chitchat to broadcast the ingester affinity score and open shard counts.
pub const INGESTER_AFFINITY_PREFIX: &str = "ingester.affinity";
```
We already use the word "affinity" for searcher split affinity. I think we can find another name for this metric that we don't already use.

Yep, how about "ingester capacity"? As in, literally the capacity of the ingester to ingest new requests.

Renamed the task to BroadcastIngesterCapacity and all references from affinity to capacity.
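A hedged sketch of the renamed key; the exact key string is an assumption, only the affinity-to-capacity rename is stated above:

```rust
/// Key used in chitchat to broadcast the ingester capacity and open shard counts.
pub const INGESTER_CAPACITY_PREFIX: &str = "ingester.capacity";
```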
```rust
pub type OpenShardCounts = Vec<(IndexUid, SourceId, usize)>;
```
```rust
const WAL_CAPACITY_LOOKBACK_WINDOW_LEN: usize = 6;
```
This could use a comment. I assume you had a duration in mind for that window and then divided it by BROADCAST_INTERVAL_PERIOD to get to 6. What's that window duration?

Adding. It was meant to be 30 seconds.
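A hedged sketch of how the constant could be derived and documented; the 5-second BROADCAST_INTERVAL_PERIOD is an assumption chosen so that 30 s / 5 s = 6:

```rust
use std::time::Duration;

/// Interval at which the broadcast task ticks (assumed value, for illustration only).
const BROADCAST_INTERVAL_PERIOD: Duration = Duration::from_secs(5);

/// Total duration of the WAL capacity lookback window (30 seconds, per the reply above).
const WAL_CAPACITY_LOOKBACK_WINDOW: Duration = Duration::from_secs(30);

/// Number of readings kept in the window: 30 s / 5 s = 6.
const WAL_CAPACITY_LOOKBACK_WINDOW_LEN: usize =
    (WAL_CAPACITY_LOOKBACK_WINDOW.as_secs() / BROADCAST_INTERVAL_PERIOD.as_secs()) as usize;
```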
```rust
struct WalCapacityTimeSeries {
    wal_capacity: ByteSize,
    readings: VecDeque<ByteSize>,
}
```
There's a better implementation of a time series based on a rotating time window in broadcast. This is a common pattern, so move the original implementation into common, abstract it enough that it can be used for both use cases, and import and use it here.

LocalShardUpdate and BroadcastIngesterCapacity now both use this new RingBuffer, which lives in quickwit-common.
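A minimal sketch of what such a shared ring buffer could look like; the actual quickwit-common type and its API are assumptions:

```rust
use std::collections::VecDeque;

/// Fixed-capacity ring buffer that evicts the oldest element on overflow.
pub struct RingBuffer<T> {
    inner: VecDeque<T>,
    capacity: usize,
}

impl<T> RingBuffer<T> {
    pub fn with_capacity(capacity: usize) -> Self {
        Self {
            inner: VecDeque::with_capacity(capacity),
            capacity,
        }
    }

    /// Appends a reading, returning the evicted oldest reading when full.
    pub fn push(&mut self, value: T) -> Option<T> {
        let evicted = if self.inner.len() == self.capacity {
            self.inner.pop_front()
        } else {
            None
        };
        self.inner.push_back(value);
        evicted
    }

    /// Oldest reading still inside the window, if any.
    pub fn oldest(&self) -> Option<&T> {
        self.inner.front()
    }
}
```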
```rust
impl WalCapacityTimeSeries {
    fn new(wal_capacity: ByteSize) -> Self {
        // ...
```
Mem or disk? The name should say it.

Disk, modified.
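A hedged guess at the resulting rename, reusing the RingBuffer sketch above; the actual names after the change are not shown in this excerpt:

```rust
use bytesize::ByteSize;

/// Hypothetical rename making the disk-backed nature of the WAL capacity explicit.
struct WalDiskCapacityTimeSeries {
    disk_wal_capacity: ByteSize,
    readings: RingBuffer<ByteSize>,
}
```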
```rust
    return None;
}
let oldest = if self.readings.len() > WAL_CAPACITY_LOOKBACK_WINDOW_LEN {
    self.readings.pop_back().unwrap()
```
Use expect and state the invariant/condition that allows you to call expect safely:

```rust
.expect("window should not be empty")
.expect("window should have more than 1 measurement")
```

Noted, though this isn't relevant any longer with the RingBuffer changes.
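For reference, a hedged sketch of the suggested pattern applied to the original VecDeque code (now superseded by the RingBuffer change); the helper name pop_oldest is hypothetical:

```rust
use std::collections::VecDeque;

use bytesize::ByteSize;

const WAL_CAPACITY_LOOKBACK_WINDOW_LEN: usize = 6;

/// Hypothetical helper illustrating `expect` with a stated invariant.
fn pop_oldest(readings: &mut VecDeque<ByteSize>) -> Option<ByteSize> {
    if readings.len() > WAL_CAPACITY_LOOKBACK_WINDOW_LEN {
        // Invariant: len() > WAL_CAPACITY_LOOKBACK_WINDOW_LEN >= 1, so the
        // deque cannot be empty and the `expect` below can never fire.
        Some(readings.pop_back().expect("window should not be empty"))
    } else {
        None
    }
}
```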
```rust
    .weak_state
    .upgrade()
    .context("ingester state has been dropped")?;
```
Just lock the whole thing fully and make the code more readable.
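A hedged sketch of the suggested shape; `lock_fully`, `snapshot`, and the placeholder types are assumptions based on the discussion, not the merged code:

```rust
use std::sync::{Arc, Weak};

use anyhow::Context;

// Placeholder types standing in for the real ingester state (assumptions).
struct IngesterState;
struct FullLockGuard<'a>(&'a IngesterState);
struct Snapshot;

impl IngesterState {
    async fn lock_fully(&self) -> FullLockGuard<'_> {
        FullLockGuard(self)
    }
}

impl FullLockGuard<'_> {
    fn snapshot(&self) -> Snapshot {
        Snapshot
    }
}

async fn snapshot_state(weak_state: &Weak<IngesterState>) -> anyhow::Result<Snapshot> {
    // Upgrade the weak reference first; if it fails, the ingester is gone.
    let state: Arc<IngesterState> = weak_state
        .upgrade()
        .context("ingester state has been dropped")?;
    // One full lock instead of several fine-grained ones keeps the snapshot
    // code linear and easier to read.
    let state_guard = state.lock_fully().await;
    Ok(state_guard.snapshot())
}
```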
```rust
    Ok(snapshot) => snapshot,
    Err(error) => {
        error!("failed to snapshot ingester state: {error}");
        return;
```
The WAL can take multiple BROADCAST_INTERVAL_PERIOD intervals to load. The task should not stop while we're loading the WAL, only if the state is dropped.

Updated to the following cases (sketched below):

- State dropped: error, stop task
- Ingester not initialized: no-op
- Ingester ready: happy path
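A hedged sketch of that control flow; `IngesterState`, `is_ready`, `broadcast_capacity`, and the 5-second interval are placeholders, not the actual API:

```rust
use std::sync::Weak;
use std::time::Duration;

struct IngesterState;

impl IngesterState {
    /// Placeholder: false while the WAL is still being replayed.
    fn is_ready(&self) -> bool {
        true
    }
    /// Placeholder for the snapshot-and-broadcast happy path.
    async fn broadcast_capacity(&self) {}
}

async fn broadcast_loop(weak_state: Weak<IngesterState>) {
    let mut interval = tokio::time::interval(Duration::from_secs(5));
    loop {
        interval.tick().await;
        let Some(state) = weak_state.upgrade() else {
            // State dropped: the ingester is shutting down, stop the task.
            return;
        };
        if !state.is_ready() {
            // WAL still loading: no-op this tick, try again on the next one.
            continue;
        }
        state.broadcast_capacity().await;
    }
}
```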
```rust
let value = serde_json::to_string(affinity)
    .expect("`IngesterAffinity` should be JSON serializable");
self.cluster
    .set_self_key_value(INGESTER_AFFINITY_PREFIX, value)
```
You can't broadcast that over a single key because the list of open shard counts can be very long.
-> one key per index/source

(The value length is an issue because chitchat uses UDP, and every update must fit in a single datagram (MTU).)

Made it similar to LocalShardsUpdate: one key per index/source.
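A hedged sketch of the one-key-per-index/source broadcast; the key format, capacity prefix, and import paths are assumptions, while `set_self_key_value` appears in the excerpt above:

```rust
use quickwit_cluster::Cluster;
use quickwit_proto::types::{IndexUid, SourceId};

/// Hypothetical prefix; the actual key format after the change is not shown.
const INGESTER_CAPACITY_PREFIX: &str = "ingester.capacity";

async fn broadcast_open_shard_counts(
    cluster: &Cluster,
    open_shard_counts: &[(IndexUid, SourceId, usize)],
) {
    for (index_uid, source_id, count) in open_shard_counts {
        // One key per (index, source) keeps each chitchat key-value pair
        // small enough to fit in a single UDP datagram.
        let key = format!("{INGESTER_CAPACITY_PREFIX}:{index_uid}:{source_id}");
        cluster.set_self_key_value(key, count.to_string()).await;
    }
}
```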
```rust
pub fn get_open_shard_counts(&self) -> Vec<(IndexUid, SourceId, usize)> {
    self.shards
        .values()
        .filter(|shard| shard.is_open())
```

Suggested change:

```diff
-        .filter(|shard| shard.is_open())
+        .filter(|shard| shard.is_advertisable && !shard.is_replica() && shard.is_open())
```
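A hedged sketch of the full method with the suggested filter applied; the per-(index, source) aggregation and the shard field names are assumptions, since only the first lines are shown:

```rust
use std::collections::HashMap;

pub fn get_open_shard_counts(&self) -> Vec<(IndexUid, SourceId, usize)> {
    let mut counts: HashMap<(IndexUid, SourceId), usize> = HashMap::new();
    for shard in self.shards.values() {
        // Per the suggestion: only advertisable, non-replica, open shards count.
        if shard.is_advertisable && !shard.is_replica() && shard.is_open() {
            *counts
                .entry((shard.index_uid.clone(), shard.source_id.clone()))
                .or_default() += 1;
        }
    }
    counts
        .into_iter()
        .map(|((index_uid, source_id), count)| (index_uid, source_id, count))
        .collect()
}
```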
Background
Main idea: https://docs.google.com/document/d/1XUpBdMFnuX8d23erK-XwQkomRgbeRTJ0TJtve7RGW3k/edit?tab=t.0.
All work on this feature will be merged PR by PR into the base branch nadav/feature-node-based-routing, which will eventually be merged into main once it's fully ready.
PR Description
Creates a new broadcast to prepare for node-based routing. The idea is described in more depth in the design document linked above.
The primary thinking here is:
Ingesters will move away from keeping shard-level data and instead keep this node-level data for routing requests. Routing tables will become node-based and use the data from these broadcasts to keep themselves up to date.