Conversation
I feel I asked for @DelSkayn's take on that PR, or is that a new approach? He had some valid concerns around fair scheduling with the refactor, but I can't find the comment. Actix had the same issue in the past: it was always polling in the same order instead of scheduling fairly like Tokio, so you would sometimes see pretty bad P99 latencies.
```rust
// If we polled all the futures at least once,
// or more than one future immediately queued itself after being polled,
// yield back to the parent scheduler.
if yielded > 2 || iteration > self.len.get() {
```
That previous implementation might not be ideal fair scheduling, but there was at least an attempt to poll multiple futures, not just the top of the queue.
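The budget logic in the quoted hunk can be modelled outside the executor. This is an illustrative sketch, not the PR's code: `drive_round` and the closure-based task model are invented names, and a task here returns `true` when it immediately re-queues itself after being polled.

```rust
// Model of the old yield heuristic: poll tasks round-robin, counting
// how many immediately re-queue themselves, and hand control back to
// the parent once either every task has been polled at least once or
// too many tasks re-queued themselves.
fn drive_round(tasks: &mut [Box<dyn FnMut() -> bool>]) -> usize {
    let len = tasks.len();
    let mut yielded = 0; // tasks that re-queued themselves right away
    let mut iteration = 0;
    loop {
        let re_queued = (tasks[iteration % len])();
        iteration += 1;
        if re_queued {
            yielded += 1;
        }
        // If we polled all the futures at least once, or more than one
        // future immediately queued itself after being polled, yield
        // back to the parent scheduler.
        if yielded > 2 || iteration > len {
            break;
        }
    }
    iteration // number of polls performed this round
}

fn main() {
    // Three quiet tasks that never re-queue: one full round, plus the
    // poll on which the iteration budget trips.
    let mut quiet: Vec<Box<dyn FnMut() -> bool>> =
        (0..3).map(|_| Box::new(|| false) as Box<dyn FnMut() -> bool>).collect();
    assert_eq!(drive_round(&mut quiet), 4);

    // Three busy tasks that always re-queue: the yield budget trips
    // after three polls, so control returns to the parent early.
    let mut busy: Vec<Box<dyn FnMut() -> bool>> =
        (0..3).map(|_| Box::new(|| true) as Box<dyn FnMut() -> bool>).collect();
    assert_eq!(drive_round(&mut busy), 3);
}
```

The point of the heuristic is that a batch of hot futures cannot starve the parent scheduler: either the round completes or the busy-task budget forces a yield.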
```rust
let mut progress = false;

while let Some(p) = self.ready.with(|r| r.pop()) {
```
This new system only drives one future at a time, unless I don't understand it. I would like to see a benchmark with 10 random futures where 2-3 of them are very slow (10s+ of pending time), to see how the two versions perform.
In a single call to poll(), it drains the entire ready queue and polls every task that has been woken; it's not limited to one future per call.
In the scenario you described, the slow futures would return Pending and sit idle until their wakers fire, while the other futures continue to make progress normally. The slow futures don't consume any CPU while pending; they're just stored in their slots waiting to be re-queued.
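The claimed behaviour can be sketched with a toy model. This is not the PR's code: `Model`, `poll`, and `wake` are stand-in names, and the stored futures are reduced to plain closures, but the queue discipline is the one described above.

```rust
use std::collections::VecDeque;

// Toy model of the new scheduler's poll: one call drains the entire
// ready queue, polling every task that has been woken. A task that
// returns Pending simply stays parked in its slot (here: it is not
// re-queued) until its waker pushes it back onto the queue.
struct Model {
    ready: VecDeque<usize>,       // indices of woken tasks
    tasks: Vec<Box<dyn FnMut()>>, // stand-ins for the stored futures
}

impl Model {
    /// Drain the whole ready queue, not just its head.
    fn poll(&mut self) -> usize {
        let mut polled = 0;
        while let Some(i) = self.ready.pop_front() {
            (self.tasks[i])();
            polled += 1;
        }
        polled
    }

    /// What a task's waker does: re-queue that task for the next poll.
    fn wake(&mut self, i: usize) {
        self.ready.push_back(i);
    }
}

fn main() {
    let tasks: Vec<Box<dyn FnMut()>> =
        (0..3).map(|_| Box::new(|| ()) as Box<dyn FnMut()>).collect();
    let ready: VecDeque<usize> = [0usize, 1, 2].into_iter().collect();
    let mut m = Model { ready, tasks };
    assert_eq!(m.poll(), 3); // a single poll() drives all woken tasks
    assert_eq!(m.poll(), 0); // parked tasks consume no CPU
    m.wake(1);               // a slow future's waker finally fires
    assert_eq!(m.poll(), 1); // only that task is re-polled
}
```

In this model a slow future costs nothing while pending, which is the crux of the reply above: the benchmark scenario would mostly measure how quickly the fast futures cycle through the ready queue.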
```rust
unsafe fn waker_wake_by_ref(p: *const ()) {
    let slot = &*(p as *const Slot);
    if get_flag(&slot.active) && try_set_true(&slot.queued) {
        let queue = &*slot.queue;
        queue.ready.with(|r| r.push(p as *mut Slot));
        if let Some(w) = queue.waker.take_clone() {
            w.wake_by_ref();
        }
    }
}
```
This is unsound when parallel is disabled. Waker is Send + Sync, and therefore all of its functions must be callable from a different thread; read the documentation for RawWakerVTable for more info.
The current implementation uses an UnsafeCell without synchronization when parallel is disabled, and can therefore cause data races.
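One way to satisfy that contract is to keep the per-slot flag in an atomic regardless of the parallel setting. The sketch below is not the PR's code: `QUEUED`, `make_waker`, and the ignored data pointer are stand-ins. It only demonstrates that a hand-rolled `RawWakerVTable` whose functions touch nothing but synchronized state can legally be invoked from another thread.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{RawWaker, RawWakerVTable, Waker};
use std::thread;

// Waker is Send + Sync: every vtable function may be called from any
// thread, so the state it touches needs real synchronization. Here the
// "queued" flag is an AtomicBool rather than a bool in an UnsafeCell.
static QUEUED: AtomicBool = AtomicBool::new(false);

unsafe fn clone_raw(p: *const ()) -> RawWaker {
    RawWaker::new(p, &VTABLE)
}
unsafe fn wake(p: *const ()) {
    unsafe { wake_by_ref(p) }
}
unsafe fn wake_by_ref(_p: *const ()) {
    // compare_exchange gives the same enqueue-only-once semantics as a
    // try_set_true helper, but is sound when raced from other threads.
    let _ = QUEUED.compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire);
}
unsafe fn drop_raw(_p: *const ()) {}

static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, wake, wake_by_ref, drop_raw);

fn make_waker() -> Waker {
    // SAFETY: the vtable functions ignore the data pointer and only
    // touch a static atomic, so any pointer value upholds the contract.
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = make_waker();
    // Waking from a different thread is legal because Waker: Send + Sync.
    thread::spawn(move || waker.wake()).join().unwrap();
    assert!(QUEUED.load(Ordering::Acquire));
}
```

An uncontended atomic compare-exchange is cheap, so making the flag atomic even in the non-parallel build is usually a small price for soundness.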
What is your goal for writing a new schedular? The current schedular is quite capable, being both
@DelSkayn Last time we talked it was more performant by a factor, so it was worth it, plus it's less complex overall.
Was that with or without the

The current schedular has a similar implementation to other sub-schedular implementations like the
Description of changes
Sync refactor