
refactor!: rename write_chunk to write_bytes and write_chunks to write_bytes_many#536

Merged
rklaehn merged 9 commits into main from write_bytes_many on May 7, 2026

Conversation

@rklaehn
Contributor

@rklaehn rklaehn commented Mar 24, 2026

Description

Remove the Written struct from the public API.

write_many_chunks had a somewhat weird API. On the one hand it mutated the passed buffers, emptying fully written buffers and splitting the half-written one; on the other hand it expected the caller to remove the empty buffers from the buffer array before the next call.

With the change we only return the total number of written bytes and remove the buffers from the slice ourselves. That makes the typical use simpler.

In the rare case where you do need the number of fully written buffers you can still get it.
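The new contract can be sketched with a small mock (a sketch only: `MockSendStream`, the synchronous signature, and `Vec<u8>` in place of `bytes::Bytes` are assumptions for illustration, not the real noq API):

```rust
// Hypothetical mock illustrating the reshaped write_bytes_many contract.
// `capacity` stands in for the stream's current send window.
struct MockSendStream {
    capacity: usize,
}

impl MockSendStream {
    /// Writes from the front of `bufs` until `capacity` is exhausted.
    /// Fully written buffers are removed from `bufs` and a partially
    /// written buffer is split in place, so the caller can simply loop;
    /// only the total number of bytes written is returned.
    fn write_bytes_many(&mut self, bufs: &mut Vec<Vec<u8>>) -> usize {
        let mut written = 0;
        let mut full = 0; // count of fully written buffers
        for buf in bufs.iter_mut() {
            let n = buf.len().min(self.capacity - written);
            written += n;
            if n == buf.len() {
                full += 1;
            } else {
                // split the half-written buffer; the remainder stays
                // at the front for the next call
                buf.drain(..n);
                break;
            }
            if written == self.capacity {
                break;
            }
        }
        // remove the drained buffers ourselves instead of asking the
        // caller to trim the slice (the job of the old Written struct)
        bufs.drain(..full);
        written
    }
}

fn main() {
    let mut stream = MockSendStream { capacity: 5 };
    let mut bufs = vec![vec![1u8, 2, 3], vec![4, 5, 6, 7]];
    let total = stream.write_bytes_many(&mut bufs);
    assert_eq!(total, 5);
    // the half-written buffer was split; only its remainder is left
    assert_eq!(bufs, vec![vec![6u8, 7]]);
}
```

The typical caller now just loops until `bufs` is empty while summing the returned sizes, with no per-call bookkeeping.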

Breaking Changes

noq::SendStream::write_chunks: renamed to write_bytes_many, and now returns just the number of bytes written.
noq::Written: removed.

Notes & open questions

@github-actions

github-actions Bot commented Mar 24, 2026

Performance Comparison Report

a996279049b29ddb1282ae3c04a142fc68a04e0e - artifacts

No results available

---
b84e403c41f126eef307d914495104eab55c92f8 - artifacts

Raw Benchmarks (localhost)

| Scenario | noq | upstream | Delta | CPU (avg/max) |
| --- | --- | --- | --- | --- |
| large-single | 5996.9 Mbps | 7835.8 Mbps | -23.5% | 92.8% / 97.8% |
| medium-concurrent | 5516.9 Mbps | 7693.7 Mbps | -28.3% | 92.3% / 97.4% |
| medium-single | 4342.1 Mbps | 4481.5 Mbps | -3.1% | 95.7% / 111.0% |
| small-concurrent | 3946.3 Mbps | 5243.9 Mbps | -24.7% | 97.0% / 111.0% |
| small-single | 3642.7 Mbps | 4659.8 Mbps | -21.8% | 92.7% / 113.0% |

Netsim Benchmarks (network simulation)

| Condition | noq | upstream | Delta |
| --- | --- | --- | --- |
| ideal | 3128.0 Mbps | 3659.1 Mbps | -14.5% |
| lan | 782.4 Mbps | 796.4 Mbps | -1.8% |
| lossy | 69.8 Mbps | 55.9 Mbps | +25.0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |

Summary

noq is 20.3% slower on average

---
45ff73d5eb5ab514d5f75a342a524bf1e39a58eb - artifacts

Raw Benchmarks (localhost)

| Scenario | noq | upstream | Delta | CPU (avg/max) |
| --- | --- | --- | --- | --- |
| large-single | 6037.7 Mbps | 7945.0 Mbps | -24.0% | 96.8% / 98.6% |
| medium-concurrent | 5784.8 Mbps | 8016.9 Mbps | -27.8% | 97.0% / 98.3% |
| medium-single | 3795.2 Mbps | 4473.8 Mbps | -15.2% | 96.6% / 98.5% |
| small-concurrent | 3998.1 Mbps | 5182.6 Mbps | -22.9% | 96.7% / 99.5% |
| small-single | 3656.7 Mbps | 4735.2 Mbps | -22.8% | 95.7% / 98.4% |

Netsim Benchmarks (network simulation)

| Condition | noq | upstream | Delta |
| --- | --- | --- | --- |
| ideal | 2935.9 Mbps | N/A | N/A |
| lan | 782.5 Mbps | N/A | N/A |
| lossy | 55.9 Mbps | N/A | N/A |
| wan | 83.8 Mbps | N/A | N/A |

Summary

noq is 23.3% slower on average

---
7b63c469ef0f34577fd16ef56803e75867c70bbf - artifacts

No results available

---
79d885dad2c02c57c0d4f3306a1f1080935b3ef4 - artifacts

No results available

---
0e070fee0c6edc510fe6ee19737efaffabecfadd - artifacts

No results available

---
a1ca52a10fead7d6071876d6d7e00ea7731c9b22 - artifacts

Raw Benchmarks (localhost)

| Scenario | noq | upstream | Delta | CPU (avg/max) |
| --- | --- | --- | --- | --- |
| large-single | 5497.1 Mbps | 8188.3 Mbps | -32.9% | 96.4% / 150.0% |
| medium-concurrent | 5512.0 Mbps | 7688.9 Mbps | -28.3% | 97.3% / 150.0% |
| medium-single | 4143.3 Mbps | 4469.8 Mbps | -7.3% | 89.8% / 97.9% |
| small-concurrent | 3878.4 Mbps | 5025.4 Mbps | -22.8% | 95.2% / 103.0% |
| small-single | 3540.6 Mbps | 4623.3 Mbps | -23.4% | 95.1% / 103.0% |

Netsim Benchmarks (network simulation)

| Condition | noq | upstream | Delta |
| --- | --- | --- | --- |
| ideal | 3128.4 Mbps | 3994.9 Mbps | -21.7% |
| lan | 782.4 Mbps | 810.3 Mbps | -3.5% |
| lossy | 69.8 Mbps | 69.8 Mbps | ~0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |

Summary

noq is 23.8% slower on average

---
567c543c1e2e6b6723cefbe34923fbf226bbb9c8 - artifacts

No results available

---
10c707b2a8b67130fd1a4599e648319f40abd9bc - artifacts

Raw Benchmarks (localhost)

| Scenario | noq | upstream | Delta | CPU (avg/max) |
| --- | --- | --- | --- | --- |
| large-single | 5385.7 Mbps | 7980.9 Mbps | -32.5% | 95.3% / 99.3% |
| medium-concurrent | 5430.9 Mbps | 7712.1 Mbps | -29.6% | 93.8% / 98.1% |
| medium-single | 3352.8 Mbps | 4596.0 Mbps | -27.1% | 97.8% / 150.0% |
| small-concurrent | 3910.4 Mbps | 5264.4 Mbps | -25.7% | 91.3% / 99.5% |
| small-single | 3572.8 Mbps | 4838.8 Mbps | -26.2% | 94.2% / 102.0% |

Netsim Benchmarks (network simulation)

| Condition | noq | upstream | Delta |
| --- | --- | --- | --- |
| ideal | N/A | 4037.0 Mbps | N/A |
| lan | N/A | 824.3 Mbps | N/A |
| lossy | N/A | 69.8 Mbps | N/A |
| wan | N/A | 83.8 Mbps | N/A |

Summary

noq is 28.8% slower on average

@github-actions

github-actions Bot commented Mar 24, 2026

Documentation for this PR has been generated and is available at: https://n0-computer.github.io/noq/pr/536/docs/noq/

Last updated: 2026-05-07T08:14:33Z

@n0bot n0bot Bot added this to iroh Mar 24, 2026
@github-project-automation github-project-automation Bot moved this to 🚑 Needs Triage in iroh Mar 24, 2026
Collaborator

@flub flub left a comment


Opinionated! I'm kind of -0 because it's making it harder for folks to migrate between Quinn and noq without meaningfully improving the API. chunks is not that bad, it is descriptive.

@divagant-martian
Collaborator

divagant-martian commented Mar 26, 2026

Opinionated! I'm kind of -0 because it's making it harder for folks to migrate between Quinn and noq without meaningfully improving the API. chunks is not that bad, it is descriptive.

While you have a good point, when we decided to review the API this was not part of the criteria we established for what makes a good or bad API. We basically agreed to follow @rklaehn's lead, and imposing this new check (which, again, is a fair observation) will just make his job harder. Our API will not be compatible with quinn, and I think the sooner we accept this, the better. If people come knocking on our door about it we can add a "migrating from quinn" section to the README.md. But TBH I think this change is simple enough that users can figure it out on their own.

@flub
Collaborator

flub commented Mar 26, 2026

Ok, fair. I won't use the Quinn argument again. Still -0 on it though:

  • This is not an API problem, only a style question, so expect subjective opinions.
  • I do not particularly like naming functions after their argument types (Hungarian notation). This is not an existing style anywhere in the current API either, so it is inconsistent. I still think chunks is very descriptive here.
  • I do agree that chunk vs chunks is bad, totally up for fixing that.
  • I think write_many_chunks reads better than write_chunk_many (also when you use bytes instead of chunks).

@rklaehn
Contributor Author

rklaehn commented Mar 27, 2026

I use write_bytes for fns that use Bytes in other crates, but I am also fine with keeping Chunks.

So write_chunk and write_many_chunks?

But what about the changed signature? I always found the Written struct somewhat weird. If all you care about is writing until you're done while tracking the size, the current signature is better.

If we keep chunks this would obviously also apply to the other PR #535

@rklaehn rklaehn requested a review from flub March 27, 2026 08:56
Collaborator

@flub flub left a comment


Thanks for pointing out the return value change. Yeah, I agree with making it simpler. It seems odd to introduce yet another type for this.

So write_chunk and write_many_chunks?

That would get my approval :)

And yes, whatever we end up doing needs to be consistent in #535

Comment thread noq-proto/src/connection/streams/mod.rs Outdated
@github-project-automation github-project-automation Bot moved this from 🚑 Needs Triage to 🏗 In progress in iroh Mar 27, 2026
@rklaehn rklaehn force-pushed the write_bytes_many branch from 45ff73d to 7b63c46 Compare May 7, 2026 07:35
@rklaehn rklaehn force-pushed the write_bytes_many branch from 0e070fe to a1ca52a Compare May 7, 2026 07:46
@rklaehn rklaehn requested a review from flub May 7, 2026 07:47
Collaborator

@flub flub left a comment


Some nits, but overall LGTM.

Comment thread noq/src/send_stream.rs Outdated
Comment thread noq/src/send_stream.rs Outdated
Comment thread noq/src/send_stream.rs Outdated
Comment thread noq/src/send_stream.rs Outdated
rklaehn and others added 2 commits May 7, 2026 11:10
@rklaehn rklaehn enabled auto-merge May 7, 2026 08:20
@rklaehn rklaehn disabled auto-merge May 7, 2026 08:39
@rklaehn rklaehn added this pull request to the merge queue May 7, 2026
@dignifiedquire dignifiedquire changed the title refactor: Rename write_chunk to write_bytes and write_chunks to write_bytes_many refactor!: rename write_chunk to write_bytes and write_chunks to write_bytes_many May 7, 2026
Merged via the queue into main with commit f4ec777 May 7, 2026
36 of 38 checks passed
@rklaehn rklaehn deleted the write_bytes_many branch May 7, 2026 09:02
@github-project-automation github-project-automation Bot moved this from 🏗 In progress to ✅ Done in iroh May 7, 2026
Stanley00 pushed a commit to stanley-fork/noq that referenced this pull request May 7, 2026
…ytes (n0-computer#535)

## Description

Remove Chunk usage in ordered read API

Make read_chunk return just a Bytes. Also rename read_chunks to
read_many_chunks (it already returns Bytes).

Add a bytes_read fn for the rare case where you do need the offset
despite being in ordered read mode.

Not sure if people agree, but there was an inconsistency before between
read_chunk (returns a Chunk, including offset) and read_chunks (fills a
bunch of Bytes, no offset).

Also it doesn't seem useful to have Chunk at all for ordered streams.
You usually don't care about the offset unless you are reading unordered
streams. So I would like to confine Chunk usage to only the (now
separate) unordered read API.
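The ordered-read shape described above can be sketched with a small mock (a sketch only: `MockRecvStream`, the synchronous signature, and `Vec<u8>` in place of `bytes::Bytes` are assumptions for illustration, not the real noq API):

```rust
// Hypothetical mock of the reshaped ordered-read API.
struct MockRecvStream {
    data: Vec<u8>,
    offset: u64, // total bytes consumed so far
}

impl MockRecvStream {
    /// Previously returned a Chunk { offset, bytes }; now just the
    /// bytes, or None once the stream is exhausted.
    fn read_chunk(&mut self, max: usize) -> Option<Vec<u8>> {
        if self.data.is_empty() {
            return None;
        }
        let n = max.min(self.data.len());
        self.offset += n as u64;
        Some(self.data.drain(..n).collect())
    }

    /// Covers the rare case where the offset is still wanted in
    /// ordered mode, now that Chunk no longer carries it.
    fn bytes_read(&self) -> u64 {
        self.offset
    }
}

fn main() {
    let mut stream = MockRecvStream { data: vec![10, 20, 30, 40, 50], offset: 0 };
    assert_eq!(stream.read_chunk(3), Some(vec![10, 20, 30]));
    assert_eq!(stream.read_chunk(3), Some(vec![40, 50]));
    assert_eq!(stream.read_chunk(3), None);
    // the offset is still available on demand
    assert_eq!(stream.bytes_read(), 5);
}
```

This keeps Chunk (and its offset) confined to the unordered read path, while ordered readers who do want the position can call bytes_read.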

## Breaking Changes

noq::RecvStream::read_chunk returns a Bytes.
noq::RecvStream::read_chunks renamed to read_many_chunks.

## Notes & open questions

Note: while the [other related
PR](n0-computer#536) is just renaming, this
one is I think actually removing some weirdness.

Why does read_chunk give you an offset but read_chunks does not. And
there is no way to get the offset if you need it if you use read_chunks.

---------

Co-authored-by: Floris Bruynooghe <flub@n0.computer>