A93: xDS ExtProc Support#484

Open
markdroth wants to merge 33 commits into grpc:master from markdroth:xds_ext_proc

Conversation

@markdroth
Member

No description provided.

markdroth marked this pull request as ready for review September 18, 2025 22:50
yanavlasov pushed a commit to envoyproxy/envoy that referenced this pull request Dec 16, 2025
Adds a new body send mode for gRPC traffic. Also
adds a safe way for the ext_proc server to return OK status without
losing data in FULL_DUPLEX_STREAMED and GRPC modes. See
grpc/proposal#484 for context.
Risk Level: Low
Testing: N/A
Docs Changes: Included in PR
Release Notes: N/A
Platform Specific Features: N/A

---------

Signed-off-by: Mark D. Roth <roth@google.com>
Co-authored-by: Adi (Suissa) Peleg <adip@google.com>
update-envoy bot added a commit to envoyproxy/data-plane-api that referenced this pull request Dec 16, 2025

Mirrored from https://github.com/envoyproxy/envoy @ 7b3a632333b587c784aff65e72ff618ff034f331
- [request_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L76)
and
[response_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L81).
Populated when sending client headers or server headers, respectively.
Member

I know we talked about this, but what's the story around which headers should be included vs. not? Are we going to define which headers specifically? It would be good to call out exactly whether method/scheme/path/te/user-agent/message-type/etc. are supposed to be included or not. In Go, many of these things are added by the transport on the way out or removed by the transport on the way in. It would be great if we can specify: "only the things set by the application [plus X, Y, and Z, which some libraries may need to manually synthesize before sending to the ext_proc server]"

Member Author

Yeah, we need to add this for both ext_authz and ext_proc. I think @easwars was compiling a list of what headers we should document.


#### Server-Side Metrics

The following server-side metrics will be exported:
Member

Are there no labels that could be used on the server?

Member Author

Not that I can think of. If you have any suggestions of labels that you think might be useful here, I'm open to them.

Member

Aren't these actually per-call/attempt metrics, so should they have all the same labels as the default per-call metrics?

Member Author

We have the following labels on per-call metrics today:

  • method: This seems reasonable to add if we think it will be useful.
  • target (client only): This also seems reasonable to add if it will be useful (client only).
  • status: I don't think this one makes sense, because we don't know the status of the data plane RPC at the point at which we're updating these metrics.
  • the new per-RPC label being introduced in A108: I'd be okay with this one too.

I'm open to adding all but status, as long as we think they'll be useful and we are not worried about performance issues in Java or Go.
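As a rough illustration of the labeling option discussed above (all names here are hypothetical, not part of the proposal): the filter could copy the per-call labels onto its own metrics while dropping `status`, since the data-plane RPC's status is not yet known at the point where the filter records.

```python
# Hypothetical sketch only; none of these names come from the proposal.

def ext_proc_metric_labels(per_call_labels: dict) -> dict:
    """Copy per-call labels onto an ext_proc filter metric, dropping
    'status' because the data-plane RPC's status is unknown when the
    filter records its metrics."""
    return {k: v for k, v in per_call_labels.items() if k != "status"}

labels = ext_proc_metric_labels({
    "method": "/pkg.Service/Method",
    "target": "xds:///example.com:443",  # client only
    "status": "OK",                      # known only at call completion
})
# labels keeps method and target but not status
```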

Comment on lines +667 to +668
The ext_authz filter will export metrics using the non-per-call metrics
architecture defined in [A79]. There will be a separate set of metrics
Member

Oh maybe the implication is that it will get all of those labels already? If so we should call that out, probably?

Member Author

Not sure what you mean by "all of those labels". A79 doesn't define any labels that are intended to be applied to all metrics.

Maybe you meant to refer to the per-call metric labels? If so, I've answered that below.

breaking their ext_proc servers.

gRPC implementations should structure their xDS HTTP filter APIs such that
the filter has access to the serialized bytes of each message rather than
Member

Correct me if I am wrong, but from what I know, a decision has been made to bear the double serialization/deserialization cost for now, at least in Go. Do you think it is worth mentioning that here?

Member Author

As far as this spec is concerned, implementations should not do redundant serialization/deserialization. Go may choose to initially implement it that way in order to get the functionality done faster, but that really isn't a good state to be in for the long term, and I expect that people will complain about the performance.
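A minimal sketch of the non-redundant approach described above, with an assumed (hypothetical) filter hook: the filter layer hands the ext_proc filter the already-serialized message bytes, so the filter can forward them to the ext_proc server without deserializing and reserializing the data-plane message.

```python
# Sketch only: a hypothetical filter hook that receives the data-plane
# message as wire bytes. No proto unmarshal of the data-plane message is
# needed to forward it to the ext_proc server.

class ExtProcFilter:
    def __init__(self, send_to_ext_proc_server):
        # Callable that forwards raw bytes on the ext_proc stream.
        self._send = send_to_ext_proc_server

    def on_data_plane_message(self, serialized: bytes) -> bytes:
        # Forward the serialized bytes as-is; the filter never needs a
        # deserialized message object.
        self._send(serialized)
        return serialized

forwarded = []
f = ExtProcFilter(forwarded.append)
out = f.on_data_plane_message(b"\x0a\x03abc")
```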

markdroth requested a review from dfawley March 17, 2026 23:57
grnmeira pushed a commit to grnmeira/envoy that referenced this pull request Mar 20, 2026

Events on the data plane RPC should be sent on the ext_proc stream as
they occur, even if the filter has not yet received a response from the
ext_proc server for a previous event. For example, if the filter is
Member

Curious about the example here. From what I understand, sending and getting back the modified headers should be a blocking operation, and only after the headers are modified should we use them to create the data-plane stream. The Send and Receive functions should come after the stream is created. The example seems to suggest that Send can happen before getting the header modification. Just want to understand the sequence of events.

Member Author

In the C-core API, the application does not need to wait for sending the headers to complete before it sends the first message on the stream, which is why this can happen.

I am guessing from your comment that the grpc-go API does not allow the application to send the first message until it gets the stream object, which is not returned until the headers have gone out on the wire? If so, then I think there are two possible ways to deal with it:

  1. Keep things the way they are, which means this specific case can't happen in Go.
  2. Tell the application that the stream has been created as soon as headers are sent to the ext_proc server, without waiting for the headers to be sent to the data plane server. This would increase the amount of memory used for buffering headers, but it would allow better pipelining.

I think either option is probably fine. @easwars and/or @dfawley may have thoughts on which one makes more sense for Go.

Note that even if this specific example can't happen in Go, there are other cases that will happen. For example, in the server-to-client direction, the ext_proc filter does not need to wait until it receives the response headers from the ext_proc server before it sends the first response message.

@ejona86 Not sure if this is an issue for Java.

Member

IIRC, in Go the client headers call blocks until a stream is allocated. So it could make sense to wait for those to be processed by ext_proc before unblocking the client, so that we still wait for the stream to be allocated. That doesn't sound required, though, and it would add significant latency, especially for unary RPCs.

Java doesn't have flow control for unary RPCs, so it doesn't impact that. We do typically delay saying the RPC is ready for data until we allocate the stream, but we can trivially implement this either way. (I think it'd only impact a single if.)

Contributor

From what I can see, when an application attempts to create a stream:

  • the grpc layer creates an instance of the client stream interface and as part of doing this, it creates an attempt
  • the attempt retrieves the transport by doing a Pick and creates a stream on the transport
  • this results in an instance of the client stream interface being created at the transport layer
    • this allocates the stream ID and checks for write quota (which should exist, the default is 64K)
    • creates the header frame and queues it in the control buf
    • returns

At this point, the application should have a stream and should be able to send messages on it. They won't get sent out on the wire until the headers frame is sent out. I don't see the transport code even waiting to get a headers frame before sending out data (as long as there is write quota).

@eshitachandwani : If you think otherwise, let's talk and go over the code together.

Member Author

@easwars The ext_proc filter would run between the first and second bullet in your list -- i.e., after the client stream instance is created but before the LB pick. This means that the workflow will go like this:

  1. gRPC layer creates client stream interface, which creates an attempt.
  2. Headers are sent to the ext_proc filter. The ext_proc filter sends them to the ext_proc server and waits for the response.
  3. The ext_proc filter receives the headers from the ext_proc server. Now the ext_proc filter sends the headers on.
  4. We do an LB pick, resulting in creating the client stream at the transport layer. This gets returned to the application, which can then send the first message.

The problem here is that there is a round-trip to the ext_proc server in steps 2 and 3, and the application will need to wait for that to finish before it can send the first message on the stream. This will hurt latency.

The alternative is to do something like this:

  1. gRPC layer creates client stream interface, which creates an attempt.
  2. Headers are sent to the ext_proc filter. The ext_proc filter sends them to the ext_proc server and waits for the response.
  3. While waiting for the ext_proc response, return the client stream interface to the application, so that it can start sending messages on the stream.
  4. The client application sends a message on a stream. This message gets down to the ext_proc filter, which sends the message to the ext_proc server.
  5. The ext_proc filter receives the headers from the ext_proc server. Now the ext_proc filter sends the headers on.
  6. We do an LB pick, resulting in creating the client stream at the transport layer.

This approach would improve latency by allowing the application to start sending messages on the stream without waiting to get the headers back from the ext_proc server.
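The pipelined alternative above can be sketched as follows (all names hypothetical, not from any gRPC implementation): the application may queue messages as soon as the stream interface is returned, and the filter releases them onward only once the mutated headers come back from the ext_proc server.

```python
import queue

class PipelinedStream:
    """Sketch of the pipelined flow: messages sent before the ext_proc
    header response arrives are buffered, then flushed in order once the
    headers come back and the LB pick / transport stream exists."""

    def __init__(self):
        self._pending = queue.Queue()  # buffered while headers are in flight
        self._headers_done = False
        self.sent_to_transport = []    # stand-in for the real transport

    def send_message(self, msg):
        # Steps 3-4: the application can send immediately.
        if self._headers_done:
            self.sent_to_transport.append(msg)
        else:
            self._pending.put(msg)

    def on_ext_proc_headers_response(self):
        # Steps 5-6: headers returned; flush buffered messages in order.
        self._headers_done = True
        while not self._pending.empty():
            self.sent_to_transport.append(self._pending.get())

s = PipelinedStream()
s.send_message("m1")             # buffered; headers still pending
s.on_ext_proc_headers_response()
s.send_message("m2")             # goes straight through now
```

The buffering is the memory cost mentioned in option 2 above: messages (like headers) must be held until the ext_proc round trip completes.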

Member

Understood. I think the alternative should be doable in Go too.
