Conversation
Adds a new body send mode for gRPC traffic. Also adds a safe way for the ext_proc server to return OK status without losing data in FULL_DUPLEX_STREAMED and GRPC modes. See grpc/proposal#484 for context.

Risk Level: Low
Testing: N/A
Docs Changes: Included in PR
Release Notes: N/A
Platform Specific Features: N/A

Signed-off-by: Mark D. Roth <roth@google.com>
Co-authored-by: Adi (Suissa) Peleg <adip@google.com>

Mirrored from https://github.com/envoyproxy/envoy @ 7b3a632333b587c784aff65e72ff618ff034f331
> - [request_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L76)
>   and
>   [response_headers](https://github.com/envoyproxy/envoy/blob/cdd19052348f7f6d85910605d957ba4fe0538aec/api/envoy/service/ext_proc/v3/external_processor.proto#L81).
>   Populated when sending client headers or server headers, respectively.
I know we talked about this, but what's the story around what headers should be included vs. not? Are we going to define which headers specifically? It would be good to call out exactly whether method/scheme/path/te/user-agent/message-type/etc, etc are supposed to be included or not. In Go many of these things are added by the transport on the way out or are removed by the transport on the way in. It would be great if we can specify: "only the things set by the application [plus X, Y, and Z, which some libraries may need to manually synthesize before sending to the ext_proc server]"
Yeah, we need to add this for both ext_authz and ext_proc. I think @easwars was compiling a list of what headers we should document.
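Until that list is compiled, the question above can be made concrete with a sketch. The helper below is hypothetical (not part of any gRPC API), and the set of transport-managed headers shown is illustrative only; deciding the actual set is exactly the open item in this thread.

```go
package main

import (
	"fmt"
	"strings"
)

// transportManagedHeaders lists headers that, in this hypothetical sketch,
// are synthesized or consumed by the transport rather than set by the
// application. The real list is what this thread asks to be specified.
var transportManagedHeaders = map[string]bool{
	":method":    true,
	":scheme":    true,
	":path":      true,
	"te":         true,
	"user-agent": true,
}

// filterForExtProc keeps only application-set headers, dropping
// transport-managed ones before they go to the ext_proc server.
// Hypothetical helper name.
func filterForExtProc(headers map[string][]string) map[string][]string {
	out := make(map[string][]string)
	for k, v := range headers {
		if !transportManagedHeaders[strings.ToLower(k)] {
			out[k] = v
		}
	}
	return out
}

func main() {
	h := map[string][]string{
		":path":      {"/pkg.Service/Method"},
		"user-agent": {"grpc-go/1.60"},
		"x-app-id":   {"abc"},
	}
	filtered := filterForExtProc(h)
	fmt.Println(len(filtered), filtered["x-app-id"][0])
}
```

Note the spec language suggested above ("only the things set by the application, plus X, Y, Z") maps directly onto the deny-list here; a library that synthesizes headers late would instead need to add them back before calling the ext_proc server.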
> #### Server-Side Metrics
>
> The following server-side metrics will be exported:
Are there no labels that could be used on the server?
Not that I can think of. If you have any suggestions of labels that you think might be useful here, I'm open to them.
Aren't these actually per-call/attempt metrics, so should they have all the same labels as the default per-call metrics?
We have the following labels on per-call metrics today:
- method: This seems reasonable to add if we think it will be useful.
- target (client only): This also seems reasonable to add if it will be useful (client only).
- status: I don't think this one makes sense, because we don't know the status of the data plane RPC at the point at which we're updating these metrics.
- the new per-RPC label being introduced in A108: I'd be okay with this one too.
I'm open to adding all but status, as long as we think they'll be useful and are not worried about performance issues in Java or Go.
A93-xds-ext-proc.md (outdated)
> The ext_authz filter will export metrics using the non-per-call metrics
> architecture defined in [A79]. There will be a separate set of metrics
Oh maybe the implication is that it will get all of those labels already? If so we should call that out, probably?
Not sure what you mean by "all of those labels". A79 doesn't define any labels that are intended to be applied to all metrics.
Maybe you meant to refer to the per-call metric labels? If so, I've answered that below.
> breaking their ext_proc servers.
>
> gRPC implementations should structure their xDS HTTP filter APIs such that
> the filter has access to the serialized bytes of each message rather than
Correct me if I am wrong, but from what I know, a decision has been made to bear the double serialization/deserialization cost for now, at least in Go. Do you think it is worth mentioning that here?
As far as this spec is concerned, implementations should not do redundant serialization/deserialization. Go may choose to initially implement it that way in order to get the functionality done faster, but that really isn't a good state to be in for the long term, and I expect that people will complain about the performance.
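To illustrate what "access to the serialized bytes" can look like without redundant work: grpc-go's real extension point for wire encoding is the `encoding.Codec` interface, and a codec that passes raw bytes through untouched is one way a filter could see and forward message bytes without a proto deserialize/reserialize round trip. The `rawFrame` type and codec below are an illustrative sketch, not an API this proposal defines.

```go
package main

import (
	"errors"
	"fmt"
)

// rawFrame wraps already-serialized message bytes so a filter can inspect
// or forward them as-is. The name is hypothetical.
type rawFrame struct {
	data []byte
}

// passthroughCodec marshals and unmarshals by moving bytes through
// untouched, avoiding a redundant serialize/deserialize cycle.
type passthroughCodec struct{}

func (passthroughCodec) Marshal(v interface{}) ([]byte, error) {
	f, ok := v.(*rawFrame)
	if !ok {
		return nil, errors.New("passthroughCodec: expected *rawFrame")
	}
	return f.data, nil
}

func (passthroughCodec) Unmarshal(data []byte, v interface{}) error {
	f, ok := v.(*rawFrame)
	if !ok {
		return errors.New("passthroughCodec: expected *rawFrame")
	}
	f.data = data
	return nil
}

func main() {
	c := passthroughCodec{}
	// Pretend these bytes are a message already serialized by the caller.
	wire := []byte{0x0a, 0x03, 'f', 'o', 'o'}
	out, err := c.Marshal(&rawFrame{data: wire})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(out)) // same bytes go out; nothing was re-serialized
}
```

The design point is that the filter layer deals in `[]byte` end to end; only the ext_proc server (or the application) ever interprets the message contents.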
> Events on the data plane RPC should be sent on the ext_proc stream as
> they occur, even if the filter has not yet received a response from the
> ext_proc server for a previous event. For example, if the filter is
Curious about the example here, from what I understand, sending and getting the modified headers should be a blocking operation and only after modifying the headers should we use them to create the dataplane stream. The Send and Receive functions should come after the stream is created. The example seems to suggest that Send can happen before getting the header modification. Just want to understand the sequence of events.
In the C-core API, the application does not need to wait for the send of the headers to complete before it sends the first message on the stream, which is why this can happen.
I am guessing from your comment that the grpc-go API does not allow the application to send the first message until it gets the stream object, which is not returned until the headers have gone out on the wire? If so, then I think there are two possible ways to deal with it:
- Keep things the way they are, which means this specific case can't happen in Go.
- Tell the application that the stream has been created as soon as headers are sent to the ext_proc server, without waiting for the headers to be sent to the data plane server. This would increase the amount of memory used for buffering headers, but it would allow better pipelining.
I think either option is probably fine. @easwars and/or @dfawley may have thoughts on which one makes more sense for Go.
Note that even if this specific example can't happen in Go, there are other cases that will happen. For example, in the server-to-client direction, the ext_proc filter does not need to wait until it receives the response headers from the ext_proc server before it sends the first response message.
@ejona86 Not sure if this is an issue for Java.
IIRC, in Go the client headers block until a stream is allocated. So it could make sense to wait for those to be processed by ext_proc before unblocking the client, to still wait for the stream to be allocated. That doesn't sound required though. And it would add significant latency, especially for unary RPCs.
Java doesn't have flow control for unary RPCs, so it doesn't impact that. We do typically delay saying the RPC is ready for data until we allocate the stream, but we can trivially implement this either way. (I think it'd only impact a single if.)
From what I can see, when an application attempts to create a stream:
- the grpc layer creates an instance of the client stream interface and, as part of doing this, it creates an attempt
- the attempt retrieves the transport by doing a `Pick` and creates a stream on the transport
- this results in an instance of the client stream interface being created at the transport layer
- this allocates the stream ID and checks for write quota (which should exist, the default is 64K)
- creates the header frame and queues it in the control buf
- returns
At this point, the application should have a stream and should be able to send messages on it. They won't get sent out on the wire until the headers frame is sent out. I don't see the transport code even waiting to get a headers frame before sending out data (as long as there is write quota).
@eshitachandwani : If you think otherwise, let's talk and go over the code together.
@easwars The ext_proc filter would run between the first and second bullet in your list -- i.e., after the client stream instance is created but before the LB pick. This means that the workflow will go like this:
1. gRPC layer creates client stream interface, which creates an attempt.
2. Headers are sent to the ext_proc filter. The ext_proc filter sends them to the ext_proc server and waits for the response.
3. The ext_proc filter receives the headers from the ext_proc server. Now the ext_proc filter sends the headers on.
4. We do an LB pick, resulting in creating the client stream at the transport layer. This gets returned to the application, which can then send the first message.
The problem here is that there is a round-trip to the ext_proc server in steps 2 and 3, and the application will need to wait for that to finish before it can send the first message on the stream. This will hurt latency.
The alternative is to do something like this:
1. gRPC layer creates client stream interface, which creates an attempt.
2. Headers are sent to the ext_proc filter. The ext_proc filter sends them to the ext_proc server and waits for the response.
3. While waiting for the ext_proc response, return the client stream interface to the application, so that it can start sending messages on the stream.
4. The client application sends a message on the stream. This message gets down to the ext_proc filter, which sends the message to the ext_proc server.
5. The ext_proc filter receives the headers from the ext_proc server. Now the ext_proc filter sends the headers on.
6. We do an LB pick, resulting in creating the client stream at the transport layer.
This approach would improve latency by allowing the application to start sending messages on the stream without waiting to get the headers back from the ext_proc server.
Understood. I think the alternative should be doable in Go too.
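The core of that alternative is a buffer-then-flush step: the stream object is handed back to the application immediately, and any messages sent before the ext_proc header response arrives are queued and released once the headers are. The sketch below models just that ordering; `pendingStream` and its methods are illustrative names, not grpc-go APIs, and the real implementation would also need flow control and synchronization.

```go
package main

import "fmt"

// pendingStream sketches the alternative workflow: the application can
// send immediately, and messages queued before the ext_proc header
// response are flushed downstream once headers are released.
type pendingStream struct {
	headersReady bool
	buffered     [][]byte // messages queued while awaiting the header response
	sent         [][]byte // messages actually passed downstream
}

// SendMsg is callable as soon as the stream object exists, even though
// the ext_proc round trip for the headers has not finished.
func (s *pendingStream) SendMsg(m []byte) {
	if !s.headersReady {
		s.buffered = append(s.buffered, m)
		return
	}
	s.sent = append(s.sent, m)
}

// onHeaderResponse models the ext_proc header response arriving: the
// headers go out, then buffered messages are flushed in order.
func (s *pendingStream) onHeaderResponse() {
	s.headersReady = true
	s.sent = append(s.sent, s.buffered...)
	s.buffered = nil
}

func main() {
	s := &pendingStream{}
	s.SendMsg([]byte("msg1")) // app sends before ext_proc has responded
	s.onHeaderResponse()      // header response arrives; msg1 is flushed
	s.SendMsg([]byte("msg2")) // subsequent sends go straight through
	fmt.Println(len(s.sent), len(s.buffered))
}
```

As noted above, this buys pipelining at the cost of buffering: memory use grows with however many messages the application sends before the ext_proc header response comes back.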