
Close streaming after loop completion #5201

Open
priyaranjannanda wants to merge 1 commit into openconfig:main from priyaranjannanda:TRANSCEIVER-4.2_1

Conversation

@priyaranjannanda
Contributor

  • Changes to Close streaming after loop completion

@priyaranjannanda priyaranjannanda requested a review from a team as a code owner March 6, 2026 12:19
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the stream management within the TestOpticalPower test to ensure gNMI sample streams are closed promptly after each test iteration. This change prevents potential resource leaks and ensures that stream resources are released as expected, leading to more robust and reliable test execution.

Highlights

  • Stream Closure Logic Refactored: Modified the TestOpticalPower function to explicitly close gNMI sample streams (optical channel, transceiver, and interface) after each test iteration, ensuring proper resource management.
  • Removed Deferred Closures: Removed defer statements for stream closure from within the stream creation loop, as they would only execute at the end of the test function, not after each iteration.
  • Code Cleanup: Removed a commented-out line related to optical channel component retrieval, improving code readability.
Changelog
  • feature/platform/transceiver/tests/zrp_input_output_power_test/zrp_input_output_power_test.go
    • Removed a commented-out line for optical channel component retrieval.
    • Removed defer statements for stream closure from the stream creation loop.
    • Added an explicit loop to close all gNMI sample streams (optical channel, transceiver, interface) for each port after the validation step within the main test loop.
Activity
  • No specific activity (comments, reviews) has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
---------------------|---------------------|------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request correctly addresses a resource leak by ensuring that gNMI sample streams are closed after use. The change removes incorrect defer statements and adds an explicit cleanup loop. My review includes a suggestion to refactor the test to use t.Run for subtests. This would align with the repository's style guide and allow for more idiomatic resource management using defer.

Comment on lines +119 to +124
//Close the connections:
for portName := range ochs {
    ochStreams[portName].Close()
    trStreams[portName].Close()
    interfaceStreams[portName].Close()
}

Severity: medium

While this explicit loop correctly closes the streams, the test structure could be improved by using t.Run for subtests. This is recommended by the repository style guide to improve test output clarity.

Using t.Run would create a new scope for each test case (each combination of frequency and power), allowing you to use defer for resource management. This is generally safer and leads to cleaner code.

Example structure:

for _, frequency := range frequencies {
    for _, targetOpticalPower := range targetOpticalPowers {
        t.Run(fmt.Sprintf("freq_%d_power_%.1f", frequency, targetOpticalPower), func(t *testing.T) {
            // Create streams here
            // ...
            // Use defer to close streams, which will be executed at the end of this subtest.
            defer stream.Close()

            // ... rest of the test logic for this case ...
        })
    }
}

This refactoring would make the test more idiomatic, easier to debug, and fully compliant with the style guide.

References
  1. The style guide recommends using t.Run for subtests to ensure clear output for passed/failed steps. The current test iterates through parameter combinations without creating subtests. (link)

