Refactor WatchConfig to panic inline instead of os.Exit() in a goroutine on init fail (#2095) #2096
deefdragon wants to merge 4 commits into spf13:master
Conversation
…ine on init fail

Fixes spf13#2095

I chose to also add panics to the errors for the getConfigFile and watcher.Add calls, as either of those failing was also returning, though not exiting. This increases the consistency of this area of the code. I am unsure if it would be better to just remove the panics and rely on the user not getting updates as expected. Unless there is some manner of listening for errors that occur in viper (an error channel, etc.), I think this is the best bet.
		eventsWG.Wait() // now, wait for event loop to end in this go-routine...
	}()
	initWG.Wait() // make sure that the go routine above fully ended before returning
I'm surprised by the changes to the wait group here, and by all the changes you made in the file to stop using the waitgroup.
I would have kept the change minimal to support the removal of os.Exit, and that's it.
The other changes, which I feel are good, make the current PR hard to review.
I would have split this into 2 PRs: one that can easily be merged (the thing about panic), and a refactoring PR that could be reviewed and merged separately.
I'm not suggesting you change anything right now; I'm just reporting how I would have done it.
Other reviewers, or you, could have different opinions.
Also, I could simply be wrong, and the changes here may somehow be needed to address the os.Exit issue.
I really didn't want to refactor the wait groups & goroutines, as I hate the resulting diff due to the re-indentation. Unfortunately, I honestly felt that it was necessary: 1) it is what allows the panic calls to propagate up a reasonable stack (being recoverable by the calling user), and 2) it leaves the code in a much more simplified, and thus more readable/maintainable, state.
As with the wait groups, it was literally just synchronous code made excessively complicated to allow being lazy with the watcher.Close(), using the defer instead of just closing where it should be closed on error.
Thanks for confirming what I thought about the fact that the code had no need for such asynchronous machinery.
You are right about the stack in the panic; it makes sense.
So here, I would suggest you split the first commit in two:
- the first one removes the complexity while keeping the os.Exit, with a clear commit message saying it's a refactoring made to simplify over-complicated code
- then a commit with the change about os.Exit.
(They could be inverted, of course.)
This way the PR diff will stay unchanged, but at least the history will be much clearer.
@ccoVeille Do you think I should update the method comment (and corresponding function comment) to document that WatchConfig will panic when an error is encountered? (And should I explain that it's due to legacy-compatibility reasons that the method signature wasn't just updated, or leave that out?)
I think so, yes. Good idea
Pull request overview
Refactors WatchConfig initialization to fail fast by panicking (instead of calling os.Exit inside a goroutine) when the fsnotify watcher setup cannot be completed, aligning with issue #2095's request to make failure behavior recoverable by callers.
Changes:
- Moves fsnotify watcher creation and directory registration out of the goroutine so initialization failures occur synchronously.
- Replaces os.Exit(1) with panics on watcher creation failure, and adds panics on config-file resolution and watcher registration failures.
- Updates WatchConfig doc comments to mention panic behavior.
Thanks @deefdragon, I reviewed the Copilot comments and they seem to be valid (even though some of them are probably for pre-existing issues, they are highlighted because they are in the diff)
…te WatchConfig doc comments
That took longer than it should have for me to get to. I've fixed most of the comments. The only thing that I couldn't fix was adding tests for two of the panics. They would (I think) require the kernel calls somehow breaking, and I don't think it's worth getting THAT far into the weeds to try to mock them returning an error. The one that I was able to test is the one that an end user is most likely to encounter regardless (failing to configure the config file before starting the watcher)