Working Directory, Signals, Respawn and Internals. #37
Conversation
Adds an uptime threshold to allow worker-exit respawns while preventing cyclical respawns. Adds signals for graceful shutdown, hard shutdown, and adding and removing workers.
…e related defaults using local module variables and ms.
LGTM
Also meant to mention it in the reasoning for refactoring
+1
+1. Badly needed.
I'd call
This is a great pull @gjohnson
Thanks @guille! Let me know if I can chop up the PR a bit to make life easier (if you're planning on using it).
That'd be my only change – then I'll merge
I will whip up some docs later this week for the new methods and signals.
+1 Great stuff. Merge?
It's not applying cleanly, and we need docs.
Yeah, there have been some other commits since this that would affect the pull. Plus, I suck at life and still haven't written docs, soooo boring. :-) I can pull upstream in and handle the mangled merge myself. Stay tuned...
Yeah no rush @gjohnson, thanks for taking the time in the first place.
I am working on merging these together. The biggest issue will be a conflict between my worker reload and the one recently merged in #42. The implementation is conceptually similar; I just check exit codes, etc. However, #42 uses
Sorry for the single pull request, committed too much at once...
Working Directory

This adds the ability for the CLI to accept the option -d/--directory to change the location of process.chdir.
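For illustration only, here is a minimal sketch of what the option amounts to: read the argument and call process.chdir before the app file is resolved. The hand-rolled argv parsing is an assumption for the sketch, not the CLI's actual option handling.

```js
// Minimal sketch of the -d/--directory behaviour: change the working
// directory before the server file is resolved. The argv parsing here
// is illustrative; the real CLI's option parser may differ.
var args = process.argv.slice(2);

var i = args.indexOf('-d');
if (-1 == i) i = args.indexOf('--directory');

if (-1 != i && args[i + 1]) {
  process.chdir(args[i + 1]);
}

console.log('working directory is now %s', process.cwd());
```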
Signals

This adds the following signals for the CLI to bind to. Right now they are bound no matter what, but perhaps there should be an option -s/--signals for opting out.
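As a rough sketch only, the binding could look something like the following. The signal choices (SIGTERM, SIGQUIT, SIGTTIN, SIGTTOU) and the server method names are illustrative assumptions, not necessarily what this pull request wires up.

```js
// Hedged sketch: bind process signals in the CLI to server actions.
// Signal choices and method names on `srv` are assumptions for illustration.
function bindSignals (srv) {
  process.on('SIGTERM', function () {
    srv.shutdown();      // graceful shutdown: let workers drain, then exit
  });

  process.on('SIGQUIT', function () {
    srv.terminate();     // hard shutdown: kill workers immediately
  });

  process.on('SIGTTIN', function () {
    srv.spawnWorker();   // add a worker
  });

  process.on('SIGTTOU', function () {
    srv.removeWorker();  // remove a worker
  });
}
```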
Respawning

This adds the feature of spawning a new worker after an existing one exits. There are two conditions that must be satisfied in order for this to happen.
API and Internal Changes

To add some of the features above, I changed UpServer and Worker a good bit. Previously UpServer#reload was the only hook into spawning and shutting down workers. I have added:
UpServer.reload() is a lot slimmer now; these changes may be debatable. Before, if there was a workerTimeout set, new workers were spawned and, upon the first new worker being spawned, the old workers were shut down. However, if there was no workerTimeout set, the workers were shut down and, after the last worker died, the new workers were spawned. Unless I was looking at it wrong, it did not make a whole lot of sense, so I trimmed it down to only the first case.
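To make the remaining behaviour concrete, here is a hedged sketch of that single reload path against a hypothetical server object. The property, method, and event names (workers, numWorkers, spawnWorker, shutdown, the 'spawn' event) are assumptions for illustration, not the actual internals.

```js
// Hedged sketch of the trimmed-down reload flow: spawn a full replacement
// set first, then retire the old generation as soon as the first new
// worker comes up. All names on `server` are illustrative assumptions.
function reload (server) {
  var oldWorkers = server.workers.slice();

  // bring up the replacement workers
  for (var i = 0; i < server.numWorkers; i++) server.spawnWorker();

  // once the first replacement is listening, shut down the old generation
  server.once('spawn', function () {
    oldWorkers.forEach(function (worker) {
      worker.shutdown();
    });
  });
}
```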
To handle the re-spawning without spinning into a cyclical disaster, I simply bind to the terminate event in the UpServer ctor. If the worker exitCode (which is set via child_process.on('exit')) is 1, we check the uptime of the worker against the uptimeThreshold. If all is good, we respawn.
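A hedged sketch of that guard follows, assuming the server tracks when each worker was spawned; the startTime field, the spawnWorker call, and the '1m' default are illustrative, and the ms module is only used to express the threshold, as in the commit message above.

```js
var ms = require('ms');

// Example default expressed with ms; the real default may differ.
var uptimeThreshold = ms('1m');

// Hedged sketch of the respawn guard: only respawn when the worker exited
// abnormally (exit code 1) AND had been up longer than the threshold, so a
// worker that crashes immediately cannot spin into cyclical respawns.
function onTerminate (server, worker) {
  var uptime = Date.now() - worker.startTime; // startTime is assumed

  if (1 == worker.exitCode && uptime >= uptimeThreshold) {
    server.spawnWorker();
  }
  // otherwise: clean exit, or it died too quickly, so do nothing
}
```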