Option to kill other resque-pool instances on startup #137

Merged
nevans merged 2 commits into resque:master from ShippingEasy:kill_others on Oct 14, 2015
Conversation

@joshuaflanagan
Contributor

Adds the `--kill-others` command line option.
When this option is set, resque-pool will send a graceful
shutdown signal to all other running resque-pools.

This is useful in "no downtime deploy" scenarios,
where you want the pool running the new code to
completely load its environment and be ready to
process work *before* shutting down the old code.
Once the new code is ready, it will shut down the
old processes for you.

See also resque#132
@brasic
Contributor

brasic commented Oct 12, 2015

--kill-others implies --no-pidfile, since a second daemon could never get to kill_other_pools! without it. It might make sense to make the dependency explicit by either failing fast or making --kill-others set --no-pidfile if it is not already.

@joshuaflanagan
Contributor Author

That is a good point. Although I think you could run multiple instances with pidfiles by specifying different pidfile paths. Probably not a common scenario, but maybe enough not to force the --no-pidfile option. It might be enough to mention in the README that you probably want to use the two options together.

brasic added a commit to ShippingEasy/resque-pool that referenced this pull request Oct 12, 2015
This explains functionality from resque#132 and resque#137 and provides example
upstart config files that will allow for zero-downtime deploys.
brasic added a commit to ShippingEasy/resque-pool that referenced this pull request Oct 13, 2015
This explains functionality from resque#132 and resque#137 and provides example
upstart config files that will allow for zero-downtime deploys.
@nevans
Collaborator

nevans commented Oct 13, 2015

I very much like the idea of defaulting --no-pidfile --lock tmp/resque-pool.lock when --kill-others is set, especially since you can subsequently override either (e.g. --kill-others --pidfile /var/resque-pool.pid --lock /var/resque-pool.lock).

@nevans
Collaborator

This ought to always work just fine on Linux/Mac OS X/BSD... but my paranoia prefers to use ps -e -o pid= -o command= (POSIX flags, restricted output columns). Your current parse_pids_from_output should work just fine on either, I think.

Specify the output columns for the `ps` command.
Only match process commands that start with `resque-pool-master`
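The approach discussed above (POSIX-only `ps` flags, restricted output columns, and matching only commands that begin with `resque-pool-master`) might be sketched like this. This is an illustration, not the PR's actual code; the method names and the choice of QUIT as the graceful-shutdown signal are assumptions.

```ruby
# Extract the PIDs of other resque-pool-master processes from `ps` output.
# Each line of `ps -e -o pid= -o command=` is "<pid> <command...>".
def other_pool_pids(ps_output, my_pid)
  ps_output.each_line.filter_map do |line|
    pid, command = line.strip.split(" ", 2)
    pid.to_i if command&.start_with?("resque-pool-master") && pid.to_i != my_pid
  end
end

# Conceptually what --kill-others does: send every other pool master a
# graceful-shutdown signal (assumed here to be QUIT).
def kill_other_pools!
  other_pool_pids(`ps -e -o pid= -o command=`, Process.pid).each do |pid|
    Process.kill(:QUIT, pid)
  end
end
```

Restricting the output to `pid=` and `command=` avoids header rows and platform-specific column layouts, which keeps the parsing trivial on Linux, OS X, and BSD alike.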
@joshuaflanagan
Contributor Author

@nevans I've addressed your concerns regarding the ps parsing. We'll need to get #132 merged before I add the logic around --no-pidfile.

@joshuaflanagan
Contributor Author

What if we add an additional flag that encompasses the options that work together for this approach? For example, we could add --min-downtime (or --max-uptime), which sets --kill-others, --no-pidfile, and --lock. That might be more discoverable than having --kill-others imply all of those options. I like the idea of having discrete flags that each do one thing, plus a flag that groups the ones commonly used together.
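One way to sketch that grouping flag, with the override behavior nevans described earlier (explicit flags still win): expand the meta-flag into its constituent flags before normal parsing, prepending them so later explicit flags take precedence. The flag names, defaults, and lock path here are illustrative assumptions, not the merged implementation.

```ruby
require "optparse"

# Hypothetical sketch: --hot-swap expands to
# --kill-others --no-pidfile --lock tmp/resque-pool.lock.
# The expansion is prepended, so an explicit --pidfile or --lock
# anywhere on the command line still overrides it.
def parse_pool_opts(argv)
  if argv.delete("--hot-swap")
    argv.unshift("--kill-others", "--no-pidfile", "--lock", "tmp/resque-pool.lock")
  end
  opts = { pidfile: "resque-pool.pid", lock: nil, kill_others: false }
  OptionParser.new do |p|
    p.on("--kill-others")  { opts[:kill_others] = true }
    p.on("--no-pidfile")   { opts[:pidfile] = nil }
    p.on("--pidfile PATH") { |v| opts[:pidfile] = v }
    p.on("--lock PATH")    { |v| opts[:lock] = v }
  end.parse!(argv)
  opts
end
```

With this shape, `--hot-swap --pidfile /var/resque-pool.pid` behaves like the override example given above: the grouped defaults apply, but the explicit pidfile wins.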

@nevans
Collaborator

nevans commented Oct 14, 2015

I'm okay with that approach. I think I prefer --hot-swap (or something similar) to --min-downtime. What do you think?

@nevans nevans merged commit af0d024 into resque:master Oct 14, 2015