
Allow registrations to be manifested on the file system and add 'use_previous_backends' option #115

Closed
wants to merge 28 commits

Conversation

@jolynch (Collaborator) commented Mar 12, 2015

This pull request pulls in the Yelp fork of synapse. It accomplishes:

  1. Merges in #106, which provides the use_previous_backends option.
  2. Provides a "file_output" option in the configuration, which causes synapse to manifest registrations on the filesystem. This is useful for services that do not or cannot use haproxy for routing requests (e.g. cassandra, UDP services, etc.). By default this is disabled and only the haproxy configuration is written out.

Both of these changes are functioning in production at Yelp.
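As a sketch, the two options might appear in a synapse configuration like this. The key names, nesting, and `output_directory` layout are assumptions based on the description above, not confirmed by this PR:

```json
{
  "services": {
    "cassandra": {
      "discovery": {
        "method": "zookeeper",
        "path": "/nerve/services/cassandra",
        "hosts": ["zk1:2181"]
      },
      "use_previous_backends": true,
      "file_output": {
        "output_directory": "/var/run/synapse/services"
      }
    }
  }
}
```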

jnb and others added 5 commits October 27, 2014 14:11
* This option defaults to true: if there are no default servers and a watcher
  reports no backends, then use the previous backends that we already know
  about.
* Factor out code that updates '@backends' into the watcher base class.
Add 'use_previous_backends' option.
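A minimal sketch of the behavior this commit describes; the class and method names here are hypothetical, not the actual synapse watcher API. When a watcher reports an empty backend list, there are no default servers, and use_previous_backends is enabled, the previously known backends are kept rather than dropping to zero:

```ruby
# Hypothetical sketch of the 'use_previous_backends' logic factored into
# a watcher base class.
class WatcherSketch
  attr_reader :backends

  def initialize(default_servers: [], use_previous_backends: true)
    @default_servers = default_servers
    @use_previous_backends = use_previous_backends
    @backends = []
  end

  def update_backends(new_backends)
    if new_backends.empty?
      if @default_servers.empty? && @use_previous_backends
        # No defaults to fall back to: keep the backends we already know about.
        return @backends
      end
      @backends = @default_servers
    else
      @backends = new_backends
    end
  end
end
```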
If the configuration specifies a file_output key, synapse will manifest
and update registrations on the filesystem in an atomic way.
This is useful for applications that do not wish to communicate with
service backends via haproxy.
Allow registrations to be manifested on the file system
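The commit message says registrations are updated "in an atomic way"; the standard technique for that is writing to a temporary file in the same directory and renaming it over the target, so readers never observe a partially written file. A sketch under that assumption (function name and file layout are hypothetical):

```ruby
require 'json'
require 'fileutils'
require 'tempfile'

# Hypothetical sketch: atomically manifest a service's backends as a JSON
# file by writing to a temp file and renaming over the destination.
def write_backends_atomically(dir, service_name, backends)
  FileUtils.mkdir_p(dir)
  path = File.join(dir, "#{service_name}.json")
  tmp = Tempfile.new(service_name, dir)  # same dir, so rename stays atomic
  begin
    tmp.write(JSON.pretty_generate(backends))
    tmp.fsync
    tmp.close
    File.rename(tmp.path, path)  # atomic replace on POSIX filesystems
  ensure
    tmp.close unless tmp.closed?
    File.unlink(tmp.path) if File.exist?(tmp.path)
  end
  path
end
```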
jolynch and others added 4 commits March 16, 2015 14:33
Note that at this time only the Yelp fork of haproxy has support for
redispatching on every retry.
Allow the allredisp option to haproxy.
Explicitly deduplicate registrations
@jolynch (Collaborator, Author) commented Apr 1, 2015

To give some context on the allredisp option:

http://marc.info/?l=haproxy&m=142740715826651&w=2

To give some context on the deduplication, we currently restart nerve gracefully (i.e. not de-registering a whole box) by starting two nerves and relying on synapse to deduplicate.

@Jaykah (Contributor) commented Apr 2, 2015

Hi Joseph,

First of all, thanks for the PR!

Secondly, I have a question on deduplication. How does it work with process managers that control forking?

Specifically, I've had issues running synapse via Supervisord (#94), where it would not restart when a change in backends was detected unless I manually specified running 4 Synapse processes in Supervisord.

@jolynch (Collaborator, Author) commented Apr 2, 2015

My deduplication comment was not the clearest. The deduping logic is just referring to synapse ignoring duplicated service backends, e.g. (host1, 1234), (host1, 1234), (host2, 1234) => (host1, 1234), (host2, 1234).

I would want to run some tests, since supervisord is not the setup we use (we use a cron job that restarts synapse whenever our synapse config file changes). The deduplication is implemented by synapse filtering out duplicate backends by the tuple of (host, port, name?), so I imagine that all of the forked synapse instances would deduplicate in the same way. To be honest, I am not sure without digging into #94 more.
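The tuple-based filtering described above can be sketched in a few lines; the function name is hypothetical, but the identity tuple (host, port, name) matches the comment. Two nerve instances registering the same backend collapse into one entry, which is what makes the graceful two-nerve restart safe:

```ruby
# Hypothetical sketch: deduplicate backends by their (host, port, name) tuple.
def deduplicate_backends(backends)
  backends.uniq { |b| [b['host'], b['port'], b['name']] }
end
```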

```ruby
attr_reader :opts, :name

def initialize(opts)
  super()
```
Review comment (Collaborator):

not sure what this is calling out to, this class does not inherit from anything

Reply (Collaborator, Author):

Indeed it does not, I'll fix that up

@igor47 (Collaborator) commented Apr 28, 2015

Done with the first pass; I left a couple of questions/comments. Mostly, it looks great!

jnb and others added 9 commits May 22, 2015 13:29
This reverts commit 136d1fe.
Increase HAProxy restart interval
ZooKeeper connection pooling.
We can allow socket updates to go through any time the config was
updated, but restarts must be rate limited for stability.
Rate limit restarts but not stats socket updates
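The policy in this commit message can be sketched as follows; the class and method names are hypothetical, not synapse's actual implementation. Stats-socket updates apply whenever the config changed, while full haproxy restarts are throttled to at most one per interval:

```ruby
# Hypothetical sketch: rate limit disruptive restarts, but never throttle
# cheap stats-socket updates.
class RestartLimiterSketch
  def initialize(restart_interval)
    @restart_interval = restart_interval  # minimum seconds between restarts
    @last_restart = Time.at(0)
  end

  # Socket updates are safe; allow them whenever the config changed.
  def allow_socket_update?(config_changed)
    config_changed
  end

  # Permit a restart only if the interval has elapsed since the last one.
  def allow_restart?(now = Time.now)
    return false if now - @last_restart < @restart_interval
    @last_restart = now
    true
  end
end
```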
@jolynch (Collaborator, Author) commented Jun 14, 2015

Sorry that I've just been adding a ton more commits to this "de-fork" pull-request. We ended up hitting some serious reliability problems when we turned on some new multi-datacenter functionality and had to focus on fixing that up. I'll be getting this ready for merge ASAP.

jnb and others added 8 commits June 16, 2015 17:01
Fix bug in caching logic
Add support for the weight key added in nerve
We are experiencing some very slow updates in production, and I think it
may be due to the connection pooling; try :per_callback threading to see
if that helps.

Also fixes default values for do_writes, do_reloads, do_socket
Try out :per_callback threads and get more debug information
This fixes a bug where we could have a session expiry but not fail pings
because the pooling code would not actually tear down the connection
Turns out it's important to handle session disconnects correctly
@jolynch jolynch mentioned this pull request Jul 22, 2015
@jolynch jolynch closed this Jul 22, 2015
@jolynch (Collaborator, Author) commented Jul 22, 2015

We're going to take care of this in #130.
