
David Glick – Plone developer


Using HAProxy with Zope via Buildout

by David Glick posted Jan 19, 2010 03:14 PM

After my post on reducing GIL contention by using fewer Zope threads, Lee Joramo asked for more information on setting up HAProxy, so let me share my configuration. Much of the credit for this goes to Hanno Schlichting and Alex Clark, who offered me much good advice and a sample configuration, respectively.

First, a few words about what HAProxy offers. For the past couple of years I've been using Pound to load balance between multiple backend Zope instances. But recently I've been hearing recommendations from people I trust (such as Jarn and Elizabeth Leddy) to try HAProxy instead.

HAProxy offers some nice features:

  • Backend health checks
  • Various load-balancing algorithms for how requests get distributed to backends
  • Sticky sessions, so that an authenticated user always hits the same backend
  • Warmup time (don't send as many requests to a Zope instance while it's starting up)
  • A status page giving info on backend status and uptime, number of queued requests, active sessions, errors, etc.

Some of these are possible with Pound too, but the status page was really the "killer app" for me. It is fun to watch, but also very useful for doing rolling restarts when new code needs to be deployed without an interruption in service.

HAProxy status page

Configuration

In my buildout.cfg I added:

[buildout]
...
parts =
    ...
    haproxy-build
    haproxy-conf

[haproxy-build]
recipe = plone.recipe.haproxy
url = http://dist.plone.org/thirdparty/haproxy-1.3.22.zip

[haproxy-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/haproxy.conf.in
output = ${buildout:directory}/etc/haproxy.conf
maxconn = 24000
ulimit-n = 65536
user = zope
group = staff
bind = 127.0.0.1:8080

Here, we add two parts: "haproxy-build", which uses the plone.recipe.haproxy recipe to build HAProxy from source and create a bin/haproxy script for running it, and "haproxy-conf", which builds the HAProxy configuration file by filling in variables in a template file called haproxy.conf.in.

Be sure to set the user and group variables to the user and group you want HAProxy to run as, and update the bind variable to set the port to which HAProxy should bind.
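To make the variable-filling step concrete, here's a rough sketch in plain Python of what collective.recipe.template does with the ${section:option} placeholders. This is my own illustration, not the recipe's actual implementation, but the substitution idea is the same:

```python
# Illustrative sketch only -- collective.recipe.template's real code differs,
# but it fills ${section:option} placeholders with values from the buildout.
import re

def render(template, options):
    """Replace ${section:option} placeholders using a {(section, option): value} map."""
    def substitute(match):
        section, option = match.group(1), match.group(2)
        return options[(section, option)]
    return re.sub(r"\$\{([\w.-]+):([\w.-]+)\}", substitute, template)

snippet = "  maxconn  ${haproxy-conf:maxconn}\n  user     ${haproxy-conf:user}"
values = {
    ("haproxy-conf", "maxconn"): "24000",
    ("haproxy-conf", "user"): "zope",
}
print(render(snippet, values))
```

Running this prints the two config lines with 24000 and zope filled in, which is essentially what ends up in etc/haproxy.conf.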

I run most of my Plone stack using supervisord, so I also updated my supervisord configuration in buildout to run HAProxy:

[supervisor]
recipe = collective.recipe.supervisor
...
programs =
    ...
    10 haproxy ${buildout:directory}/bin/haproxy [ -f ${buildout:directory}/etc/haproxy.conf -db ]

In a real-life deployment, you'll probably also want a caching reverse proxy like Squid or Varnish sitting in front of HAProxy.

What about the contents of haproxy.conf.in? Here's mine:

global
  log 127.0.0.1 local6
  maxconn  ${haproxy-conf:maxconn}
  user     ${haproxy-conf:user}
  group    ${haproxy-conf:group}
  daemon
  nbproc 1

defaults
  mode http
  option httpclose
  # Remove requests from the queue if people press stop button
  option abortonclose
  # Try to connect this many times on failure
  retries 3
  # If a client is bound to a particular backend but it goes down,
  # send them to a different one
  option redispatch
  monitor-uri /haproxy-ping

  timeout connect 7s
  timeout queue   300s
  timeout client  300s
  timeout server  300s

  # Enable status page at this URL, on the port HAProxy is bound to
  stats enable
  stats uri /haproxy-status
  stats refresh 5s
  stats realm Haproxy\ statistics

frontend zopecluster
  bind ${haproxy-conf:bind}
  default_backend zope

# Load balancing over the zope instances
backend zope
  # Use Zope's __ac cookie as a basis for session stickiness if present.
  appsession __ac len 32 timeout 1d
  # Otherwise add a cookie called "serverid" for maintaining session stickiness.
  # This cookie lasts until the client's browser closes, and is invisible to Zope.
  cookie serverid insert nocache indirect
  # If no session found, use the roundrobin load-balancing algorithm to pick a backend.
  balance roundrobin
  # Use / (the default) for periodic backend health checks
  option httpchk

  # Server options:
  # "cookie" sets the value of the serverid cookie to be used for the server
  # "maxconn" is how many connections can be sent to the server at once
  # "check" enables health checks
  # "rise 1" means consider Zope up after 1 successful health check
  server  plone0101 127.0.0.1:${zeoclient1:http-address} cookie p0101 check maxconn 2 rise 1
  server  plone0102 127.0.0.1:${zeoclient2:http-address} cookie p0102 check maxconn 2 rise 1

This assumes that I have Zope instances built by parts called "zeoclient1" and "zeoclient2" in my buildout; you'll probably need to update those names.
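For reference, the zeoclient parts those names point to might look something like the following. This is an assumption on my part (your buildout will differ; in particular you'd also need a zeo-address and the rest of your instance settings), but the http-address option of plone.recipe.zope2instance is what the ${zeoclient1:http-address} references pick up:

```ini
[zeoclient1]
recipe = plone.recipe.zope2instance
zeo-client = on
http-address = 8101
zserver-threads = 2

[zeoclient2]
<= zeoclient1
http-address = 8102
```

The `<=` line is buildout's macro syntax: zeoclient2 inherits all of zeoclient1's options and overrides only the port.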

You may want to adjust the "option httpchk" line to use a different URL for checking whether the Zope instances are up -- you want to point at something that can be rendered as quickly as possible (in my case it's the Zope root information screen, so I'm not too worried).
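If you do point the check somewhere else, the HAProxy 1.3 syntax takes a method and URI (the path here is hypothetical; substitute a lightweight view of your own):

```
  # Health-check a specific URL instead of the default "/"
  option httpchk GET /some-lightweight-page
```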

The maxconn setting for each backend should be at least the number of threads that Zope instance is running. Laurence Rowe pointed out to me that it should probably not be set to 1, since Zope also serves some things (such as blobs) via file stream iterators, which happens apart from the main ZPublisher threads. (So setting maxconn to 1 would mean serving a large blob could block other requests to that backend, for instance.)

See the HAProxy configuration documentation for more details on the settings that can be used in this file.


on Zope, multiple cores, and the GIL

by David Glick posted Jan 17, 2010 04:12 PM

I recently installed HAProxy as a load-balancer for a site that had previously been running using a single Zope instance using 4 threads. I switched to 2 instances using 2 threads each, load-balanced by HAProxy. I wasn't anticipating that this change would have a noticeable effect on the site's performance, so was happily surprised when the client mentioned that users of the site were commenting on the improved speed.

But why did the site get faster?

Looking at a Munin graph of server activity, I observed a noticeable drop in the number of rescheduling interrupts -- a change that coincided with my change in server configuration:

graph showing decreased contention when I switched to more Zope instances with fewer threads

I suspect that the "before" portion of this graph illustrates a problem that occurs when running multi-threaded Python programs on multi-core machines: threads running on different cores fight for control of the Global Interpreter Lock (a problem Dave Beazley called to the community's attention in a recent presentation). That would explain the improvement in performance once I switched to multiple processes with fewer threads. By using multiple processes, we let concurrent processing get managed by the operating system, which is much better at it.

Moral of the story: If you're running Zope on a multi-core machine, having more than 2 threads per Zope instance is probably a bad move performance-wise, compared to the option of running more (load-balanced) instances with fewer threads.

(Using a single thread per instance might be even better, although of course you need to make sure you have enough instances to still handle your load, and you need to make sure single-threaded instances don't make calls to external services which then call back to that instance and block. I haven't experimented with using single-threaded instances yet myself.)
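To see the effect Beazley describes for yourself, here's a small benchmark of my own (a sketch, not from any of the tools above) that runs the same CPU-bound work split across two threads and then across two processes. On a multi-core machine the process version typically finishes noticeably faster, because the GIL serializes the threads' bytecode execution:

```python
# Sketch: CPU-bound work under threads (GIL-bound) vs. processes.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def spin(n):
    # Pure-Python busy work; a thread running this holds the GIL.
    total = 0
    for i in range(n):
        total += i
    return total

def timed(executor_cls, workers, n):
    # Run `spin(n)` once per worker and return (elapsed seconds, results).
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        results = list(ex.map(spin, [n] * workers))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    n = 2_000_000
    t_threads, _ = timed(ThreadPoolExecutor, 2, n)
    t_procs, _ = timed(ProcessPoolExecutor, 2, n)
    print(f"2 threads:   {t_threads:.2f}s")
    print(f"2 processes: {t_procs:.2f}s")
```

The analogy to Zope is rough (Zope threads also spend time blocked on I/O, where the GIL is released), but it shows why two 2-thread instances can beat one 4-thread instance for CPU-heavy work.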


Come improve Dexterity at the Tahoe Snow Sprint

by David Glick posted Jan 12, 2010 02:46 PM

This year the West Coast is hosting our own version of the infamous Snow Sprint. I'm really looking forward to spending a week coding, hanging out with Plonistas, and playing in the snow at the upcoming Tahoe Snow Sprint, organized by David Brenneman (dbfrombrc) and coming to California's Sierra Nevada this March 15-19.

The goal of the sprint is to improve the Dexterity content type framework (a modern alternative to Archetypes created by Martin Aspeli and others). As part of the Dexterity team, I want to offer the following list of potential projects to help get your creative juices flowing.

At the sprint, you could...


• Implement one of Dexterity's missing features, such as:

• Fix some of the other outstanding issues in the Dexterity issue tracker.

• Create a ZopeSkel template for Dexterity-based projects.

• Improve the through-the-web content type editor:

  • improve usability and/or sexiness
  • add UI for exporting types for work on the filesystem
  • add support for defining vocabularies
  • add support for selecting/configuring custom widgets

• Create an editor that allows through-the-web editing of new behaviors (which can then be applied to existing types in a schema-extender-like fashion).

• Add a view editor to accompany the through-the-web schema editor. Deco is coming and will be great, but in the meantime it would be nice to at least have something that generates a basic view template based on your schema and then lets you tweak it and export it to the filesystem.

• Build a better workflow editor to accompany the above.

• Write a guide to migrating Archetypes-based content types to Dexterity. Or build a tool to do it automatically.

• Create replacements for the ATContentTypes types using Dexterity types.

• Determine how to handle existing content items sanely when editing schemas.

• Devise a PloneFormGen successor that stores its schema in a fashion similar to Dexterity, and makes it easy to convert a form + results into a full-blown content type. Bonus points if the form editing is done using Deco. :)


There are so many interesting possibilities I'm having trouble deciding what to focus on myself. Space is limited, so if any of this strikes your fancy, head on over to Coactivate and sign right up to join us at the sprint!

David Glick

I am a problem solver trying to make websites easier to build.

Currently I do this in my spare time as a member of the Plone core team, and during the day as an independent web developer specializing in Plone and custom Python web applications.