One of the great things about the HTTP protocol, besides status code 418, is that it's stateless. A web server is therefore not required to store any information about a user or allocate resources for a user once the individual request is done. This way a single web server can easily handle many different users, and if it can't anymore, one can add a new server, put a simple load balancer in front and scale out. Each of those web servers then handles its requests without any need for communication with the others, which leads to linear scaling (assuming the network provides enough bandwidth, etc.).
Now the Web isn't used only for serving static documents anymore; we have all these fancy web apps. And those applications often need state. The most trivial piece of information they need is the current user. HTTP is a great protocol and provides a way to do authentication which works well with its stateless nature - unfortunately this authentication is implemented badly in current clients. Ugly popups, no logout button, ... I don't have to say more, I think. For nicer login systems people want web forms. Now the stateless nature of HTTP is a problem: the user may log in and then browse around, and on later requests it should still be known who that user is - with a custom HTML-form-based login alone this is not possible. A solution might be cookies. At least one might think so for a second. But setting a cookie saying "this is an authorized user" alone doesn't make sense, as it could easily be faked. Better is to store a random identifier in a cookie and keep the state information on the server. Then all session data is protected, and only the user who knows this random identifier is authenticated. If this identifier is wisely chosen and hard to guess, this works quite well. Luckily this is a mostly PHP- and MySQL-focused blog, and as PHP is a system for building web applications, this functionality is part of the core language: the PHP session module.
The session module, which was introduced in PHP 4 and is partly based on work on the famous phplib library, is quite a fascinating piece of code. It is open and extendable in so many directions, yet still so simple to use that everybody uses it; newcomers often learn about it on their first day in PHP land. Of course you can not only store whether the user is logged in, but also cache some user-specific data or keep the state of some transactions by the user, like multi-page forms.
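As a quick illustration, a minimal login check on top of the session module might look like this (the `check_credentials()` function and the `user` session key are made up for this sketch; your application would have its own):

```php
<?php
// Creates a new session, or resumes the one identified by the
// session ID cookie the browser sent along with the request.
session_start();

if (!isset($_SESSION['user'])) {
    // check_credentials() is a placeholder for your own logic,
    // e.g. a lookup against a user table in MySQL.
    if (check_credentials($_POST['name'], $_POST['password'])) {
        $_SESSION['user'] = $_POST['name'];
    }
}

// On all later requests $_SESSION['user'] tells us who is logged in,
// while the browser only ever sees the random session identifier.
```

The browser never holds more than the random identifier; everything stored in `$_SESSION` stays on the server side.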
In its default configuration, session state is stored on the web server's file system, each session's data in its own file in serialized form. If the filesystem does some caching, or one uses a ramdisk or the like, this can be quite efficient. But as we suddenly have state on the web server, we can't scale as easily as before anymore: if we add a new server and then route a user with an existing session to the new server, the session data won't be there. That is bad. This is often solved by configuring the load balancer to route all requests from the same user to the same web server. In some cases this works quite well, but it can cause problems. Let's assume you want to take a machine down for maintenance: all sessions there will die. Or imagine a bunch of users doing complex and expensive tasks - then one of your servers will have a hard time, giving these users bad response times, which feels like bad service, even though your other systems are mostly idle.
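For reference, this default behavior corresponds to settings along these lines in php.ini (the exact directory varies by distribution, so treat the path as an example):

```ini
; One serialized file per session in the given directory.
session.save_handler = files
session.save_path = "/var/lib/php/sessions"
```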
A nice solution for this would be to store the sessions in a central repository which can be accessed from all web servers.
In a previous blog post I discussed an approach for this: MySQL 5.6 and its memcached API. MySQL with InnoDB is a really fast database which can be accessed from all the web servers and respond to them quickly. And as this session storage only does relatively simple operations (it's a key-value store), the memcached API puts the thing on steroids: all data is stored in InnoDB and available in a fast way to all nodes.
Unfortunately even such a system has limits, and replication is no good solution for scaling further, as we will always need a single master for writing the updated session data. By using replication we can take some read load off it, and we can configure a slave which can be promoted to master to keep sessions alive if the primary master machine fails, but at some point we need another solution ... but, happy news, again: one doesn't have to look far, as MySQL Cluster will be happy to help. MySQL Cluster "is a high-availability, high-redundancy version of MySQL adapted for the distributed computing environment," as the MySQL documentation states. The technology of MySQL Cluster is proven in performance-critical environments with a demand for high availability. It's quite likely that whenever you're using your GSM device, MySQL Cluster is involved.
A typical MySQL Cluster consists of multiple systems. The key part is the data nodes. These systems keep the data, in a distributed fashion, in a form they can quickly access. The data nodes can be configured so that all data is available on multiple systems, so whenever one system crashes, all data is still available. And this cluster has no need for shared resources like a shared storage system: it is a shared-nothing architecture.
On the side, the cluster has management nodes. Their purpose is to be responsible for the configuration. A management node is only required during startup, so all other nodes know what to do. If it crashes later on, the availability of the cluster will not be affected, though it's good to have it around.
For actually accessing the data we need a third component: the API node. API nodes can be different things. One kind is C++ applications using the NDB API, which speaks the low-level protocol directly to the cluster. Using this API is very fast, but not everybody likes writing C++, for whatever reason. Among the other options, like a Java JPA interface or an OpenLDAP plugin, there's one major form of API node: the MySQL server with the ndb storage engine. The ndb storage engine can be used like other storage engines in MySQL, but unlike the ones you probably know best, InnoDB or MyISAM, it does not store to disk; it reads and writes the data over the network to the MySQL Cluster data nodes using a highly efficient protocol. With this system you may not only have a single MySQL server accessing the data, you may have multiple servers reading and writing the same database in a consistent, scalable, fast way through a single cluster - just what one might like from master-master replication. Now, people who have used MySQL Cluster before know that current versions don't handle joins really well, which is bad, as many web applications need to join data from different tables. But I can tell you there's land in sight: there are Development Milestone Releases proving a feature called Adaptive Query Localization which can execute joins 70x faster than current MySQL Cluster 7.1. But that's actually not part of this article.
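From the SQL node's point of view, using the cluster is just a matter of picking the right storage engine. As a sketch (the table and column names here are only for illustration):

```sql
-- The rows of this table live on the cluster's data nodes;
-- every SQL node attached to the same cluster sees the same data.
CREATE TABLE session_demo (
    id   VARCHAR(32) NOT NULL PRIMARY KEY,
    data TEXT
) ENGINE=NDBCLUSTER;
```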
But in this article I wanted to write about PHP session storage.
Having a cluster of machines for session storage is a cool idea: one can access the sessions in a very fast way from all web servers, read and write to the cluster, and be happy. And even if a machine fails, the users won't notice, as all sessions survive. From the previous paragraph you can easily guess what we need for that: MySQL Cluster and an API node. We could build a C++ module specifically for this purpose, but why build such a specific solution when a more generic one exists? The mentioned Development Milestone Release of MySQL Cluster 7.2 includes a new form of API node: memcached. Sounds similar to what was mentioned before, eh? So instead of memcached's native in-memory local hash table, or InnoDB, we can use MySQL Cluster. As we usually won't need persistence on disk, but can configure MySQL Cluster to keep the data in memory, this is extremely fast even with many concurrent users.
For getting started I suggest you take a look at this blog posting by Andrew Morgan, which will run you through the installation and initial configuration of a cluster test system on a single machine. Once you're through this you'll have a MySQL Cluster which can be reached via a MySQL SQL node and memcached on your local machine. For accessing it from PHP we have the choice between two extensions: memcache and memcached. The first one has no external dependencies besides PHP and is therefore a tiny bit simpler to install; the latter has a few more features. Once the module you've chosen is installed and activated in php.ini, we can test it:
```
$ php -r '$m = new Memcached(); $m->addServer("localhost", 11211);
    $m->set("greeting", "Hello World!"); var_dump($m->get("greeting"));'
string(12) "Hello World!"
```
So far so good, we can store data and get it back, let's check with MySQL, if it's really in the Cluster:
```
mysql> SELECT * FROM demo_table;
+----------+------------+-----------+--------------+
| mkey     | math_value | cas_value | string_value |
+----------+------------+-----------+--------------+
| greeting |       NULL |      NULL | Hello World! |
+----------+------------+-----------+--------------+
```
As a next step we configure PHP, via our php.ini file, to use the memcache module as session handler and ask it to connect to our cluster:
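The original configuration isn't reproduced here, but with the memcache extension a minimal version of these settings might look like this (the host and port are those of the single memcached API node from the setup above):

```ini
; Hand session storage over to the memcache extension.
session.save_handler = memcache
session.save_path = "tcp://localhost:11211"
```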
Now all sessions will go to the cluster. But, aside from the fact that this is currently all running on the same machine, is this already highly available and redundant? Not yet: currently we have a single memcached node. If that goes down, nothing will work. Additionally, this machine can become a bottleneck. But as we have the cluster in the background, this can be solved easily. First we need a second memcached node, and then we configure the memcached PHP module to use both as a memcache server pool:
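Again as a sketch, with the memcached extension the pool configuration could look like this (the second host name is made up; it stands for another machine running a memcached API node against the same cluster):

```ini
; The memcached extension takes a comma-separated list of nodes;
; sessions are distributed across the pool.
session.save_handler = memcached
session.save_path = "localhost:11211,node2:11211"
```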
Such pools are one of the things only available with the memcached module for PHP, not with memcache.