I think most developers are able to cache database queries by now. But what about DML queries? Every query, every connection, or, in the general case, every resource costs time. So I thought a lot about how to write data into the database as efficiently as possible, in my case MySQL. Hmm... we have to take a closer look at the details, because we can't choose the same asynchronous writing method for every kind of query. By "kind of query" I mean that it depends on what storage engine we use, the complexity of the query, whether more than one record should be written at once, whether triggers are involved, and so on. Sure, the one or the other query MUST be written instantly, but most writing queries are stackable, with no need to check whether the request has succeeded.
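To make "stackable" concrete: instead of sending N single-row INSERTs, the rows can be collected and sent as one multi-row statement. This is a minimal sketch, not my actual queue code; the `log` table, its columns, and the `build_batch_insert()` helper are all made up for illustration, and real code would escape values through the live connection (e.g. `mysqli_real_escape_string()`).

```php
<?php
// Sketch: collect stackable writes and emit one multi-row INSERT
// instead of N single-row statements. Table/column names are made up.

/**
 * Build a single multi-row INSERT from queued rows.
 * $rows is an array of associative arrays with identical keys.
 */
function build_batch_insert($table, array $rows)
{
    $cols = array_keys($rows[0]);
    $tuples = array();
    foreach ($rows as $row) {
        // Real code would escape via mysqli_real_escape_string();
        // here we only quote, since no connection is involved.
        $vals = array_map(function ($v) {
            return "'" . addslashes($v) . "'";
        }, array_values($row));
        $tuples[] = '(' . implode(', ', $vals) . ')';
    }
    return sprintf(
        'INSERT INTO %s (%s) VALUES %s',
        $table,
        implode(', ', $cols),
        implode(', ', $tuples)
    );
}

$sql = build_batch_insert('log', array(
    array('user_id' => '1', 'action' => 'login'),
    array('user_id' => '2', 'action' => 'logout'),
));
echo $sql, "\n";
// INSERT INTO log (user_id, action) VALUES ('1', 'login'), ('2', 'logout')
```

One multi-row INSERT saves the per-statement round trip and parse overhead, which is exactly why stacking is worth the trouble.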
For that reason I've invented a queuing system. Every expensive query goes into a local file-based queue and is forwarded to the MySQL server by a cron job. The next step up is an APC-based queue which is drained by its own daemon, implemented as a simple loop with access to the same SHM segment as the FCGI pool. As you can see, the system is really complex and error-prone and does not really satisfy my need to get the data into the database asynchronously. What about the "poll"-based solution (yes, select() is used in the background) which is part of the newer mysqli extension? With it we could create a pseudo-asynchronous connection which lets us fire off multiple queries at once. The problem here is that every query needs its own database connection, and we have to wait until every query has finished its work, because the MySQL protocol does not support async connections.
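The file-based queue from the first sentence boils down to very little code. The sketch below is an assumption-laden reduction of the idea, not my production system: the spool path, the `queue_query()`/`drain_queue()` names, and the replay callback are all invented here, and the actual forwarding to MySQL is only indicated by a comment.

```php
<?php
// Sketch of the file-based queue: the web request appends serialized
// queries to a spool file; a cron job later drains the file and replays
// the queries against MySQL. Paths and names are made up for this sketch.

define('SPOOL', '/tmp/query-spool.dat');

// Called from the request path: O(1) append, with an exclusive lock
// so concurrent FCGI workers do not interleave their writes.
function queue_query($sql)
{
    $fp = fopen(SPOOL, 'ab');
    flock($fp, LOCK_EX);
    fwrite($fp, serialize($sql) . "\n");
    flock($fp, LOCK_UN);
    fclose($fp);
}

// Called from cron: atomically take the spool away, then replay it.
function drain_queue($callback)
{
    if (!file_exists(SPOOL)) {
        return 0;
    }
    $work = SPOOL . '.work';
    rename(SPOOL, $work);         // new writes go to a fresh spool file
    $n = 0;
    foreach (file($work, FILE_IGNORE_NEW_LINES) as $line) {
        $callback(unserialize($line));  // e.g. $mysqli->query($sql)
        $n++;
    }
    unlink($work);
    return $n;
}

queue_query("INSERT INTO log (msg) VALUES ('a')");
queue_query("INSERT INTO log (msg) VALUES ('b')");
$replayed = drain_queue(function ($sql) { /* would hit MySQL here */ });
echo $replayed, "\n"; // 2
```

The rename() trick is what keeps the cron job and the FCGI workers from stepping on each other: the workers always append to a fresh spool while the drainer replays the old one.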
My next attempt was to write queries asynchronously using PHP's pcntl extension to fork child processes. It's quite tricky, but persistent connections are good for avoiding the connection close when a child exits. I also wrote a patch to skip the mysql_close() on child exit, controlled by an ini value. Anyway, none of these toys is the real thing.
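For readers who haven't tried the fork approach: its skeleton looks roughly like this. This is a hedged sketch, not the patched setup described above; it assumes the pcntl extension is available (CLI builds on Linux), and `do_write()` is a stand-in I invented so the example runs without a MySQL server, the real child would open its own mysqli connection there.

```php
<?php
// Sketch of the fork-per-write approach: the parent returns immediately
// while a child process performs the slow write. Requires the pcntl
// extension; the mysqli part is replaced by a file append so it runs
// standalone.

function async_write($sql)
{
    $pid = pcntl_fork();
    if ($pid === -1) {
        // Fork failed: fall back to a synchronous write.
        do_write($sql);
        return;
    }
    if ($pid === 0) {
        // Child: must use its OWN connection. Sharing the parent's handle
        // is exactly what causes the close-on-child-exit trouble.
        do_write($sql);
        exit(0);
    }
    // Parent: returns at once; the child is reaped later.
}

// Invented stand-in for the real mysqli INSERT.
function do_write($sql)
{
    file_put_contents('/tmp/fork-demo.log', $sql . "\n", FILE_APPEND | LOCK_EX);
}

@unlink('/tmp/fork-demo.log');
async_write("INSERT INTO log (msg) VALUES ('from child')");
pcntl_wait($status); // reap the child (a daemon would install a SIGCHLD handler)
echo file_get_contents('/tmp/fork-demo.log');
```

The pcntl_wait() at the end is only there to keep the demo deterministic; in a long-running worker you would reap children asynchronously instead of blocking.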
The trigger for this article was another matter. I got an idea to avoid all of these wranglings by also using threads, but not one per write query as described above. My idea is to create a MySQL write-query queue inside the webserver as a single thread which could be fed through the FCGI protocol. But before I implement such a thing, I want to ask you if you have a better way to avoid writes by queuing them up, one that is more efficient in every way.
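The core of that idea, one writer owning a queue that everyone else only feeds, can be sketched without any webserver or FCGI plumbing at all. Everything below is an assumption: the `WriteQueue` class, the batch limit, and the flush callback (which stands in for a real mysqli handle) are invented purely to show the producer/consumer split.

```php
<?php
// Sketch of the proposed design reduced to its core: callers only
// enqueue, a single writer flushes in batches. The "thread" is just a
// class here, and the flush target is a callback instead of mysqli,
// so the sketch runs standalone.

class WriteQueue
{
    private $queue;
    private $limit;
    private $flusher;

    public function __construct($batchLimit, $flusher)
    {
        $this->queue   = new SplQueue();
        $this->limit   = $batchLimit;
        $this->flusher = $flusher;
    }

    // Producer side: cheap, non-blocking enqueue.
    public function push($sql)
    {
        $this->queue->enqueue($sql);
        if (count($this->queue) >= $this->limit) {
            $this->flush(); // a real writer would also flush on a timer
        }
    }

    // Consumer side: hand the whole batch over in one go.
    public function flush()
    {
        $batch = array();
        while (!$this->queue->isEmpty()) {
            $batch[] = $this->queue->dequeue();
        }
        if ($batch) {
            call_user_func($this->flusher, $batch);
        }
    }
}

$flushed = array();
$q = new WriteQueue(3, function ($batch) use (&$flushed) {
    $flushed[] = $batch; // real code: one batched send to MySQL
});
$q->push('INSERT 1');
$q->push('INSERT 2');
$q->push('INSERT 3'); // hits the batch limit, triggers a flush
$q->push('INSERT 4');
$q->flush();          // drain the remainder
echo count($flushed), "\n"; // 2
```

The interesting property is that ordering is preserved and the expensive part (talking to MySQL) happens once per batch instead of once per query, which is the whole point of queuing the writes.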
You might also be interested in the following:
- Is it possible to avoid query parsing inside of MySQL?
- Running Standard Deviation in MySQL
- Netcat MySQL Stream
- Fast circular buffer in MySQL