My MySQL tips


In real life, there are frequent cases where getting a running application to work correctly is strongly dependent on consistent write/read operations. This is no issue when using a single data node as a provider, but it becomes more concerning and challenging when adding additional nodes for high availability and/or read scaling.

In the MySQL space, I have already described this in my blog post Dirty Reads in High Availability Solution.

We go from the most loosely-coupled database clusters with primary-replica async replication, to the fully tightly-coupled database clusters with NDB Cluster (MySQL/Oracle).

Adding components like ProxySQL to the architecture can, on one side, help improve high availability, while on the other it can amplify and randomize the negative effects of a stale read. As such, it is crucial to know how to correctly set up the environment to reduce the risk of stale reads without reducing high availability.

This article covers a simple HOW-TO for Percona XtraDB Cluster 8.0 (PXC) and ProxySQL: an easy-to-follow guide to avoid stale reads without giving up read scaling or a high grade of HA, thanks to PXC8.

The Architecture

The covered architecture is based on:

  • PXC8 cluster composed of 3 nodes
  • ProxySQL v2 nodes in a cluster, to avoid a single point of failure
  • Virtual IP with Keepalived (see here). If you prefer to use your already-existing load balancer, feel free to do so.
  • N application nodes, pointing to the VIP


Install PXC8

Install ProxySQL

And finally, set the virtual IP as illustrated in the article mentioned above. It is now time to take the first step towards the non-stale-read solution.


Covering Stale Reads

With PXC, we can easily prevent stale reads by setting the wsrep_sync_wait parameter to one of the following values: 1, 3, 5, or 7 (default = 0).
We will see what changes in more detail in part 3 of this blog, to be published soon.
For now, just set wsrep_sync_wait = 1;.
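For example, you can apply it at runtime on each PXC node and persist it in the configuration file (a minimal sketch; roll it out the way that fits your change process):

SET GLOBAL wsrep_sync_wait = 1;
-- and in my.cnf under [mysqld], so it survives restarts:
-- wsrep_sync_wait = 1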

The cluster will ensure consistent reads no matter which node you write to and read from.

This is it. So simple!


ProxySQL Requirements

The second step is to make sure we set up our ProxySQL nodes to:

  • Use one writer at a time, to reduce certification conflicts and brute force aborts
  • Avoid including the writer in the reader group
  • Respect the order I set for failover, in case of need

Now here we have a problem: ProxySQL v2 comes with very interesting features like SSL frontend/backend, support for AWS Aurora... and more. But it also comes with very poor native PXC support. I had already raised this in my old article on February 19, 2019, and raised other issues with discussions and bug reports.

In short, we cannot trust ProxySQL for a few factors:

  • The way it deals with node failover/failback is not customizable
  • The order of the nodes is not customizable
  • As of this writing, the support for having the writer NOT work as a reader is broken

In the end, the reality is that in order to support PXC/Galera, the use of an external script using the scheduler is more flexible, solid, and trustworthy. As such, the decision is to ignore the native Galera support, and instead focus on the implementation of a more robust script.

For the scope of this article, I have reviewed, updated, and extended my old script.

Percona had also developed a Galera checker script that was part of the ProxySQL-Admin-Tools suite, but that has now been externalized and is available in the PerconaLab GitHub.


Setting All Blocks

The setup for this specific case will be based on:

  • Rules to perform read-write split.
  • One host group to define the writer HG 200
  • One host group to define the reader HG 201
  • One host group to define candidate writers HG 8200
  • One host group to define candidate readers HG 8201

The final architecture will look like this:

ProxySQL Nodes:

Node1 public ip internal ip
Node2 public ip internal ip
Node3 public ip internal ip

VIP public ip

PXC8 Nodes:


Let us configure PXC8 first. Operation one is to create the users for ProxySQL and the script to access the PXC cluster for monitoring.

CREATE USER monitor@'10.0.%' IDENTIFIED BY '';
GRANT USAGE ON *.* TO monitor@'10.0.%';
GRANT SELECT ON performance_schema.* TO monitor@'10.0.%';


The second step is to configure ProxySQL as a cluster:

Add a user able to connect from remote. This will require the ProxySQL nodes to be restarted.

update global_variables set Variable_Value='admin:admin;cluster1:clusterpass'  where Variable_name='admin-admin_credentials';

systemctl restart proxysql

Do this, in rotation, on all the ProxySQL nodes.

The third part is to set the variables below.

Please note that the value for admin-cluster_mysql_servers_diffs_before_sync is not standard and is set to 1.


update global_variables set variable_value='cluster1' where variable_name='admin-cluster_username';
update global_variables set variable_value='clusterpass' where variable_name='admin-cluster_password';

update global_variables set variable_value=1 where variable_name='admin-cluster_mysql_servers_diffs_before_sync';
update global_variables set Variable_Value=0  where Variable_name='mysql-hostgroup_manager_verbose';
update global_variables set Variable_Value='true'  where Variable_name='mysql-query_digests_normalize_digest_text';
update global_variables set Variable_Value='8.0.19'  where Variable_name='mysql-server_version';
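If you prefer not to rely on another restart for these, the variable changes can be applied and persisted with the standard ProxySQL admin commands:

LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;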

It is now time to define the ProxySQL cluster nodes:

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('',6032,100,'PRIMARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('',6032,100,'SECONDARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES('',6032,100,'SECONDARY');
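Then load the definition to runtime and persist it:

LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;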

Check the ProxySQL logs and you should see that the nodes are now linked:

2020-05-25 09:24:30 [INFO] Cluster: clustering with peer . Remote version: 2.1.0-159-g0bdaa0b . Self version: 2.1.0-159-g0bdaa0b
2020-05-25 09:24:30 [INFO] Cluster: clustering with peer . Remote version: 2.1.0-159-g0bdaa0b . Self version: 2.1.0-159-g0bdaa0b


Once this is done let us continue the setup, adding the PXC nodes and all the different host groups to manage the architecture:

delete from mysql_servers where hostgroup_id in (200,201);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',200,3306,10000,2000,'default writer');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',201,3306,10000,2000,'reader');    
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',201,3306,10000,2000,'reader');        
delete from mysql_servers where hostgroup_id in (8200,8201);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',8200,3306,1000,2000,'Writer preferred');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',8200,3306,999,2000,'Second preferred');    
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',8200,3306,998,2000,'Third and last in the list');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',8201,3306,1000,2000,'reader setting');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',8201,3306,1000,2000,'reader setting');    
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('',8201,3306,1000,2000,'reader setting');       
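As usual with ProxySQL, load the servers to runtime, persist them, and verify what is actually live:

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SELECT hostgroup_id, hostname, port, status, weight FROM runtime_mysql_servers ORDER BY hostgroup_id;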

You can see that, as mentioned, we have two host groups to manage the cluster: 8200 and 8201.

Those two host groups work as templates, and they are changed only by us, manually.

The 8200 host group weight defines the order of the writers, from higher to lower.

Given that, the node with weight 1000 is the preferred writer.

At the moment of writing, I chose to NOT implement automatic fail-back.

I will illustrate later how to trigger that manually.

Once we have all the servers up, let's move on and create the users:


insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent,comment) values ('app_test2','test',1,200,'mysql',1,'application test user');
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent,comment) values ('dba','dbapw',1,200,'mysql',1,'generic dba for application');

And the query rules to have Read/Write split:

insert into mysql_query_rules (rule_id,proxy_port,destination_hostgroup,active,retries,match_digest,apply) values(1040,6033,200,1,3,'^SELECT.*FOR UPDATE',1);
insert into mysql_query_rules (rule_id,proxy_port,destination_hostgroup,active,retries,match_digest,apply) values(1042,6033,201,1,3,'^SELECT.*$',1);
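Users and query rules also need to be loaded to runtime and saved:

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;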

The final step is to set the scheduler:

INSERT INTO scheduler (id,active,interval_ms,filename,arg1) values (10,0,2000,"/var/lib/proxysql/","-u=cluster1 -p=clusterpass -h= -H=200:W,201:R -P=6032 --retry_down=2 --retry_up=1 --main_segment=1 --debug=0 --log=/var/lib/proxysql/galeraLog --active_failover=1 --single_writer=1 --writer_is_also_reader=0");

Let us analyze the script parameters:

The scheduler ID. id: 10
As a best practice, always keep the scheduler script inactive by default and enable it only when needed. active: 0
The interval is how often the scheduler should execute the script; it needs to be frequent enough to reduce the time the service stays in a degraded state, but not so frequent as to be noisy. An interval of two seconds is normally a good start. interval_ms: 2000
The location of the script, which must be set as executable. filename: /var/lib/proxysql/

Given the scheduler limitation of five arguments, we collapse all the parameters into one and let the script parse them. arg1: -u=cluster1 -p=clusterpass -h= -H=200:W,201:R -P=6032 --retry_down=2 --retry_up=1 --main_segment=1 --debug=0 --log=/var/lib/proxysql/galeraLog --active_failover=1 --single_writer=1 --writer_is_also_reader=0

The parameters we pass here are:

The credentials to connect to ProxySQL: -u=cluster1 -p=clusterpass -h= -P=6032
The host group definition: -H=200:W,201:R. This setting is necessary because you can have multiple scripts running, serving multiple clusters.
The retry settings reduce the risk of false positives, say a network hiccup or another momentary event against which you do not want to take action: --retry_down=2 --retry_up=1

Given the script is segment-aware, you need to declare the main segment that is serving the applications: --main_segment=1
The log location/name; the final name will be the combination of this plus the host groups (i.e. galeraLog_200_W_201_R.log): --log=/var/lib/proxysql/galeraLog
Whether the script should deal with failover or not, and of which type (read the documentation/help for details): --active_failover=1
Whether the script should support a SINGLE writer (default, recommended) or multiple writer nodes: --single_writer=1
Whether the writer(s) also work as readers or are fully write-dedicated: --writer_is_also_reader=0
Once we are confident our settings are right, let us put the script in production:

update scheduler set active=1 where id=10;
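Remember that the scheduler table follows the same pattern:

LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;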


One important thing to keep in mind is that the ProxySQL scheduler IS NOT part of the cluster synchronization; as such, we must manually configure that part on each node. Once the script runs, any change done inside ProxySQL to the mysql_servers table will be kept in sync by the ProxySQL cluster. It is strongly recommended not to mix ProxySQL nodes in the cluster with sparse ones, as this may cause unexpected behavior.


At this point, your PXC8 cluster architecture is fully running and will provide you with a very high level of HA and write isolation while preserving the read scaling capabilities.

In part two of this post, we will see the cluster in action and how it behaves in case of standard operations like backup or emergency cases like node crashes.


Continue in part 2 





What you may not know about random number generation in sysbench

Sysbench is a well-known and widely used tool for benchmarking. Originally written by Peter Zaitsev in the early 2000s, it has become a de facto standard for testing and benchmarking. Nowadays it is maintained by Alexey Kopytov and can be found on GitHub.

What I have noticed, though, is that while widely used, some aspects of sysbench are not really familiar to many; for instance, how easy it is to expand/modify the MySQL tests using the Lua extension, or how it internally handles random number generation.

Why this article? 

I wrote this article with the intent to show how easy it can be to customize sysbench to make it what you need. There are many different ways to extend sysbench's use, and one of them is proper tuning of the random ID generation.

By default, sysbench comes with five different methods to generate random numbers. But very often (in fact, almost all the time) none is explicitly defined, and it is even rarer to see any parametrization when the method allows it.

If you wonder “Why should I care? Most of the time defaults are good”, well, this blog post is intended to help you understand why this may be not true.


Let us start.

What methods do we have in sysbench to generate numbers? Currently, the following are implemented, and you can easily check them by invoking the --help option in sysbench:

  • Special 
  • Gaussian
  • Pareto
  • Zipfian 
  • Uniform


Of these, Special is the default, with the following parameters:

  • rand-spec-iter=12   number of iterations for the special distribution [12]
  • rand-spec-pct=1    percentage of the entire range where 'special' values will fall in the special distribution [1]
  • rand-spec-res=75    percentage of 'special' values to use for the special distribution [75]


Given that I like simple, easily reproducible tests and scenarios, all the following data has been collected using these sysbench commands:

  •  sysbench ./src/lua/oltp_read.lua --mysql_storage_engine=innodb --db-driver=mysql --tables=10 --table_size=100 prepare
  • sysbench ./src/lua/oltp_read_write.lua --db-driver=mysql --tables=10 --table_size=100   --skip_trx=off --report-interval=1 --mysql-ignore-errors=all --mysql_storage_engine=innodb --auto_inc=on --histogram --stats_format=csv --db-ps-mode=disable --threads=10 --time=60  --rand-type=XXX run


Feel free to play by yourself with the script instructions and data here.


What is sysbench doing with the random number generator? Well, one of the ways it is used is to generate the IDs to be used in the query generation. So for instance in our case, it will look for numbers between 1 and 100, given we have 10 tables with 100 rows each.

What will happen if I run the sysbench run command as above and change only the --rand-type?

I have run the script and used the general log to collect/parse the generated IDs and count their frequencies, and here we go:
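If you want to reproduce the counting, this is roughly the kind of parsing I mean (a sketch; it assumes the general log goes to a file and only counts the point lookups carrying a WHERE id=N predicate):

grep -oE 'WHERE id=[0-9]+' general.log | awk -F'=' '{print $2}' | sort -n | uniq -c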



Pictures 1-5: the frequency distribution of the generated IDs for each rand-type.


Makes a lot of sense right? Sysbench is, in the end, doing exactly what we were expecting.

Let us check one by one and do some reasoning around them.


The default is Special, so whenever you DO NOT specify a rand-type, sysbench will use Special. What Special does is use a very, very limited number of IDs for the query operations. Here we can actually say it mainly uses IDs 50-51, very sporadically a set between 44-56, and the others are practically irrelevant. Please note that the values chosen are in the middle of the available range, 1-100.

In this case, the spike is focused on two IDs, representing 2 percent of the sample. If I increase the number of records to one million, the spike still exists and is focused on 7,493 IDs, which is 0.74% of the sample. Given that this is even more restrictive, the number of pages involved will probably be more than one.


As declared by the name, if we use Uniform, all the values are going to be used for the IDs and the distribution will be … Uniform.


The Zipf distribution, sometimes referred to as the zeta distribution, is a discrete distribution commonly used in linguistics, insurance, and the modeling of rare events. In this case, sysbench will use a set of numbers starting from the lower (1) and reducing the frequency in a very fast way while moving towards bigger numbers.


Pareto applies the 80-20 rule: the IDs we use are even less distributed and more concentrated in a small segment. In this test, 52 percent of all the IDs used were the number 1, while 73 percent of the IDs used fell within the first 10 numbers.


The Gaussian distribution (or normal distribution) is well known and familiar, and is mostly used in statistics and predictions around a central factor. In this case, the IDs used are distributed in a bell curve, starting from the mid-value and slowly decreasing towards the edges.

The point now is, what for?

Each one of the above cases represents something, and if we want to group them we can say that Pareto and Special can be focused on hot-spots. In that case, an application is using the same page/data over and over. This can be fine, but we need to know what we are doing and be sure we do not end up there by mistake.

For instance, IF we are testing the efficiency of InnoDB page compression in read, we should avoid using the Special or Pareto defaults, which means we must change the sysbench defaults. Suppose we have a dataset of 1TB and a buffer pool of 30GB, and we query the same page over and over: that page was already read from disk, uncompressed, and is available in memory.

In short, our test is a waste of time/effort.

Same if we need to check the efficiency in writing. Writing the same page over and over is not a good way to go.

What about testing the performance?

Well again, are we looking to identify the performance, and against which case? It is important to understand that using a different rand-type WILL impact your test dramatically. So your “defaults should be good enough” may be totally wrong.

The following graphs represent the differences when changing ONLY the rand-type value; the test type, duration, additional options, and number of threads are exactly the same.

Latency differs significantly from type to type:

Picture 9    

Here I was doing reads and writes, and the data comes from the Performance Schema, queried via the sys schema (sys.schema_table_statistics). As expected, Pareto and Special take much longer than the others, given the system (MySQL-InnoDB) is artificially suffering contention on one hot spot.

Changing the rand-type affects not only latency but also the number of processed rows, as reported by the performance schema.

Picture 10

Picture 11


Given all the above, it is important to classify what we are trying to determine, and what we are testing.

If my scope is to test the performance of a system, at all levels, I may prefer to use Uniform, which will equally stress the dataset/DB Server/System and will have more chances to read/load/write all over the place.

If my scope is to identify how to deal with hot-spots, then probably Pareto and Special are the right choices.

But when doing that, do not go blind with the defaults. Defaults may be good, but they are probably recreating edge cases. That is my personal experience, and in that case, you can use the parameters to tune it properly.

For instance, you may still want to have sysbench hammering using the values in the middle, but you want to relax the interval so that it will not look like a spike (Special-default) but also not a bell curve (Gaussian).

You can customize Special and have something like:

Picture 6

In this case, the IDs are still grouped and we still have possible contention, but less impact by a single hot-spot, so the range of possible contention is now on a set of IDs that can be on multiple pages, depending on the number of records by page.
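For reference, a run along these lines relaxes the Special interval (the parameter values here are illustrative, not the exact ones behind Picture 6; play with them against your own dataset):

sysbench ./src/lua/oltp_read_write.lua --db-driver=mysql --tables=10 --table_size=100 --threads=10 --time=60 --rand-type=special --rand-spec-pct=15 --rand-spec-res=50 --rand-spec-iter=2 run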

Another possible test case is based on Partitioning. If, for instance, you want to test how your system will work with partitions and focus on the latest live data while archiving the old one, what can you do?

Easy! Remember the graph of the Pareto distribution? You can modify that as well to fit your needs.

Picture 8

Just by tuning the --rand-pareto value, you can easily achieve exactly what you were looking for and have sysbench focus the queries on the higher values of the IDs.
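In recent sysbench builds this knob is exposed as --rand-pareto-h (default 0.2; check --help on your build). Pushing h towards 1 should shift the weight to the high end of the ID range; for example:

sysbench ./src/lua/oltp_read_write.lua --db-driver=mysql --tables=10 --table_size=100 --threads=10 --time=60 --rand-type=pareto --rand-pareto-h=0.95 run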

Zipfian can also be tuned, and while you cannot obtain an inversion as with Pareto, you can easily go from spiking on one value to an almost equally distributed scenario. A good example is the following:

Picture 7


The last thing to keep in mind, and it looks like I am stating the obvious, but better to say it than omit it: when you change the random-generation parameters, the performance will also change.

See latency details:

Picture 12

Here you can see in green the modified values compared with the original in blue.


Picture 13



At this point, you should have realized how easy it is to adjust the way sysbench handles random generation, and how effective it can be in matching your needs. Keep in mind that what I have mentioned above is valid for any call like the following, such as when we use the sysbench.rand.default call:

local function get_id()
   return sysbench.rand.default(1, sysbench.opt.table_size)
end


Given that, do not just copy and paste strings from other people's articles; think about and understand what you need and how to achieve it.

Before running your tests, check the random method/settings to see what comes out and whether it fits your needs. To make it simpler for me, I use a simple test script: it runs and prints a quite clear representation of the ID distribution.

My recommendation is, identify what matches your needs and do your testing/benchmarking in the right way.


First and foremost, a reference to the great work Alexey Kopytov is doing on sysbench.

Zipfian articles:


Percona article on how to extend tests in sysbench

The whole set of material I used for this article is on GitHub.

 Understand dirty reads when using ProxySQL

Recently I was asked to dig a bit into WHY some users were getting dirty reads when using PXC and ProxySQL.

While the immediate answer was easy, I took that opportunity to dig a bit more and build up a comparison between different HA solutions.

For the ones who cannot wait, the immediate answer is... drum roll: PXC is based on Galera replication and, as I have been saying for a VERY long time (since 2011), Galera replication is virtually synchronous. Given that, if you are not careful, you MAY hit some dirty reads, especially if it is configured incorrectly.

There is nothing really bad here; we just need to know how to handle it right.

In any case the important thing is to understand some basic concepts. 

Two ways of seeing the world (the theory)

Once more, let us talk about the data-centric approach versus the data-distributed one.

We can have one data state:


Where all the data nodes see a single state of the data. That is it: you will consistently see the same data at a given moment T in time, where T is the moment of commit on the writer.

 Or we have data distributed:

data diff

Where each node has an independent data state. This means that data can be visible on the writer but not yet visible on another node at the moment of commit, and that there is no guarantee that data will be passed over in a given time.

The two extremes can be summarized as follows:

Tightly coupled database clusters

  • Data-centric approach (single state of the data, distributed commit)
  • Data is consistent in time across nodes
  • Replication requires a high-performing link
  • Geographic distribution is forbidden

Loosely coupled database clusters

  • Single-node approach (local commit)
  • Data state differs by node
  • A single node's state does not affect the cluster
  • The replication link doesn't need to be high performance
  • Geographic distribution is allowed


Two ways of seeing the world (the reality)

Given that life is not perfect and we do not have only extremes, the most commonly used MySQL solutions find their place covering different points in a two-dimensional Cartesian coordinate system:

(graph: the most used MySQL solutions positioned by coupling level and HA level)

This graph has the level of high availability on the X axis and the level of loose-tight coupling on the Y axis.

As said I am only considering the most used solutions:

  • MySQL – NDB cluster
  • Solutions based on Galera 
  • MySQL Group replication / InnoDB Cluster
  • Basic Asynchronous MySQL replication 

InnoDB Cluster and Galera are present in two different positions, while the others each take a unique position in the graph. At the two extreme positions we have standard replication, which is the least tight and least HA, and NDB Cluster, which is the tightest solution and the highest HA.

Translating this into our initial problem, it means that when using NDB we NEVER have dirty reads, while when we use standard replication we know they will happen.

Another aspect we must take into consideration when reviewing our solutions is that nothing comes easy. So, the more we want to move towards the top-right corner, the more we need to be ready to give something up. This can be anything: performance, functionality, ease of management, etc.

When I spoke about the above for the first time, I got a few comments; the most common was related to why I decided to position them that way and HOW I tested it.

Well, initially I had a very complex approach, but thanks to the issue with the dirty reads and the initial work done by my colleague Marcelo Altman, I can provide a simple empirical way that you can replicate; just use the code and instructions from HERE.


Down into the rabbit hole 

The platform

To perform the following tests, I have used:

  • A ProxySQL server
  • An NDB cluster with 3 MySQL nodes and 6 data nodes (3 node groups)
  • A cluster of 3 PXC 5.7 nodes, single writer
  • An InnoDB Cluster with 3 nodes, single writer
  • A 3-node MySQL replica set
  • 1 application node running a simple Perl script

All nodes were connected via a dedicated backbone network, separate from the front end receiving data from the script.

The tests

I ran the same simple test script with the same set of rules in ProxySQL.
For Galera and InnoDB Cluster, I used the native support in ProxySQL, also because I was trying to emulate the issues I had been asked to investigate.

For standard replication and NDB, I used the mysql_replication_hostgroup settings, with the difference that the latter had 3 writers, while basic replication has only 1.

Finally, the script was a single-threaded operation: creating a table in the test schema, filling it with some data, then reading the IDs in ascending order, modifying each record with an update, and trying to read it again immediately after.

When doing that with ProxySQL, the write goes to the writer host group (in our case 1 node, also for NDB, even if this is suboptimal), while reads are distributed across the read host group. If for any reason an UPDATE operation is NOT yet committed on one of the nodes in the reader HG, we get a dirty read.
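Conceptually, each iteration of the script boils down to something like this (a sketch; the table and column names are illustrative):

UPDATE test.dirty_test SET counter = counter + 1 WHERE id = 1; -- routed to the writer HG
SELECT counter FROM test.dirty_test WHERE id = 1;              -- may be served by any node in the reader HG
-- if the SELECT returns the pre-UPDATE value, we count a dirty read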

Simple no?!

The results


(graph: dirty reads and average lag compared across the tested solutions)


Let us review the graph. The number of dirty reads decreases significantly moving from left to right in the graph, dropping from 70% of the total with basic replication to 0.06% with Galera (wsrep_sync_wait=0).

The average lag is the average time taken from the update commit to when the script returns the read with the correct data. 

It is interesting to note a few factors:

  1. The average time cost in GR between EVENTUAL and AFTER is negligible
  2. The Galera average cost between sync_wait=0 and sync_wait=3 is 4 times longer
  3. NDB has an average cost in line with the others, BUT its max lag is very low, so the fluctuation due to synchronization is minimal (with respect to the others)
  4. GR and Galera can have 0 dirty reads, but they need to be configured correctly

Describing the scenario a bit more: MySQL NDB Cluster is the best, period! It is less performant in a single thread than PXC, but this is expected, given NDB is designed to handle a HIGH number of simultaneous transactions with very limited impact. Aside from that, it has 0 dirty reads and no appreciable lag between writer commit and reader.

On the other side of the spectrum, we have MySQL replication with the highest number of dirty reads; performance was still not bad, but the data is totally inconsistent.

Galera (the PXC implementation) is the fastest solution when single-threaded and has only 0.06% dirty reads with WSREP_SYNC_WAIT=0, and 0 dirty reads with WSREP_SYNC_WAIT=3.
With Galera, what we are seeing, and paying for, is like that by design. A very good presentation from Fred Descamps explains how the whole thing works.

This slide is a good example:

(slide: the Galera apply and commit flow)

By design, the apply and commit finalize steps in Galera may have (and do have) a delay between nodes. When changing the parameter wsrep_sync_wait, as explained in the documentation, the node initiates a causality check, blocking incoming queries while it catches up with the cluster.

Once all the data on the node receiving the READ request is commit_finalized, the node performs the read.

MySQL InnoDB Cluster is worth a bit of discussion. In MySQL 8.0.14, Oracle introduced the parameter group_replication_consistency (please read the documentation); in short, MySQL Group Replication can now handle the behavior of write transactions and read consistency in different ways.

Relevant to us are two settings:

    • EVENTUAL: Both RO and RW transactions do not wait for preceding transactions to be applied before executing. This was the behavior of Group Replication before the group_replication_consistency variable was added. A RW transaction does not wait for other members to apply a transaction. This means that a transaction could be externalized on one member before the others.
    • AFTER: A RW transaction waits until its changes have been applied to all of the other members. This value has no effect on RO transactions. This mode ensures that when a transaction is committed on the local member, any subsequent transaction reads the written value or a more recent value on any group member. Use this mode with a group that is used for predominantly RO operations, to ensure that applied RW transactions are applied everywhere once they commit. This could be used by your application to ensure that subsequent reads fetch the latest data, which includes the latest writes.


As shown above, using AFTER is a win and will prevent dirty reads at a small cost.
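A minimal example, using the standard MySQL 8.0.14+ syntax:

SET SESSION group_replication_consistency = 'AFTER';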


ProxySQL has native support for Galera and Group Replication, including identification of the transactions/writesets behind them. Given that, we may think ProxySQL SHOULD prevent dirty reads, and it actually does, when the window is large enough to be caught.

But dirty reads can happen in such a small time window that ProxySQL cannot catch them.

As indicated above, we are talking about microseconds or 1-2 milliseconds. To catch such a small window, the ProxySQL monitor would have to pollute the MySQL servers with requests, and would still possibly miss it given network latency.

Given the above, the dirty read factor should be handled internally, as MySQL Group Replication and Galera do, providing the flexibility to choose what to do.

There are always exceptions, and in our case the exception is basic MySQL replication. In that case, you can install and use the ProxySQL binlog reader, which can help keep the READS under control, but will NOT be able to prevent dirty reads happening in a very small window.


Nothing comes for free; dirty reads are one of “those” things that can be prevented, but we must be ready to give something back.

It doesn't matter what, but we cannot have it all at the same time.

Given that, it is important to identify case by case WHICH solution fits better; sometimes it will be NDB, other times Galera or Group Replication. There is NO silver bullet and there is no single way to proceed.

Also, when using Galera or GR, the more demanding settings to prevent dirty reads can be set at the SESSION level, reducing the global cost.


  • NDB is the best, but it is complex and fits only specific usages: a high number of threads, simple schema definitions, an in-memory dataset
  • Galera is great and helps in joining performance and efficiency. It is a fast solution and flexible enough to prevent dirty reads, at some cost.
    Use WSREP_SYNC_WAIT to tune that (see the documentation).
  • MySQL Group Replication comes in close behind; we can avoid dirty reads, at a small cost, using SET group_replication_consistency='AFTER'.
  • Standard replication can use the ProxySQL binlog reader; it will help, but will not prevent dirty reads.

To be clear:

  • With Galera, use WSREP_SYNC_WAIT=3 for read consistency
  • With GR, use group_replication_consistency='AFTER'

I suggest using SESSION, not GLOBAL, and playing a bit with the settings to understand well what is going on.
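For instance, on the Galera/PXC side the session-scoped setting is simply:

SET SESSION wsrep_sync_wait = 3;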


I hope this article has given you a better understanding of the solutions we have out there, so that you will be able to make an informed decision when in need.



This week is almost over, and it passed with two nice events that I attended.

PerconaLive 2019 in Amsterdam Schiphol and ProxySQL Technology Day in Ghent.

What are my takeaways on both events?

Well, let us start with PerconaLive first.


The venue was nice, the hotel comfortable, and the rooms were nice as well, clean and quiet, allowing a good rest and at the same time decent space to work if you have to. The conference was right there, so I was able to attend, work, and rest without any effort. A+.

The hotel was far away from the city and from Amsterdam's distractions, which may be a negative thing for many, but honestly, IF I spend money and time of my life to attend a conference, I want to get the most out of it. Having the chance to stay focused on it and to talk with all the attendees, customers, and experts is a plus. If you want to go around and be a tourist, take an extra day after or before the conference, and do not try to do it while you should be working.

Attendees & Speakers

Attendees were curious and inquisitive; I noticed most of the talks were interactive, with people asking questions during the presentations and after. All good there. A couple of comments I have are directed more towards some speakers. First, I think some of them should rehearse their talks more; second, please please please STOP reading the slides. You should refer to them, but speak towards the audience, not the screen, and speak! Give your opinions, your thoughts; they can read your bullet points, there is no need for you to read the text to them.

Outside the rooms, we had a good mix of people talking and interacting. Small groups were reshuffling quite often, which in the end resulted in better exchanges. Nevertheless, I think we should do better; sometimes we, and I am referring to people like me who are the “old ones” of the MySQL conferences, should do more to help the customers connect to each other and to other experts.


I am not going to do a full review of the keynote sessions, but one thing comes to my mind over and over, and to be honest, it makes me feel unhappy and a bit upset.

The discussion we are having over and over about the open source model, and how some big giants (aka Amazon AWS, Google Cloud, Microsoft Azure, but not only) use the code developed by others, make gazillions, and give back crumbs to the community that developed it, makes me mad.

We have not addressed it correctly at all, and this is because there is a gap not only in the licensing models but also in international legislation.

This is because open source was not a thing in the past for large businesses. It is only recently that open source has finally been recognized as a trustworthy and effective solution for large, critical businesses as well.

Because of that, and the obvious interests of some large groups, we lack legislation that could limit or even prevent the abuse done by such large companies.

Technical Tracks

Anyhow, back to the technical tracks. Given this was a Percona Live event, we had a few different technologies present: MySQL, Postgres, and MongoDB. It goes without saying that MySQL was the predominant one, and the more interesting tracks were there. This is not because I come from MySQL, but because the ecosystem was helping the track to be more interesting.
Postgres had some interesting talks, but let us say it clearly: we had just a few people from that community.

Mongo was really low in attendees. The number of attendees during the talks and the absence of the MongoDB community clearly indicated that the event is not in the area of interest of MongoDB users.

Here I want to give some unsolicited advice. The fact that Percona supports multiple technologies is a great thing, and Percona does that in the best and most professional way possible.
But this doesn't mean the company should dissipate its resources and create confusion and misunderstanding when organizing events like Percona Live.

Do all the announcements and off-session presentations you want to explain what the company does (and it does them in a great way) to serve the other technologies, but keep the core of the conference what it should be: a MySQL conference.

Why? It is simple: Percona has been leading in that area since MySQL AB vanished, and in the initial years it did a great job.

The community has benefitted a lot from it, and customer understanding and adoption have improved significantly because of Percona Live. With this attempt at building up a mixed conference, Percona seems to have lost its compass during the event. Which is not true!!! But that is the impression that comes across in the community, and more importantly, I am not sure it is good for customers.

At the same time, the attempt is doomed to fail, because both Postgres and Mongo already have strong communities. So instead of trying to have the Sun gravitate around the Earth, let us have the Earth gravitate around the Sun.

My advice is to be more present at the Postgres/MongoDB events, as a sponsor, collaborating with the community, with talks and innovations (like the Percona packaging for Postgres or MongoDB backup), making the existing conferences stronger with the Percona presence.

That will drive more interest towards what Percona is doing than trying to pull the Sun out of its orbit.


About MySQL: as usual, we had a lot of great news and in-depth talks. We can summarize them by saying that MySQL/Oracle and Percona software are growing, becoming more efficient and ready to cover enterprise needs even better than before.

The MySQL 8.0.17 version contains great things, not only the clone plugin. The performance optimizations indicated by Dimitri KRAVTCHUK in his presentation, and how they were achieved, are a very important step, because FINALLY we have broken the barrier and started to touch core blocks that had been causing issues forever.

At the same time, it is sad to see how MariaDB is accumulating performance debt with respect to the other distributions; a sign that the decision to totally diverge is not paying off, at least in the real world, given that, sales-wise, they are doing better than ever.

As Dimitri said about himself, “I am happy not to use MariaDB”. I second that.

InnoDB Cluster

The other interesting topic coming up over and over was InnoDB Cluster and Group Replication. The product is getting better and better, and yes, it still has some issues, but we are far from the initial version in 5.7, light-years far. After some in-depth talks about it with the Oracle guys, we can finally say that, IF correctly tuned, GR is a strong HA solution; at the moment, and on paper, stronger than anything based on Galera.

But we still need to see, performance-wise, if the solution keeps its promises under load, or IF there will be a need to relax the HA constraints, lowering the HA efficiency. The latter would make GR less HA-efficient than Galera, but still a valid alternative.


About Galera, I noticed a funny note in the Codership slides at the booth, which says “Galera powers PXC”, or something like that.

Well, of course, guys!!! We all know who the real code producer of Galera is (Codership). Other companies adopt the core and change a few things around to “customize” the approach.

Packaging that, adding some variants, will never change the value of what you do. Just think about the last PXC announcement, which includes: “PXC8 implements Galera 4”.

It seems to me we have to sit at a table and resolve a bit of an identity crisis on both sides.

Vitess is also growing, and the interest around it as well. Morgo is doing his best to help the community understand what it does and how to approach it. Well done!

ProxySQL talks

There were a lot of talks about ProxySQL as well; I did one, but there were many. ProxySQL is confirming its role as THE solution when you need to add a “proxy/router/firewall/HA/performance improvement” layer. I am still wondering what Oracle is waiting for to replace Router and start using a more performant and flexible product like ProxySQL.

ProxySQL Technology Day

Finally, let us talk about the ProxySQL Technology Day.

The event was organized in Ghent (a shame it was not at the same venue as PLEU) for the 3rd of October, after the close of PLEU 2019.

I spoke with many PLEU attendees, and a lot of them were saying something like this: “I would LOVE to attend the event if it were in Amsterdam, but I cannot make it to Ghent at the last minute.” Well, that is a bit of a shame, because the event was promoted and announced in time, but I have to say I understand that not everyone is willing to move away from Amsterdam, take the train, and travel to historic Ghent.

Anyhow, the ProxySQL Technology Day was actually very well attended, not only in the number of people there, but also in the companies participating. We had Oracle, Virtual Health, Percona, Pythian, and obviously the ProxySQL guys.

It was also interesting to see the different levels of attendees, from senior DBAs and technical managers to students.

The event took place in the late afternoon, starting at 5 PM, but I think ProxySQL should plan the next one as a full-day event, probably a bit more structured in the line-up of the talks; I really see it as a full-day event, with real cases eventually presented by customers. This is because real life always wins over any other kind of talk, and because a lot of attendees were looking for the chance to share real cases.

The other great thing I saw happening there was during the pizza time (thanks, Pythian!). The interaction between the people was great; the involvement and the interest were definitely worth the trip. I answered more technical questions during the pizza there than in the two days of PLEU. No barriers, no limits. I love that.

Given all the above: well, folks in Amsterdam, it was great to see your pictures on FB or whatever social platform, but trust me, you missed something!


PLEU 2019 in Amsterdam was a nice conference. It clearly shows we need to keep the focus on MySQL and diversify the efforts for the other technologies. It also shows that collaboration pays and fighting doesn't.

Some things could have been done better, especially in the session scheduling and in following the community's indications, but those are workable bumps in the road that should be addressed and clarified.
ProxySQL is doing great and getting better with time; just this week ProxySQL 2.0.7 was announced, and it includes full native support for AWS Aurora and AUTODISCOVERY.

Wow so excited … must try it NOW!

Good MySQL to all…

Missed Opportunity

It is “that” time of the year... when autumn is just around the corner and temperatures start to drop.
But it is also the time for many exciting conferences in the MySQL world.
We have Oracle Open World this week, with many interesting talks around MySQL 8.
Then in 2 weeks we will be at Percona Live Europe in Amsterdam, which is not a MySQL (only) conference anymore.
Percona has moved to a more “polyglot” approach, not only in its services but also during its events.
This is obviously an interesting experiment that allows people from different technologies to meet and discuss. At the end of the day, it is a quite unique situation and opportunity; the only negative effect is that it takes space from the MySQL community, which is suffering a bit in terms of space, attendees, and brainstorming focused on MySQL deep dives.
That said, there are a few interesting talks I am looking forward to attending:
• Security and GDPR, many sessions
• MySQL 8.0 Performance: Scalability & Benchmarks
• Percona will also present Percona cluster version 8, which is a must-attend session
Plus the other technologies, in which I am only marginally interested.

After Percona Live in Amsterdam there will be a ProxySQL Technology Day in Ghent. Ghent is a very nice city and worth a visit; reaching it from Amsterdam is only a 2-hour train ride. Given the event is on the 3rd of October, I will just move there immediately after PLEU.
The ProxySQL event is a half-day event starting at 5 PM, with 30-minute sessions focused on best practices for integrating the community-award-winning solution “ProxySQL” with the most common scenarios and solutions.
I like that, because I am expecting to see and discuss real cases and hands-on issues with the participants.

So, a lot of things, right?
But once more, I want to raise the red flag about the lack of a MySQL community event.
We do have many events; most of them follow company focus, and they are sparse and not well synchronized. Given that, more than anything else, we miss A MySQL event: a place we can group around, that not only attracts DBAs from companies who use it (and sometimes abuse it), but is also a place for all of us to discuss and coordinate the efforts.

In the meantime, see you in Amsterdam, then Ghent, then Fosdem then …
Good MySQL to all
