AlwaysOn: Impact of mapping reporting workload on Readable Secondary to Snapshot Isolation


In my previous blog, I described how a readable secondary eliminates potential REDO-thread blocking from DML running on the primary replica by mapping all isolation levels used by the reporting workloads to snapshot isolation. While eliminating REDO-thread blocking under concurrent DML on the secondary replica was essential, it comes with some overhead, as described in this blog.

The first thing to understand is that if you do not enable the secondary replica for read workloads, no snapshot isolation mapping is needed and therefore there is no additional overhead. Things get interesting when you enable the secondary replica for read workloads. You may recall that both snapshot isolation (SI) and read committed snapshot isolation (RCSI) are based on row versioning. When a row is modified, its previous version is saved in the version store, which is backed by TempDB, and a 14-byte pointer is set from the modified row to the versioned row. If the data row is modified again, the process repeats. The key point to note here is that newly inserted or modified data rows carry a 14-byte overhead. There are four possible scenarios to consider:
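One quick way to see the mapping in action: on a readable secondary, every read runs under snapshot isolation no matter what isolation level or locking hints the session requests. A minimal sketch (the table name dbo.Sales is an assumption; this requires a live readable secondary):

```sql
-- Run against the readable secondary replica.
-- Even an explicit READ UNCOMMITTED request and a NOLOCK hint are
-- overridden: the query still executes under snapshot isolation,
-- reading row versions instead of taking shared locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT COUNT(*)
FROM dbo.Sales WITH (NOLOCK);  -- hint is ignored on a readable secondary
```

This is why the REDO thread cannot be blocked by reporting queries: readers never hold locks that conflict with the REDO thread's writes.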

  •  Secondary replica is not enabled for read workload: As shown in the figure below, there is no additional overhead on the primary replica.

  • Secondary replica is enabled for read workload: As you will note in the figure below, a 14-byte overhead is added to the row on both the primary and the secondary replica. Additionally, a row version is generated by the REDO thread on the secondary replica as it processes the UPDATE/DELETE operation from the primary. There are two interesting points to note here. First, the row version is generated only on the secondary replica. The reason is simple: since neither RCSI nor SI has been enabled on the primary replica, there is no need to create row versions there. This also means that, apart from the 14-byte row overhead, there is no additional impact (i.e., row-version generation and TempDB overhead) on the primary replica. Second, the 14-byte overhead must be added to new/modified rows on the primary because the primary and secondary replicas must be physically identical.


  • Primary replica already has RCSI or SI enabled but no readable secondary: This case is a bit simpler because the 14-byte versioning overhead is already added to the data rows, independent of the status of the secondary replica. As shown in the picture below, even if the secondary replica is not enabled for read workloads, the rows on the secondary replica still carry the 14-byte overhead, as they must, but no row versions are generated there because the read workload has not been enabled.


  • Primary replica already has RCSI or SI enabled with readable secondary: This case is similar to the previous configuration except that row versions will also need to be generated on the secondary replica.
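The scenarios above are driven by two settings. A hedged sketch of both (the availability group name AG1, replica name SQLNODE2, and database name SalesDB are assumptions; these statements require an existing AlwaysOn setup):

```sql
-- Enable the secondary replica for read workloads (scenarios 2 and 4).
-- ALLOW_CONNECTIONS = ALL permits any connection; READ_ONLY would
-- restrict access to connections that declare read intent.
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));

-- Enable row versioning on the primary database (scenarios 3 and 4).
-- Run on the primary replica; the setting propagates to the secondary
-- because the replicas are physically identical.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
-- or, for RCSI:
-- ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;
```

Note that SET READ_COMMITTED_SNAPSHOT ON needs to be the only active connection in the database to complete, so it is typically run during a maintenance window.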

There is no change or additional guidance for managing and provisioning for row versioning in this configuration compared to what we recommend for Snapshot Isolation or RCSI. Please refer to Books Online for details of SI and RCSI.
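The same TempDB monitoring you would use for SI/RCSI applies here, run against the readable secondary. A minimal sketch using two standard DMVs:

```sql
-- Total space reserved by the version store in TempDB
-- (pages are 8 KB each).
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_reserved_kb
FROM tempdb.sys.dm_db_file_space_usage;

-- The objects generating the most row versions, by total versioned
-- record length; useful for finding the hot tables on the secondary.
SELECT TOP (10)
       database_id,
       rowset_id,
       aggregated_record_length_in_bytes
FROM sys.dm_tran_top_version_generators
ORDER BY aggregated_record_length_in_bytes DESC;
```

If the version store grows without bound, the usual suspect is the same as with SI/RCSI on a primary: a long-running read transaction preventing version cleanup.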


Sunil Agarwal


Comments
  • Hi Sunil

    Great explanations!

    I found it very difficult to get information about the REDO_THREAD_PENDING_WAIT state. Even Microsoft's online wait-state documents do not mention that state, and Red Gate's help file also ignores it.

    This wait state is the second highest in our AlwaysOn configuration (one readable secondary), and we have been wondering whether it is normal or can be improved (wait-state capturing according to

    From the reasons you mention in your three-part blog, we can eliminate disk and network bottlenecks at our site.

    The question, of course, is what the counter represents: if it counts the redo thread waiting to be assigned new work, a high value would be good. If, however, it counts the redo thread's inability to finish assigned work, we would like to find ways to lower that count.

    In another thread this wait counter was described as being used to keep multiple secondaries at about the same data state. That is not applicable to our own scenario, but can you confirm it as another reason for the counter to be high?


