Thursday 17 December 2015

Concurrency Control in Dynamics AX

The difference between optimistic concurrency control (OCC) and pessimistic concurrency control (PCC)


  1. Optimistic concurrency control (OCC) helps increase database performance.
  2. OCC locks records only from the time the real update operation starts (i.e. during the update process). The records are therefore locked for a shorter duration and remain available to other processes for update.
  3. OCC makes it possible for other processes to update a record even after it has been fetched. The disadvantage is that if another process updates the same records after they are fetched, the update fails, and the retry takes extra time, which hurts performance.
  4. With OCC, fewer resources are used to hold locks during the update process.
  5. The table property that controls this is OccEnabled, which defaults to Yes.
  • Pessimistic concurrency control locks records immediately when they are fetched from the database for update.
  • To set the concurrency model for a select statement, replace forupdate with optimisticlock or pessimisticlock as shown below.
For example:
    select optimisticlock custTable;
    select pessimisticlock custTable;
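The failed-update-and-retry behavior described in point 3 can be sketched as follows. This is only a minimal illustration: the account number '1101' and the creditMax value are arbitrary placeholders, and a production job would normally also cap the number of retries.

```
static void OccRetrySketch(Args _args)
{
    CustTable custTable;
    ;
    try
    {
        ttsbegin;
        // optimisticlock: the record is not locked at fetch time
        select firstonly optimisticlock custTable
            where custTable.AccountNum == '1101';   // placeholder account
        custTable.creditMax = 50000;
        custTable.update();   // fails if another process updated the record meanwhile
        ttscommit;
    }
    catch (Exception::UpdateConflict)
    {
        // Another process changed the record after we fetched it;
        // rerun the whole try block so the record is fetched again.
        retry;
    }
}
```

The retry statement jumps back to the start of the try block, so the record is re-read with its new RecVersion before the update is attempted again.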





Preventing locking and blocking and their implications on the transaction log



When looking at the following examples, please remember one thing: for every ttscommit, the Microsoft SQL Server transaction log is written to!


Not every write is an immediate physical write, but it is definitely a logical write. The transaction log writer of Microsoft SQL Server writes sequentially, which means it cannot be parallelized in any way. So the fewer transactions are opened, the faster the changes are processed in general, but the longer the exclusive locking time will be.
Example 1: Longest exclusive locking / blocking time but fastest execution

The fast execution comes from writing only once to the transaction log of Microsoft SQL Server. At the same time you work in a very safe way, as you fully align with the concurrency model you are using.



static void UpdateCreditMax(Args _args)
{
  CustTable custTable;
  ;
  ttsbegin;
  while select forupdate custTable where custTable.creditMax == 0
  {
     if (custTable.balanceMST() < 10000)
     {
        custTable.creditMax = 50000;
        custTable.update();
     }
  }
  ttscommit;
}
Again, please be aware that only this example has a low transactional workload, as only one write to the transaction log file takes place!






Example 2: Most roundtrips, short locking time

This example is (was) mainly useful on Dynamics AX 3.0 and causes a transaction overhead. It makes little sense on Dynamics AX 4.0 or higher due to the changes in the concurrency model.


static void UpdateCreditMax(Args _args)
{
  CustTable custTable;
  CustTable updateableCustTable;
  ;

  while select custTable where custTable.creditMax == 0
  {
    if (custTable.balanceMST() < 10000)
    {
      ttsbegin;
      select forupdate updateableCustTable where updateableCustTable.AccountNum == custTable.AccountNum;
      updateableCustTable.creditMax = 50000;
      updateableCustTable.update();
      ttscommit;
    }
  }
}




Example 3: Mix between Example 1 and Example 2

This example is mainly useful on Dynamics AX 4.0 and higher but causes a transaction overhead. In the select statement below you could also replace optimisticlock with forupdate, but in that case you would not enforce optimistic concurrency.



static void UpdateCreditMax(Args _args)
{
  CustTable custTable;
  ;

  while select optimisticlock custTable where custTable.creditMax == 0
  {
    if (custTable.balanceMST() < 10000)
    {
      ttsbegin;
      custTable.creditMax = 50000;
      custTable.update();
      ttscommit;
    }
  }
}

Some comments on Example 2 and Example 3




Both examples are very effective with regard to locking and blocking. Overall, however, they will perform more slowly than Example 1, and there is also the chance of running into a last-writer-wins scenario. You also cannot use them if you need all of the updates to be committed or cancelled as a single transaction.



You should evaluate these examples only if you think that locking / blocking is an issue for you and if you are sure that you can accept the risks of partly bypassing your concurrency model.



When you use this approach and run into performance issues on your Microsoft SQL Server, you should have a look at the DMV sys.dm_os_wait_stats. More precisely, look for the WRITELOG wait type.


select * from sys.dm_os_wait_stats order by wait_time_ms desc



If WRITELOG is very high compared to the other wait stats, you should reduce code following Example 2 and 3 as much as possible and replace it with the code from Example 1.


In general this DMV can also give you a good overview of whether you are suffering from transactional overhead or have an issue on the drive where the log file resides.
For more information see sys.dm_os_wait_stats.
Update_recordset is a very effective way to update multiple rows at once


The examples in the last section were row-based update operations, which were very common in Dynamics AX 3.0. From Dynamics AX 4.0 onward, however, you should use set-based update operations wherever possible. This is especially effective because Microsoft SQL Server basically works set-based, not row-based.


There are far fewer round trips between the Dynamics AX kernel and the database if you use the update_recordset statement for updating multiple rows instead of the while select forupdate X++ equivalent.
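As a hedged sketch, the row-based loop from Example 1 could be rewritten set-based roughly like this. Note one assumption: the balanceMST() filter used in the examples above is a table method and cannot be evaluated inside update_recordset, so this sketch only applies the creditMax condition.

```
static void UpdateCreditMaxSetBased(Args _args)
{
    CustTable custTable;
    ;
    ttsbegin;
    // One statement, one round trip: all matching rows are updated
    // in a single set-based operation on the database server.
    update_recordset custTable
        setting creditMax = 50000
        where custTable.creditMax == 0;
    ttscommit;
}
```

Because the whole update runs as a single statement inside one transaction, it also writes to the transaction log only once, like Example 1, while avoiding the per-row round trips of a while select forupdate loop.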

Creating & Using the find() method in Dynamics AX 2012


Hi,
1. Create a table.
2. Use two EDTs and make one of them a unique index by setting the AllowDuplicates property to No.
3. Create a find method on the table using the code pattern below.

static TableName find(UsedEDT          _UsedEDT,
                      boolean          _forUpdate = false,
                      ConcurrencyModel _concurrencyModel = ConcurrencyModel::Auto)
{
    TableName tableName;

    if (_UsedEDT)
    {
        if (_forUpdate)
        {
            tableName.selectForUpdate(_forUpdate);
            if (_concurrencyModel != ConcurrencyModel::Auto)
            {
                tableName.concurrencyModel(_concurrencyModel);
            }
        }
        tableName.selectLocked(_forUpdate);

        select firstonly tableName
            index hint UsedEDTIndex
            where tableName.UsedEDT == _UsedEDT;
    }

    return tableName;
}
/************************ Using the find method **********************/
Using a display method together with find(), we can retrieve a value with the following code on the table.
display Name name()
{
    return TableName::find(this.UsedEDT).name();
}
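The find method can also drive an update inside a transaction by passing true for _forUpdate. A hedged usage sketch, assuming a table built with the pattern above ('ABC-001' is a placeholder key value, and the field being changed is hypothetical):

```
static void UpdateViaFind(Args _args)
{
    TableName tableName;
    ;
    ttsbegin;
    // Pass true so the record is selected for update inside the transaction
    tableName = TableName::find('ABC-001', true);
    if (tableName)
    {
        // ...modify the fields you need here...
        tableName.update();
    }
    ttscommit;
}
```

Checking the buffer before calling update() guards against the case where no record matches the key, since find() then returns an empty buffer.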

Memory leak profiler tool for AX 2012

For AX, please refer to this link:


http://dev.goshoom.net/en/2012/12/memory-profiling/


This walkthrough shows how to locate a memory leak using a sample application called QueryBee. QueryBee is a simple WinForms application for running queries against SQL Server databases. It has a database connection dialog and a query window to query the database. We know our application is leaking memory, because every time we open a query window and close it again, our memory usage increases.


When you open the profiler, you first see the Startup screen:



Figure 1. The ANTS Memory Profiler startup screen.


Here, there’s a list of your recent profiling sessions so you can re-run them easily. For this example, we’ll start a new session by clicking New profiling session.
The New profiling session screen is displayed:



Figure 2. It's easy to configure and start a new profiling session.


All we need to do is point it at QueryBee, choose our performance counters, and click Start profiling.
The profiler starts up QueryBee and begins collecting performance counter data:



Figure 3. Whilst profiling, ANTS Memory Profiler collects performance counter data. The profiler is telling us that it's profiling our application. There are also some useful instructions on this screen telling us to take and compare snapshots.




Taking and comparing memory snapshots is a key activity when looking for memory leaks, so our approach will be as follows:
  1. Wait for QueryBee to open.
  2. Take a first snapshot without using the application; this first snapshot will be used as a baseline.
  3. Within QueryBee, perform the actions that we think cause the memory leak.
  4. Take a second snapshot.
  5. Examine the comparison that the profiler shows us after it has finished taking and analyzing the second snapshot.


So, QueryBee is open, sitting in our system tray.
At this point, we take a first snapshot, which we will use as a baseline for comparison with later snapshots.




When we click the Take Memory Snapshot button, the memory profiler forces a full garbage collection and takes a snapshot of the heap memory it is using.





Figure 4. Results from our first snapshot – Summary screen.


Now, we go back to QueryBee and perform the tasks which we think cause the memory leak.
We open up QueryBee and connect to a database.





Figure 5. QueryBee – Database connection dialog.
Figure 6. QueryBee – The query window.


The query window opens up and we enter and execute a SQL query.
We obtain some results and close the query window.



Figure 7. QueryBee – The results are displayed in a grid.


We close the query form.
At this point, the window is gone. We expect the memory usage to fall back to where it was in the first snapshot, but that is not the case.



Figure 8. Despite closing our query window, the memory usage has not fallen.
So what's happening here? We take a second snapshot and get the results.



Figure 9. The summary pane compares the results of the two snapshots.
A number of problems are highlighted by the summary screen.
  • We can see a large memory increase between snapshots, which we noticed on the timeline (top left).
  • The Large Object Heap appears to be fragmented, which could cause problems (top right).
  • The Generation 2 heap accounts for a large proportion of memory usage - often indicating objects are being held onto for longer than necessary (bottom left).


We can choose to select one of the largest classes which are shown to us in the bottom right of the screen, but instead we switch to the class list to find out more. The class list gives us a fuller picture of what's in the snapshot.
We're interested in objects which have been created since the baseline snapshot, so we need to look at types which have more instances in the second snapshot. We therefore sort by Instance Diff in decreasing order.



Figure 10. The class list allows you to compare memory usage in both snapshots in more detail.
The String class has been placed at the top of the list, with over 300,000 new instances. We want to understand why there is such a large increase, so we load the Instance Categorizer for the String class by clicking the Instance Categorizer icon.

Figure 11. The Instance Categorizer shows chains of instances sorted by their shortest path to GC Root.
We see that over 21MB of the String class are held in memory by the same shortest path back to GC Root, via our QueryForm and ConnectForm. We select Show the Instances on this Path to view a list of every instance in the previous category.

Figure 12. The instance list view shows us a set of strings which we recognize as coming from our SQL Database.
The Instance List is showing us data which QueryBee had retrieved from the SQL Database, but that data should have been destroyed when QueryForm was closed. We select one of the instances and click the Instance retention graph icon to generate an Instance Retention Graph.

Figure 13. The instance retention graph.
Using the instance retention graph, we should be able to find out what is still referencing our String instances. Then, we'll be able to go back into our code to break the chain of references that is keeping them in memory.
We start at the bottom and work our way up the graph until we find a reference that needs to be broken. We'll just need to break the chain at one point to allow the garbage collector to clean up everything below that.
By navigating up, we can see the string is being held onto by QueryForm, even though that should have been released from memory. Looking a little further up, the graph is telling us that a System.EventHandler is referencing QueryForm and, if we step up one more level, it's telling us that the event handler is referenced by our ConnectForm instance – this is the form that asked us for the database connection details. In other words, the ConnectForm is holding onto the QueryForm via an Event Handler.
If we look at this node more closely, we see that it's actually being referenced by the ConnectForm's Foregrounded field.
Let's find this Foregrounded event in our code. We right-click on the QueryBee.ConnectForm node and open the ConnectForm source code in Visual Studio.

Figure 14. Foregrounded event in the ConnectForm source code.
The profiler automatically jumps to the Foregrounded event. We check where it is being used by right-clicking and selecting Find All References.

Figure 15. The Foregrounded event is used in three places.
We've got three usages and we find that the last usage is where QueryForm registers for the Foregrounded event, but it doesn't look like it unregisters. If we fix that, then the memory leak should go away.
Once we're done, we need to rebuild, but first we need to stop profiling QueryBee so that the executable isn't locked. We go back to Profiler and click on the Stop Profiling button.
Then, we rebuild.

Figure 16. We rebuild our QueryBee application.
Back in the profiler, we start up a new profiling session. We want to find out whether the reference to the QueryForm has disappeared.
Note that it remembered our settings from last time, so all we need to do is click Start Profiling.

Figure 17. The settings dialog remembers settings from last time.
We perform the same actions as last time: take a baseline snapshot while QueryBee is idle, then a second snapshot once we've connected and run a query. We'll also take an extra snapshot, because we want to be able to verify that the QueryForm has disappeared.
Finally, we close the query window with the results grid and we take a third snapshot.
We switch to a comparison between snapshots 1 and 3, using the snapshot selection field just under the timeline.

Figure 18. Summary screen comparing snapshots 1 and 3.
We can see there is now only a small memory increase between the snapshots, which is promising. Let's see if there's a QueryForm still in the class list.
We switch to the class list view and search only for classes in the QueryBee namespace.

Figure 19. Class list for the QueryBee namespace.
No, it's gone. We're no longer leaking the form.
As you saw, it was fairly easy to track down a form which was being leaked.