GET OPTIMAL PERFORMANCE FROM AX DURING PEAK – PART 3

We hope that the peak period is a prosperous one for you and that what can be a stressful time is going smoothly. Just in case you are having any issues, we are here to help. Following on from our first two posts on Getting Optimal Performance From AX During Peak Part 1 and Part 2, we’re now going to add a little more guidance around DynamicsPerf.

By this time you will have been running for a few weeks and will have plenty of data in your DynamicsPerf database. In our latest performance post we will cover:

  • Addition of indexes to address slow running queries
  • Proactive baseline analysis: spotting issues before they affect users

Addition of indexes to address slow running queries
In our previous post, we showed how to extract the long-running queries and then display the execution plan that SQL Server used to return the data. The query execution plan is significant because it shows how the data was retrieved based on the query submitted. It is really important at this stage to understand what a query execution plan is and how it is generated. You must also make sure that the statistics in your Dynamics AX database are up to date, otherwise the SQL Server query optimiser will make poor decisions. This article, while old, is an excellent resource for understanding the basics of how the SQL Server engine works.
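If you want to check how fresh your statistics are before digging into plans, a query along the following lines against the standard SQL Server catalog views can help. This is a generic sketch, not part of DynamicsPerf, and the database name is a placeholder you should replace with your own:

```sql
-- Replace with the name of your Dynamics AX database (placeholder below)
USE YourAXDatabase

-- List statistics with outstanding row modifications since their last update
SELECT OBJECT_NAME(s.object_id)  AS TABLE_NAME,
       s.name                    AS STATISTICS_NAME,
       sp.last_updated           AS LAST_UPDATED,
       sp.rows                   AS ROWS_IN_TABLE,
       sp.modification_counter   AS ROWS_MODIFIED_SINCE_UPDATE
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;

-- Refresh statistics database-wide (sampled); run this in a quiet window
EXEC sp_updatestats;
```

Note that sys.dm_db_stats_properties requires a reasonably recent SQL Server build; on older versions you can fall back to STATS_DATE to see when statistics were last updated.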

One of the most important ways in which SQL Server can access data quickly is through indexes. If a query requests data from a table and there is no suitable index, the whole table must be scanned, which is inefficient and slows the query down. This is often one of the reasons behind a long-running query.
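You can see this difference directly in SSMS. The sketch below is purely illustrative (the filter values are made up), but it shows how enabling I/O statistics lets you compare the logical reads of a scan against a seek:

```sql
-- Report page reads for each statement in the Messages tab
SET STATISTICS IO ON;

-- Without a supporting index, this filter typically forces a clustered
-- index scan that reads every page in INVENTTRANS
SELECT ITEMID, QTY
FROM INVENTTRANS
WHERE PACKINGSLIPID = 'PS-000123'   -- illustrative value
  AND DATAAREAID = 'dat';           -- illustrative value

-- Compare the 'logical reads' figure before and after a suitable index
-- exists on the filtered columns: a seek should touch only a few pages
```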

By finding the slow-running queries that have missing indexes, you can hand this information to your development team and have some index hotfixes generated to speed up the system – at very little cost.

Here is the SQL query that lets you do just that, but remember to schedule some maintenance time for those statistics, otherwise you will not reap the full benefits.

USE DynamicsPerf

SELECT UNIQUE_QUERIES.SQL_TEXT,
       UNIQUE_QUERIES.LATEST_STATS_TIME,
       [MISSING_INDEXES],
       ([AVG_ELAPSED_TIME] / 1000) AS AVG_DURATION_SECONDS,
       [EXECUTION_COUNT], [EXECUTION_PER_HOUR],
       [TOTAL_ELAPSED_TIME], [MAX_ELAPSED_TIME],
       [QUERY_PLAN_TEXT], [QUERY_PLAN_PARSED], [QUERY_PARAMETER_VALUES], [QUERY_PLAN],
       [AVG_LOGICAL_READS], [AVG_LOGICAL_READS_MB], [AVG_ROWS_RETURNED],
       [TOTAL_ROWS], [MAX_ROWS], [MIN_ROWS], [TOTAL_WAIT_TIME],
       [SERVER_NAME], [DATABASE_NAME], [COMPILED_TIME], [PLAN_GENERATION_NUM],
       [LAST_EXECUTION_TIME], [TOTAL_WORKER_TIME],
       [AVG_PHYSICAL_READS], [AVG_LOGICAL_WRITES],
       [LAST_ELAPSED_TIME], [MIN_ELAPSED_TIME],
       [TOTAL_PHYSICAL_READS], [LAST_PHYSICAL_READS], [MIN_PHYSICAL_READS], [MAX_PHYSICAL_READS],
       [TOTAL_LOGICAL_READS], [LAST_LOGICAL_READS], [MIN_LOGICAL_READS], [MAX_LOGICAL_READS],
       [TOTAL_LOGICAL_WRITES], [LAST_LOGICAL_WRITES], [MIN_LOGICAL_WRITES], [MAX_LOGICAL_WRITES],
       [LAST_WORKER_TIME], [MIN_WORKER_TIME], [MAX_WORKER_TIME],
       [QUERY_HASH], [QUERY_PLAN_HASH], [COMMENT]
FROM QUERY_STATS_VW
INNER JOIN (
    SELECT SQL_TEXT, MAX(STATS_TIME) AS LATEST_STATS_TIME
    FROM QUERY_STATS_VW
    GROUP BY SQL_TEXT
) AS UNIQUE_QUERIES
    ON  QUERY_STATS_VW.SQL_TEXT   = UNIQUE_QUERIES.SQL_TEXT
    AND QUERY_STATS_VW.STATS_TIME = UNIQUE_QUERIES.LATEST_STATS_TIME
WHERE MISSING_INDEXES <> ''
ORDER BY AVG_ELAPSED_TIME DESC

An example of the output from the query is shown below:

The MISSING_INDEXES column in the results identifies the table used by the long-running query and the index that is missing, and gives an indication of the relative impact of the missing index (or, conversely, the benefit that can be expected if the index is created). An example is provided below:

[IMPACT]= 99.998
[TABLE]= INVENTTRANS
[EQUALITY_COLUMNS]= PACKINGSLIPID, DATAAREAID
[INEQUALITY_COLUMNS]=
[INCLUDED_COLUMNS]=

Based on this information, you can decide which columns should be added to a table index to improve the performance of that specific query. Depending on the results returned, you may wish to prioritise by average duration (AVG_DURATION_SECONDS) or by executions per hour (EXECUTION_PER_HOUR).
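As an illustration only, the missing index above corresponds to a T-SQL statement along the lines of the sketch below (the index name is our own invention). Bear in mind that for Dynamics AX the index should normally be created on the table in the AOT by your development team so that it survives database synchronisation, rather than being added directly in SQL Server:

```sql
-- Sketch only: in AX, create this index via the AOT rather than directly
-- in SQL Server, or a database synchronisation may drop it
CREATE NONCLUSTERED INDEX IX_INVENTTRANS_PACKINGSLIPID   -- hypothetical name
ON INVENTTRANS (PACKINGSLIPID, DATAAREAID);
```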

Good luck with your updates, as this should give you the keys to quickly improving performance across your systems.

And one last thing, please make sure that your maintenance plans for SQL not only take into account stale statistics, but also keep on top of index fragmentation. A well-maintained system is a fast system after all.
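To keep on top of fragmentation, a query along these lines against the standard sys.dm_db_index_physical_stats DMV will highlight the worst offenders. The 5% and 30% thresholds below are the commonly quoted general guideline for choosing between a reorganise and a rebuild, not a DynamicsPerf feature, so adjust them to suit your environment:

```sql
-- Fragmentation overview for the current database (LIMITED mode is cheapest)
SELECT OBJECT_NAME(ips.object_id)         AS TABLE_NAME,
       i.name                             AS INDEX_NAME,
       ips.avg_fragmentation_in_percent   AS FRAG_PCT,
       ips.page_count,
       CASE
           WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
           ELSE 'OK'
       END AS SUGGESTED_ACTION
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000             -- ignore small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```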

Proactive baseline analysis: spotting issues before they affect users
You now have a few weeks of data in your database, so it is time to start monitoring your baseline outputs. This will help you spot tables that are growing quickly over a chosen period, or queries that now take significantly longer to run than they did before.
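Because QUERY_STATS_VW keeps a snapshot per STATS_TIME, you can also roll a simple regression check of your own. The sketch below is ours, using only the view and columns already shown earlier in this series; the factor of two is an arbitrary threshold you should tune. It flags any query whose latest average duration is at least double its overall average across all snapshots:

```sql
USE DynamicsPerf

-- Flag queries whose latest snapshot is at least twice their overall average
;WITH PerQuery AS (
    SELECT SQL_TEXT,
           MAX(STATS_TIME)       AS LATEST_STATS_TIME,
           AVG(AVG_ELAPSED_TIME) AS OVERALL_AVG_ELAPSED
    FROM QUERY_STATS_VW
    GROUP BY SQL_TEXT
)
SELECT p.SQL_TEXT,
       p.LATEST_STATS_TIME,
       v.AVG_ELAPSED_TIME    AS LATEST_AVG_ELAPSED,
       p.OVERALL_AVG_ELAPSED
FROM PerQuery AS p
JOIN QUERY_STATS_VW AS v
    ON  v.SQL_TEXT   = p.SQL_TEXT
    AND v.STATS_TIME = p.LATEST_STATS_TIME
WHERE v.AVG_ELAPSED_TIME > 2 * p.OVERALL_AVG_ELAPSED   -- slowed to at least double
ORDER BY v.AVG_ELAPSED_TIME DESC;
```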

The DynamicsPerf tool comes with some really useful baseline queries, which can be found here in the DynamicsPerf download:

DynamicsPerf → DynamicsPerf – Analysis Scripts → 7-Baseline – Benchmark Queries.sql

Take a look through the different queries here as they may help you to proactively diagnose issues before they disrupt your business operations.

 
READ: GET OPTIMAL PERFORMANCE FROM AX DURING PEAK – PART 4
 

If you’re short on time or resources and need a bit of help, please don’t hesitate to get in touch. Don’t forget to follow AgileCadence on LinkedIn for industry news, and more information on how to diagnose and rectify any issues with your systems.