
Embarking on an AX Data Management Strategy

Posted On: June 21, 2016

Topic: ERP Application Management

When asked how AX works, most people will answer in terms of the workload they use it for. It “works” by placing orders, managing inventory or running financials. Rarely do users think about the AX engine – the code and the data that actually run the system – and rightly so. Users should be able to get into the system and do what they need to do without worrying about how the magic happens.

In this way, AX is very much like a car. When I need to drive the family somewhere, they don’t think about the brakes, oil or gas; they get in, buckle up and we go. However, since it is my car, I do need to worry about the brakes, oil and gas, or we may not get very far. For the same reason, someone needs to worry about the AX engine, because without regular maintenance, that engine will someday fail.

When we look at all the components that make up AX, data can really be thought of as the oil. You can have the fastest engine in the world (servers) and load it up with gas (electricity), but without oil (data) the usefulness of the engine is very limited. The health of the data also has the biggest impact on the day-to-day use of the system. Too little or the wrong type, and you don’t get the results you need. If the data gets corrupted, then the whole system can shut down, leaving you stranded. 

Unfortunately, there are no real gas gauges or check-engine lights in AX to remind you when maintenance is due, so implementing a maintenance schedule is very important. This will help keep track of what needs to be done over the course of a year, keeping in mind that not all tasks need to run on the same schedule. Checks on things like server settings, user load balancing and patch availability need to be on the list, but the most frequent tasks will end up involving data management.

When embarking on a data management strategy, it is important to realize that it is not a purely technical exercise. There are of course technical components, but without input from the business, success will be hit or miss, as the correct data will not always be targeted. To that end, a number of different data management strategies and definitions have been published, including those from the Data Management Association; however, for the scope of this blog, I will use a very simplistic approach that allows a company new to data management to get underway quickly.

Step 1…

The first step in any data management plan must be to identify the data assets in use. A data asset can be thought of as the business definition of data such as customers, orders and audit logs.

The three initial attributes a data asset needs are criticality, availability and lifetime. These can be broken down further as your data management maturity grows, and other attributes can be added; however, we need these three to build the initial plan (a minimal sketch of recording them follows the list below).

  • Criticality describes how important the data is to a company’s continued ability to work. This can be as granular as you like, but I prefer three levels to start with: high, moderate and low. High criticality items are things like your sales orders and invoices, areas where you lose money if they are unavailable. Moderate describes data that would slow you down but not stop you if lost. Think of production orders - as long as you have the actual sales order and inventory, you can build what is needed and update the system later. Data that has low criticality should not have a direct impact on your day-to-day business. These are things like temporary tables or internal audit logs (not used for compliance reporting). 
  • Availability simply describes how fast you need access to the data. Is it real time or can it be moved to near-term or long-term storage? Think of sources for monthly reports and audit logs.
  • Lifetime describes how long data must be retained (1 month, 6 months, 7 years). This comes into play when we start talking about archiving and purging. Sometimes lifetime will be defined by industry or government regulations, and sometimes it will be based entirely on the comfort level of management. Both are valid requirements. 
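Recording these attributes does not need to be anything fancy to start with. As a minimal sketch (the DataAssetRegister table name and the sample rows are hypothetical, not part of AX), the inventory could live in a utility database or even a spreadsheet:

-- Hypothetical data asset register; the table name and sample values are illustrative only
CREATE TABLE DataAssetRegister
(
    assetName    varchar(100),  -- business name of the data asset
    criticality  varchar(10),   -- High / Moderate / Low
    availability varchar(20),   -- Real time / Near term / Long term
    lifetime     varchar(20)    -- retention requirement
);

INSERT INTO DataAssetRegister VALUES
    ('Sales orders',       'High',     'Real time', '7 years'),
    ('Production orders',  'Moderate', 'Real time', '2 years'),
    ('Internal audit log', 'Low',      'Long term', '6 months');

The retention values shown are placeholders; the real numbers come out of the business and regulatory discussions described above.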

Step 2…

Now that we have a way of categorizing our data, we need to pick what data to start categorizing. The two best places to start are data health (consistency) and data volume. Data deemed highly critical must be included in any consistency checks, as keeping it healthy will ensure the business can run.

Next we look at our top 10 to 15 largest tables. These should contain only high- or moderate-criticality data; if they contain any data tagged low, they are prime candidates for purging. Likewise, data that has passed its real-time availability requirement should be considered for archiving. Archiving is often ignored in a data management strategy; however, it can significantly reduce storage cost (offline is much cheaper than real time) as well as improve performance. Reducing a table's size not only improves reads, it also reduces index maintenance and backup times.
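As a rough illustration of spotting archive candidates, the sketch below counts sales lines per company that have aged past an assumed two-year real-time window. SALESLINE, CREATEDDATETIME and DATAAREAID are standard AX 2012 names, but the two-year cutoff is purely an assumption and the CreatedDateTime property must be enabled on the table in your environment:

-- Rows past an assumed two-year real-time window; adjust the cutoff to your own availability requirement
SELECT DATAAREAID, COUNT(*) AS rowsPastWindow
FROM SALESLINE
WHERE CREATEDDATETIME < DATEADD(year, -2, GETUTCDATE())
GROUP BY DATAAREAID;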

Step 3…

With our data defined and a starting point identified, it's time to start putting the management activities in place. Fortunately, we have a number of very useful tools at our disposal to help us look under the AX hood. Some are native to AX, others involve direct SQL calls, and Microsoft also provides some standalone tools. Each can be used to monitor data, looking at peaks, valleys and trends so we can get a feel for where to focus our attention and eventually apply maintenance. I will look at a few to show how they can help keep AX running efficiently; however, there are many more available than described here.

When setting up the monitoring components, there is no single dashboard that shows everything we want to look at. Lifecycle Services (LCS) comes very close and is a great starting point if you are looking for a quick overview; however, we still need to dig a little deeper to see everything.

Within AX, System administration -> Reports -> Database -> Size of company accounts gives a quick view of table sizes for a specific company. This is very useful if you only have one company; with more than one, you would need to export one report per company to Excel and aggregate them. A better solution is to log in to SSMS and run the following script:

-- DROP TABLE #tmpTableSizes   -- uncomment if re-running in the same session
CREATE TABLE #tmpTableSizes
(
    tableName     varchar(100),
    numberOfRows  varchar(100),
    reservedSize  varchar(50),
    dataSize      varchar(50),
    indexSize     varchar(50),
    unusedSize    varchar(50)
);

-- Collect sp_spaceused output for every table in the database
INSERT #tmpTableSizes
EXEC sp_MSforeachtable @command1 = "EXEC sp_spaceused '?'";

-- Strip the trailing ' KB' from reservedSize so it sorts numerically, largest tables first
SELECT *
FROM #tmpTableSizes
ORDER BY CAST(LEFT(reservedSize, LEN(reservedSize) - 3) AS bigint) DESC;

However, just looking at the size of a table doesn't tell us much, so we need some business context in mind when we look at the results. Things like temporary tables and database logging are usually at the top of the list when they really should never be, so if we see this it should be a call to action. Digging deeper, we want to make sure that our business activities are reflected in the volume of data we have. Do we use the QA system, yet there is almost no data in its tables? Do we not use it, yet the tables have millions of rows? Again, the business has to be involved in these discussions: from a purely technical point of view I can tell you how to optimize a table and its usage, but that may mean removing mission-critical data, and we really don't want that.
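To put those sizes into a per-company context, a simple breakdown by DATAAREAID (present on most company-specific AX tables; SALESLINE is used here only as an example) can be compared against what the business says each legal entity actually does:

-- Row counts per company for one large table; swap in whichever table tops your size list
SELECT DATAAREAID, COUNT(*) AS numberOfRows
FROM SALESLINE
GROUP BY DATAAREAID
ORDER BY COUNT(*) DESC;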

Still in System administration, under Periodic -> Database, we find Consistency check and SQL administration. The consistency check should be run on a regular basis, as it can help identify orphan records and other SQL anomalies that require fixing. The first run can take quite some time, so it may be worthwhile to run it in a replicated environment (TEST or STAGE) to reduce the impact on PROD. Once an initial run has been done and the issues cleaned up, there should be no problem running it directly against PROD.
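To make "orphan records" concrete, here is a minimal sketch of the kind of relationship the check looks at: sales lines whose parent sales order no longer exists. The real consistency check covers far more than this single relationship:

-- Sales lines whose parent order is missing (one example of an orphan record)
SELECT COUNT(*) AS orphanedLines
FROM SALESLINE sl
WHERE NOT EXISTS
(
    SELECT 1
    FROM SALESTABLE st
    WHERE st.SALESID = sl.SALESID
      AND st.DATAAREAID = sl.DATAAREAID
);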

SQL administration provides a link to some maintenance tasks, such as reindexing, that can also be scheduled through SQL directly. Running them here or through SQL provides the same results, so we usually include these tasks in the SQL maintenance plan.
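If reindexing does end up in the SQL maintenance plan, a hedged sketch of the underlying commands looks like the following. The 5% and 30% fragmentation thresholds are common SQL Server rules of thumb rather than AX-specific values, and SALESLINE is again just an example table:

-- Check fragmentation for one table's indexes, then rebuild or reorganize as appropriate
SELECT i.name AS indexName, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('SALESLINE'), NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

ALTER INDEX ALL ON SALESLINE REBUILD;       -- heavier option, typically above ~30% fragmentation
-- ALTER INDEX ALL ON SALESLINE REORGANIZE; -- lighter option, typically 5-30% fragmentation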

Finally, in a number of modules within AX, there is the ability to clean up temp tables on a regular basis. Notifications is a good example, found in System administration -> Periodic -> Notification clean up; however there are others. Again, this could be done through SQL (or IDMF – see below), but is made available through the interface to grant access to power users.

There is not much else from a maintenance perspective that is native to AX. To get more information on data patterns we need to switch over to SQL and some of the tools that Microsoft provides. 

For systems running on SQL Server versions prior to 2016 (most of the install base), there is a tool called DynamicsPerf that we install and configure to surface statistics on queries and data patterns. I say surface because SQL Server is already logging the information in DMVs; DynamicsPerf just brings that information into a user database that we can query, so there is no significant impact on performance. With SQL Server 2016, the Query Store does the same thing, so DynamicsPerf is no longer needed. The two tools provide the same data and work very much the same way, so getting familiar with one is a great path to the other. Both require a more intimate knowledge of SQL to interpret the findings and turn them into actionable items.
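For the Query Store side of that comparison, a sketch of the kind of query involved is shown below; it lists the top statements by total duration captured in the store (the TOP 10 cutoff is arbitrary, and durations are reported in microseconds):

-- Top 10 query texts by total duration in the Query Store (SQL Server 2016+)
SELECT TOP 10
       qt.query_sql_text,
       SUM(rs.count_executions)                   AS executions,
       SUM(rs.avg_duration * rs.count_executions) AS totalDurationMicroseconds
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q          ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p           ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY totalDurationMicroseconds DESC;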

The last tool, one that is often overlooked but is very much a complement to DynamicsPerf, is the Intelligent Data Management Framework (IDMF). It can be found in the Downloadable Tools section of LCS. While the install requires some Windows Server tweaking, once it is installed it can be configured and run by a power user, unlike DynamicsPerf, which requires a DBA. One thing to note is that IDMF requires a replica of the production database to work against; this limits the performance impact on production. All the analysis and configuration run against the replica, and only the actual purge or archive runs against production.
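The replica itself can be as simple as a scheduled backup and restore of the production database onto another SQL Server instance (independent of whatever refresh method the IDMF documentation prescribes). The database, logical file and path names below are placeholders:

-- Refresh an analysis replica from production; all names and paths are placeholders
BACKUP DATABASE AXProd
TO DISK = N'D:\Backup\AXProd.bak'
WITH COPY_ONLY, COMPRESSION, INIT;

RESTORE DATABASE AXProd_Replica
FROM DISK = N'D:\Backup\AXProd.bak'
WITH MOVE 'AXProd'     TO N'E:\Data\AXProd_Replica.mdf',
     MOVE 'AXProd_log' TO N'E:\Data\AXProd_Replica.ldf',
     REPLACE;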

Once installed, IDMF will keep an updated dashboard of table and index sizes, along with a number of other data statistics to help evolve your strategy. Once you have completed the discovery phase and have a path forward, the tool will help identify table relationships to make sure that any purging or archiving you do does not corrupt the data by creating orphan records. It also ensures that any archived data includes child records, allowing for complete lookups against the archive. This is probably the most powerful aspect of the tool. Lastly, you can schedule the archive and purge jobs to run on a recurring basis.

The archive database that IDMF creates can be hooked up to another AOS instance, if needed, to allow access to the archived data or it can be tied into a BI or data warehouse package. One very slick feature of IDMF is that you can undo any archive that has been run, moving the data back into the production environment.

So when we consider all of the ramifications of losing data, it is clear that everyone should have some form of data management strategy in place. It can be as simple as cleaning out unnecessary data or as complex as a continual process of identification, purging and archiving. Whatever that plan looks like, power users from the business and technical sides will need to work together to ensure effective execution. It may not be a walk in the park, but like everything that requires a little effort, the payoff at the end will be well worth it: your users will still be able to get what they need without having to worry about the magic.