
Master Data Services Team

Blog for the Microsoft SQL Server Master Data Services (MDS) team. Blog posts are contributed by various team members.

What's New in Master Data Services - SQL2016 CTP2 (May) Release

SQL Server 2016 Community Technology Preview 2 has been publicly released, and the latest SQL Server bits can be found at http://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016.

Improved Performance and Scale

We have made significant performance improvements across the stack, from the backend database and the middle-tier services to the frontend UIs, including both the Web UI and the Excel add-in. Performance is improved in all scenarios, especially master data loading, while server-side CPU and memory usage are lower than in the previous version.

The default settings are optimized based on our test data and should work well for most customer scenarios. A few settings can be tweaked for individual scenarios.

1. Data Compression

We have introduced data compression at the entity level. It is enabled by default.

[Screenshot: the Data Compression setting on the entity]

When data compression is enabled, all of the entity's tables and indexes are compressed with SQL Server row-level compression. This significantly reduces disk I/O when reading or updating master data, especially when an entity has millions of rows and/or many columns with NULL values.

It does slightly increase CPU usage on the SQL engine side. Since most SQL operations are I/O bound, we set the default value to True.

If the server is CPU bound, you can turn off data compression by editing the entity.
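If you want to verify which entity tables are currently compressed, the standard catalog views report the compression setting per index. The following is a read-only sketch, assuming the MDS entity tables still live in the mdm schema as in previous releases.

-- Sketch: list row/page compression per table and index in the MDS database.
-- Assumes the MDS tables are in the mdm schema.
SELECT t.name AS TableName,
       i.name AS IndexName,
       p.data_compression_desc AS Compression
FROM sys.tables AS t
JOIN sys.indexes AS i ON i.object_id = t.object_id
JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE t.schema_id = SCHEMA_ID('mdm')
ORDER BY t.name, i.name;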

More information can be found at:

https://msdn.microsoft.com/en-us/library/cc280449.aspx

2. Dynamic Data Compression

We now require the IIS Dynamic Content Compression feature to be enabled on the server. The default setting in the service is to enable dynamic compression.

[Screenshot: the IIS Dynamic Content Compression feature enabled]

The MDS API uses XML. Enabling dynamic content compression significantly reduces the size of the XML responses and saves network I/O, while increasing CPU usage.

If the server is CPU bound, you can turn off dynamic compression by putting the following setting in the web.config:

<configuration>
<system.webServer>
<urlCompression doStaticCompression="true" doDynamicCompression="false" />
</system.webServer>
</configuration>

More information can be found at:

http://www.iis.net/configreference/system.webserver/urlcompression

Although both of the changes above lower I/O usage by trading off CPU usage, we have also made significant improvements to service-side CPU consumption, so we expect CPU usage to be lower than in the previous version in general, even with compression enabled.

3. Index Maintenance

We found that index fragmentation and a growing transaction log are the two main reasons performance degrades over time in many customer cases.

We have introduced two SQL Agent jobs to perform index maintenance and log maintenance.

Log maintenance is covered in the Transaction Log Maintenance section below.

[Screenshot: the MDS Index Maintenance SQL Agent job]

The default schedule for the Index Maintenance job is weekly, Saturday at 2 AM. You can change the schedule or add more schedules in the job properties.

You can also kick off index maintenance manually by executing the stored procedure udpDefragmentation:

Exec [mdm].[udpDefragmentation]

We recommend running this stored procedure after each massive master data insert or update, or after a new version is created from an existing version.

Indexes with more than 30% fragmentation are rebuilt online. During the rebuild, performance of CRUD operations on the same table is affected. Run the stored procedure outside business hours if the performance degradation is a concern.
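To see how fragmented the indexes currently are, the standard dynamic management view below reports fragmentation per index. This is a read-only sketch run against the MDS database, using the same 30% threshold at which the maintenance job rebuilds an index.

-- Sketch: report indexes in the current database above the 30% fragmentation threshold.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent AS FragmentationPercent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;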

More information about index fragmentation can be found at:

https://msdn.microsoft.com/en-us/library/ms189858.aspx

For related information refer to:

Create an Entity (Master Data Services)

What’s New (Master Data Services)

 

Improved Security

Super User Functional permission

In the previous release, whoever originally installed MDS was the server administrator and was assigned user ID 0. There was no easy way to transfer the server admin role from one user to another; it required a database administrator to go to the user table and change the SID on user ID 0. There was also no easy way to have multiple server admins or to grant server admin permission to a group.

A new Super User functional permission is introduced in this release. A super user has the same permissions as user ID 0 in previous releases. It is assigned the same way as other functional permissions, and it can be assigned to a user or a group.

[Screenshot: assigning the Super User functional permission]

Model Admin Permission

In the previous release, model admin permission was assigned implicitly based on a calculated result. If a user had only Update permission at the model level and no other permissions in the model subtree, the user was a model admin. If the user was later given another explicit permission in the model subtree, for example at the entity level, the user lost model admin permission.

An explicit Model Admin permission is introduced in this release. It is assigned at the model level.

[Screenshot: assigning the Model Admin permission at the model level]

Granular Access Permission

In the previous release, we had Read-Only and Update permissions. This is similar to an R/RW model, where the RW permission includes all of Create, Update, and Delete.

We have been seeing more and more use cases that require permissions more granular than RW, especially splitting Create and Delete from Update, so that a user can update master data but cannot create or delete it.

We have therefore introduced four granular access permissions: Read, Create, Update, and Delete. They can be combined; for example, Create+Update means the user can create and update data but cannot delete it.

Since it does not make sense for a user to have Create, Update, or Delete without Read, the Read permission is granted automatically whenever Create, Update, or Delete is assigned. For example, if you assign a user the Update permission, the UI shows that the user has Update and Read permissions.

[Screenshot: assigning the granular access permissions]

Comparison

Previous Release | Current Release
User ID 0 | User has the Super User functional permission
User has Update permission on the model and no permissions in the subtree | User has Admin permission on the model
Read-Only | User has the Read access permission only
Update | User has all four access permissions (the UI shortcut is All)
Deny | Deny

 

Users with the old permissions are converted to the new permissions during upgrade.

You can find more information at:

Administrators (Master Data Services)

Model Object Permissions (Master Data Services)

Hierarchy Member Permissions (Master Data Services)

 

Transaction Log Maintenance

Transactions, Validation Issue and Staging table cleanup

Previous versions of Master Data Services did not have a supported way to clean the transaction logs, validation issue history, and staging tables. For an MDS system with many data changes and ETL processes, these tables can grow very large and lead to performance degradation and storage space issues.

In SQL Server 2016, we provide an option to clean these tables on a predetermined interval or schedule. Based on the user's settings, the data in these tables is truncated.

What is cleaned?

All transaction history older than the specified number of days, all validation issue history older than the specified number of days, and all staging batches that ran before the specified number of days.

How:

Settings let the user specify how many days of logs and data to retain in these tables. This can be done in two places: a system setting (LogRetentionDays) that applies to all models in the system, and a property at the model level that overrides the system-level setting.

System Setting:

By default the system setting is -1, which means do not truncate or clean any tables. If the value is 0, the tables retain only today's data and all logs from previous days are truncated. For n > 0, the logs are retained for n days.

To set this value, use the MDS Configuration Manager tool and change the setting shown below.

[Screenshot: the Log Retention in Days setting in MDS Configuration Manager]
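If you just want to check the current value without opening Configuration Manager, the read-only query below is a minimal sketch; it assumes the system settings are still stored in the mdm.tblSystemSetting table, as in earlier releases.

-- Sketch: inspect the log retention system setting directly (read-only).
-- Assumes system settings are still stored in mdm.tblSystemSetting.
SELECT SettingName, SettingValue
FROM mdm.tblSystemSetting
WHERE SettingName LIKE '%LogRetention%';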

Model level Setting:

By default the model setting is NULL, which means the value is inherited from the system setting “Log Retention in Days”. If you want to override the system setting and NOT clean any logs, set it to -1. If you do not want to retain any transaction logs, validation history, or already-processed staging data, set it to 0. For n > 0, the logs are retained for n days. Acceptable values: -1 to 5000.

To set or update the model property in the MDS Web UI, go to System Administration -> Manage -> Model and change the property shown below.

[Screenshot: the log retention property in the model settings]

Scheduler:

When you create the database from Configuration Manager, it schedules a SQL Agent job, “MDS_<DB Name>_Log_Maintenance”, which by default runs every day at 2 AM, scans all the models, and cleans the tables according to the settings above.

If you want to change the frequency of this job, go to SQL Server Agent -> Jobs (MDS_<DB Name>_Log_Maintenance) and update the schedule. You can use SQL Server Management Studio to make changes to this scheduled job, or script the change as in the sketch below.
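As an alternative to the SSMS UI, the schedule can also be changed with the standard SQL Server Agent stored procedures in msdb. The following is a minimal sketch that adds a weekly schedule to the job; the job name follows the MDS_<DB Name>_Log_Maintenance pattern described above, and the database name MDS used here is a placeholder for your environment.

USE msdb;
GO
-- Sketch: add a weekly (Sunday, 2 AM) schedule to the MDS log maintenance job.
-- Replace 'MDS' in the job name with your MDS database name.
EXEC dbo.sp_add_jobschedule
    @job_name               = N'MDS_MDS_Log_Maintenance',
    @name                   = N'Weekly log maintenance',
    @freq_type              = 8,      -- weekly
    @freq_interval          = 1,      -- Sunday
    @freq_recurrence_factor = 1,      -- every week
    @active_start_time      = 20000;  -- 02:00:00 (HHMMSS)
GO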

How to manually clean up the tables without setting a recurring cleanup as above:

You can call the stored procedures manually to clean up the tables. To clean the transaction tables use mdm.udpTransactionsCleanup, to clean the validation history use mdm.udpValidationsCleanup, and to clean the staging tables use mdm.udpEntityStagingBatchTableCleanup.
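Each of these procedures takes a model ID and a cutoff date (as in the sample below). If you need to look up a model's ID, the following is a minimal sketch; it assumes the model catalog is still stored in the mdm.tblModel table, as in earlier releases.

-- Sketch: look up the internal ID of each model by name.
-- Assumes the model catalog table is still mdm.tblModel.
SELECT ID, Name
FROM mdm.tblModel
ORDER BY Name;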

Sample:

DECLARE @CleanupOlderThanDate date = '2014-11-11',
        @ModelID int = 7;

-- Cleanup Transaction Logs
EXEC mdm.udpTransactionsCleanup @ModelID, @CleanupOlderThanDate;

-- Cleanup Validation History
EXEC mdm.udpValidationsCleanup @ModelID, @CleanupOlderThanDate;

-- Cleanup EBS (entity-based staging) tables
EXEC mdm.udpEntityStagingBatchTableCleanup @ModelID, @CleanupOlderThanDate;

More related information can be found at:

Transactions (Master Data Services)

Create a Model (Master Data Services)

System Settings (Master Data Services)


Improved Troubleshooting and Logging

In SQL Server 2016 CTP2, we made logging improvements to make debugging and troubleshooting easier.

1. Tracing settings in the web.config

<sources>
  <!-- Adjust the switch value to control the types of messages that should be logged.
       http://msdn.microsoft.com/en-us/library/system.diagnostics.sourcelevels
       Use a switchValue of Verbose to generate a full log. Please be aware that
       the trace file can get quite large very quickly. -->
  <source name="MDS" switchType="System.Diagnostics.SourceSwitch" switchValue="Warning, ActivityTracing">
    <listeners>
      <!-- Set a directory path where the service account you chose while setting up Master Data Services has read and write privileges.
           The default path is Logs in the WebApplication folder, for example C:\Program Files\Microsoft SQL Server\130\Master Data Services\WebApplication.
           A new log file is created every day or every 10 MB.
           When the directory size hits the 200 MB limit, the oldest file is deleted. -->
      <add name="FileTraceListener"
           type="Microsoft.MasterDataServices.Core.Logging.FileTraceListener, Microsoft.MasterDataServices.Core"
           initializeData="DirectoryPath = Logs; FileSizeInMb = 10; MaxDirectorySizeInMb = 200"/>
      <remove name="Default"/>
    </listeners>
  </source>
</sources>

We have a new section in the web.config for tracing settings.

By default:

a. Tracing is enabled at the Warning level and below, plus ActivityTracing.

b. The logs are saved in the Logs folder under the WebApplication folder, for example C:\Program Files\Microsoft SQL Server\130\Master Data Services\WebApplication.

c. A new log file is created each day or every 10 MB.

d. The Logs folder uses at most 200 MB; when the limit is reached, the oldest log file is deleted.

e. The log format is CSV.

2. Log format

 

Field | Description
Time | When the trace entry occurred.
CorrelationId | One correlation ID is assigned to each request; all traces triggered by that request share the same correlation ID.
Operation | The request operation name. For a Web UI request, the operation name is the URL; for an API request, it is the service name.
Level | The level of the trace entry.
Message | The message body of the trace.

The log is in CSV format and can be opened in Microsoft Excel or other tools for filtering.

 

3. Correlation Id

Each request is assigned a correlation ID, and all traces triggered by that request share the same ID.

When an error happens in the UI, the correlation ID is shown in the error message.

 

Web UI:

[Screenshot: error message showing the correlation ID in the Web UI]

Excel Add-in UI:

[Screenshot: error message showing the correlation ID in the Excel add-in]


For more information:

Tracing (Master Data Services)

 

Improved Manageability

Improvements in manageability help to lower maintenance costs and positively impact your return on investment (ROI). These improvements include transaction log maintenance and improvements to security, as well as the following new features.

  • Using attribute names that are longer than 50 characters.

  • Renaming and hiding Name and Code attributes.

For more information, see the following topics.

 


Published Friday, June 5, 2015 2:22 PM by MDS Team
