SQLblog.com - The SQL Server Blog Spot on the Web

Dejan Sarka

Fraud Detection with the SQL Server Suite Part 2

This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

My Approach to Data Mining Projects

It is impossible to evaluate in advance the time and money needed for a complete fraud detection infrastructure. I do not know the customer’s data in advance, and I do not know whether there is already an existing infrastructure, such as a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes between five and ten working days and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, know how to access it, and have some programming knowledge (preferably Transact-SQL). With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data has already been extracted and is available at the very beginning of the POC project.

If a customer wants to have their people involved in the project directly and requests a transfer of knowledge, the project begins with training. I strongly advise this approach: it establishes a common background for everyone involved, an understanding of how the algorithms work and how the results should be interpreted, familiarity with the SQL Server suite, and more.

Once the data has been extracted, the customer’s SME (i.e. the analyst) and the IT expert assigned to the project learn how to prepare the data in an efficient manner. Our combined knowledge and expertise allow us to focus immediately on the most interesting attributes and to identify additional, calculated attributes soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, and identify relationships between pairs of input variables in only two or three days, depending on the quantity and quality of the input data. I welcome the customer’s decision to assign additional personnel to the project; in fact, I prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying the techniques directly to the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.
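To give a feel for this step, here is a minimal sketch in Python (chosen for brevity; on a real project this work would typically be done in Transact-SQL against the customer’s tables). All column names, values, and thresholds are illustrative assumptions, not an actual customer schema.

```python
import math
import statistics

# Toy transaction rows; in a real POC these would come from the customer's tables.
transactions = [
    {"id": 1, "amount": 120.0, "avg_amount_90d": 100.0},
    {"id": 2, "amount": 95.0, "avg_amount_90d": 100.0},
    {"id": 3, "amount": 105.0, "avg_amount_90d": 100.0},
    {"id": 4, "amount": 110.0, "avg_amount_90d": 100.0},
    {"id": 5, "amount": 98.0, "avg_amount_90d": 100.0},
    {"id": 6, "amount": 2000.0, "avg_amount_90d": 120.0},
]

# Derived variable: ratio of this transaction to the customer's recent average.
for t in transactions:
    t["amount_ratio"] = t["amount"] / t["avg_amount_90d"]

# Simple outlier detection: flag amounts with a z-score above 1.5
# (the threshold is something we would tune together with the SME).
amounts = [t["amount"] for t in transactions]
mean, stdev = statistics.mean(amounts), statistics.stdev(amounts)
outliers = [t["id"] for t in transactions if abs(t["amount"] - mean) / stdev > 1.5]

# Relationship between a pair of input variables: Pearson correlation.
def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

corr = pearson(amounts, [t["amount_ratio"] for t in transactions])
```

The same three ideas – derived variables, outlier flags, and pairwise relationships – are what the teams produce, at scale, during those two or three days.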

The data overview is performed simultaneously with the data preparation. The logic behind this task is the same: again, I show the teams involved the expected results, how to achieve them, and what they mean. This is also done in multiple cycles, as with data preparation, because, quite frankly, the two tasks are completely interleaved. One specific output of the data overview is of principal importance: a simple star schema and a simple OLAP cube that, first of all, simplify data discovery and the interpretation of the results, and that also prove useful in the subsequent tasks. The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer’s SME.
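What the simple cube buys the analyst can be sketched in a few lines of Python; the fact rows, dimensions, and measures below are made up for illustration. Grouping fact rows by a pair of dimensions and scanning an aggregate measure per cell is essentially what browsing the overview cube gives us.

```python
from collections import defaultdict

# Toy fact rows: (date, channel, amount, is_flagged) - illustrative only.
facts = [
    ("2013-10-01", "online", 100.0, 0),
    ("2013-10-01", "online", 900.0, 1),
    ("2013-10-01", "branch", 50.0, 0),
    ("2013-10-02", "online", 120.0, 0),
]

# Group by the (date, channel) dimension pair, like browsing a simple cube.
cube = defaultdict(lambda: {"count": 0, "amount": 0.0, "flagged": 0})
for date, channel, amount, flagged in facts:
    cell = cube[(date, channel)]
    cell["count"] += 1
    cell["amount"] += amount
    cell["flagged"] += flagged

# Flagged-transaction rate per cell: an overview measure analysts can scan quickly.
rates = {key: cell["flagged"] / cell["count"] for key, cell in cube.items()}
```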

Once the data is prepared and the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much better alternative is available if the initial training was performed, because it allows the customer’s personnel to interpret the results themselves, with only some guidance from me. The models are evaluated immediately, using several different techniques. One of these techniques is evaluation over time, for which we use an OLAP cube.
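One common evaluation technique, a simple lift calculation, can be sketched as follows; the scores and labels are invented for illustration, and in the actual project the equivalent numbers come from the SQL Server mining accuracy charts.

```python
# Scored cases: (predicted fraud probability, actual label, 1 = fraud).
scored = [
    (0.95, 1), (0.90, 1), (0.80, 0), (0.70, 1),
    (0.40, 0), (0.30, 0), (0.20, 1), (0.10, 0),
]

# Sort by descending score and measure how many frauds the top quartile captures.
scored.sort(key=lambda pair: pair[0], reverse=True)
top = scored[: len(scored) // 4]
captured = sum(label for _, label in top)

# Lift: captured frauds versus what a random pick of the same size would find.
total_frauds = sum(label for _, label in scored)
baseline = total_frauds * len(top) / len(scored)
lift = captured / baseline
```

A lift of 2.0 here means the model finds twice as many frauds in the top quartile as random inspection would; computing the same measure per time period is what the OLAP cube evaluation adds.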

After evaluating the models, we select the most appropriate one to deploy for a production test; this allows the team to understand the deployment process. There are many ways to deploy data mining models into production; at the POC stage, we select one that can be completed quickly. Typically, this means adding the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

Even from just the POC, the customer receives many benefits, at the sole risk of the money and time spent on a single 5 to 10 day project:

  • The customer learns the basic patterns of fraud and fraud detection
  • The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    • The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible
    • The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    • All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
  • The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    • It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
  • Typically, the project results in several important “side effects”
    • Improved data quality
    • Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
    • Because more minds eventually get involved across the enterprise, the company can expect more and better fraud detection patterns

After the POC project is completed as described above, the actual project does not need months of engagement on my side. This is possible because of our preference for transferring knowledge to the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly. I usually expect to perform the following tasks:

  • Establish the final infrastructure to measure the efficiency of the deployed models
  • Deploy the models in additional scenarios
    • Through reports
    • By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    • Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    • Create smart ETL applications that divert suspicious data for immediate or later inspection
  • I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is also available
  • Of course, for the actual project, I would repeat the data and model preparation as needed
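The smart ETL idea from the list above can be sketched like this; the threshold and the scoring function are placeholders standing in for the deployed mining model, not part of any real API.

```python
SUSPICION_THRESHOLD = 0.8  # assumed cutoff; in practice tuned per deployment


def route(rows, score):
    """Divert suspicious rows for inspection; pass the rest through to the load."""
    suspicious, clean = [], []
    for row in rows:
        (suspicious if score(row) >= SUSPICION_THRESHOLD else clean).append(row)
    return suspicious, clean


# Stand-in scoring function; in production this would call the deployed
# mining model (for example, through a DMX singleton prediction query).
def toy_score(row):
    return 0.99 if row["amount"] > 1000 else 0.05


batch = [{"id": 1, "amount": 50}, {"id": 2, "amount": 5000}, {"id": 3, "amount": 200}]
quarantine, load = route(batch, toy_score)
```

The diverted rows go to a quarantine table for immediate or later inspection, while the clean rows continue through the regular ETL flow.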

It is virtually impossible to tell in advance how much time the deployment will take, before we decide together with the customer exactly what the deployment process should cover. Leaving the deployment part aside, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still take only an additional 5 to 10 days.

The approximate timeline for the POC project is as follows:

  • 1-2 days of training
  • 2-3 days for data preparation and data overview
  • 2 days for creating and evaluating the models
  • 1 day for initial preparation of the continuous learning infrastructure
  • 1 day for presentation of the results and discussion of further actions

Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Perhaps, if we spent just one more hour on data preparation, or created just one more model, we would get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, in my experience, is to restrict the time spent on the project in advance, in agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and of improving the models and predictions over time, without the need for constant engagement with me.

Published Tuesday, October 29, 2013 4:42 PM by Dejan Sarka


Comments

 

Jon said:

Interesting approach. Can you provide actual sample projects (Database, SSAS, Mining Models, Report)?

Cheers,

Jon

November 4, 2013 5:34 AM
 

Dejan Sarka said:

Jon, did you attend either TechEd Europe in Madrid or the PASS Summit in Charlotte? I gave a presentation about fraud detection at both conferences, and I made the code available as well.

Best regards,

Dejan

November 5, 2013 1:52 AM
 

Dejan Sarka said:

This is the fourth part of the fraud detection whitepaper. You can find the first part, the second part

December 10, 2013 6:59 AM
 

Dejan Sarka said:

This is the fifth, and final, part of the fraud detection whitepaper. You can find the first part, the

January 6, 2014 12:53 PM


About Dejan Sarka

Dejan Sarka, MCT and SQL Server MVP, is an independent consultant, trainer, and developer focusing on database and business intelligence applications. His specialties are advanced topics such as data modeling, data mining, and data quality. He works on and researches these topics together with SolidQ and The Data Quality Institute. He is the founder of the Slovenian SQL Server and .NET Users Group. Dejan Sarka is the main author or coauthor of eleven books about databases and SQL Server, with more to come. He has also developed, and continues to develop, many courses and seminars for SolidQ, Microsoft, and Pluralsight. He has been a regular speaker at many conferences worldwide for more than 15 years, including Microsoft TechEd, the PASS Summit, and others.