I know this is a topic that could cause a few heads to spin around uncontrollably if taken the wrong way, but I persevere. Today I was having a conversation with an end user, an influential one at that, about a business rule that was needed for the time entry system we are putting together (and no comments on make/buy decisions; there is no way I could come down on either side of that discussion and please all of the real people I work with.)
Anyhow, the rule was that the maximum number of regular hours a person could record was 168 and OT was 128. In other words, 7 * 24, or (7 * 24) – 40. This is a very important rule to put into your system to prevent someone from accidentally entering an amazingly large number of hours and likely getting paid for it (before having to give it all back, of course.)
But this is clearly not the final rule on number of hours, right? I mean, you certainly don’t usually want a user to record 168 hours, right? I mean, I have worked a long week or two in my life, but never have I approached 168 (around 80 is the place where the magical pixies arrive to do a good amount of my work for me, though they write in a gibberish that I can’t quite seem to translate once I have slept.)
So I started thinking that there needed to be some standard way to describe the different levels of business rule enforcement. For example:
- Hard and fast – These are rules that will never be broken under any circumstances. Like a birthdate in the future. Or a negative amount of time worked (reversing entries could be an exception, but you would tag such a row accordingly if needed.) Working more hours in a week than is physically possible is another.
- Abnormal range – This is a band of values that are not normal, but they are not necessarily wrong either. Overtime might be an example. For non-exempt employees, you might warn them “hey, OT is not usual, are you sure,” which, loosely translated to manager speak is “bozo, if you put in for OT pay and you didn’t actually work OT, I am going to pound you”. You might even use some form of predictive algorithm for this, like if this is the first time they have ever put in OT, a gentle warning might be raised when saving the time sheet.
- Approval range – This is the level where the user is not empowered to make their own decisions. For example, take the 168-hour week. I am pretty sure that this value (for the average user) would require some level of extra management approval. If not, a person could file a few of these doozies a few weeks before they quit and make a mint, assuming no other part of the process has implemented checks for abnormal ranges, that is.
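To make the three levels concrete, here is a minimal sketch of how a business layer might classify a weekly hours figure. The 40- and 80-hour cutoffs are illustrative assumptions on my part, not values from any real policy; only the 0 and 168 bounds come from the hard physical limit of a 7 * 24 week.

```python
# Illustrative sketch of the three enforcement levels. The "warn" and
# "approval" thresholds (40 and 80) are assumptions for the example;
# only 0..168 is a hard physical limit (7 days * 24 hours).

HOURS_IN_WEEK = 7 * 24  # 168: nobody can work more than this

def classify_weekly_hours(hours):
    """Return 'invalid', 'approval', 'warn', or 'ok' for a weekly total."""
    if hours < 0 or hours > HOURS_IN_WEEK:
        return "invalid"    # hard and fast: reject outright
    if hours > 80:
        return "approval"   # approval range: needs a manager's sign-off
    if hours > 40:
        return "warn"       # abnormal range: legal, but "are you sure?"
    return "ok"
```

The point is only that each level calls for a different response: reject, require sign-off, or merely nag.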
The question then becomes how to implement these rules. And this is where the battle begins to get ugly, doesn't it? Business rules in the database, the horrors! Well, ok, let me just state *my* feelings on the subject, and feel free to chime in whenever.
- Hard and fast – These rules ought to be in the database layer (at least), using constraints (unique, primary key, check, foreign key) and in some cases triggers (not so much stored procedures.) Not doing this leaves you open to some goober (a DBA like myself) coming in with SSMS and “fixing” the data in a way that was not exactly intended. It also makes 100% sure that data protected with constraints (trusted ones, anyhow) will not need to be checked again upon usage. Too much code in screens and ETL processes essentially restates the very common rules that should never be broken.
- Abnormal range – These rules should stay out of the database entirely. While it is good for the UI to give the user a hint that they are being dumb, if there is any chance of a value being correct, then it is just too much trouble to validate it at the database layer.
- Approval range – The approval range is the shared responsibility of the business layer and the database. The database at the very least needs to record that an authorization was obtained to override the normal business rules, but in most cases, what that range was at any given time is probably best left a mystery to the database. Just like in the abnormal range, your screens and ETL processes need to know that the data is within the realistic realm, but they certainly shouldn’t toss chunks when they hit the data. Using overrides in your data warehouse to interpret the data, and possibly excluding overridden data from certain queries/processes, is certainly a possibility (it might be that the customer says to use some out-of-the-normal data, but for normal calculations you only use “regular” readings. Who knows; clients do a lot of stuff that makes no sense to us programmers, but it is their data!)
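As a sketch of how the database-layer side of this might look, here is the hard and fast 0-to-168 rule expressed as a CHECK constraint, plus a column that merely records that an approval happened without the database knowing what the approval range was. I am using SQLite through Python purely so the example is runnable; the table and column names are made up for illustration, and on SQL Server the same CHECK clause would live in your CREATE TABLE there too.

```python
# Sketch: the "hard and fast" rule lives in the schema as a CHECK
# constraint, so even a DBA poking at the data directly cannot slip
# past it. The approved_by column only *records* that an override was
# authorized; the approval threshold itself stays in the business layer.
# (SQLite in-memory database; table/column names are illustrative.)
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE time_entry (
        employee_id INTEGER NOT NULL,
        week_start  TEXT    NOT NULL,
        hours       REAL    NOT NULL
            CHECK (hours >= 0 AND hours <= 168),  -- hard and fast
        approved_by TEXT                          -- NULL unless overridden
    )
""")

# A normal week goes straight in.
conn.execute("INSERT INTO time_entry VALUES (1, '2024-01-01', 45, NULL)")

# A 168-hour doozy is legal to the database, but the application layer
# should have insisted on an approver before letting it through.
conn.execute("INSERT INTO time_entry VALUES (1, '2024-01-08', 168, 'boss')")

# Physically impossible hours are rejected no matter who asks.
try:
    conn.execute("INSERT INTO time_entry VALUES (1, '2024-01-15', 200, NULL)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

Note that the schema never mentions 80 hours, or overtime, or approvals at all; it only refuses the impossible and writes down who signed off on the improbable.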
Proper separation of application logic and data is essential to building a great, robust solution, but what “proper” means can be a matter of contention. Just remember that the data is the reason for 99% of all business apps. Data on who ordered what, data on how they paid, and analyzing that data to get customers to come back and repeat the process (and tell their friends!) The more hard and fast rules you can validate, the more likely your data assets will be safe from ignorant mistakes (by a programmer, at least. Can’t stop a user from spelling Toaster: C-h-i-c-k-e-n. Well, at least not until I finish developing my mind probe. Shh, don’t tell anyone.)