Thoughts from other blogs...
If you put security, for example, out in the middle tier (you know, the CICS transaction layer is what we used to call this piece – it is spelled in all capital letters, CICS, and some people pronounce it "kicks". I know, nothing useful or cool was invented before application servers, but hey – that is another story) – what happens when the middle tier technology of choice today becomes passé? That'll never happen, right? Well, I'm sure we can just reinvent screen scrapers for your middle tier technology like we did for 3278 terminal green screen applications.
And, if you put your security in the middle tier and someone, well, gets around your middle tier into your database – what then? Oh, no security… Got it.
And, if you put your data logic (like NOT NULL; this field must be a string of 30 characters or less; this is a number; this is a date that must be less than that other date over there; the value in this field must exist in that table over there; this is unique; when the status code = 'ACTIVE', then this field must be unique; and so on) in your middle tier – what happens when the next cool application comes along and doesn't quite remember all of the rules?
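Every rule in that list can live in the database as a declarative constraint, where no application can forget it. A minimal sketch using SQLite through Python's sqlite3 module – the table and column names (dept, emp, badge) are made up for illustration, and the "unique only when status = 'ACTIVE'" rule is expressed here as a partial unique index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.executescript("""
CREATE TABLE dept (deptno INTEGER PRIMARY KEY);
CREATE TABLE emp (
    empno   INTEGER PRIMARY KEY,
    ename   TEXT NOT NULL CHECK (length(ename) <= 30),  -- string of 30 chars or less
    hired   TEXT NOT NULL,
    retired TEXT CHECK (retired > hired),               -- one date less than the other
    deptno  INTEGER NOT NULL REFERENCES dept,           -- value must exist over there
    status  TEXT NOT NULL,
    badge   TEXT
);
-- "when status = 'ACTIVE', this field must be unique"
CREATE UNIQUE INDEX active_badge ON emp(badge) WHERE status = 'ACTIVE';
""")

conn.execute("INSERT INTO dept VALUES (10)")
conn.execute("INSERT INTO emp VALUES (1,'SMITH','2001-01-01',NULL,10,'ACTIVE','B1')")

# A second ACTIVE row with the same badge is rejected by the database itself,
# no matter which application tried the insert.
try:
    conn.execute("INSERT INTO emp VALUES (2,'JONES','2002-01-01',NULL,10,'ACTIVE','B1')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The point is not SQLite versus Oracle – it is that the rules are declared once, next to the data, and every current and future application inherits them for free.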
I could go on – and do in my seminar. The optimizer wants those integrity constraints. The data wants the protection of security (data hates to be exposed like that). I love to get "algorithms" from middle tier programmers that purport to enforce data integrity – especially integrity that crosses rows in a table or crosses tables in the database. It takes about 30 seconds to come up with a multi-user scenario involving 2 or 3 users that causes bad data to be created. (Programmers still think very "linearly", I find – not much thought is given to multi-user conditions. Understandable, since they treat the database like a black box and don't really get how concurrency controls are implemented anyway.)
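Here is that 30-second scenario, sketched as a toy simulation (the "table" is just a Python list, and the interleaving is written out by hand, but it is exactly what two real sessions can do). The middle tier enforces "badge must be unique among ACTIVE rows" by checking first, then inserting – and two users who both check before either inserts sail right past the rule:

```python
# Toy "table" standing in for a database table.
table = []

def check_unique_active_badge(badge):
    """The middle-tier 'algorithm': look, and if nothing is there, say OK."""
    return all(not (r["status"] == "ACTIVE" and r["badge"] == badge) for r in table)

# Interleave two users exactly as a real scheduler could:
u1_ok = check_unique_active_badge("B1")  # user 1 checks: no ACTIVE 'B1' yet
u2_ok = check_unique_active_badge("B1")  # user 2 checks before user 1 inserts
if u1_ok:
    table.append({"badge": "B1", "status": "ACTIVE"})  # user 1 inserts
if u2_ok:
    table.append({"badge": "B1", "status": "ACTIVE"})  # user 2 inserts too

dupes = [r for r in table if r["badge"] == "B1" and r["status"] == "ACTIVE"]
print(len(dupes))  # 2 – the "enforced" rule is broken
```

Each user's logic is perfectly correct in isolation; the bug only exists between them. A declarative constraint in the database does not have this window, because the database serializes the conflicting inserts itself.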
The next blog I was reading was "We Do Not Use Blogs". Mogens made a good point this morning – one I make when discussing statspack. It is very unlikely you can find the root cause of a system-wide slowdown with a statspack report. It is somewhere to start – it can give you places to look – but in my experience it rarely hands you the "answer". Oh sure, you can get lucky sometimes, but most times, no. You need to dig a little deeper.
I do disagree with his pronouncement of this stuff as not being useful. I have found places to go looking by comparing a baselined statspack report (from when the 'system' was 'good') to a report from when the 'system' was bad – looking for major changes, to help answer the question "what has changed?" when the answer from humans is invariably "nothing has changed, of course". While I might not be able to answer the final question "what is wrong?" from them, they do give me clues about where to start looking. Otherwise you have to start looking at everything.
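The baseline-comparison idea can be sketched in a few lines: take the same statistics from the "good" period and the "bad" period and rank them by how much they moved. The statistic names below are real Oracle statistic names, but the numbers are invented purely for illustration – this is not real statspack output:

```python
# Hypothetical per-hour statistics: a "good" baseline vs. the "bad" period.
baseline = {"physical reads": 1_000, "parse count (hard)": 50, "user commits": 9_000}
bad      = {"physical reads": 1_050, "parse count (hard)": 48_000, "user commits": 9_100}

# Rank statistics by their ratio to baseline, largest change first.
changes = sorted(
    ((name, bad[name] / max(baseline[name], 1)) for name in baseline),
    key=lambda kv: kv[1],
    reverse=True,
)

for name, ratio in changes:
    if ratio > 2:  # flag anything that more than doubled
        print(f"{name}: {ratio:.0f}x baseline")
```

The workload barely changed (commits and reads moved a few percent), but hard parses exploded – that is not the answer to "what is wrong", but it is a very good answer to "where do I start looking".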