Today I'm talking about code instrumentation, and introducing two more blogs I've been reading on a regular basis. One of them describes a really nice utopia.
Code instrumentation, what is it? To me, it is the fine art of making approximately every other line of your code a “debug” statement of some sort. Trace information. With timestamps. And meaningful information.
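What does that look like in practice? Here is a minimal sketch using Python's standard `logging` module (the `place_order` function and its names are purely illustrative): every significant step emits a timestamped trace line.

```python
import logging

# Timestamped trace output: the format string puts a timestamp,
# level, and logger name in front of every message.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("orders")

def place_order(customer_id, items):
    # Hypothetical business function; roughly every other line is trace.
    log.debug("place_order start customer=%s items=%d", customer_id, len(items))
    total = sum(price for _name, price in items)
    log.debug("computed total=%.2f", total)
    log.debug("place_order done customer=%s", customer_id)
    return total

place_order("c1", [("widget", 2.50), ("gadget", 3.00)])
```

The point is not the mechanism, it is the habit: meaningful, timestamped trace at every step, present in the code from day one.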
Code instrumentation, why is it? Because poor performance is not always a “database problem”. More often than not it is an application issue, and when the application is spread over 14 tiers of complexity, tracking down the bottleneck is grievously hard. If you just whip together an application and throw it out there without any thought to monitoring it over time, be prepared to have poor performance and no clue as to why or where.
Commonly requested information – Tom, can you tell me what my average transaction response time is? Answer: nope, no clue, not a single clue. The only thing I can tell you from the database is on average how long individual bits of SQL might have taken. I don’t know what a transaction is to you and besides, I would tell you only about the database component – no network, no application time, just the database.
And you know what, the end users are the ones that care and they need to have all of the time accounted for. You know what the only thing is that can really give you good transaction response times over time? The application. Why? Because it is the thing that knows what a transaction is, what a transaction does.
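Since the application is the only thing that knows where a transaction begins and ends, the timing has to live there. A sketch of the idea, assuming nothing beyond the Python standard library (the `transaction` helper and its name are my invention): wrap the whole business operation, network, application logic, and database calls included, so all of the end user's time is accounted for.

```python
import time
from contextlib import contextmanager

# Collected (name, elapsed_seconds) pairs; in real life these would go
# to a log file or a monitoring table, not an in-memory list.
timings = []

@contextmanager
def transaction(name):
    # monotonic() is immune to wall-clock adjustments, so elapsed
    # time is always non-negative and accurate.
    start = time.monotonic()
    try:
        yield
    finally:
        timings.append((name, time.monotonic() - start))

# Usage: the application, not the database, decides what a
# "transaction" is and names it.
with transaction("place_order"):
    pass  # ... call services, run SQL, render the response ...

avg = sum(elapsed for _name, elapsed in timings) / len(timings)
```

From data like this you can answer the questions the database alone never can: average transaction response time, and whether it is drifting up over the weeks.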
I’ve met many a developer that refuses to put this into their code. “It’ll make it run slower”, “This is extra stuff I don’t need”, “It takes me longer to write”. In 18 years I have yet to hear a valid reason why instrumentation should not be done. I have only heard extremely compelling reasons why it must be done.
End users want to know: is the system getting slower over time, and if so, by how much? Management wants to know: what are my transaction response times, how many transactions do we do, when is the busiest time, and so on. People responsible for administering the system need to be able to identify where bottlenecks are, who needs to be brought in to look at something, who is responsible.
Without code instrumentation, you cannot answer any of those questions – not a single one. Not accurately anyway. (Well, maybe you can if you live in utopia!)
To the developers that say “this is extra code that will just make my code run slower” I respond “well fine, we will take away V$ views, there will be no SQL_TRACE, no 10046 level 12 traces, in fact – that entire events subsystem in Oracle, it is gone”. Would Oracle run faster without this stuff? Undoubtedly – not. It would run many times slower, perhaps hundreds of times slower. Why? Because you would have no clue where to look to find performance related issues. You would have nothing to go on. Without this “overhead” (air quotes intentionally used to denote sarcasm there), Oracle would not have a chance of performing as well as it does. Because you would not have a chance to make it perform well. Because you would not know even where to begin.
So, a plea to all developers: get on the instrumentation bandwagon. You’ll find your code easier to debug (note how Oracle doesn’t fly a developer to your site to debug the kernel; there is enough instrumentation to do it remotely). You’ll find your code easier to tune. You’ll find your code easier to maintain over time. Also, make this instrumentation part of the production code, don’t leave it out! Why? Because, funny thing about production – you are not allowed to drop in “debug” code at the drop of a hat, but you are allowed to update a row in a configuration table, or in a configuration file! Your trace code, like Oracle’s, should always be there, just waiting to be enabled.
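To make that last point concrete, here is a sketch of “always there, waiting to be enabled”, assuming a small JSON configuration file holds the switch (the file layout and the `apply_trace_config` helper are my invention). The trace calls ship in the production code; the config only decides whether they say anything.

```python
import json
import logging
import os
import tempfile

log = logging.getLogger("app")
log.addHandler(logging.StreamHandler())

def apply_trace_config(path):
    # Re-read the config and flip the logging level accordingly.
    # The debug statements themselves never leave the code; when trace
    # is off they are cheap no-ops below the logger's threshold.
    with open(path) as f:
        cfg = json.load(f)
    log.setLevel(logging.DEBUG if cfg.get("trace") else logging.WARNING)

# Simulate an operator editing the config file in production.
cfg_path = os.path.join(tempfile.mkdtemp(), "app.json")
with open(cfg_path, "w") as f:
    json.dump({"trace": True}, f)

apply_trace_config(cfg_path)
log.debug("tracing enabled via %s", cfg_path)  # emitted only when trace is on
```

The same pattern works with a row in a configuration table instead of a file: no redeploy, no “dropping in” debug code, just flipping a switch on instrumentation that was there all along.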