Friday, December 10, 2010

A tiny challenge...

Check out this asktom post and see if you can answer "why that probably won't ORA-1555".



19 Comments:

Anonymous said....

I'm not sure I quite understand. Without an index on (a,b,c,d), won't you get a full table scan of the big table for each row in the big table? With each commit, doesn't it have to start the working set over again? So yes, the order by gets a consistent view of the original data, but everything else would be so slow that each iteration would have to figure out which version of the block is correct for the row being updated, and thus conceivably hit ORA-1555 with the slightest provocation from an outside source.

Fri Dec 10, 05:56:00 PM EST  

Blogger Thomas Kyte said....

@anonymous

Even if there were an index on (a,b,c,d) AND one of a, b, c, d were defined NOT NULL, a query of the form:

select * from big_table order by a,b,c,d


would *not* tend to use it. You'd have to hint it, or use first_rows(n) optimization, for the optimizer to consider using it.


It is more efficient to full scan -> sort order by than to index full scan -> table access by index rowid.

So, the presumption is that the plan would be full scan + sort order by - which would read the entire table into temp before returning the first row - making the chance of an ORA-1555 on that select 0% after the first row was retrieved.
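For reference, the construct under discussion - reconstructed here as a sketch from the fragments quoted later in this thread (big_table, the a,b,c,d columns, and the set clause are taken from those quoted fragments, not from working code) - is a commit-per-row loop over an ordered select:

```sql
-- Sketch only, reconstructed from fragments quoted in this thread.
-- The full scan + sort order by plan materializes the whole result
-- set (in PGA or temp) before the first row is fetched, so the
-- select itself cannot hit ORA-1555 after the first fetch.
begin
    for r in ( select * from big_table order by a, b, c, d )
    loop
        update big_table
           set x = y, z = m     -- column names as quoted in the thread
         where a = r.a and b = r.b and c = r.c and d = r.d;
        commit;                 -- one tiny transaction per row
    end loop;
end;
/
```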

Fri Dec 10, 06:00:00 PM EST  

Anonymous said....

I'm not talking about the select - I'm talking about the update. Won't that need to full scan, forcing the PGA to hold the original data set plus a version for each row, spilling its guts to disk? And if the order by makes the update hit the same block numerous times in a row, won't that hit some limit of block chaining in memory, causing strange effects?

Fri Dec 10, 06:14:00 PM EST  

Blogger Thomas Kyte said....

@anonymous

The update, even if it full scanned, would have but one block in the PGA at any time; it doesn't need the entire table in memory anywhere.

It would all work fine - slow, but fine.


But the assumption would be that the index for the update exists.

Even if it didn't, though, it would still work just fine. The database can deal with that readily and efficiently (as efficiently as a big full scan can be dealt with, anyway).

Fri Dec 10, 06:21:00 PM EST  

Blogger Flado said....

Tom, I think Anonymous has a partial point: the update WILL need a cleaned out block, even if the row is found via user (or index) rowid. Now, our loop won't be causing delayed cleanouts, as it only ever updates one block per transaction (barring chained rows), but there's nothing to stop a concurrent session from making a massive update and leaving tons of non-clean (dirty?) blocks while we run. And the update will be visiting - and therefore attempting to clean out - the same blocks many times.

What am I overlooking?
Flado

Fri Dec 10, 07:59:00 PM EST  

Blogger Thomas Kyte said....

@Flado

You are missing the commit cleanout. It is a single-row update followed by a commit - it will not be put away dirty often; in fact, it will likely never be put away dirty.

Also, there is the "so what" factor. Even if it needed cleaning every time - "so what". That would not flood the PGA with lots of stuff.

Fri Dec 10, 08:01:00 PM EST  

Blogger Flado said....

@Tom
"you are missing the commit cleanout. It is a single row update, followed by a commit - it will not be put away dirty often - in fact, it will likely never be put away dirty."

It can't be that; I said as much myself, didn't I? But a concurrent transaction can cause dirty blocks to be written. Here's a scenario:
a) We clean(if necessary), update, commit, and clean again block #1
b) session X dirties block #1 and DOESN'T clean it out
c) DBWn writes block #1
d) lots of commits happen on the system, ours and others'
e) we try to update block #1 again and discover it needs cleanout, but the UNDO is gone - BANG!

The part of Anonymous' point I was proposing to concede was that the update may have its own 1555 problems. NOT the PGA exaflood.

Fri Dec 10, 08:28:00 PM EST  

Blogger jametong said....

If the sorted result set is too large, the fetch will come from a temporary segment; if the data is not too large, it will come from the process's PGA. Either way, the fetch will not touch the table blocks again and will not generate more CR blocks for the outer-loop select, so it will not raise ORA-1555.

Sat Dec 11, 06:59:00 AM EST  

Blogger Thomas Kyte said....

@Flado

But our update transactions are far too short to be affected by this. What would need to happen is:

a) we start an operation that needs blocks as of "point T0 in time". When we start, we know some high-water-mark set of transactions that are known to have committed (almost all of the ones before T0)

b) lots of transactions complete - lots and lots and lots of them

c) we get to a block with some transaction on it that happened after T0 - but the transaction information was wiped out during (b) - ORA-1555, restart.



Our update transaction is far too short for that: there simply isn't enough time for (b) to hit us.

Sun Dec 12, 08:26:00 AM EST  

Blogger Flado said....

@Tom:
It was the finding of a high water mark of transactions known to have committed that I was worried about. But now I see that this risk has nothing to do with the "update in a loop" approach and is therefore irrelevant to this discussion.
Thanks for your time!
Flado

Sun Dec 12, 10:30:00 AM EST  

Blogger Pratik said....

@Flado

b) session X dirties block #1 and DOESN'T clean it out


"Doesn't clean it out" could mean either that the transaction is not committed, or that it is committed but the cleanout was delayed.

If you mean it is not committed, then Oracle will not reuse that undo block; it will remain in undo.

If you mean the cleanout was delayed, then the next transaction that touches block #1 will perform the delayed cleanout.

Mon Dec 13, 04:46:00 AM EST  

Blogger Flado said....

@Patrick:

I mean delayed commit cleanout.

>in this case other transaction which use block#1, will perform delayed clean out operation.

Yes, and for that cleanout, it will need to know whether Session X's transaction was committed, for which it will need to look at the undo segment header. If the transaction hasn't yet committed, it would still be there, so no problem (no cleanout necessary). If it is not there, our session needs to determine whether the transaction was committed at the point in time we're interested in. For that, we need to look at the undo segment header as it was at that same point in time. Since that point in time is the start of our UPDATE statement (i.e., a few microseconds in the past), the required UNDO would most likely still be available (as Tom has pointed out), and we would have no problem reconstructing the undo header block and seeing that Session X's transaction was indeed committed. We would only hit ORA-1555 if there were so many commits between the start of the UPDATE and the time we try to clean out the block that we could not reconstruct the undo header.
At least, that's my current understanding of the cleanout mechanism.

Mon Dec 13, 05:18:00 AM EST  

Blogger Pratik said....

@Flado
I am "Pratik", not "Patrick" (the chars are out of order, with one extra 'c' :)


>If it is not there, our session needs to determine whether the transaction was committed at the point in time we're interested in.

If it is not there, that means the undo block has been overwritten (and the delayed cleanout is done), and that is only possible once Session X's transaction has committed.

I hope my understanding is right here....

Mon Dec 13, 07:49:00 AM EST  

Blogger Flado said....

@Pratik:
I'm terribly sorry about misreading your name! Please accept my apologies.

>If it is not there, that means Undo block is overwritten(and delayed clean out is done)

Yes, if the transaction ID is not in the undo header, that would mean it is committed as of now. But we're doing a CR (consistent read) of the data block as of the start of our update - and that might be very different from "now". So we need to determine whether the transaction was already committed when we started the update.

(That's how I think it works, at least; I'm probably oversimplifying)

Mon Dec 13, 08:56:00 AM EST  

Anonymous said....

The cursor is a forward-looking cursor that changes one row at a time. Once you change a row, the cursor doesn't need to look at it again, so it doesn't need to keep a read-consistent view of rows it has already changed.

Gandolf989.

Mon Dec 13, 11:45:00 AM EST  

Anonymous said....

Thanks for clarifying, everyone, especially Flado's scenario.

Mon Dec 13, 04:17:00 PM EST  

Blogger Pratik said....

This comment has been removed by the author.

Fri Dec 17, 01:32:00 AM EST  

Anonymous said....

Shouldn't you need...

for x in ( select * from big_table order by a,b,c,d for update )

...because someone can update and commit a,b,c,d while we are still processing this transaction? So the actual a,b,c,d can differ from what we reference in the update statement:

update big_table set x=y, z=m where a=x.a and b=x.b and c=x.c and d=x.d;

Tue Dec 21, 11:51:00 PM EST  

Blogger Thomas Kyte said....

@anonymous

About the for update: the presumption for something like this would be "I am a batch job, the only game in town" - at least, that was one of my assumptions.

But yes, if others had the ability to access the table, I would have started with

lock table t in exclusive mode

Wed Dec 22, 12:19:00 PM EST  
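As a sketch of Tom's suggestion (table name t follows his comment; the surrounding batch is assumed, not quoted), the batch run might begin like this - with one caveat worth noting, since in Oracle a commit releases all locks:

```sql
-- Sketch: serialize against other writers before the batch run.
lock table t in exclusive mode;

-- ... run the update-in-a-loop batch here ...

-- Caveat (standard Oracle behavior): COMMIT releases every lock the
-- transaction holds, including the one taken by LOCK TABLE. With a
-- commit per row, the exclusive lock only survives until the first
-- commit - which is fine under the "only game in town" assumption,
-- but not a real concurrency guarantee otherwise.
```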

