Updating Millions of Records in Oracle


I want to update and commit every so many records, say every 10,000, rather than doing it all in one stroke, because I may end up with rollback segment issues.

Do you have any suggestions on how best to do this?
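For context, here is a minimal sketch of the kind of batch-commit loop the question describes. The table name, column, and predicate (big_table, status, 'OLD'/'NEW') are hypothetical placeholders, not details from the question:

DECLARE
  CURSOR c IS
    SELECT rowid AS rid
    FROM   big_table              -- hypothetical table
    WHERE  status = 'OLD';        -- hypothetical predicate
  TYPE rid_tab IS TABLE OF ROWID;
  l_rids rid_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids LIMIT 10000;   -- work in batches of 10,000 rows
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE big_table SET status = 'NEW' WHERE rowid = l_rids(i);
    COMMIT;                                         -- commit after every batch
  END LOOP;
  CLOSE c;
END;
/

Note that committing inside the loop while the driving cursor is still open is exactly the pattern that tends to run into ORA-01555 (snapshot too old), which is one reason an answer like the one below tends to favor a single set-based pass instead.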

Which Index Is Better in a Partitioned Table: Global or Local?

I have a table partitioned on a date column, say startdate, with interval partitions for each day.
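For reference, a sketch of the two kinds of index being compared, on a hypothetical interval-partitioned table (all table, column, and index names here are made up for illustration):

CREATE TABLE orders (
  order_id   NUMBER,
  startdate  DATE,
  status     VARCHAR2(10)
)
PARTITION BY RANGE (startdate)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'));

-- Local index: one index segment per table partition, equipartitioned with the table,
-- so each daily partition carries its own index segment.
CREATE INDEX orders_status_li ON orders (status) LOCAL;

-- Global index: a single index structure spanning all table partitions; partition
-- maintenance on the table generally requires the global index to be maintained as well.
CREATE INDEX orders_id_gi ON orders (order_id);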

You’d need to read almost every block in the table anyway, so doing a full table scan would be the right approach.

At the very least, you will minimize the amount of REDO you generate.
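One common way to get that effect, sketched here with hypothetical names (the original answer may well have described a different mechanism), is to rebuild the table with a direct-path CREATE TABLE ... AS SELECT that applies the change in a single full-scan pass; with NOLOGGING, the bulk of the redo is skipped:

-- Build the replacement table in one pass, applying the change as you go
-- ("big_table", "id", "startdate", and "status" are hypothetical names)
CREATE TABLE big_table_new NOLOGGING AS
SELECT id,
       startdate,
       CASE WHEN status = 'OLD' THEN 'NEW' ELSE status END AS status
FROM   big_table;

-- Recreate indexes, constraints, and grants on big_table_new, then swap it in:
DROP TABLE big_table;
RENAME big_table_new TO big_table;

The trade-off is that a NOLOGGING rebuild is not recoverable from the redo stream, so a backup is usually taken afterwards.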

By contrast, if you update the rows in place, you will write 3 million blocks into the buffer cache, one at a time: 1 million table blocks and 2 million UNDO blocks.

That will probably exceed the buffer cache for most people, and even if it doesn’t, things such as checkpoints and other sessions needing free blocks in the cache will cause those blocks to be written to disk.
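To put a rough number on that (the 8 KB block size is an assumption for illustration, not something stated in the question): 3,000,000 blocks x 8 KB is roughly 24 GB of dirty buffers, which is larger than the buffer cache on many systems.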

You would end up reading the entire table anyway—because every block needs about 1 row updated in it—generating all the UNDO and flooding the buffer cache again with dirty blocks that need to be written to disk one by one.

Some might say, “But because you are touching just 1 percent of the rows, you’d naturally want to use an index,” but that wouldn’t be true either.
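To see why, plug in some illustrative numbers (these are assumptions for the sake of the arithmetic, not figures from the question): with 100,000,000 rows at roughly 100 rows per block, the table has about 1,000,000 blocks; updating 1 percent of the rows means 1,000,000 rows, or on average about one updated row per block. An index-driven plan would therefore visit nearly every table block anyway, one single-block read at a time, plus the index blocks, whereas a full scan reads the same blocks with far cheaper multiblock I/O.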
