
Dear DBA ( a note from a principal developer)

Dear DBA, as I am running load tests with product XYZ in preparation
for the upload of many thousands of XML documents, I find that something
is bogging down your database. I thought your equipment
should easily outpace the old Compaqs that we have running Windows, and that
Oracle should do better than our PostgreSQL-based setup. Yet
I find the numbers show the opposite:

Oracle setup: 733 documents in 2325.713 s, for an average of 3.173 s per document,
at ~11% CPU load

PostgreSQL setup: 944 documents in 1960.4 s, for an average of 2.077 s per document,
at ~15% CPU load

Loading takes roughly 50% longer on the Oracle setup. Since the
application server load is negligible over the same period, this
suggests we are waiting for the database.

I wonder why.


Hmmm, I'm thinking that this developer is not acquainted with the Oracle database technology choices available to him. He continues:

Since I cannot observe the Oracle database server's I/O and
CPU activity directly, I have looked at the Oracle ADVISOR
report [Query 2: ADVISOR REPORT below]. I am showing
the most important part of the report on task_id 4313
and task_name ADDM:6344221675_1_1349:
|
| FINDING 1: 60% impact (489 seconds)
| ------------------------------------
| Waits on event "log file sync" while performing COMMIT and ROLLBACK
| operations were consuming significant database time.

Is there anything you could do about this? For example, could
you move the redo logs onto a different disk device? Or is this
database or hardware very busy processing other transactions?

The loading of data might take at least 50% longer than expected
(and possibly more, because we'll hit the database with 4-8
processes in parallel). Your help will be much appreciated.

regards,
Prince

Dear Prince,
We've noticed many (hundreds of thousands of) insert statements being executed through the JDBC client and wondered what your commit strategy is.

Sometimes Java programs have auto-commit turned on (every DML statement is committed one at a time). If that's the case here, it is causing a lot of round trips over the network to confirm each committed row in the database.
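If that is what's happening, the first step is simply to turn auto-commit off. A minimal sketch, assuming a plain java.sql.Connection (the class and method names here are ours, purely for illustration):

```java
import java.sql.Connection;
import java.sql.SQLException;

// Sketch only: make sure the JDBC connection is not in auto-commit mode before
// the load starts, so every INSERT does not trigger its own commit and the
// associated "log file sync" wait on the server.
public final class CommitMode {
    public static void disableAutoCommit(Connection conn) throws SQLException {
        if (conn.getAutoCommit()) {     // JDBC connections default to auto-commit on
            conn.setAutoCommit(false);  // rows are now committed only when the code says so
        }
    }
}
```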

Other times, a bulk load will attempt to load thousands of rows at a time before committing. That can cause a large amount of undo segment growth, which slows throughput as well. If that's the case, the commit interval needs to be changed.

If you are using JDBC's addBatch and executeBatch methods to process multiple SQL statements, you can adjust how many rows per batch you execute and commit.

Alternatively, you could modify your code to commit pending transactions after a given number of iterations. If possible, start at 500 and vary it higher or lower to meet your throughput goals.
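As a rough sketch of what we have in mind, again assuming a plain JDBC connection (the xml_documents table, its columns, and the XmlDoc record are hypothetical placeholders, and 500 is only the suggested starting interval):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Sketch only: batch the inserts with addBatch/executeBatch and commit every
// COMMIT_INTERVAL rows, keeping both network round trips and undo usage bounded.
public final class BatchLoader {
    public record XmlDoc(long id, String xml) {}

    private static final int COMMIT_INTERVAL = 500;   // starting point, tune up or down

    public static void load(Connection conn, List<XmlDoc> docs) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO xml_documents (doc_id, doc_xml) VALUES (?, ?)")) {
            int pending = 0;
            for (XmlDoc d : docs) {
                ps.setLong(1, d.id());
                ps.setString(2, d.xml());
                ps.addBatch();
                if (++pending == COMMIT_INTERVAL) {
                    ps.executeBatch();   // one round trip for the whole batch
                    conn.commit();       // commit at the chosen interval
                    pending = 0;
                }
            }
            if (pending > 0) {           // flush and commit the final partial batch
                ps.executeBatch();
                conn.commit();
            }
        }
    }
}
```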

One thing you should know about the test database that we're hosting is that we are not collecting and archiving redo logs.

We cycle through the redo logs very frequently. We have enlarged the redo log size to 200 MB (up from 55 MB), but we still need to reduce the amount of redo being written. To that end, we recommend that the
insert SQL statements for the following tables be rewritten to use the APPEND hint, since it appears that you are performing bulk-load inserts on them.

If you want, we can even put the target tables into NOLOGGING mode during this phase, while you need to load a lot of data. That would let you get hundreds of thousands of rows into the database in a very short period of time.
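For illustration, here is a rough sketch of how the two suggestions combine, assuming the rows are first loaded into a staging table and the statements are issued over JDBC (all table and column names are placeholders, and the ALTER TABLE would normally be run on our side). A direct-path insert via the APPEND hint into a NOLOGGING table generates only minimal redo; note that the hint takes effect on set-based INSERT ... SELECT statements, not on single-row INSERT ... VALUES calls.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch only: xml_documents and xml_documents_stage are hypothetical placeholders.
public final class DirectPathMove {
    public static void move(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            // Usually run by the DBA, once, before the load phase begins:
            stmt.execute("ALTER TABLE xml_documents NOLOGGING");

            // Move the staged rows with a direct-path (APPEND) insert:
            stmt.executeUpdate(
                "INSERT /*+ APPEND */ INTO xml_documents (doc_id, doc_xml) " +
                "SELECT doc_id, doc_xml FROM xml_documents_stage");

            conn.commit();   // the target table cannot be queried in this session
                             // until the direct-path insert is committed
        }
    }
}
```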

