As with everything, there is no FAST=TRUE setting. While the JDBC default fetch size of 10 is not ideal for your situation, it is OK for a "typical" OLTP application, and really isn't that bad for your case either, it seems. Apparently a large fetch size is not ideal for your situation either. But again, it isn't that bad to do 1000 at a time.

The other factor you haven't mentioned is how WIDE the rows being pulled are. Consider that the chunk of data you pull from the database server across the network to the app server per fetch is roughly WIDTH * ROWS. If your rows are 5000 bytes across and you're pulling 1000 at a time, then each fetch brings over about 5 MB of data. In another case, perhaps your rows are "skinny" at only 100 bytes across; fetching 1000 of those only shuttles about 100 KB around.
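
If you want to put numbers on that for your own tables, here is a rough sketch (assuming an Oracle JDBC setup; the connection URL, credentials, and table name are placeholders) that reads the optimizer's AVG_ROW_LEN from USER_TABLES and estimates the bytes moved per fetch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchPayloadEstimate {
    public static void main(String[] args) throws Exception {
        int fetchSize = 1000; // candidate fetch size to evaluate

        // Connection URL, credentials, and table name below are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", "app_password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT avg_row_len FROM user_tables WHERE table_name = ?")) {

            ps.setString(1, "WIDE_TABLE");
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    long rowWidth = rs.getLong(1);        // average bytes per row, from optimizer stats
                    long perFetch = rowWidth * fetchSize; // approximate bytes moved per round trip
                    System.out.printf("~%d bytes (%.2f MB) per fetch of %d rows%n",
                            perFetch, perFetch / (1024.0 * 1024.0), fetchSize);
                }
            }
        }
    }
}
```

This is only as good as the table's statistics, but it gives you a feel for whether a given fetch size means kilobytes or megabytes per round trip.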

Because only YOU can know what the data will look like coming back, the recommendation is to set the fetch size system-wide for the "general" case, then adjust the oddball queries individually as needed.
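
As a concrete illustration of that approach (again just a sketch, with placeholder connection details and table names): with the Oracle driver you can raise the driver-wide default via the defaultRowPrefetch connection property, then override it per statement with the standard JDBC setFetchSize() for the oddball queries.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

public class FetchSizeTuning {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");          // placeholder credentials
        props.setProperty("password", "app_password");
        // Oracle JDBC driver: raise the driver-wide default (10) for the "general" case.
        props.setProperty("defaultRowPrefetch", "100");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", props)) {

            // Oddball bulk query: override the default for this statement only.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM wide_table")) {
                ps.setFetchSize(1000); // standard JDBC per-statement hint
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // process rs.getLong("id"), rs.getString("payload"), ...
                    }
                }
            }
        }
    }
}
```

Keep in mind that setFetchSize() is only a hint to the driver, so it's worth confirming the effect with a SQL trace or by timing the fetch loop.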

In general, I too have found 100 to be a better setting for large data processes. That's not a recommendation, just a relayed observation.
