
Well, not much to ask apart from the question: what do you mean when you say an OLTP DB must have high throughput?

Going to the wiki:

"In communication networks, such as Ethernet or packet radio, throughput or network throughput is the average rate of successful message delivery over a communication channel. This data may be delivered over a physical or logical link, or pass through a certain network node. The throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per second or data packets per time slot."

So does this mean that OLTP databases need to have a high/quick insertion rate (i.e. avoiding deadlocks etc.)?

I was always under the impression that if we take a database for, say, the airline industry, it must have quick insertion but at the same time a quick response time, since that is critical to its operation. And shouldn't this in many ways be limited by the protocol involved in delivering the message/data to the database?

I am not trying to single out the "only" characteristic of OLTP systems. In general, I would like to understand what characteristics are inherent to an OLTP system.

Cheers!

  • btw: avoiding deadlocks is more about application design and less relevant when talking about performance. Commented Apr 14, 2011 at 1:07
  • to be overly simplistic: oltp environments (from a db standpoint anyway) are generally concerned with optimizing DML (inserts/updates/deletes). You don't deal with summary/aggregate/rollup, which is more for BI/DSS data warehouses/marts.
    – tbone
    Commented Apr 14, 2011 at 13:29

2 Answers


In general, when you're talking about the "throughput" of an OLTP database, you're talking about the number of transactions per second: how many orders the system can take per second, how many web page requests it can service, how many customer inquiries it can handle. That tends to go hand-in-hand with discussions about how the OLTP system scales -- if you double the number of customers hitting your site every month because the business is taking off, for example, will the OLTP systems be able to handle the increased throughput?
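
To make that concrete, here is a minimal sketch of measuring OLTP throughput as transactions per second. It uses Python's built-in sqlite3 purely as a stand-in for a real OLTP database, and the orders table and its columns are invented for illustration only:

    import sqlite3
    import time

    # SQLite stands in for the OLTP database here; the orders table and
    # its columns are made up for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

    start = time.perf_counter()
    committed = 0
    while time.perf_counter() - start < 5.0:   # run the workload for 5 seconds
        with conn:                             # each block is one committed transaction
            conn.execute(
                "INSERT INTO orders (customer, amount) VALUES (?, ?)",
                ("cust-%d" % committed, 9.99),
            )
        committed += 1

    elapsed = time.perf_counter() - start
    print("throughput: %.0f transactions/sec" % (committed / elapsed))

The number that comes out of something like this is what people usually mean by OLTP throughput: lots of small transactions, counted per unit of time.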

That is in contrast to OLAP/DSS systems, which are designed to run a relatively small number of transactions over much larger data volumes. There, you're worried far less about the number of transactions you can do than about how those transactions slow down as you add more data. If you're that wildly successful company, you probably want the same number and frequency of product-sales-by-region reports out of your OLAP system as you generate exponentially more sales. But you now have exponentially more data to crunch, which requires that you tune the database just to keep report performance constant.
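
For contrast, here is a sketch of the OLAP side under the same assumptions (SQLite as a stand-in, a made-up sales table): a single report-style query whose cost grows with the data volume rather than with the number of transactions.

    import sqlite3
    import time

    # SQLite and the sales table are stand-ins for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")

    for rows in (100_000, 200_000, 400_000):   # the data volume keeps growing
        conn.execute("DELETE FROM sales")
        conn.executemany(
            "INSERT INTO sales VALUES (?, ?, ?)",
            (("region-%d" % (i % 10), "product-%d" % (i % 100), 1.0) for i in range(rows)),
        )
        conn.commit()

        start = time.perf_counter()
        report = conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"
        ).fetchall()
        print("%7d rows -> report in %.3f s (%d regions)"
              % (rows, time.perf_counter() - start, len(report)))

The transaction count stays the same (one report per data size), but the work behind it keeps growing -- which is why OLAP tuning is about keeping that report time flat, not about transactions per second.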


Throughput doesn't have a single, fixed meaning in this context. Loosely, it means the number of transactions per second, but "write" transactions are different than "read" transactions, and sustained rates are different than peak rates. (And, of course, a 10-byte row is different than a 1000-byte row.)
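
As a rough illustration of why those qualifiers matter, here is a sketch (again using Python's sqlite3 and a made-up events table, not a real benchmark harness) that runs small write transactions for ten seconds and reports both the sustained average and the best single second; the two numbers can differ noticeably.

    import sqlite3
    import time
    from collections import Counter

    # SQLite and the events table are illustrative stand-ins only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

    per_second = Counter()
    start = time.perf_counter()
    while True:
        now = time.perf_counter()
        if now - start >= 10.0:
            break
        with conn:                             # one small write transaction
            conn.execute("INSERT INTO events (payload) VALUES (?)", ("x" * 10,))
        per_second[int(now - start)] += 1      # bucket each commit by elapsed second

    print("sustained: %.0f tx/s averaged over 10 s" % (sum(per_second.values()) / 10.0))
    print("peak:      %d tx/s in the best single second" % max(per_second.values()))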

I stumbled on Performance Metrics & Benchmarks: Berkeley DB the other day when I was looking for something else. It's not a bad introduction to the different ways of measuring "how fast". Also, this article on database benchmarks is an entertaining read.

