What is OLTP?

OLTP defined

OLTP, or Online Transaction Processing, is a type of data processing that consists of executing a number of transactions occurring concurrently—online banking, shopping, order entry, or sending text messages, for example. Traditionally, these have been economic or financial transactions, recorded and secured so that an enterprise can access the information at any time for accounting or reporting purposes.

In the past, OLTP was limited to real-world interactions in which something was exchanged—money, products, information, requests for services, and so on. But the definition of transaction in this context has expanded over the years, especially since the advent of the internet, to encompass any kind of digital interaction or engagement with a business that can be triggered from anywhere in the world and via any web-connected sensor. It also includes any kind of interaction or action—such as downloading PDFs from a web page, viewing a specific video, automatic maintenance triggers, or comments on social channels—that may be critical for a business to record in order to serve its customers better.

The primary definition for transactions—economic or financial—remains the foundation for most OLTP systems, so online transaction processing typically involves inserting, updating, and/or deleting small amounts of data in a data store to collect, manage, and secure those transactions. Typically a web, mobile, or enterprise application tracks all those interactions or transactions with customers, suppliers, or partners and updates them in the OLTP database. This transaction data is critical for businesses: it is used for reporting and analyzed for data-driven decision making.
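The insert-then-update pattern described above can be sketched with a minimal example. This is an illustrative sketch only, using Python's built-in sqlite3 module and a hypothetical `orders` table; real OLTP systems use the same small, targeted writes against production-grade databases.

```python
import sqlite3

# Hypothetical schema: a minimal "orders" table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, status TEXT)")

# A typical OLTP write: insert one small record when the transaction occurs...
conn.execute("INSERT INTO orders (customer, status) VALUES (?, ?)", ("alice", "placed"))
conn.commit()

# ...and later update that same record as the business process advances.
conn.execute("UPDATE orders SET status = ? WHERE customer = ?", ("shipped", "alice"))
conn.commit()

status = conn.execute(
    "SELECT status FROM orders WHERE customer = ?", ("alice",)
).fetchone()[0]
print(status)  # shipped
```

Note that each operation touches only a single row—the hallmark of OLTP workloads, as opposed to the full-table scans typical of analytics.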


Businesses usually have two types of data processing capabilities: OLTP and OLAP.

OLTP versus OLAP

Though they sound similar and are both online data processing systems, there is a stark difference between the two.

OLTP enables the real-time execution of large numbers of transactions by large numbers of people, whereas online analytical processing (OLAP) usually involves querying these transactions (also referred to as records) in a database for analytical purposes. OLAP helps companies extract insights from their transaction data so they can use it for making more informed decisions.

The table below compares OLTP and OLAP systems.

| OLTP systems | OLAP systems |
| --- | --- |
| Enable the real-time execution of large numbers of database transactions by large numbers of people | Usually involve querying many records (even all records) in a database for analytical purposes |
| Require lightning-fast response times | Can tolerate response times orders of magnitude slower than those required by OLTP |
| Modify small amounts of data frequently and usually involve a balance of reads and writes | Do not modify data at all; workloads are usually read-intensive |
| Use indexed data to improve response times | Often store data in columnar format to allow efficient access to large numbers of records |
| Require frequent or concurrent database backups | Require far less frequent database backups |
| Require relatively little storage space | Typically have significant storage space requirements, because they store large amounts of historical data |
| Usually run simple queries involving just one or a few records | Run complex queries involving large numbers of records |

So, OLTP is an online data modification system, whereas OLAP is an online historical multidimensional data store system used to retrieve large amounts of data for analytical purposes. OLAP usually provides analytics on data that was captured by one or more OLTP systems.
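The query-shape contrast between the two can be shown concretely. The sketch below is illustrative, again using sqlite3 and an invented `sales` table: the OLTP-style query is a point lookup on one row via the primary key, while the OLAP-style query scans and aggregates every row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("east", 10.0), ("west", 20.0), ("east", 30.0)],
)
conn.commit()

# OLTP-style: a fast point lookup that touches a single row via the index.
row = conn.execute("SELECT amount FROM sales WHERE id = ?", (2,)).fetchone()

# OLAP-style: an aggregate that reads every record to summarize history.
totals = dict(conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"))
print(row[0], totals)  # 20.0 {'east': 40.0, 'west': 20.0}
```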

Requirements for an OLTP system

The most common architecture of an OLTP system that uses transactional data is a three-tier architecture that typically consists of a presentation tier, a business logic tier, and a data store tier. The presentation tier is the front end, where the transaction originates via a human interaction or is system-generated. The logic tier consists of rules that verify the transaction and ensure all the data required to complete the transaction is available. The data store tier stores the transaction and all the data related to it.
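The three tiers can be sketched in a few lines. This is a deliberately simplified illustration—the tier boundaries, function name `record_transfer`, and validation rules are all hypothetical; in a real system the tiers would be separate services, not functions in one process.

```python
import sqlite3

# Data store tier: persists the transaction and its related data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transfers (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")

# Business logic tier: verifies the transaction and ensures the
# required data is present before anything is stored.
def record_transfer(account: str, amount: float) -> bool:
    if not account or amount <= 0:  # reject incomplete or invalid input
        return False
    db.execute("INSERT INTO transfers (account, amount) VALUES (?, ?)", (account, amount))
    db.commit()
    return True

# Presentation tier: where the transaction originates (here, plain calls
# standing in for a web form or API request).
ok = record_transfer("acct-42", 99.95)
rejected = record_transfer("", -5.0)
count = db.execute("SELECT COUNT(*) FROM transfers").fetchone()[0]
print(ok, rejected, count)  # True False 1
```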

The main characteristics of an online transaction processing system are the following:

  • ACID compliance: OLTP systems must ensure that the entire transaction is recorded correctly. A transaction is usually an execution of a program that may require the execution of multiple steps or operations. It may be complete when all parties involved acknowledge the transaction, or when the product/service is delivered, or when a certain number of updates are made to the specific tables in the database. A transaction is recorded correctly only if all the steps involved are executed and recorded. If there is any error in any one of the steps, the entire transaction must be aborted and all the steps must be deleted from the system. Thus OLTP systems must comply with atomic, consistent, isolated, and durable (ACID) properties to ensure the accuracy of the data in the system.
    • Atomic: Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group. That is, if any step in the transaction fails, all other steps must also fail or be reverted. The successful completion of a transaction is called commit. The failure of a transaction is called abort.
    • Consistent: The transaction preserves the internal consistency of the database. If you execute the transaction all by itself on a database that’s initially consistent, then when the transaction finishes executing the database is again consistent.
    • Isolated: The transaction executes as if it were running alone, with no other transactions. That is, the effect of running a set of transactions is the same as running them one at a time. This behavior is called serializability and is usually implemented by locking the specific rows in the table.
    • Durable: The transaction’s results will not be lost in a failure.
  • Concurrency: OLTP systems can have enormously large user populations, with many users trying to access the same data at the same time. The system must ensure that all these users trying to read or write into the system can do so concurrently. Concurrency controls guarantee that two users accessing the same data at the same time cannot both change it: one user must wait until the other has finished processing before modifying that piece of data.
  • Scale: OLTP systems must be able to scale up and down instantly to manage the transaction volume in real time and execute transactions concurrently, irrespective of the number of users trying to access the system.
  • Availability: An OLTP system must be always available and always ready to accept transactions. Loss of a transaction can lead to loss of revenue or may have legal implications. Because transactions can be executed from anywhere in the world and at any time, the system must be available 24/7.
  • High throughput and short response time: OLTP systems require millisecond or even sub-millisecond response times to keep enterprise users productive and to meet the growing expectations of customers.
  • Reliability: OLTP systems typically read and manipulate highly selective, small amounts of data. It is paramount that at any given point of time the data in the database is reliable and trustworthy for the users and applications accessing that data.
  • Security: Because these systems store highly sensitive customer transaction data, data security is critical. Any breach can be very costly for the company.
  • Recoverability: OLTP systems must have the ability to recover in case of any hardware or software failure.
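The atomicity property described above—every step commits together or the whole transaction aborts—can be demonstrated with a classic funds-transfer sketch. This is illustrative only: the `transfer` function and account rows are invented, and sqlite3 stands in for a production OLTP database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("a", 100.0), ("b", 0.0)])
conn.commit()

def transfer(src: str, dst: str, amount: float) -> None:
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
        cur = conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
        if cur.rowcount == 0:
            raise ValueError("unknown destination account")
        conn.commit()    # both steps succeed together: commit
    except Exception:
        conn.rollback()  # any failure reverts every step: abort

transfer("a", "missing", 40.0)  # fails mid-transaction; the debit is rolled back
transfer("a", "b", 40.0)        # succeeds; both updates are committed as one
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'a': 60.0, 'b': 40.0}
```

The failed transfer leaves both balances untouched even though its first UPDATE had already run—exactly the all-or-nothing behavior that atomicity requires.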

Databases for OLTP workloads

Relational databases were built specifically for transaction applications. They embody all the essential elements required for storing and processing large volumes of transactions, while also continuously being updated with new features and functionality for extracting more value from this rich transaction data. Relational databases are designed from the ground up to provide the highest possible availability and fastest performance. They provide concurrency and ACID compliance so the data is accurate, always available, and easily accessible. They store data in tables after extracting relationships between the data so the data can be used by any application, ensuring a single source of truth.

The evolution of transaction processing databases

As transactions became more complex, originating from any source or device, from anywhere in the world, traditional relational databases were not advanced enough to meet the needs of modern-day transactional workloads. They had to evolve to handle modern transactions, heterogeneous data, and global scale, and most importantly to run mixed workloads. Relational databases transformed into multimodel databases that store and process not only relational data but also other types of data—including XML, HTML, JSON, Apache Avro and Parquet, and documents—in their native form, without much transformation. Relational databases also needed to add functionality such as clustering and sharding so they could be distributed globally and scale to store and process increasingly large volumes of data, taking advantage of cheaper storage available in the cloud. With other capabilities such as in-memory processing, advanced analytics, visualization, and transaction event queues included, these databases can now run multiple workloads—such as running analytics on transaction data, processing streaming Internet of Things (IoT) data, or running spatial and graph analytics.
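Storing document data alongside relational columns, as multimodel databases do, can be sketched at a small scale. This is an assumption-laden illustration: it simply serializes a JSON payload into a text column with Python's json module and sqlite3, whereas real multimodel databases index and query the document natively.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Relational columns plus a document column holding the event payload as JSON.
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, device TEXT, payload TEXT)")

reading = {"temp_c": 21.5, "battery": 0.87}  # hypothetical IoT sensor reading
conn.execute(
    "INSERT INTO events (device, payload) VALUES (?, ?)",
    ("sensor-7", json.dumps(reading)),
)
conn.commit()

# The document round-trips intact and can sit next to ordinary relational data.
payload = json.loads(
    conn.execute("SELECT payload FROM events WHERE device = ?", ("sensor-7",)).fetchone()[0]
)
print(payload["temp_c"])  # 21.5
```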

Modern relational databases built in the cloud automate a lot of the management and operational aspects of the database, making them easier for users to provision and use. They provide automated provisioning, security, recovery, backup, and scaling so DBAs and IT teams have to spend much less time maintaining them. They also embed intelligence to automatically tune and index the data so database query performance is consistent irrespective of the amount of data, the number of concurrent users, or the complexity of the queries. These cloud databases also include self-service capabilities and REST APIs so developers and analysts can easily access and use the data. This simplifies application development, giving flexibility and making it easier for developers to build new functionality and customizations into their applications. It also simplifies analytics, making it easier for analysts and data scientists to use the data for extracting insights.

How to select the right database for your OLTP workload

As IT struggles to keep pace with the speed of business, it is important that when you choose an operational database you consider both your immediate data needs and your long-term data requirements. For storing transactions, maintaining systems of record, or content management, you will need a database with high concurrency, high throughput, low latency, and mission-critical characteristics such as high availability, data protection, and disaster recovery. Most likely, your workload will fluctuate throughout the day, week, or year, so ensuring that the database can autoscale will help you save a lot of expense. You may also need to decide whether to use a purpose-built database or a general-purpose database. If your requirements are for a specific type of data, a purpose-built database may work for you, but make sure you aren't compromising on any of the other characteristics you need; it would be costly and resource-intensive to build those characteristics later in the application layer. Also, if your data needs grow and you want to expand the functionality of your application, adding more single-purpose or fit-for-purpose databases will only create data silos and amplify data management problems. You must also consider other functionality that may be necessary for your specific workload—for example, ingestion requirements, push-down compute requirements, and size at limit.

Select a future-proof cloud database service with self-service capabilities that will automate all the data management so that your data consumers—developers, analysts, data engineers, data scientists and DBAs—can do more with the data and accelerate application development.

Learn more about Oracle’s Autonomous Transaction Processing Database, the cloud OLTP database service. Try it for free.