When SAP developed its in-memory database HANA, it discovered that its own applications were the new database's worst enemies. SAP realized that these applications had to be optimized in order to benefit from HANA. But while SAP had only HANA in mind, feature-rich databases such as Oracle can support the same optimizations and benefit from them just as much.
SAP used to think of a database as a dumb data store: the intelligence sat in the SAP Application Server, so whenever a user wanted to do something useful with the data, the data first had to be transferred to the application server.
The disadvantages of this approach are obvious. If the sum of 1 million values needs to be calculated, and those values represent money in different currencies, then 1 million individual values are transferred from the database server to the application server – only to be thrown away once the sum has been computed. The network traffic caused by this approach is largely responsible for the poor performance.
In response to this insight, SAP developed the "push down" strategy: push code that requires data-intensive computations down from the application layer to the database layer. SAP developed a completely new programming model that allows ABAP code to call (implicitly or explicitly) procedures stored in the database, and defined a library of standard procedures, called SAP NetWeaver Core Data Services (CDS).
Twenty years earlier, Oracle had had the same idea and made the same decision: since version 7, Oracle Database has allowed developers to create procedures and functions that are stored and run within the database. It was therefore possible to make CDS available for Oracle Database as well, and today SAP application developers can make use of it.
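The difference between the two approaches can be sketched in a few lines. The example below is purely illustrative: it uses Python with an in-memory SQLite database as a stand-in for the database server, and the table name, row count, and exchange rates are invented for the demonstration. It contrasts fetching every row for client-side aggregation with pushing the aggregation down into the database, where only a single number crosses the wire.

```python
import sqlite3

# In-memory SQLite database stands in for the database server
# (illustrative only; the real scenario involves HANA or Oracle and ABAP).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(i * 0.5, "EUR" if i % 2 else "USD") for i in range(10000)],
)

# Hypothetical exchange rates, for illustration only
RATES = {"EUR": 1.1, "USD": 1.0}

# Anti-pattern: transfer every row to the application layer and
# aggregate there ("intelligent application, dumb data store").
rows = conn.execute("SELECT amount, currency FROM sales").fetchall()
total_app = sum(amount * RATES[cur] for amount, cur in rows)

# Push-down: let the database do the aggregation; only one value
# is transferred back to the application.
total_db = conn.execute("""
    SELECT SUM(amount * CASE currency WHEN 'EUR' THEN 1.1 ELSE 1.0 END)
    FROM sales
""").fetchone()[0]

print(round(total_app, 2) == round(total_db, 2))  # both totals agree
```

Both computations produce the same total; the difference is that the first one moves 10,000 rows over the network while the second moves exactly one.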
SAP’s data models (the set of tables an application uses and the relationships between them) were defined 15 or 20 years ago and optimized for disk-oriented databases. But, as it turned out, what was an optimization in the age of disk-based computing is an obstacle in the age of in-memory computing.
The most famous example is probably the internal structure of an SAP BW cube. What looks like one single “cube” from a business or user perspective is actually a set of multiple tables, whose relationships form a multi-level hierarchy (a “star” or “snowflake” schema). This complex structure, which requires many joins whenever a query or report is executed, slows down in-memory databases considerably. Therefore SAP designed a new, simpler data model for SAP BW on HANA and called it HANA-Optimized InfoCubes. But this new data model is not only optimized for HANA; it is optimized for in-memory computing in general. SAP on Oracle users who have activated Oracle Database In-Memory can therefore implement it as well, the only difference being the name (Flat InfoCubes, or simply Flat Cubes).
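A toy version of this contrast can be sketched as follows. This is a deliberately simplified star schema, not the actual BW table layout, and all table and column names are invented; it merely shows why a "flat" model lets a report run as a plain scan with GROUP BY instead of a chain of joins.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Simplified star schema (illustrative, not the real BW layout):
# a fact table referencing two dimension tables via surrogate keys.
cur.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE dim_region  (region_id  INTEGER PRIMARY KEY, country  TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, region_id INTEGER, revenue REAL);

    INSERT INTO dim_product VALUES (1, 'Hardware'), (2, 'Software');
    INSERT INTO dim_region  VALUES (1, 'DE'), (2, 'US');
    INSERT INTO fact_sales  VALUES (1, 1, 100.0), (2, 1, 200.0), (2, 2, 300.0);
""")

# Star-schema report: every query must join across the hierarchy.
star = cur.execute("""
    SELECT p.category, r.country, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_region  r ON r.region_id  = f.region_id
    GROUP BY p.category, r.country
""").fetchall()

# "Flat" model: dimension attributes live directly in one wide table,
# so the same report is a single scan plus GROUP BY - a shape that
# column-oriented in-memory engines handle very efficiently.
cur.execute("""
    CREATE TABLE flat_sales AS
    SELECT p.category, r.country, f.revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_region  r ON r.region_id  = f.region_id
""")
flat = cur.execute("""
    SELECT category, country, SUM(revenue)
    FROM flat_sales
    GROUP BY category, country
""").fetchall()

print(sorted(star) == sorted(flat))  # same report, no joins at query time
```

The flat table trades some redundancy (dimension attributes are repeated per fact row) for join-free queries, which is exactly the trade-off that makes sense once the data sits compressed in a columnar in-memory store.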
A less famous but nevertheless important optimization is Table Declustering. A cluster table stores a complete (logical) record in one single (physical) table column. Such a complex value can be interpreted by the SAP Application Server, but not by the database server – which means that code pushdown is not possible if a cluster table is involved. Therefore SAP now supports Table Declustering, for HANA as well as for the Oracle Database.
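Why an opaque cluster column blocks pushdown can be seen in a small sketch. Again this is only an analogy: SQLite stands in for the database server, and a JSON string stands in for the proprietary serialized record format of a cluster table. The point is that the database cannot aggregate over fields it cannot parse, whereas after declustering each field is an ordinary column.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Cluster-style table (analogy): the whole logical record is serialized
# into one opaque column that only the application knows how to parse.
conn.execute("CREATE TABLE clustered (key INTEGER, vardata TEXT)")
records = [{"amount": 10.0, "currency": "EUR"},
           {"amount": 20.0, "currency": "USD"}]
conn.executemany(
    "INSERT INTO clustered VALUES (?, ?)",
    [(i, json.dumps(r)) for i, r in enumerate(records)],
)

# No pushdown possible: the application must fetch every opaque value
# and deserialize it itself before it can sum the amounts.
total_app = sum(json.loads(v)["amount"]
                for (v,) in conn.execute("SELECT vardata FROM clustered"))

# Declustered table: every field is a real column, so the aggregation
# can run inside the database.
conn.execute("CREATE TABLE declustered (key INTEGER, amount REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO declustered VALUES (?, ?, ?)",
    [(i, r["amount"], r["currency"]) for i, r in enumerate(records)],
)
total_db = conn.execute("SELECT SUM(amount) FROM declustered").fetchone()[0]

print(total_app == total_db)  # 30.0 either way
```

The result is the same, but only the declustered layout lets the database server do the work – which is the precondition for the push-down strategy described above.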
The benefits of the CDS framework just described are by no means restricted to SAP applications (i.e. standard applications created by SAP developers). For customers, home-grown applications are an essential part of their SAP landscape. Many of these applications could significantly benefit from using CDS features.
CDS views can be exposed via OData. Based on the OData exposure of CDS, it is then rather straightforward to create SAP Fiori applications using the SAP Web IDE development framework. For details, see the whitepaper ABAP Core Data Services on anyDB.