Oracle Data Platform for Manufacturing

Manufacturing plant data consolidation

Optimize efficiency and lower risk with consolidated, real-time data

Today’s manufacturers must understand how efficiently all their lines are running across multiple plants—they need to know immediately when a problem occurs, not five or ten minutes after the fact. However, this is also one of their biggest challenges because their ability to do this relies on real-time access to data from multiple remote locations that may have limited or sporadic internet connectivity. To solve this problem, we need to push machine learning (ML) and data acquisition to the network edge.

Simplify decision-making at the edge

We can configure Oracle Data Platform to solve this challenge by including Oracle Roving Edge Devices (REDs). Each RED is designed to capture, store, process, manage, and gain insight from data, giving manufacturers the ability to automate decision-making and the management of manufacturing equipment at the edge. Oracle Data Platform for manufacturing also includes anomaly detection capabilities, which can be used to address manufacturing line disruptions and provide maintenance-related insights to improve mitigation and remediation.

The following architecture demonstrates how Oracle Data Platform supports plant data consolidation by deploying advanced analytics and machine learning at the edge to identify anomalies, perform smart data collection, and provide real-time operational information.

Plant data consolidation diagram; the description follows.

This image shows how Oracle Data Platform for manufacturing can be used to consolidate plant data. The platform includes the following five pillars:

  1. Data Sources, Discovery
  2. Ingest, Transform
  3. Persist, Curate, Create
  4. Analyze, Learn, Predict
  5. Measure, Act

The Data Sources, Discovery pillar includes two categories of data.

  1. Business records data comprises warehouse management and inventory optimization data, ERP (Oracle E-Business Suite, Fusion SaaS, NetSuite) data, and MES planning and scheduling data.
  2. Technical input data includes sensor, camera, and device (IoT) data and data from PLM, SCADA, and manufacturing applications.

The Ingest, Transform pillar comprises four capabilities.

  1. Batch ingestion uses Oracle Data Integrator and OCI Data Integration.
  2. Streaming ingest uses Kafka Connect.
  3. Custom integration uses Oracle WebLogic Server on VMs.
  4. RED sync transfer uses a local daemon.

Batch ingestion connects unidirectionally to the serving data store.

Streaming ingest and custom integration connect unidirectionally to the outbound transfer area.

Additionally, RED sync transfer unidirectionally connects to the inbound transfer area.

The Persist, Curate, Create pillar comprises four capabilities.

  1. The serving data store uses MySQL and Oracle Database Server.
  2. Batch processing/Spark processing uses OCI GoldenGate Stream Analytics.
  3. The outbound transfer area uses OCI Object Storage.
  4. The inbound transfer area uses OCI Object Storage.

These capabilities are connected within the pillar. Batch/Spark processing is unidirectionally connected to the serving data store.

The outbound transfer area is unidirectionally connected to batch/Spark processing.

Three capabilities connect into the Analyze, Learn, Predict pillar:

The serving data store connects unidirectionally to the analytics and visualization capability and bidirectionally to the anomaly detection capability. The outbound transfer area connects unidirectionally to the anomaly detection and RED sync transfer capabilities.

The inbound transfer area connects unidirectionally to the anomaly detection capability.

The Analyze, Learn, Predict pillar comprises three capabilities.

  1. Analytics and visualization uses Oracle Analytics Server.
  2. Anomaly detection uses a model trained centrally and deployed locally as PMML.
  3. RED sync transfer uses a local daemon.

The anomaly detection capability is unidirectionally connected to the analytics and visualization capability within the pillar.

Three capabilities are connected to the Measure, Act pillar. The analytics and visualization capability is unidirectionally connected to local dashboards and reports and also local predictions. The anomaly detection capability is unidirectionally connected to local predictions, and the RED sync transfer capability is unidirectionally connected to an additional use case.

The Measure, Act pillar captures how the consolidated plant data can be used. These potential uses are divided into four groups.

  1. The first group includes local dashboards and reports.
  2. The second group includes local predictions.
  3. The third group includes applications.
  4. The fourth group contains an additional use case, which is operational efficiency and performance.

The three central pillars—Ingest, Transform; Persist, Curate, Create; and Analyze, Learn, Predict—are supported by Oracle Roving Edge Device(s).



There are four main ways to ingest data into the architecture to enable manufacturers to easily understand operational efficiency and performance.

  • A custom integration from Oracle Integration Repository lets us integrate data—both structured and unstructured—from various sources, allowing for interactions with devices, custom APIs, and so on. The data can be ingested from any type of application (for example, standalone Java or Python code, Oracle WebLogic Server–based applications, or Kubernetes-based applications). Data will be stored in object storage for further refinement, for outbound transfer, or to feed AI models (see the first sketch after this list).
  • The RED data sync is an efficient and simple way to transfer ML models from a central location (for example, your object storage repository of trained models in Oracle Cloud Infrastructure (OCI)) to the edge. In this use case, the edge means the RED is colocated with other machinery within the plant itself. New versions of models are stored in “standalone” Predictive Model Markup Language (PMML) format. When the local daemon discovers a new model, it performs an update and automatically pushes the model to the RED (a sketch of this polling daemon follows this list). The RED data sync is also a great way to transfer all the data collected by different REDs throughout the day (for example, relevant anomalies, signals, and so on) to your central location, most likely object storage on OCI. This data will then be used for operational reporting and ML model training. The volume of data involved in these RED data sync processes will determine your requirements for edge-to-data center telco or satellite bandwidth.
  • Batch ingestion uses Oracle Data Integrator, a comprehensive data integration solution that covers all data integration requirements, from high-volume, high-performance batch loads to event-driven, trickle-feed integration processes and SOA-enabled data services. While real-time needs are evolving, the most common extract from ERP, planning, warehouse management, and transportation management systems is a batch ingestion using an extract, transform, and load (ETL) or extract, load, and transform (ELT) process. These extracts can be frequent, as often as every 10 or 15 minutes, but they are still bulk in nature because transactions are extracted and processed in groups rather than individually. OCI offers different services to handle batch ingestion, including the native OCI Data Integration service and Oracle Data Integrator running on an OCI Compute instance. Depending on the volumes and data types, data can be loaded into object storage or directly into a structured relational database for persistent storage.
  • Analyzing data in real time from multiple sources can help provide manufacturing companies with valuable insights into their operational efficiency and overall performance. Oracle Data Platform uses streaming ingestion to ingest data streams from several ISA-95 Level 2 systems, such as supervisory control and data acquisition (SCADA) systems, programmable logic controllers, and batch automation systems. Streaming data (events) will be ingested and some basic transformations/aggregations will occur before the data is stored in object storage (a minimal producer sketch follows this list). Streaming analytics can be used to identify correlating events, and identified patterns can be fed back (manually) for a data science examination of the raw data. While traditional analytics tools extract information from data at rest, streaming analytics assesses the value of data in motion, that is, in real time.
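
To make the custom integration path concrete, here is a minimal Python sketch that uploads a locally collected batch of device readings to the OCI Object Storage bucket acting as the outbound transfer area. It assumes the OCI Python SDK (oci) is installed and a valid ~/.oci/config profile exists; the bucket name, object path, and file name are illustrative, not part of the documented architecture.

```python
# Minimal sketch: push locally collected device data to the outbound transfer
# area in OCI Object Storage. Bucket, object path, and file name are illustrative.
import oci

config = oci.config.from_file()  # reads the default ~/.oci/config profile
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

bucket_name = "plant-outbound-transfer"    # hypothetical bucket
object_name = "line-07/sensor-batch.json"  # hypothetical object path

with open("sensor-batch.json", "rb") as payload:
    object_storage.put_object(namespace, bucket_name, object_name, payload)
```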
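
The RED sync transfer described above relies on a local daemon that watches a central model repository. A rough sketch of that polling loop, again using the OCI Python SDK, is shown below; the bucket name, object prefix, and local model directory are assumptions for illustration only.

```python
# Rough sketch of a RED-side sync daemon: poll a central OCI Object Storage
# bucket for newly published PMML anomaly models and keep a local copy current.
# Bucket name, prefix, and local directory are illustrative assumptions.
import os
import time

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

MODEL_BUCKET = "central-trained-models"  # hypothetical central model bucket
LOCAL_MODEL_DIR = "/opt/red/models"      # hypothetical local path on the RED
POLL_SECONDS = 300

def sync_models():
    listing = client.list_objects(namespace, MODEL_BUCKET, prefix="anomaly/")
    for obj in listing.data.objects:
        local_path = os.path.join(LOCAL_MODEL_DIR, os.path.basename(obj.name))
        if os.path.exists(local_path):
            continue  # this model version is already on the RED
        response = client.get_object(namespace, MODEL_BUCKET, obj.name)
        with open(local_path, "wb") as model_file:
            model_file.write(response.data.content)

while True:
    sync_models()
    time.sleep(POLL_SECONDS)
```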
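
For the streaming path, the minimal producer sketch below publishes a SCADA-style event to a Kafka topic that Kafka Connect could then ingest. It assumes the kafka-python client and a reachable broker; the broker address, topic name, and payload fields are illustrative.

```python
# Minimal sketch: publish a SCADA-style event to a Kafka topic for downstream
# ingestion via Kafka Connect. Broker, topic, and payload fields are illustrative.
import json
import time

from kafka import KafkaProducer  # assumes the kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

event = {
    "plant": "plant-03",
    "line": "line-07",
    "sensor": "spindle-temperature",
    "value_c": 71.4,
    "ts": time.time(),
}

producer.send("plant-telemetry", value=event)  # hypothetical topic name
producer.flush()
```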

Data persistence and processing are built on three components.

  • In the serving data store, data will be managed by Oracle Database Server or MySQL for data processing. The serving data store provides a persistent relational tier often used to serve data directly to end users via SQL-based tools. It also functions as the serving layer for specialized analytics.
  • All data retrieved from data sources in its raw form (as a native file or extract) is captured and loaded into object storage to be used in current or future ML model training. Cloud object storage is the most common data persistence layer for our data platform, and it serves as both the inbound transfer area and the outbound transfer area. It can be used for both structured and unstructured data.
  • With object storage as the primary data persistence tier, OCI GoldenGate Stream Analytics is the primary processing engine. Batch processing involves several activities, including basic noise treatment, missing data management, and filtering based on defined outbound datasets. Results are written back to various layers of object storage or to a persistent relational repository based on the processing needed and the data types used (see the sketch after this list).
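
As an illustration of the serving data store's role, the sketch below writes a curated batch of results into a MySQL table so SQL-based tools can query it. It uses the mysql-connector-python package, and the connection details, table, and columns are assumptions, not part of the documented architecture.

```python
# Minimal sketch: persist curated batch results into the MySQL serving data
# store for SQL-based reporting. Connection details, table name, and columns
# are illustrative assumptions.
import mysql.connector  # assumes the mysql-connector-python package

rows = [
    ("line-07", "2024-06-01 08:15:00", "vibration", 0.92),
    ("line-07", "2024-06-01 08:20:00", "temperature", 0.41),
]

conn = mysql.connector.connect(
    host="serving-db.example.internal",  # hypothetical host
    user="plant_etl",
    password="********",
    database="plant_serving",
)
cursor = conn.cursor()
cursor.executemany(
    "INSERT INTO anomaly_summary (line_id, observed_at, signal_name, score) "
    "VALUES (%s, %s, %s, %s)",
    rows,
)
conn.commit()
cursor.close()
conn.close()
```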

The ability to analyze, learn, and predict is built on two technologies.

  • Analytics and visualization services deliver descriptive analytics (describes current trends with histograms and charts), predictive analytics (predicts future events, identifies trends, and determines the probabilities of uncertain outcomes), and prescriptive analytics (proposes suitable actions, leading to optimal decision-making). Oracle Analytics Server provides the functionality to deliver descriptive analytics related to operational reporting and prescriptive analytics. Additionally, ML models can be embedded directly into the Oracle Analytics Server data flow. Oracle Analytics Server is designed to run on-premises and provides dashboards, reporting, alerting, self-service data preparation, and end user–driven machine learning algorithms. Oracle Data Platform for manufacturing is completely open and flexible, so, if desired, you could use third-party tools for this instead.
  • Alongside the use of advanced analytics, ML models are developed, trained, and deployed to support anomaly detection. OCI Anomaly Detection is an AI service that makes it easier for developers to build business-specific anomaly detection models that flag critical incidents, speeding up detection and resolution. These models will be trained at the central location and deployed in PMML format to be executed locally as Java or Python code (a minimal Python scoring sketch follows this list).
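
As a sketch of the local scoring step, a centrally trained model deployed as PMML could be evaluated on the RED with the pypmml package (one possible evaluator; the architecture only states that models are executed locally as Java or Python code). The file path and feature names are illustrative.

```python
# Minimal sketch: score an incoming sensor reading against a centrally trained
# anomaly model deployed to the RED as PMML. File path and feature names are
# illustrative; pypmml is one possible local PMML evaluator.
from pypmml import Model

model = Model.load("/opt/red/models/anomaly-detector.pmml")  # hypothetical path

reading = {"vibration": 0.87, "temperature_c": 71.4, "spindle_rpm": 1180}
result = model.predict(reading)
print(result)  # predicted label/score indicating whether the reading is anomalous
```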

Automate decision-making to increase profitability

Oracle Data Platform lets manufacturers get the greatest value from all their available data while simplifying and streamlining data access and storage. The ability to push data collection and ML scoring to the edge through Oracle Roving Edge Devices helps manufacturers make better business decisions that are informed by accurate data that’s always available when they need it, allowing them to increase efficiency and production while lowering costs.
