Cloud Readiness / Oracle Asset Monitoring Cloud
New Feature Summary


  1. RELEASE 21.1.3
     1. Revision History
     2. Overview
        1. Asset Monitoring
           1. Flexible Computation Schedules and Data Windows for Analytics
  2. RELEASE 21.1.1
     1. Revision History
     2. Overview
        1. Asset Monitoring
           1. Sample Metric Values Before Metric Deployment
           2. Training Status Details for Anomalies, Trends, and Predictions

Release 21.1.3

Revision History

This document will continue to evolve as existing sections change and new information is added. All updates appear in the following table:

Date          Product   Feature   Notes
01 MAR 2021                       Created initial document.

Overview

This guide outlines the information you need to know about new or improved functionality in this update.

DISCLAIMER

The information contained in this document may include statements about Oracle’s product development plans. Many factors can materially affect Oracle’s product development plans and the nature and timing of future product releases. Accordingly, this information is provided for informational purposes only, is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described remains at the sole discretion of Oracle.

This information may not be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates. Oracle specifically disclaims any liability with respect to this information. Refer to the Legal Notices and Terms of Use for further information.

Asset Monitoring

Flexible Computation Schedules and Data Windows for Analytics

New flexible data window options for your predictions, anomalies, and trends let you select the data set for a one-time training or periodic training of your analytics model. For example, you can choose to train your prediction model with a rolling data window of the last 7 days, and choose to perform the prediction training daily.

New computation schedule options for your predictions let you select a reporting frequency that can be different from the prediction forecast window. For example, you can choose to make a prediction for the next 24 hours and refresh the prediction value every hour.
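To make the two concepts concrete, here is a minimal, hypothetical sketch (not the product API) of how a rolling 7-day training data window and an hourly refresh schedule for a 24-hour prediction forecast relate to each other; all names are illustrative.

```python
from datetime import datetime, timedelta

def rolling_window(now, days=7):
    """Return (start, end) bounds of a rolling training data window.

    The window always ends at "now", so each periodic training run
    sees only the most recent `days` of data.
    """
    return now - timedelta(days=days), now

def refresh_times(now, forecast_hours=24, refresh_hours=1):
    """Yield the refresh timestamps within one forecast horizon.

    The forecast covers `forecast_hours`, but the predicted value is
    recomputed every `refresh_hours`, so the reporting frequency is
    independent of the forecast window.
    """
    for h in range(0, forecast_hours, refresh_hours):
        yield now + timedelta(hours=h)

now = datetime(2021, 3, 1, 0, 0)
start, end = rolling_window(now)
refreshes = list(refresh_times(now))
print(start, end)      # bounds of the 7-day rolling window
print(len(refreshes))  # 24 hourly refreshes across a 24-hour forecast
```

The point of the sketch is the decoupling: the data window governs what the model is trained on, while the computation schedule governs how often the forecast value is refreshed.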

Release 21.1.1

Revision History

This document will continue to evolve as existing sections change and new information is added. All updates appear in the following table:

Date          Product   Feature   Notes
19 JAN 2021                       Created initial document.

Overview

This guide outlines the information you need to know about new or improved functionality in this update.

DISCLAIMER

The information contained in this document may include statements about Oracle’s product development plans. Many factors can materially affect Oracle’s product development plans and the nature and timing of future product releases. Accordingly, this information is provided for informational purposes only, is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described remains at the sole discretion of Oracle.

This information may not be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates. Oracle specifically disclaims any liability with respect to this information. Refer to the Legal Notices and Terms of Use for further information.

Asset Monitoring

Sample Metric Values Before Metric Deployment

When creating a metric, after validating the formula, you can run a test inside the metric editor to view sample metric results computed from live asset data. Sampling the metric values lets you verify that your computations work as expected, and helps you determine whether the metric is ready to go live and to be used in analytics artifacts, such as anomalies and predictions.
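The idea of sampling a metric before deployment can be illustrated with a small, hypothetical sketch (not the product API): a candidate metric formula is evaluated over a handful of recent readings so the results can be inspected before the metric goes live. The attribute names and weights below are invented for the example.

```python
# Illustrative sample of recent asset readings (invented values).
sample_readings = [
    {"temperature": 71.2, "vibration": 0.8},
    {"temperature": 74.5, "vibration": 1.1},
    {"temperature": 69.9, "vibration": 0.7},
]

def heat_stress(reading):
    """Hypothetical metric formula: a weighted combination of two
    sensor attributes. In the product, the formula would be authored
    and validated in the metric editor."""
    return 0.7 * reading["temperature"] + 0.3 * reading["vibration"] * 100

# Evaluate the candidate formula over the sample data and inspect
# the results before deciding whether the metric can go live.
samples = [round(heat_stress(r), 2) for r in sample_readings]
print(samples)
```

If the sampled values fall in the expected range, the metric is a reasonable candidate for deployment and for downstream use in anomalies and predictions.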

Training Status Details for Anomalies, Trends, and Predictions

Additional information related to the training statuses of your anomaly, trend, and prediction models is available in the application. This information helps you better validate and tune your analytics model.

The application reports completed model trainings along with their timestamps. For skipped trainings, the application explains the reason; for example, training may be skipped because a valid trained model already exists. For failed trainings, the application includes pertinent details about the failure; for example, the statistical properties of the chosen training data set might not be suitable for predictions.
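As a rough illustration of the three outcomes described above, the following hypothetical sketch (not the product API; all names and messages are invented) models how a completed, a skipped, and a failed training might be reported with their reasons attached.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingStatus:
    model: str
    outcome: str                   # "completed", "skipped", or "failed"
    timestamp: str
    reason: Optional[str] = None   # populated for skipped/failed trainings

def summarize(status):
    """Render a one-line, human-readable summary of a training outcome."""
    if status.outcome == "completed":
        return f"{status.model}: trained at {status.timestamp}"
    return f"{status.model}: {status.outcome} ({status.reason})"

statuses = [
    TrainingStatus("pump-anomaly", "completed", "2021-01-19T02:00Z"),
    TrainingStatus("pump-prediction", "skipped", "2021-01-19T02:00Z",
                   "a valid trained model already exists"),
    TrainingStatus("motor-trend", "failed", "2021-01-19T02:00Z",
                   "training data set unsuitable for predictions"),
]
for s in statuses:
    print(summarize(s))
```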