Frequently Asked Questions


    General Questions

  • What is Oracle Cloud Infrastructure Streaming (OCI Streaming)?

    The Oracle Cloud Infrastructure Streaming service provides a fully managed, scalable, and durable storage option for continuous, high-volume streams of data that you can consume and process in near real-time.

    For more information, see the documentation.

  • Where is the Streaming service available?

    For a list of regions where the Streaming service is currently available, see the documentation.

  • What are the API endpoints?

    The API endpoint is constructed as follows: https://streaming.$_REGION.oci.oraclecloud.com

    Example values for the $_REGION variable (a sketch of the assembled endpoint follows the list):

    • us-phoenix-1
    • us-ashburn-1
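
    For illustration, a minimal sketch of assembling the endpoint in code; the region value is a placeholder for any region where the service is available:

      // Substitute the region identifier into the endpoint template.
      String region = "us-phoenix-1";
      String endpoint = "https://streaming." + region + ".oci.oraclecloud.com";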
  • What does OCI Streaming manage on my behalf?

    OCI Streaming is fully managed, from the underlying infrastructure to provisioning, deployment, maintenance, security patching, replication, and consumer groups, which makes application development easier.

  • How does Oracle Cloud Infrastructure Streaming provide resiliency?

    When you create a stream in OCI Streaming, Oracle automatically creates and manages three streaming nodes distributed across three different availability domains (or fault domains for single-AD regions), ensuring that your streams stay highly available and your data highly durable.

  • What can I do with OCI Streaming?

    OCI Streaming allows you to emit data and retrieve it in near real time. The number of use cases is nearly unlimited, from messaging to complex data stream processing.

    Here are some of the many possible uses for Streaming:

    • Messaging: Use streaming to decouple components of large systems. Streaming provides a pull/buffer-based communication model with sufficient capacity to flatten load spikes and the ability to feed multiple consumers with the same data independently. Key-scoped ordering and guaranteed durability provide reliable primitives to implement a variety of messaging patterns, while high throughput potential allows for such a system to scale well.
    • Metric and log ingestion: Use streaming as an alternative for traditional file-scraping approaches to help make critical operational data more quickly available for indexing, analysis, and visualization.
    • Web/Mobile activity data ingestion: Use streaming for capturing activity from web sites or mobile apps (such as page views, searches, or other actions users may take). This information can be used for real-time monitoring and analytics as well as in data warehousing systems for offline processing and reporting.
    • Infrastructure and apps event processing: Use streaming as a unified entry point for cloud components to report their life cycle events for audit, accounting, and related activities.
  • How do I use OCI Streaming?

    Start using OCI Streaming by:

    • Creating a new stream through the OCI Streaming Console or through the CreateStream API
    • Emitting data from producers to the stream (see the detailed documentation)
    • Building consumers to read and process data from your stream
  • What are the limits of OCI Streaming?

    Overall, there is no limit on the total throughput you can access; you just need to design your stream proactively with the right number of partitions.

    The hard limits of the system are:

    • Message retention of up to a maximum of 7 days.
    • The maximum size of a single message is 1 MB.
    • Each partition can handle up to 5 read API calls per second, and up to 1 MB per second of writes with any number of requests.
    • Each partition supports a maximum total data write rate of 1 MB per second and a read rate of 2 MB per second.
    • Each tenancy has a default limit of 5 partitions; however, you can request more partitions - Contact Us.
  • How do I use Streaming with the Oracle Cloud Infrastructure CLI?

    An example is available on GitHub, included with the oci-cli binaries.

  • How do I access my partition?

    In our APIs, a partition is represented as a string.

    If you create a stream with five partitions, you can access them by using the strings "0", "1", "2", "3", or "4".

    Don't rely on partition identifiers being represented as numeric values.

    Offsets aren't dense. Expect message offsets to always increase, but not necessarily by 1. Don't rely on offsets being consecutive when making future offset calculations.

    For example, if you publish two messages going to the same partition, the first message could have offset 42 and the second message could have offset 45 (offsets 43 and 44 being nonexistent).

    Key Concepts

  • What is a stream?

    A Stream can be viewed as an append-only log file that contains your messages.

  • What is a partition?

    Streams are divided into a number of partitions for scalability. Partitions allow you to distribute a stream by splitting the messages across multiple nodes (or brokers) — each partition can be placed on a separate machine to allow for multiple consumers to read from a topic in parallel.

  • What is a message?

    A message is the Base64-encoded payload that you emit into a stream.

  • What is an offset?

    Each message within a partition has an identifier called its offset. Consumers can read messages starting from a specific offset and are allowed to read from any offset point they choose. Consumers can also commit the latest processed offset so they can resume their work without replaying or missing a message if they stop and then restart.

  • What is a key?

    A key is an identifier used to group related messages.

    Creating a Stream

  • How do I create a new stream?

    You can create a new stream by using our Console or our API. See the API documentation here.

    Your stream is created for a particular region and tenancy, and optionally for a dedicated compartment. A stream's data is replicated across the entire region, allowing it to tolerate AD loss or network splits without disrupting the service and offering built-in high availability within a region.
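
    As an illustration only, here is a minimal sketch of creating a stream with the OCI Java SDK; the stream name, compartment OCID, and region are placeholders, and builder method names may differ slightly between SDK versions:

      // CreateStreamExample.java - hedged sketch of stream creation (com.oracle.bmc.streaming).
      import com.oracle.bmc.Region;
      import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
      import com.oracle.bmc.streaming.StreamAdminClient;
      import com.oracle.bmc.streaming.model.CreateStreamDetails;
      import com.oracle.bmc.streaming.requests.CreateStreamRequest;

      public class CreateStreamExample {
          public static void main(String[] args) throws Exception {
              ConfigFileAuthenticationDetailsProvider provider =
                      new ConfigFileAuthenticationDetailsProvider("~/.oci/config", "DEFAULT");
              StreamAdminClient adminClient = StreamAdminClient.builder().build(provider);
              adminClient.setRegion(Region.US_PHOENIX_1);                       // placeholder region

              CreateStreamDetails details = CreateStreamDetails.builder()
                      .name("my-stream")                                        // placeholder stream name
                      .compartmentId("ocid1.compartment.oc1..example")          // placeholder compartment OCID
                      .partitions(3)                                            // cannot be changed after creation
                      .retentionInHours(24)                                     // 24 hours (default) up to 168 hours
                      .build();

              String streamId = adminClient.createStream(
                      CreateStreamRequest.builder().createStreamDetails(details).build())
                      .getStream().getId();
              System.out.println("Created stream: " + streamId);
          }
      }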

  • How long does the provisioning take?

    The time to provision depends on the number of partitions. Creating a new partition takes up to 10 seconds.

  • How do I decide the number of partitions I need?

    The number of partitions for your stream depends on the throughput expectations of your application (expected throughput = average record size x maximum number of records written per second).

  • What is the minimum throughput I can request for a stream?

    The throughput of an Oracle Cloud Infrastructure stream is defined by its partitions. A partition provides 1 MB/sec data input and 2 MB/sec data output.

  • How many requests can I send to a partition?

    You can send 1,000 requests per second to a partition.

  • How do I use the SDKs?

    Examples of the main APIs of the Streaming service are described in the documentation.

  • Where can I find the SDKs?
  • Do you plan to support more languages?

    The Streaming SDK supports the same languages as the other OCI SDK implementations; no additional languages will be supported specifically for the Streaming service.

  • Where can I find the list of all the APIs that I need for streaming?

    Publishing Data to a Stream

  • How do I emit data into a stream?

    Once a stream is created and active, you can publish messages. For publishing, you use the Write API (putMessages). The message is published to a partition in the stream. If there is more than one partition, the partition where the message is published is calculated using the message's key.
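
    As a hedged illustration, here is a minimal sketch of publishing keyed messages with the OCI Java SDK; the messages endpoint, stream OCID, keys, and values are placeholders, and builder names may vary slightly between SDK versions:

      // PublishExample.java - hedged sketch of putMessages with the OCI Java SDK.
      import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
      import com.oracle.bmc.streaming.StreamClient;
      import com.oracle.bmc.streaming.model.PutMessagesDetails;
      import com.oracle.bmc.streaming.model.PutMessagesDetailsEntry;
      import com.oracle.bmc.streaming.requests.PutMessagesRequest;
      import com.oracle.bmc.streaming.responses.PutMessagesResponse;
      import java.nio.charset.StandardCharsets;
      import java.util.Arrays;

      public class PublishExample {
          public static void main(String[] args) throws Exception {
              ConfigFileAuthenticationDetailsProvider provider =
                      new ConfigFileAuthenticationDetailsProvider("~/.oci/config", "DEFAULT");
              StreamClient streamClient = StreamClient.builder()
                      .endpoint("https://streaming.us-phoenix-1.oci.oraclecloud.com")  // placeholder endpoint
                      .build(provider);

              // Messages that share a key are routed to the same partition.
              PutMessagesDetails batch = PutMessagesDetails.builder()
                      .messages(Arrays.asList(
                              PutMessagesDetailsEntry.builder()
                                      .key("customer-42".getBytes(StandardCharsets.UTF_8))
                                      .value("order created".getBytes(StandardCharsets.UTF_8))
                                      .build(),
                              PutMessagesDetailsEntry.builder()
                                      .key("customer-42".getBytes(StandardCharsets.UTF_8))
                                      .value("order shipped".getBytes(StandardCharsets.UTF_8))
                                      .build()))
                      .build();

              PutMessagesRequest putRequest = PutMessagesRequest.builder()
                      .streamId("ocid1.stream.oc1..example")                    // placeholder stream OCID
                      .putMessagesDetails(batch)
                      .build();
              PutMessagesResponse putResponse = streamClient.putMessages(putRequest);
          }
      }

    Because both entries use the same key, they land on the same partition, and sending them in one batched request also reduces the number of Put requests made against the service.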

  • How will OCI Streaming store data if I send a null key?

    If the key is null, the partition is calculated using a subset of the message's value. For messages with a null key, don't expect messages with the same value to go to the same partition, because the partitioning scheme may change; sending a null key effectively puts the message in a random partition.

  • How do I ensure ordering of messages in OCI Streaming?

    If you want to make sure that messages with the same value go to the same partition, you should use the same key for those messages.

  • How do I ensure that my message is durable?

    As soon as the OCI Streaming API acknowledges your putMessages call without error, the message is durable.

  • How do you ensure consistency of data in an OCI Streaming stream?

    OCI Streaming guarantees linear reads and writes for a given partition key.

  • What happens if I emit more data than the maximum allowed?

    When client requests exceed the limits, OCI Streaming rejects the request and returns an error message.

  • When do I get throttled?

    The throttling mechanism is activated when the following thresholds are exceeded:

    • GetMessages: 5 calls per second or 2 MB/s per partition
    • PutMessages: 1 MB/s per partition.
    • Management and control-plane operations (CreateCursor, listStream, and so on): 5 calls per second per stream.
  • Should I batch my messages?

    We recommend message batching for the following reasons:

    • Reduces the number of Put requests sent to the service, which avoids throttling
    • Enables better throughput

    The size of a batch of messages shouldn't exceed 1 MB. If this limit is exceeded, the throttling mechanism is triggered.

  • How do I handle messages that are bigger than 1 MB?

    You can use one of the following approaches: chunking or sending the message via Object Storage.

  • Chunking

    Large payloads can be split into multiple, smaller chunks that the Streaming service can accept.

    The chunks are stored in the service in the same way that ordinary (non-chunked) messages are stored. The only difference is that the consumer must keep the chunks and combine them into the real message when all the chunks have been collected.

    The chunks in the partition can be interwoven with ordinary messages.
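
    As a generic sketch (chunking is implemented by your application, not by the service, and the chunk-key convention mentioned in the comments is hypothetical), a producer can split a payload like this:

      // Split a large payload into chunks below the 1 MB message limit.
      // Each chunk can be published with a key such as "<messageId>:<chunkIndex>:<chunkCount>"
      // so the consumer can collect and reassemble the chunks.
      static java.util.List<byte[]> chunk(byte[] payload, int chunkSize) {
          java.util.List<byte[]> chunks = new java.util.ArrayList<>();
          for (int start = 0; start < payload.length; start += chunkSize) {
              int end = Math.min(start + chunkSize, payload.length);
              chunks.add(java.util.Arrays.copyOfRange(payload, start, end));
          }
          return chunks;
      }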

  • Object Storage

    A large payload is put in Object Storage and only the pointer to that data is transferred. The receiver recognizes this type of pointer payload, transparently reads the data from Object Storage, and provides it to the end user.

  • What date format does the service accept?

    A common mistake is providing the incorrect date format.

    The Streaming service supports ISO 8601, including the time zone, for all dates.
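
    For example, the following sketch produces an ISO 8601 timestamp with an explicit time zone (UTC):

      // Produces a value such as "2023-06-01T12:30:45.123Z".
      String timestamp = java.time.OffsetDateTime.now(java.time.ZoneOffset.UTC).toString();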

  • How do I get the partition number and offset from the Streaming service while publishing a message?

    The PutMessagesResultsEntry class provides the following methods (see the sketch after this list):

    • getPartition, which provides the partition number to which the message was published
    • getOffset, which provides the offset of the published message
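
    Continuing the hypothetical publish sketch above (putResponse is the response captured there), the result entries can be inspected as follows; in the Java SDK the entry class is spelled PutMessagesResultEntry, and exact names may vary by SDK version:

      // Each entry reports the partition and offset (or an error) for the corresponding message.
      for (PutMessagesResultEntry entry : putResponse.getPutMessagesResult().getEntries()) {
          if (entry.getError() == null) {
              System.out.println("partition=" + entry.getPartition() + " offset=" + entry.getOffset());
          } else {
              System.out.println("failed: " + entry.getErrorMessage());
          }
      }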
  • How do I get the partition number and offset from the Streaming service without publishing a message?

    At this time, there's no way to see the latest published message without publishing a message.

    There is a mechanism to see the latest committed offset, per group or partition. Look into the getGroup endpoint.

    Consuming Data from a Stream - Single Consumer

  • How do I read and consume data from a stream?

    Consuming messages requires you to:

    • Create a cursor
    • Use the cursor to read messages

    Refer to the technical documentation for a step-by-step guide on consuming data from a stream; a minimal sketch follows.
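
    As a hedged sketch only (streamClient is built as in the publish example above, the stream OCID and partition are placeholders, and the classes come from the SDK's streaming model, requests, and responses packages), a single consumer can do the following:

      // 1. Create a cursor: here, read partition "0" starting from the oldest retained message.
      String streamId = "ocid1.stream.oc1..example";
      String cursor = streamClient.createCursor(CreateCursorRequest.builder()
              .streamId(streamId)
              .createCursorDetails(CreateCursorDetails.builder()
                      .partition("0")
                      .type(CreateCursorDetails.Type.TrimHorizon)
                      .build())
              .build())
              .getCursor().getValue();

      // 2. Use the cursor to read messages; every response carries the cursor for the next call.
      while (true) {
          GetMessagesResponse response = streamClient.getMessages(GetMessagesRequest.builder()
                  .streamId(streamId)
                  .cursor(cursor)
                  .limit(100)                               // optional cap on messages per call
                  .build());
          for (Message message : response.getItems()) {
              System.out.println(new String(message.getValue(), java.nio.charset.StandardCharsets.UTF_8));
          }
          cursor = response.getOpcNextCursor();             // always chain to the returned cursor
      }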

  • What are the different ways I can consume data from an OCI Streaming Stream?

    OCI Streaming provides two kinds of consumer APIs:

    • Low-level inspection to precisely control partitions and offsets to read data from
    • Consumer groups to simplify application development by offloading load balancing, coordination, and offset tracking to the service
  • How do consumer groups work?

    Consumers can be configured to consume messages as part of a group. Stream partitions are distributed among members of a group so that messages from any single partition will only be sent to a single consumer.

    Partition assignments are re-balanced as consumers join or leave the group.

  • How do I avoid duplicate messages to my consumers?

    We recommend that consumer applications take care of duplicates.

  • How do I know whether consumers are falling behind?

    If you want to know whether your consumer is falling behind (you are producing faster than you are consuming), you can use the difference between the timestamp of the message and the current time. If this difference grows, you might want to spawn a new consumer to take over some of the partitions from your first consumer.

  • What is a single consumer?

    A single consumer is an entity that reads messages from one or more streams.

    This entity could exist alone or be part of a consumer group.

  • What is a cursor?

    A cursor is a pointer to a location in a stream. This location could be a pointer to a specific offset or time in a partition, or to a group's current location.

  • What cursor types are available?

    The following cursor types are available: TRIM_HORIZON, AT_OFFSET, AFTER_OFFSET, AT_TIME, and LATEST. For details, see the documentation.

  • Do I need to regenerate a cursor every time I consume a message?

    No, the cursor should be created outside of the loop. After you create a cursor, you can start consuming messages by using the GetMessages method. Each call to GetMessages returns the cursor to use in the next GetMessages call.

    The returned cursor is not null and expires in 5 minutes. As long as you keep consuming, you should never have to re-create a cursor.

    GetMessages, Commit, and Heartbeat all return a new cursor to use for subsequent calls.

    A Java code snippet is available in the documentation.

    In a couple of error cases, it's necessary to create new cursors. We recommend that you handle that as part of the failure strategy.

  • Can a consumer that belongs to Tenant B consume messages from a stream that belongs to Tenant A?

    This is possible through policies. Tenant A must create a policy that gives Tenant B stream-pull access.
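
    As an illustrative sketch only (group, compartment, and tenancy names and OCIDs are placeholders), cross-tenancy access is typically expressed with Define/Admit statements in Tenant A paired with Define/Endorse statements in Tenant B:

      In Tenant A (the stream owner):
        Define tenancy TenantB as ocid1.tenancy.oc1..<tenant_b_ocid>
        Admit group StreamReaders of tenancy TenantB to use stream-pull in compartment StreamCompartment

      In Tenant B (the consumer):
        Define tenancy TenantA as ocid1.tenancy.oc1..<tenant_a_ocid>
        Endorse group StreamReaders to use stream-pull in tenancy TenantA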

  • How do I handle offsets?

    When you aren't using group-cursors, storing processed offsets must be managed by the consumer.

    When you are using group cursors, processed offsets can be committed so that consumption can resume in case of failure.

    When you create a cursor, specify which type of cursor to use. When the application starts consuming messages, it needs to store which offset it reached/stopped at.

    This scenario is practical when doing a demo or proof of concept, using only one partition per stream. In a production environment with multiple partitions, we recommend using consumer groups.

  • How many messages can I get from a getMessages call?

    The getLimit() method of the GetMessagesRequest class returns the maximum number of messages. You can specify any value up to 10,000. By default, the service returns as many messages as possible.

    Consider your average message size to avoid exceeding throughput on the stream.

    Streaming service getMessage batch sizes are based on the average message size published to the particular stream.
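
    For illustration, the limit can be set on the request when consuming; this is a fragment of the consumer sketch above (streamId and cursor are assumed from that sketch, and the value is a placeholder):

      // Cap the number of messages returned by a single getMessages call (up to 10,000).
      GetMessagesRequest request = GetMessagesRequest.builder()
              .streamId(streamId)
              .cursor(cursor)
              .limit(200)   // tune against your average message size to stay within throughput limits
              .build();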

    Consuming Data from a Stream - Consumer Groups

  • How do consumer groups work?

    For a detailed explanation, see the documentation.

  • Why should I use consumer groups?

    Consumer groups provide the following advantages:

    • Each instance receives messages from one or more partitions (which are “automatically” assigned to it), and the same messages won't be received by the other instances (assigned to different partitions). In this way, you can scale up the number of instances (up to one instance reading from each partition).
    • Having instances as part of different consumer groups means providing a publish/subscribe pattern in which the messages from partitions are sent to all the instances across the different groups. This is useful when the messages inside a partition are of interest for different applications that will process them in different ways. We want all the interested applications to receive all the same messages from the partition.
    • Another advantage of consumer groups is the rebalancing feature. When an instance joins a group, if enough partitions are available (that is, the limit of one instance per partition hasn't been reached), a rebalancing starts. The partitions are reassigned to the current instances, plus the new one. In the same way, if an instance leaves a group, the partitions are reassigned to the remaining instances.
    • Offset commits are managed automatically.
  • What is an instance?

    An instance is a member of a consumer group. It's defined when a group cursor is created.

    Partition reads are balanced among instances in a consumer group.

    The instance name identifies that member of the group for operations related to offset management.

    We recommend that you use unique instance names for each member of the consumer group.

  • Is there a best practice for naming my instances?

    The best practice is to use a concatenated string of useful information.

  • What timeouts do I need to be aware of?

    The following components of the Streaming service have timeouts:

    • Cursor: As long as you keep consuming messages, there is no need to create a new cursor. If the consumption of messages is stopped for more than 5 minutes, the cursor must be re-created.
    • Instance: If an instance stops consuming messages for more than 30 seconds, it is removed from the consumer group and its partition is reassigned to another instance (rebalancing is triggered).
  • How long does an instance have to heartbeat before timing out?

    Each instance within the consumer group needs to heartbeat before the 30-second timeout. For example, if a message is taking too long to process, we recommend that the instance send a heartbeat.

  • What happens when an instance times out?

    When reaching the 30-second timeout, the instance is removed from the consumer group and its partition is reassigned to another instance (if possible). This event is called rebalancing.

  • What is rebalancing within a consumer group?

    Rebalancing is the process in which a group of instances (belonging to the same consumer group) coordinate to own a mutually exclusive set of partitions that belongs to a specific stream.

    At the end of a successful rebalance operation for a consumer group, every partition within the stream is owned by exactly one consumer instance within the group (a single instance can own multiple partitions).

  • How do I generate an effective partition key?

    To ensure uniform distribution, you want to create a good value for your message keys. To do so, consider the selectivity and cardinality of your streaming data.

    • Cardinality: Consider the total number of unique keys that could potentially be generated based on the specific use case. Higher key cardinality generally means better distribution.
    • Selectivity: Consider the number of messages with each key. Higher selectivity means more messages per key, which can lead to hotspots.

    Aim for high cardinality with low selectivity.
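
    As a hedged sketch (the order object and its accessors are hypothetical), choosing a fine-grained key usually spreads load better than a coarse one:

      // High cardinality, low selectivity: many distinct customers spread evenly across partitions.
      String goodKey = order.getCustomerId();
      // Low cardinality, high selectivity: a handful of countries concentrates traffic on a few partitions.
      String badKey = order.getCountryCode();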

  • Which partition will my messages go to?

    In the Streaming service, the key is hashed and then used to determine the partition. Messages with the same key go to the same partition. Messages with different keys might go to different or to the same partitions.

  • Can I force messages to go to a specific partition?

    As a producer, there is no way for you to explicitly control the partition to which a message goes.

    Even if the data is sent with keys, the producer can't force it to a particular partition; a key only guarantees that messages sharing that key land on the same partition.

  • Can I share a StreamClient instance between threads?

    Yes, StreamClient is thread-safe.

    When an object is stateless, it doesn't have to retain any data between invocations. Because there's no state to modify, one thread can't affect the result of another thread that invokes the object's operations. For this reason, a stateless class is inherently thread-safe.

  • How many messages are in the current partition?

    Consumer lag is not yet implemented in the Streaming service.

    The produced offset for each message is returned after each successful putMessage call.

    The message offset is included with every message returned by getMessage calls.

    You can determine lag by tracking the delta between produced and consumed offsets, by partition.
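
    As a simple sketch (both offsets are values your application tracks itself, per partition), the lag is just the delta:

      // Rough consumer lag for one partition.
      long lag = lastProducedOffset - lastConsumedOffset;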

  • How do I know if I'm falling behind in message consumption?

    To determine if your consumer is falling behind (you're producing faster than you're consuming), you can use the timestamp of the message. If the consumer is falling behind, consider spawning a new consumer to take over some of the partitions from your first consumer. If you're falling behind on a single partition, there's no way to recover.

    Consider the following options:

    • Increase the number of partitions on the stream.
    • If the issue is caused by a hotspot, change the message-key strategy.
    • Reduce message processing time, or handle requests in parallel.

    If you want to know how many messages are left to consume in a given partition, use a cursor of type LATEST, get the offset of the next published message, and compute the delta with the offset that you are currently consuming.

    Because offsets aren't dense, you will get only a rough estimate. However, if your producer stopped producing, you won't be able to get that information (because you'll never get the offset of the next published message).

  • What is the expected behavior if consumer A has a cursor for partition 1, but the partition gets reassigned to a new consumer before the 30-second timeout?

    Reassignment happens only on commit and timeout. We recommend using commitOnGet=true, and relying on a heartbeat if the processing takes longer than 30 seconds. Writing custom commit logic is complicated, and full of race conditions and considerations. There are many cases in which some internal state is changed, and the client is required to handle the situation.

  • What happens when an instance of the consumer group becomes inactive?

    An instance of a consumer group is considered inactive if it doesn't send a heartbeat for more than 30 seconds or if its process is terminated.

    When that happens, a rebalance within the consumer group occurs to handle the partitions previously consumed by the inactive instance.

  • What happens when an instance of the consumer group that was previously inactive rejoins the group?

    Such an instance is considered a new instance. A rebalance is triggered, and the instance is assigned a partition to start consuming messages.

    The Streaming service makes no guarantee about whether the same partition (the one assigned before termination) is reassigned to this instance.

  • When a previously inactive instance of the consumer group rejoins the group, does it consume duplicate messages?

    The Streaming service provides "at-least-once" semantics with the consumer group. Consider when offsets are committed in a message loop. If a consumer crashes before committing a batch of messages, that batch might be given to another consumer. When a partition is given to another consumer, the consumer uses the latest committed offset to start consumption. The consumer doesn't get messages before the committed offset.

  • How should I handle offset commits?

    The Streaming service handles offset commits automatically for the consumer group when commitOnGet is set to true. We recommend using this mechanism because it reduces application complexity; that is, the application doesn't need to implement any commit mechanism. To override this setting and implement a custom offset commit mechanism, set commitOnGet to false when creating the consumer group.

  • How does CommitOnGet work?

    CommitOnGet means that offsets from the previous request are committed. To illustrate this feature, consider the following example:

    For a consumer A:

    • A calls getMessages and receives messages from an arbitrary partition, with offsets of 1–100.
    • A processes all 100 messages successfully.
    • A calls getMessages, the Streaming service commits offset 100 and returns messages with offsets 101–200.
    • A processes 15 messages, and then goes offline unexpectedly (for more than 30 seconds).

    The orchestration system starts a new consumer B:

    • B calls getMessages, the Streaming service uses the latest committed offset and returns messages with offsets 101–200.
    • B continues the message loop.

    In this example, no data was lost, but offsets 101–115 were processed at least once, which means they might have been redelivered to the new consumer B after the fault event and processed more than once.

  • How do I update the offsets that exceeded the retention period for a single partition in a multiple-partition stream?

    Currently, it's not possible to update an individual partition in a consumer group.

    The current behavior of the updateGroup call is to reset committedOffset for all partitions, which causes unnecessary old message retrievals for the partitions that were assigned to the other healthy consumers.

  • Why do I need to send heartbeats?

    In a consumer group, the instances that are consuming the messages need to send heartbeats before reaching the timeout of 30 seconds. If an instance fails to send a heartbeat, the Streaming service considers the instance inactive and triggers a reassignment of its partition.

  • What is the expected behavior if I send heartbeat with a cursor string that is already committed?

    A cursor retrieved from a commit call should have no offsets. Heartbeats extend the timeout of the partitions in the cursor.

    Doing a heartbeat against an empty cursor should do nothing. The previous committed cursor could trigger a rebalance.

    If a cursor is committed, and a heartbeat is then done against that same cursor (rather than the one returned by the commit call), it updates the timeouts for the offsets it contains.

  • How do I recover from a consumer failure?

    To recover from a failure, you must store the offset of the last message that you processed (for each partition) so that you can start consuming from that message if you need to restart your consumer.

    Note: Do not store cursors; they expire after 5 minutes.

    We don't provide any guidance for storing the offset of the last message that you processed, so you can use whatever method you want (for example, another stream, Kiev, a file on your machine, or Object Storage).

    When your consumer restarts, read the offset of the last message that you processed, and then create a cursor of type AFTER_OFFSET and specify the offset that you just read.
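
    As a hedged sketch (streamClient and streamId are set up as in the earlier examples, lastProcessedOffset is the value your application persisted, and the partition is a placeholder), restarting from a stored offset could look like this:

      // Resume consumption just after the last offset processed before the failure.
      String cursor = streamClient.createCursor(CreateCursorRequest.builder()
              .streamId(streamId)
              .createCursorDetails(CreateCursorDetails.builder()
                      .partition("0")                              // the partition this consumer owns
                      .type(CreateCursorDetails.Type.AfterOffset)
                      .offset(lastProcessedOffset)                 // offset read back from your store
                      .build())
              .build())
              .getCursor().getValue();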

    Managing an OCI Streaming Stream

  • Can I change the number of partitions later on?

    We recommend that customers allocate slightly more partitions than their maximum throughput requires. This helps them manage application spikes, because we currently don't support changing the number of partitions once a stream is created.

  • Can I change the durability of my topic?

    By default, we store data for 24 hours. You can set the retention period to up to 7 days when creating a stream. Once the retention period is defined, it can't be edited.

  • How do I monitor the operations and performance of my OCI Streaming stream?

    The OCI Streaming console provides both operational and performance metrics, such as throughput of data input and output. OCI Streaming also integrates with OCI Telemetry so that you can collect, view, and analyze telemetry metrics for your streams.

    Security and Encryption

  • How do I manage and control access to my stream?

    All streams in the same tenancy have unique immutable names. Every stream has a compartment assigned. So, all the power of Oracle Cloud Infrastructure access control policies may be used to describe fine-grained rules at the tenancy, compartment, or single stream level.

    Access policy is specified in the form "Allow <subject> to <verb> <resource-type> in <location> where <conditions>".
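
    For example (group and compartment names are placeholders), statements of the following form let one group publish to and another group consume from streams in a single compartment:

      Allow group StreamProducers to use stream-push in compartment StreamCompartment
      Allow group StreamConsumers to use stream-pull in compartment StreamCompartment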

  • How do I authenticate when emitting or consuming data from OCI Streaming?

    Our public API uses the Oracle Identity service, which provides a convenient way to authenticate users and authorize access to these APIs from both the browser (username/password) and code (API key).

    See documentation here.

  • When I use OCI Streaming, how secure is my data?

    OCI Streaming is secure by default - User data is encrypted both at rest and in motion. Only the account and data stream owners have access to the stream resources they create. OCI Streaming supports user authentication to control access to data. You can use Oracle Cloud Infrastructure IAM policies to selectively grant permissions to users and groups of users. You can securely put and get your data from OCI Streaming through SSL endpoints using the HTTPS protocol.

  • Can I encrypt my data?

    You own the data you emit; you can encrypt your data before sending it to OCI Streaming.

  • Can you walk me through the encryption life cycle of my data from the point in time I send it to an OCI Streaming Stream to when I retrieve it?

    Ingestion (your producer to the Streaming gateway): Data is encrypted in motion via SSL (HTTPS).

    Inside the Streaming service: SSL is terminated at the gateway; data is encrypted on arrival with a per-stream AES-128 key and sent to the storage layer for persistence.

    On consumption: Encrypted data is read from the storage layer, decrypted by the gateway node, and sent to the consumer over SSL.

  • What encryption algorithm is used for OCI Streaming encryption?

    OCI Streaming uses the AES-GCM algorithm with 128-bit keys for encryption.

    Monitoring

  • Where can I monitor my stream?

    Streaming is fully integrated with OCI Monitoring services. The Streaming metrics description is available here.

  • What metrics does the Streaming service emit?

    The monitoring in the Streaming service focuses on producers and consumers. For a list of the metrics emitted by the Streaming service, see the documentation.

  • What statistics are available in the monitoring of the Streaming service?

    Each metric available in the Console provides the following statistics:

    • Rate, Sum, and Mean
    • Min, Max, and Count
    • P50, P90, P95, P99, and P99.9

    These statistics are offered over four time intervals:

    • Auto
    • 1 minute
    • 5 minutes
    • 1 hour
  • What metrics should I set alarms for?

    For producers, consider setting alarms on the following metrics:

    • Put Messages Latency: An increase in latency could indicate network issues.
    • Put Messages Total Throughput:
      • An important increase in total throughput could indicate that the 1 MB/s per partition limit will be reached and that event will trigger the throttling mechanism.
      • An important decrease could mean that the client producer is having an issue or is about to stop.
    • Put Messages Throttle Records: It's important to get notified when messages are throttled.
    • Put Messages Failure: It's important to get notified if Put messages start failing so that the Ops team can start investigating the reasons.

    For consumers, consider setting the same alarms based on the following metrics:

    • Get Messages Latency
    • Get Messages Total Throughput
    • Get Messages Throttled Requests
    • Get Messages Failure
  • What should I do when I receive an alarm?

    When an alarm is triggered, the responsible team member needs to investigate the alarm and assess the situation.

    If the issue is related to the client (producer or consumer), then the team member needs to resolve it or investigate more with the Dev team.

    If the issue is related to the server, then the team member should contact Streaming service support.

  • How do I know that my stream is healthy?

    A healthy stream is a stream that is active: messages are received successfully, and messages are consumed successfully.

    Writes to the service are durable. If you can produce to your stream and you get a successful response, then the stream is healthy.

    After data is ingested, it is accessible to consumers for the configured retention period.

    If Get Messages API calls return elevated levels of internal server errors, the service isn't healthy.

    A healthy stream also has healthy metrics:

    • Put Messages Latency is low.
    • Put Messages Total Throughput is close to 1 MB/s per partition.
    • Put Messages Throttled Records is close to 0.
    • Put Messages Failure is close to 0.
    • Get Messages Latency is low.
    • Get Messages Total Throughput is close to 2 MB/s per partition.
    • Get Messages Throttled Requests is close to 0.
    • Get Messages Failure is close to 0.

    Error Types and Meanings

  • Where can I find the list of the API errors?

    Details about the API errors are located in the documentation.

  • What is partial failure?

    The Streaming service supports partial failures due to throttling, per partition. In the case of a partial failure, the service returns a 200 status code and indicates the failures in the response payload.

    If an entire request is throttled, you get a 429 status code.

  • Why am I getting the "You exceed the number of allowed partitions" error?

    Please contact Oracle Streaming Service to increase the limit for your tenancy.

    Pricing and Billing

  • How is OCI Streaming priced?

    OCI Streaming uses simple pay-as-you-go pricing. There are no upfront costs or minimum fees, and you only pay for the resources you use.

    • GET/PUT request price (GigaBytes of data transferred)

    Please refer to the pricing guide for actual pricing of OCI Streaming.

    Let's consider a scenario where a data producer puts 500 records per second in aggregate and each record is 2kB. The customer wants to egress/retrieve data at a rate twice that of ingress. Also the customer wants to store this data for 7 days.

    Price calculation/day (just as an example)

    Each record size = 4kB (rounded to 4kB for any record less than 4kB)

    • In this scenario, total amount of data produced per day = 500 * 4 * 24 * 60 * 60 kB = 172.8 GB
    • Total amount of data retrieved = Twice that of Produce = 2*172.8 GB = 345.6 GB
    • PUT Request price/day = 172.8 GB * $xx = $A
    • GET Request price/day = 345.6 GB * $xx = $B
    • Data storage cost/day = 172.8 GB * 24 hours * 7 days * $yy = $C
    • Total bill/day = $(A + B + C)

    Optional:

    • Extended data retention is an optional cost determined by the number of additional days of retention beyond the default 24-hour retention (GigaBytes of storage per hour)
  • Is there a free tier for OCI Streaming?

    OCI Streaming doesn't have a free tier.