
AI Infrastructure

With support for anywhere from a single GPU to tens of thousands of GPUs, Oracle Cloud Infrastructure (OCI) Compute virtual machines and bare metal instances can power applications for computer vision, natural language processing, recommendation systems, and more. For training large models, such as large language models (LLMs) for conversational AI and diffusion models, OCI Supercluster provides ultralow-latency cluster networking, HPC storage, and OCI Compute bare metal instances powered by NVIDIA GPUs.

Oracle CloudWorld: Conversation between Oracle CEO Safra Catz and NVIDIA CEO Jensen Huang (9:58)

Talk to Oracle about accelerating your GPU workloads.

OCI at NVIDIA GTC, the conference for AI and the metaverse

Explore how OCI supports model training and parallel applications

Deploy up to 32,768 NVIDIA A100 GPUs

Each OCI Compute bare metal instance is connected by OCI’s ultralow-latency cluster networking, which can scale up to 32,768 NVIDIA A100 GPUs in a single cluster. These instances use OCI’s unique high performance network architecture, which leverages RDMA over Converged Ethernet (RoCE) v2 to create RDMA superclusters with microsecond-level latency between nodes and near-line-rate bandwidth of 200 Gb/sec between GPUs.
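
For a sense of how applications consume this fabric, the sketch below shows a generic multi-node, data-parallel training loop in PyTorch using the NCCL backend, which is what typically carries gradient traffic between GPUs over RDMA. It is an illustrative sketch, not an official OCI sample: the model, data, and launch via torchrun are placeholder assumptions.

```python
# Minimal sketch: data-parallel training across many GPUs/nodes with NCCL,
# which rides on the RDMA (RoCE v2) cluster network between instances.
# Assumes launch via `torchrun --nnodes=<N> --nproc-per-node=8 train.py`.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")  # NCCL uses GPUDirect RDMA when available
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):  # placeholder training loop with random data
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()      # gradients are all-reduced over the cluster fabric
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```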

OCI’s implementation of RoCE v2 provides the following bandwidth (a quick arithmetic check appears after the list):

  • 1,600 Gb/sec of bandwidth per server and 200 Gb/sec of bandwidth per A100 GPU
  • 3,200 Gb/sec of bandwidth per server and 400 Gb/sec of bandwidth per H100 GPU
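
The per-server figures follow from the per-GPU figures because each of these bare metal GPU servers carries eight GPUs (the ".8" in shape names such as BM.GPU4.8). A quick check, assuming 8 GPUs per server:

```python
# Sanity check of the per-server bandwidth figures above (8 GPUs per server).
GPUS_PER_SERVER = 8
a100_per_gpu_gbps = 200
h100_per_gpu_gbps = 400

print(GPUS_PER_SERVER * a100_per_gpu_gbps)  # 1600 Gb/sec per A100 server
print(GPUS_PER_SERVER * h100_per_gpu_gbps)  # 3200 Gb/sec per H100 server
```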

High-speed RDMA cluster networks

High performance computing on Oracle Cloud Infrastructure provides powerful, cost-effective computing capabilities to solve complex mathematical and scientific problems across industries.

OCI’s bare metal servers, coupled with Oracle’s cluster networking, provide access to ultralow-latency RDMA over Converged Ethernet (RoCE) v2, with less than 2 microseconds of latency across clusters of tens of thousands of cores.

The chart shows the performance of Oracle’s cluster networking fabric: with popular CFD codes, OCI scales at above 100% efficiency below 10,000 simulation cells per core, the same performance you would see on-premises. And because bare metal HPC machines avoid the virtualization penalty, every core on the node is available to the application; none has to be reserved for costly overhead.
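
For readers who want to reproduce this kind of result, strong-scaling efficiency is computed from wall-clock times at two core counts; values above 1.0 correspond to the above-100% scaling mentioned here. The numbers in the sketch below are hypothetical, not OCI benchmark data.

```python
# Generic strong-scaling efficiency: how well runtime improves as cores are added.
def scaling_efficiency(base_cores, base_seconds, scaled_cores, scaled_seconds):
    speedup = base_seconds / scaled_seconds
    ideal_speedup = scaled_cores / base_cores
    return speedup / ideal_speedup  # 1.0 == 100% (linear scaling)

# Hypothetical example: a CFD case that runs in 1,000 s on 36 cores and 230 s on 144 cores.
print(f"{scaling_efficiency(36, 1000, 144, 230):.0%}")  # ~109%, i.e., above 100%
```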

High performance computing (HPC) on OCI

HPC on OCI rivals the performance of on-premises solutions while adding the elasticity and consumption-based costs of the cloud, with the ability to scale to tens of thousands of cores on demand.

With HPC on OCI, you get access to high-frequency processors; fast and dense local storage; high-throughput, ultralow-latency RDMA cluster networks; and the tools to automate and run jobs seamlessly.
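
As one illustration of that job automation, the sketch below writes and submits a batch job programmatically. It assumes a Slurm scheduler on the cluster, which is a common choice in OCI HPC stacks but is our assumption here, as are the solver name and job sizes.

```python
# Minimal sketch: write and submit a batch job on an HPC cluster,
# assuming a Slurm scheduler is installed (not guaranteed on every cluster).
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=cfd-demo
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=36
    #SBATCH --exclusive
    mpirun ./solver input.case   # placeholder solver and input file
    """)

with open("job.sbatch", "w") as f:
    f.write(job_script)

# Submit the job and print the job ID that Slurm returns.
result = subprocess.run(["sbatch", "job.sbatch"], capture_output=True, text=True, check=True)
print(result.stdout.strip())
```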

OCI can provide latencies as low as 1.7 microseconds—lower than any other cloud vendor, according to an analysis by Exabyte.io. By enabling RDMA-connected clusters, OCI has expanded cluster networking for bare metal servers equipped with NVIDIA A100 GPUs.

The groundbreaking backend network fabric lets customers use Mellanox’s ConnectX-5 100 Gb/sec network interface cards with RDMA over Converged Ethernet (RoCE) v2 to create clusters with the same low-latency networking and application scalability achievable on-premises.

Unique bare metal GPU clusters

OCI’s bare metal NVIDIA GPU instances offer startups a high performance computing platform for applications that rely on machine learning, image processing, and massively parallel high performance computing jobs. GPU instances are ideally suited for model training, inference computation, physics and image rendering, and massively parallel applications.

The BM.GPU4.8 instances have eight NVIDIA A100 GPUs and use Oracle’s low-latency cluster networking, based on remote direct memory access (RDMA) running over Converged Ethernet (RoCE), with less than 2 microseconds of latency. Customers can now host more than 500 GPU clusters and easily scale on demand.
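
To make that concrete, here is a minimal sketch of launching a BM.GPU4.8 instance with the OCI Python SDK. Every OCID, the availability domain, and the image are placeholders; the actual values, and any cluster network attachment, depend on your tenancy, so treat this as a sketch rather than a complete deployment recipe.

```python
# Sketch: launch a BM.GPU4.8 bare metal GPU instance with the OCI Python SDK.
# All OCIDs and names below are placeholders for illustration only.
import oci

config = oci.config.from_file()          # reads credentials from ~/.oci/config
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",  # placeholder availability domain
    shape="BM.GPU4.8",                    # 8x NVIDIA A100 bare metal shape
    display_name="gpu-training-node",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"   # e.g., a GPU-enabled OS image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"
    ),
    metadata={"ssh_authorized_keys": "ssh-rsa AAAA... user@host"},
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)
```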

See how OCI and NVIDIA power next-generation AI models

Customers such as Adept, an ML research and product lab developing a universal AI teammate, are using the power of OCI and NVIDIA technologies to build the next generation of AI models. Running thousands of NVIDIA GPUs on clusters of OCI bare metal compute instances and capitalizing on OCI’s network bandwidth, Adept can train large-scale AI and ML models faster and more economically than before.

Adept builds a powerful AI teammate for everyone with Oracle and NVIDIA

“With the scalability and computing power of OCI and NVIDIA technology, we are training a neural network to use every software application, website, and API in existence—building on the capabilities that software makers have already created.”

David Luan, CEO
Adept

SoundHound selects OCI to support huge company growth

“We view this relationship with OCI as long term. We’re excited about taking advantage of the GPUs and using that to train our next generation of voice AI. There's a lot that we think that OCI will provide for us in terms of future growth.”

James Hom, Cofounder and Vice President of Products
SoundHound

“We selected Oracle because of the affordability and performance of the GPUs combined with Oracle’s extensive cloud footprint. GPUs are very important for training deep neural network models. The higher the GPU performance, the better our models. And because we work in several different countries and regions, we needed the infrastructure to support that.”

Nils Helset, Cofounder and CEO
DigiFarm

University of Michigan improves AI text summaries

“When running experiments with the same configuration, the A100 uses about 25% less time on average. What makes it even better is the smooth process of setting up the machine on Oracle Cloud.”

Shuyang Cao, Graduate Student Research Assistant
University of Michigan

MosaicML scales AI/ML training on OCI

Learn why MosaicML found that OCI is the best foundation for AI training.

Softdrive offers next-generation workstations with OCI Compute and NVIDIA A10

“Softdrive is the future of business computers. In the cloud PC market, performance means everything. NVIDIA GPUs on OCI bare metal servers have dramatically improved the experience for our customers.”

Leonard Ivey, Cofounder
Softdrive

What’s included with GPU instances on OCI?

Dedicated engineering support

OCI provides world-class technical experts to help you get up and running. We remove the technical barriers of a complex deployment—from planning to launch—to help ensure your success.

  • Solution architecture development
  • Networking, security, and auditing
  • Onboarding to OCI
  • Application migration
  • Post-migration training

Improved economics

OCI is built for enterprises seeking higher performance, consistently lower costs, and easier cloud migration for their current on-premises applications. When compared to AWS, OCI offers

  • Private network connectivity that costs 74% less
  • More than 3X better price-performance for compute
  • Up to 44% less expensive infrastructure with local solid-state disks, twice the RAM, RDMA networking, and a performance SLA
  • 20X the input/output operations per second for less than half the cost
March 21, 2023

Announcing limited availability of OCI Compute bare metal instances powered by NVIDIA H100 GPUs

Sagar Rawal, Vice President of Compute Product Management, OCI

Today, Oracle is announcing limited availability of Oracle Cloud Infrastructure (OCI) Compute bare metal instances powered by NVIDIA H100 Tensor Core GPUs and NVIDIA networking. OCI bare metal instances provide customers with consistent performance, ultralow latency, isolation, and control.

Read the complete post

Additional cloud architecture and deployment resources

OCI Cloud Adoption Framework (CAF)

IDC’s view on OCI and hybrid cloud

Omdia’s perspective on why all clouds are not the same

OCI for the modern enterprise