Author: Kartik Manocha, Product Lead | Cloud Solution Architect, Intel
In the summer of 2025, Google Cloud's C4 instances reached General Availability (GA) powered by Intel® Xeon® 6 processors. We were excited to watch our customers achieve the performance gains we had validated in earlier tests. Performance benchmarks from production workloads and customer deployments demonstrate that Google Cloud’s C4 VMs powered by Intel Xeon 6 processors deliver substantial, measurable business value, making them the optimal choice for the most demanding cloud workloads.
Why C4 Matters Beyond the Specs
Google Cloud's C4 specifications are impressive:
- Up to 4.2 GHz (the highest frequency of any Google Compute Engine VM)
- More vCPUs and RAM than comparable Intel-based instances
- 1.35x higher maximum memory bandwidth(1)
While these specs stand on their own, what matters most is how they translate into superior workload performance for customers.
Google Cloud's C4 launch also introduced 28 new instance shapes, including next-generation Titanium Local SSD options, bare metal instances, and extra-large configurations. Enhanced maintenance controls with 30-day uptime windows and scalable hyperdisk storage (up to 500k IOPS and 10 GB/s) provide the operational flexibility that enterprise workloads require.
Performance Where It Counts
The gen-over-gen performance data demonstrates what Intel Xeon 6 brings to cloud computing. Across diverse workloads, including AI inference, database operations, and web applications, the results are consistently strong. The following charts highlight the performance gains of C4 VMs on Intel Xeon 6 over C3 VMs running on 4th Gen Intel Xeon processors, demonstrating a powerful TCO impact. The charts show performance and performance/dollar gains at 8 vCPU and 16 vCPU by workload type.
Compute workloads realize up to 1.55x performance (8 vCPU) and up to 2.01x perf/dollar (16 vCPU*) improvements.
AI workloads realize up to 1.38x performance (8 vCPU) and up to 1.76x perf/dollar (16 vCPU*) improvements.
Database workloads realize up to 1.54x performance (16 vCPU) and up to 2.05x perf/dollar (16 vCPU*) improvements.
Web workloads realize up to 1.54x performance (8 vCPU) and up to 1.93x perf/dollar (16 vCPU*) improvements.
“Our cloud-native data and AI platform, SAS® Viya®, is engineered for performance and productivity. Through our partnership with Intel, we've optimized our software for Intel hardware, and the benefits are evident as we continue to benchmark C4 with Granite Rapids. We've observed up to 20% performance improvement in areas such as deep learning and synthetic data generation. We look forward to continue scaling with Intel and Google Cloud to offer superior performance at a lower cost for cloud analytics customers.”
- Craig Rubendall, Vice President, Applied Architecture and Technology, SAS.
Google Cloud C4 Bare Metal on Intel Xeon 6 Processors
For workloads that demand direct access to CPU and memory resources, Google Cloud introduced C4 bare metal shapes. In addition to offering all the raw compute power of the Intel Xeon 6 processor, these C4 bare metal instances can use several on-board, function-specific accelerators and offloads, including Intel® QuickAssist Technology (Intel® QAT) for accelerating compression, encryption, and decryption.
Intel QAT Yields Impressive Results
By upgrading to C4 Metal on Intel Xeon 6 processors with QAT, you can realize performance gains up to 2.1x (See chart below).
c4-highmem-288-metal (Intel Xeon 6 Processor) vs. c3-standard-192-metal (4th Gen Intel Xeon Processor)
"Nutanix is committed to helping our customers run their applications and data anywhere, and our partnership with Google Cloud and Intel helps us deliver on that commitment. With Nutanix Cloud Clusters (NC2) on Google Cloud's C4 metal instances, our software maximizes the potential of the Xeon 6 platform, unlocking faster I/O and better throughput for mission-critical workloads. This translates into tangible customer benefits, from the ability to run AI inference workloads more efficiently using Intel AMX to hosting dense VDI clusters. We look forward to industry-leading performance with the Xeon 6 platform as we prepare to launch the new NC2 offering on C4 metal."
- Saveen Pakala, VP of Platforms at Nutanix
The Processor Behind the Results
Intel Xeon 6 processors offer so much more than impressive performance.
- Impactful Native FP16 Support: One of the most impactful new features for AI and HPC workloads. FP16 operations are now accelerated directly in hardware, processing twice as many data elements per cycle and allowing larger models to fit in memory.
- ISV Ecosystem Enablement: Intel Xeon 6 processors are designed to strengthen Independent Software Vendor (ISV) ecosystems, helping partners optimize, certify and scale applications faster across data center, AI and cloud environments.
- Superior AI Performance: Intel Xeon 6 processors deliver leadership performance in traditional machine learning and smaller GenAI models, and serve as the host CPU for GPU-accelerated workloads, reinforcing Xeon 6 as the go-to CPU for AI systems.
- More Memory Bandwidth: Intel Xeon 6 processors offer 12 memory channels, a full 50% per-socket increase, significantly benefiting AI, analytics, and HPC workloads.
- Highest Frequency: A 4.2 GHz maximum single-core turbo frequency and a 3.9 GHz all-core turbo frequency, a 25% increase over the previous generation and critical for latency-sensitive workloads.
- Advanced Sustainability: Intel Xeon 6 processors lower power and energy consumption, cut emissions, and advance sustainability goals without compromising business performance.
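The FP16 point above can be illustrated with a minimal sketch. NumPy is used here purely to show the storage arithmetic; the actual per-cycle acceleration on Xeon 6 comes from the processor's hardware FP16 support, not from NumPy:

```python
import numpy as np

# FP16 stores each element in 2 bytes vs. 4 bytes for FP32, so the same
# memory footprint holds twice as many elements. This is the storage side
# of the "twice as many data elements per cycle" claim.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes (1024 * 1024 * 4)
print(weights_fp16.nbytes)  # 2097152 bytes (1024 * 1024 * 2)
print(weights_fp32.nbytes // weights_fp16.nbytes)  # 2
```

The same halving applies to cache and memory-bandwidth pressure, which is why reduced-precision inference benefits from native hardware support rather than software emulation.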
Google Cloud C4 instances on Intel Xeon 6 processors are delivering significantly more than just raw performance. Key benefits include a frequency of up to 4.2 GHz, 28 new instance shapes, accelerated native FP16 support, a 50% increase in memory channels (12 per socket), and improved energy efficiency. These significant and tangible enhancements add up to improved TCO, making Intel Xeon 6 processors an excellent choice for your most demanding cloud workloads.
Conclusion
Maximize your AI ROI and shorten time to value with Google Cloud's C4 VMs. These instances, featuring high-performance Intel Xeon 6 processors, are designed to deliver breakthrough speeds not just for AI, but also for demanding general-purpose workloads. Get exceptional efficiency and power across all your tasks. That's the versatile power of Intel Inside®.
Learn more about Google Cloud C4 instances and start testing today.
Endnotes:
* The 16 vCPU per-vCPU score for c3-standard instances is derived from the measured 22 vCPU score.
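The normalization in this endnote can be made concrete with a short sketch. The scores below are illustrative placeholders, not measured results; only the vCPU counts and the monthly on-demand prices come from this post:

```python
# Hypothetical benchmark scores, purely to illustrate the method.
c3_score_22vcpu = 110.0             # placeholder c3-standard-22-lssd result
c3_per_vcpu = c3_score_22vcpu / 22  # per-vCPU score, as in the endnote
c3_equiv_16vcpu = c3_per_vcpu * 16  # rescaled to a 16 vCPU basis

c4_score_16vcpu = 150.0             # placeholder c4-standard-16-lssd result

# Monthly on-demand prices from the Pricing section of this post.
c4_price, c3_price = 698.00, 930.26

perf_gain = c4_score_16vcpu / c3_equiv_16vcpu
perf_per_dollar_gain = (c4_score_16vcpu / c4_price) / (c3_score_22vcpu / c3_price)
print(perf_gain, perf_per_dollar_gain)
```

Because no 16 vCPU c3 shape appears in the tested configurations, rescaling the 22 vCPU result to a per-vCPU basis is what makes the 16 vCPU comparison in the charts possible.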
Hardware Configurations:
c4-standard-lssd:
8: 1-instance, 8vcpu (Granite Rapids), 30GB total memory, bios: Google, microcode: 0xffffffff, Ubuntu 22.04.5 LTS, 6.8.0-1033-gcp ; Test by Intel as of September 2025
16: 1-instance, 16vcpu (Granite Rapids), 60GB total memory, bios: Google, microcode: 0xffffffff, Ubuntu 22.04.5 LTS, 6.8.0-1033-gcp ; Test by Intel as of September 2025
288: 1-instance, 288vcpu (Granite Rapids), 1088GB total memory, bios: Google, microcode: 0x1000380, Ubuntu 22.04.5 LTS, 6.8.0-1033-gcp ; Test by Intel as of October 2025
c3-standard-lssd:
8: 1-instance, 8vcpu (Sapphire Rapids), 32GB total memory, bios: Google, microcode: 0xffffffff, Ubuntu 22.04.5 LTS, 6.8.0-1033-gcp ; Test by Intel as of September 2025
22: 1-instance, 22vcpu (Sapphire Rapids), 88GB total memory, bios: Google, microcode: 0xffffffff, Ubuntu 22.04.5 LTS, 6.8.0-1033-gcp ; Test by Intel as of September 2025
192: 1-instance, 192vcpu (Sapphire Rapids), 768GB total memory, bios: Google, microcode: 0x2b000603, Ubuntu 22.04.5 LTS, 6.8.0-1033-gcp ; Test by Intel as of October 2025
Workload Configurations:
Integer/FP Throughput (c4: 8/16, c3: 8/22): SPEC CPU* 2017 1.1.9, oneAPI 2024.0.2, HyperDisk Balanced used
STREAM Triad (c4: 8, c3: 8): App Version: v5.10, Triad, oneAPI 2025.0
HammerDB/MySQL (c4: 8/16, c3: 8/22): HammerDB 4.7, MySQL 8.0.33, local SSD used
NGINX (c4: 8/16, c3: 8/22): v1.22.1, QAT_ENGINE v1.6.1, OpenSSL 3.1.4, TLS 1.3 handshake (requests/sec), HyperDisk Balanced used
MongoDB (c4: 8/16, c3: 8/22): MongoDB 6.0.4, HyperDisk Balanced used
BERT (c4: 8/16, c3: 8/22): pytorch+ipex 2.6.0.dev20241124, INT8, batched, HyperDisk Balanced used
RN50 (c4: 8/16, c3: 8/22): pytorch+ipex : 2.5.0.dev20240619+cpu, INT8, batched, HyperDisk Balanced used
Redis (c4: 8/16, c3: 8/22): Redis 7.2.5, memtier 2.0.0, HyperDisk Balanced used
HammerDB/PostgreSQL (c4: 8, c3: 8): HammerDB 4.7, PostgreSQL 17.2, HyperDisk Balanced used
DLRM (c4: 32, c3: 44): PyTorch+IPEX 2.6.0.dev20241124+cpu, BSX INT8, HyperDisk Balanced used
NGINX SSL Offload (c4: 288, c3: 192) : async_nginx v1.0.0 / nginx 1.26.2 / GCC 13.3 / QATEngine 2.00.1 / QAT Driver 2.0.1 with 275/200/200 clients
Pricing
On-demand monthly prices as of 10/02/2025:
c4-standard-8-lssd: $349.40
c4-standard-16-lssd: $698.00
c3-standard-8-lssd: $355.15
c3-standard-22-lssd: $930.26
Notices and Disclaimers
Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software, or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.