Nvidia’s journey into the AI-powered PC era reached a fresh milestone with the first public benchmarks of its GB10 Superchip, but the results raise more questions than they answer for power users, IT pros, and PC enthusiasts evaluating hardware for next-gen AI workstations. According to Geekbench 6 scores published and analyzed by Tom’s Hardware, Nvidia’s much-anticipated system-in-package (SiP) trails key rivals—namely Apple’s latest M3 and M4 chips and Qualcomm’s aggressive Snapdragon X Elite—in both single-core and multi-core performance. For Nvidia, a titan in the GPU and AI accelerator markets, entering the ultracompetitive CPU space requires more than raw compute claims or AI messaging; it demands silicon that delivers at the dawn of the AI PC revolution.

Close-up of an Nvidia GPU chip with a digital display showing data analytics on its circuit board.
Anatomy of the GB10 Superchip: Ambition Meets Architecture

At the heart of GB10 is a hybrid configuration echoing the latest big.LITTLE approaches that dominate mobile and workstation CPU design. The SiP features:
  • 10 Arm Cortex-X925 performance cores running up to 3.90 GHz
  • 10 energy-efficient Cortex-A725 cores
  • All bound by a 256-bit memory interface supporting up to 128 GB LPDDR5X with an ample 273 GB/s bandwidth
Nvidia’s own marketing focuses not just on the Grace CPU’s computational power, but also on a Blackwell GPU boasting up to 1 PetaFLOP of FP4 AI performance, positioning the Superchip as an AI workstation centerpiece.
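The quoted 273 GB/s figure can be sanity-checked with simple arithmetic. The sketch below assumes LPDDR5X running at 8533 MT/s (a data rate not stated in the article, but the one consistent with the quoted bandwidth on a 256-bit bus):

```python
# Sanity-check the quoted 273 GB/s memory bandwidth for GB10.
# Assumption (not stated in the article): LPDDR5X at 8533 MT/s.
bus_width_bits = 256
transfer_rate_mts = 8533                     # mega-transfers per second
bytes_per_transfer = bus_width_bits / 8      # 32 bytes moved per transfer
bandwidth_gbs = transfer_rate_mts * 1e6 * bytes_per_transfer / 1e9
print(f"{bandwidth_gbs:.0f} GB/s")           # ≈ 273 GB/s
```

The math lines up: a 256-bit interface at 8533 MT/s yields almost exactly the 273 GB/s Nvidia quotes.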

The Claim: Versatility for AI, Balanced CPU/Memory

Nvidia touts the GB10 and related Grace-based designs as ideal for emerging AI workloads—compact, power-efficient, and able to offload intensive machine learning to the integrated Blackwell GPU. But benchmarking tells a more nuanced story, especially when viewed through the lens of synthetic CPU workloads.

Crunching the Numbers: How GB10 Compares in Practice

The latest Geekbench 6 results provide the first public window into how GB10 handles general CPU tasks on Windows 11 Enterprise Insider Preview—a setup likely chosen for its early driver and microcode compatibility. The comparison data reveal:
| Processor | Core Configuration | Max Freq. | Single-Core (GB6) | Multi-Core (GB6) |
|---|---|---|---|---|
| Nvidia GB10 | 10P + 10E | 3.90 GHz | 2,960 | 10,682 |
| Snapdragon X Elite X1E-100 | 12P | 4.20 GHz | 2,939 | 15,654 |
| Apple M3 | 4P + 4E | 4.05 GHz | 3,076 | 11,863 |
| Apple M4 | 4P + 6E | 4.40 GHz | 3,747 | 14,900 |
| Intel Core Ultra 9 285K | 8P + 16E/24T | 6.0 GHz | 3,130 | 22,984 |
| AMD Ryzen 9 9950X | 16P/32T | 5.70 GHz | 3,447 | 23,344 |

Source: Tom’s Hardware, May 2025 / Geekbench 6 database

The Single-Core Story

With a single-threaded score of 2,960, GB10 is roughly on par with Snapdragon X Elite and narrowly trails Apple’s M3, but falls short of newer Apple silicon (like the M4) and next-generation x86 parts from Intel and AMD. According to independent verification from Geekbench’s public rankings and cross-referenced with Anandtech’s coverage, both Apple and Qualcomm chips are leading in efficiency and IPC (instructions per cycle) thanks to advanced Arm core designs and top-tier process technology.

Multi-Core Scaling: Where GB10 Slips

While one might expect GB10’s 20 cores to shine in parallel workloads, it manages a multi-core score of only 10,682—well behind the 12-core Snapdragon X Elite’s 15,654 and roughly 28% lower than Apple’s 10-core M4. Even the 8-core Apple M3 nudges ahead in multithreaded score, underscoring a limitation in the efficiency-core contribution, scheduler optimization, or fundamental architectural balance.
This relative underperformance calls into question whether all of GB10’s efficiency cores were fully engaged (an issue noted in the Tom’s Hardware report and echoed by early engineers experimenting with Geekbench on pre-release silicon) or if early firmware, drivers, or Windows scheduler lacked proper tuning. The pattern suggests Nvidia is grappling with teething issues typical of a first-generation PC-class CPU platform.
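Dividing each multi-core score by the core count makes the scaling gap concrete. This is a deliberate simplification—it ignores the P/E-core asymmetry—but it shows how little GB10 extracts per core relative to its rivals, consistent with idle efficiency cores or scheduler misbehavior:

```python
# Per-core multi-core throughput from the Geekbench 6 scores above.
# Simplification: treats P- and E-cores as equal contributors.
scores = {
    "Nvidia GB10":        (10_682, 20),  # (multi-core score, core count)
    "Snapdragon X Elite": (15_654, 12),
    "Apple M3":           (11_863, 8),
    "Apple M4":           (14_900, 10),
}
per_core = {chip: mc / cores for chip, (mc, cores) in scores.items()}
for chip, value in per_core.items():
    print(f"{chip}: {value:.0f} points/core")
```

GB10 lands around 530 points per core, roughly a third of what the Apple and Qualcomm parts achieve—a pattern more consistent with unused cores than with uniformly slow ones.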

Why Is GB10 Lagging—By Design or Oversight?

To contextualize GB10’s results, it’s essential to recall that Nvidia is aiming for a very different sweet spot than Apple’s or Intel’s desktop chips. GB10 is tailored for compact AI workstations, environments where the CPU serves primarily in a supporting role: orchestrating data flow, managing device memory, and staging data for massive AI computation on the companion Blackwell GPU.

AI Versus General Purpose

For conventional workloads—such as compiling code, database management, or running virtual machines—pure CPU horsepower still counts. In these use cases, GB10’s modest scores may leave power users cold. But for AI-focused tasks, such as inferencing large language models or executing machine vision pipelines, the offloaded, GPU-driven approach might mitigate raw CPU limitations.

Notable Strengths

  • Unified Memory with 273 GB/s bandwidth positions GB10 competitively with Apple’s latest M4 Pro (identical memory setup), outpacing many x86 competitors still stuck on traditional DDR5 or even DDR4 bottlenecks.
  • Efficiency-favoring Arm architecture enables lower power consumption, unlocking new form factors for AI PCs and edge servers.
  • Copilot+ PC ambition: With Microsoft and Nvidia both chasing Copilot-powered PCs (devices built around local AI), GB10 may well serve as a bridge between cloud and client for AI acceleration.

Potential Risks and Weaknesses

  • Windows 11 Integration Unproven: The Geekbench scores were captured on a pre-release Windows 11 Enterprise Insider Preview, suggesting unresolved compatibility or scheduler issues. Further updates to Windows or Nvidia’s microcode could significantly influence release silicon performance.
  • Efficiency Core Underutilization: As contemporary workloads become increasingly multithreaded (especially in AI batch processing), the underperformance of GB10’s efficiency cores undercuts its theoretical competitive advantage.
  • Competing Chips are Advancing Fast: Apple and Qualcomm are iterating on leading-edge process technology (N3E/N3B for Apple, N4/N4P for Qualcomm), while Nvidia must maintain pace in a rapidly escalating arms race.

Benchmarks Are Just the Beginning: Real-World Relevance

It’s easy to fixate on Geekbench numbers, but seasoned system builders and IT decision-makers know such synthetic tests are only proxies. For AI-centric workstations, the actual deliverable is the sum of CPU, GPU, memory, and interconnect performance, all working in harmony with real-world workloads.

GB10’s Role in Nvidia’s Broader Strategy

While not headlining the multi-core charts, GB10’s architectural choices make sense for specialized, GPU-accelerated use cases. With Microsoft expected to emphasize on-device generative AI by year’s end, the need for massive, local compute grows. Nvidia’s ability to pair modest CPU performance with an extraordinary Blackwell GPU and unified high-speed memory could enable compact edge devices or “AI PCs” that compete favorably with Apple’s Mac Studio, MacBook Pro, and advanced Snapdragon X Elite-powered laptops.

Memory Bandwidth: The Great Equalizer

A consistent bottleneck in AI performance on commodity PCs is memory throughput. Nvidia’s provision of 128 GB LPDDR5X with 273 GB/s bandwidth—matching Apple’s highest-tier M4 Pro—will be decisive for multi-modal AI models, rapid LLM inferencing, and high-throughput data workflows.
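Why bandwidth matters so much for LLM inference can be illustrated with a back-of-envelope estimate: token generation is typically memory-bound, since every weight must be read once per generated token. The figures below are illustrative assumptions, not from the article—a 70B-parameter model quantized to roughly 4 bits per weight:

```python
# Back-of-envelope, bandwidth-bound token rate for local LLM decoding.
# Assumptions (illustrative): 70B parameters, ~4-bit quantization,
# all weights streamed from memory once per generated token.
params = 70e9
bytes_per_weight = 0.5          # 4 bits = half a byte
bandwidth_gbs = 273             # GB10 / M4 Pro-class memory bandwidth
model_bytes = params * bytes_per_weight          # ~35 GB of weights
tokens_per_s = bandwidth_gbs * 1e9 / model_bytes
print(f"~{tokens_per_s:.1f} tokens/s")           # ≈ 7.8 tokens/s
```

Under these assumptions, 273 GB/s supports roughly 8 tokens per second on a 70B model—usable for interactive work—whereas a ~100 GB/s desktop DDR5 setup would manage only about 3.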

Industry Support and Ecosystem

Transitioning from a GPU-centric vendor to a full PC platform supplier is a formidable challenge. Nvidia will need to nurture deep partnerships with OEMs, software vendors, and the open-source ecosystem to ensure optimal support for GB10 and its successors. Microsoft's anticipated expansion of Copilot features in Windows and enterprise AI APIs will be a critical enabler—or potential bottleneck if Nvidia can't match Apple’s or Qualcomm’s level of hardware/software integration.

What Comes Next: Waiting for N1/N1X

Rumors swirling ahead of Computex suggest Nvidia is not content to stand pat with GB10. The anticipated N1 and N1X processors—aimed squarely at desktop and laptop PCs—are expected to borrow liberally from the GB10 formula. Should early performance patterns hold, Nvidia’s new entries may offer only incremental improvements, unless scheduler, driver, or core utilization inefficiencies are addressed between now and launch.

Will Nvidia Catch Up?

  • Software Optimization: With a history of rapid driver iteration in GPUs, Nvidia may repeat this playbook—rolling out performance tweaks, microcode, and even Windows-level scheduler collaboration to unlock latent performance.
  • AI-Centric Focus: If the emerging “AI PC” landscape devalues raw CPU performance in favor of tight CPU–GPU integration, Nvidia may find itself ahead of the curve.
  • OEM Adoption: Success will hinge not just on benchmarks, but on how quickly vendors like Dell, HP, Lenovo, and system integrators can build compelling devices around the GB10 and its successors.

Table: Where GB10 Stands (Synthesized from multiple public benchmarks)

| Chip | Process Node | Core Config | Max Freq. | SC Perf. | MC Perf. | Mem BW (GB/s) | Target Device |
|---|---|---|---|---|---|---|---|
| Nvidia GB10 | N3E | 10P+10E Arm | 3.90 GHz | 2,960 | 10,682 | 273 | AI Workstation |
| Apple M4 Pro | N3E | 10P+4E Arm | 4.40 GHz | 3,942 | 22,405 | 273 | High-End MacBook/Pro |
| Snapdragon X Elite | N4 | 12P Arm | 4.20 GHz | 2,939 | 15,654 | 135–170 | Ultraportable Laptop |
| Intel Core Ultra 9 285K | N3B | 8P+16E/24T x86 | 6.0 GHz | 3,130 | 22,984 | ~100 | Desktop |
| AMD Ryzen 9 9950X | N4P | 16P/32T x86 | 5.70 GHz | 3,447 | 23,344 | ~100 | Desktop |

SC = Single Core, MC = Multi Core, Mem BW = memory bandwidth
Source: Tom’s Hardware, Geekbench, manufacturer specs

Critical Outlook: Strengths, Weaknesses, and Unanswered Questions

Notable Strengths

  • Unified Memory Architecture: Enables large AI model inferencing, big data analytics, and powerful multi-modal workflows in a compact package.
  • Energy Efficiency: Arm-based, hybrid design could lead to lower thermals and longer battery or operational lifetimes in edge workstation settings.
  • AI Integration: Tightly coupled CPU/GPU approach aligns with the trend toward “AI PCs,” allowing more computational workloads to be done locally instead of relying on external data centers or the cloud.

Potential Risks and Gaps

  • Raw CPU Performance: As current benchmarks show, CPU-centric tasks still heavily favor Apple, Qualcomm, Intel, and AMD over Nvidia’s first-gen offering, especially in multi-threaded scenarios.
  • Ecosystem Readiness: Pre-release hardware, early Windows builds, and untested drivers mean real-world stability and compatibility are far from guaranteed in initial GB10 rollouts.
  • Competitive Pressure: Rapid innovation by rivals—especially Apple with its aggressive silicon roadmap—could leave Nvidia’s CPUs behind unless significant architectural or software progress is made.

Unverifiable Claims and Caution Points

  • Some of the multi-core limitations may stem from early Windows scheduler bugs, microcode immaturity, or even disabled efficiency cores in the test platform. Actual shipping performance could differ significantly.
  • The stated 1 PetaFLOP AI compute from the Blackwell GPU makes GB10 appealing for AI use-cases, but independent, real-world inferencing results are still pending.

Conclusion: GB10 as a Harbinger, Not a Headliner—Yet

For Nvidia, the GB10 Superchip is both a bold entrance into the full-stack PC platform space and a reminder that launching a competitive CPU is among the most challenging feats in silicon engineering. Current benchmarks expose real limitations—especially in multi-core scaling—but also hint at the direction AI workstations, AI PCs, and edge compute nodes may take in the coming years.
While Apple’s M4 and Qualcomm’s Snapdragon X Elite continue to set the bar for Arm-based performance in general-purpose computing, Nvidia’s specialized focus on AI could force a reevaluation of what matters most in next-generation systems. For power users, IT professionals, and AI researchers, the question isn’t “Can Nvidia catch up in benchmarks?”, but rather, “Can it redefine the AI PC experience at scale?”
As pre-release caveats subside, all eyes will be on Computex for the arrival of the N1 and N1X CPUs, where Nvidia will be tested not solely on hype but on the unforgiving metrics of real-world performance, compatibility, and platform reliability. In the AI-fueled arms race ahead, Nvidia’s GB10 may ultimately be remembered as the opening salvo, not the finish line. For now, its role as a trailblazer in the AI PC era is both promising and instructive, revealing the challenges every legacy GPU company faces when crossing into the demanding world of CPUs. The next chapter in AI computing may not be written in teraflops or Geekbench scores, but in how these platforms empower a new breed of software and user experience at the edge and beyond.

Source: Tom's Hardware Nvidia's GB10 Superchip trails Apple's M3 and Qualcomm's Snapdragon X Elite in latest benchmarks