Mid-Level

Network Architect

Cerebras Systems

Sunnyvale, CA
On-site
Posted April 15, 2026

Job Description

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of capacity, transforming key workloads with ultra-high-speed inference.

Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

About The Role

As a Network Architect on the Cluster Architecture Team, you will work closely with vendors, internal networking teams, and industry peers to develop best-in-class front-end datacenter and interconnect architecture for current and future generations of Cerebras AI clusters. You will be responsible for developing proofs of concept for new network designs and features, enabling a resilient and reliable network for AI workloads. The role requires cross-functional collaboration and interaction with diverse hardware components (e.g., network devices and the Wafer-Scale Engine) as well as software at several layers of the stack, from host-side networking to cluster-level coordination. It also requires an understanding of network monitoring systems and network debugging methodologies.

Responsibilities

  • Design and architect front-end network fabrics for AI/ML and HPC systems.
  • Identify and resolve performance and efficiency bottlenecks, ensuring high resource utilization, low latency, and high-throughput communication.
  • Lead cross-functional technical projects spanning multiple teams and integrating diverse software and hardware components to deliver advanced networking technologies.
  • Foster clear and effective communication across teams and stakeholders.
  • Collaborate with vendors and industry partners to shape network hardware and feature roadmaps.
  • Represent Cerebras in industry forums and technical communities.
  • Serve as the central point of contact for network reliability issues.

Skills & Qualifications

  • Ph.D. in Computer Science or Electrical Engineering plus 10 years of industry experience, or a Master's in CS or EE plus 15 years of industry experience.
  • 8+ years of experience in large-scale network design in datacenter and cloud environments.
  • Extensive experience debugging networking issues in large distributed-systems environments across multiple networking platforms and protocols.
  • Experience managing and leading multi-phase, multi-team projects.
  • Experience with networking platforms such as Juniper, Arista, Cisco, and open-box architectures (SONiC, FBOSS).
  • Knowledge of networking protocols and technologies such as VXLAN, EVPN, RoCE, BGP, DCQCN, PFC, and streaming telemetry.
  • Familiarity with automation languages such as Python or Go.
  • Familiarity with network visibility and management systems.
  • Prior experience in hyperscalers or cloud service providers is strongly preferred.

Why Join Cerebras

People who are serious
