The journey of Meta (formerly Facebook) in developing open and disaggregated network switches has revolutionized the way data centers are designed and operated. Beginning with the introduction of the Wedge-16X, Meta has consistently pushed the boundaries of networking hardware, contributing multiple switch designs to the Open Compute Project (OCP) that cater to varying performance needs and network topologies. This blog details the evolution of Meta’s switch designs, their technical specifications, and the chipsets that power them, presenting a comprehensive chronology from the Wedge-16X to the latest Minipack2.
1. Wedge-16X: The First Step in Open Networking (2014)
The Wedge-16X was Meta’s first foray into open networking, debuting in 2014 as a 16-port 40G QSFP+ switch. Designed as a Top-of-Rack (TOR) switch, Wedge-16X was optimized for web-scale data centers, with a focus on flexibility and openness.
Figure 1: Facebook / Meta Wedge-16X Switch.
Technical Highlights:
- Ports: 16 QSFP+ ports supporting 40G or 4x10G via breakout cables (see the sketch after this list).
- Chipset: Broadcom Trident2 silicon, providing 1.28 Tbps full duplex switching capacity.
- CPU Module: Panther+ Microserver module based on Intel Atom C2550 4-core x86 processor.
- Software Compatibility: Fully compatible with FBOSS (Facebook Open Switching System) and OpenBMC for open-source hardware management.
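To make the breakout option concrete, here is a minimal Python sketch of how the 16 physical QSFP+ cages map to logical ports. It is purely illustrative: the port-naming scheme and the helper function are my own inventions for this post, not FBOSS APIs.

```python
# Purely illustrative sketch: how the Wedge-16X front panel maps to logical
# ports. Port names and this helper are hypothetical, not FBOSS APIs.

NUM_QSFP_PORTS = 16  # physical QSFP+ cages on the Wedge-16X
LANES_PER_PORT = 4   # each 40G QSFP+ port carries four 10G lanes

def logical_ports(breakout):
    """breakout maps port number -> True if that port runs as 4 x 10G."""
    ports = []
    for p in range(1, NUM_QSFP_PORTS + 1):
        if breakout.get(p, False):
            # Broken out: four independent 10G logical ports.
            ports += [f"eth{p}/{lane}@10G" for lane in range(1, LANES_PER_PORT + 1)]
        else:
            # Native 40G port.
            ports.append(f"eth{p}@40G")
    return ports

# Example: ports 1 and 2 broken out to 10G servers, the rest left at 40G.
print(logical_ports({1: True, 2: True}))
```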
The Wedge-16X marked the beginning of Meta’s journey into creating open hardware for networking, laying the groundwork for subsequent switch designs. I still remember how the whole industry was excited about it — a great bare-metal switch that even came in a lovely blue color! There were so many comments and speculations about the Microserver as a control plane, with many thinking this was a new era. The reality, though, was that Facebook knew the Microserver very well, and the integration for them was straightforward. While they moved to more traditional x86 CPUs from Intel in future designs, the Wedge-16X with a Microserver was a real “wow” moment! 😉
2. 6-Pack: Modular Switch for Fabric Networks (2015)
The 6-Pack switch, introduced in 2015, represented a significant shift towards modular network architectures. It was designed to operate as a fabric switch, connecting multiple TOR switches in a data center.
Figure 2: Facebook / Meta Modular 6-Pack Switch – 128 x 40G.
Technical Highlights:
- Architecture: Modular chassis switch consisting of 12 Wedge 40G switches in a single enclosure.
- Ports: Each chassis supports 128 x 40G ports.
- Chipset: Broadcom Trident2+ ASICs.
- CPU Module: Uses the same Panther+ Microserver module as the Wedge-16X.
- Design Philosophy: No central supervisor; each line and fabric card operates like an independent Wedge switch.
- Software Compatibility: Fully integrated with FBOSS, leveraging BGP for internal communication between the cards (see the sketch after this list).
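Because each of the twelve switch elements runs its own control plane, the chassis behaves like a tiny BGP fabric. The sketch below is a hedged illustration of that idea; the element names and private ASN scheme are hypothetical, not Meta’s actual configuration.

```python
# Hedged illustration of the "no supervisor" design: every card is its own
# switch, and the chassis is stitched together with eBGP. The element names
# and private ASN scheme here are hypothetical, not Meta's configuration.

LINE_CARDS = [f"lc{i}" for i in range(1, 9)]       # 8 line cards
FABRIC_ELEMENTS = [f"fc{i}" for i in range(1, 5)]  # 4 fabric elements (12 total)

def fabric_peerings(base_asn=65001):
    """One eBGP session between every line card and every fabric element."""
    asn = {name: base_asn + i
           for i, name in enumerate(LINE_CARDS + FABRIC_ELEMENTS)}
    return [(lc, asn[lc], fc, asn[fc])
            for lc in LINE_CARDS for fc in FABRIC_ELEMENTS]

for lc, lc_asn, fc, fc_asn in fabric_peerings():
    print(f"{lc} (AS{lc_asn}) <-- eBGP --> {fc} (AS{fc_asn})")
```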
The 6-Pack facilitated scalable, high-capacity networking within data center fabrics, acting as a building block for Meta’s modular network architecture. It was a great example of designing once and reusing it multiple times — and it was still blue! 😉 I recall showing it off at some local events in Europe and even managing to fit the entire sample into my old Nissan X-Trail, secured with some blankets — a fully certified way to transport it from Amsterdam to Vienna in one day!
3. Wedge 100: Advancing to 100G Connectivity (2015)
In late 2015, Meta introduced the Wedge 100, a second-generation TOR switch designed to meet the increasing demands of its data centers by supporting 100G connectivity.
Figure 3: Facebook / Meta Wedge 100 Switch with 21" OCP Rack V2 Adapter.
Technical Highlights:
- Ports: 32 x 100G QSFP28 ports.
- Chipset: Broadcom Tomahawk ASIC, delivering up to 3.2 Tbps switching capacity (see the quick check after this list).
- CPU Module: COM-Express Type 6 module for flexible CPU configurations, primarily using Atom CPUs.
- Serviceability: Tool-less access, hot-swappable fan trays, and efficient airflow management.
- Software Compatibility: FBOSS and OpenBMC with a robust software ecosystem around it.
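The headline numbers above are easy to sanity-check: 32 front-panel ports at 100G each line up exactly with the Tomahawk’s quoted 3.2 Tbps. A tiny worked example, nothing more:

```python
# Quick arithmetic check of the figures above (a worked example, nothing more):
# 32 front-panel ports at 100G match the Tomahawk's 3.2 Tbps exactly.
ports, speed_gbps = 32, 100
capacity_tbps = ports * speed_gbps / 1000
print(f"{ports} x {speed_gbps}G = {capacity_tbps} Tbps")  # 32 x 100G = 3.2 Tbps
```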
The Wedge 100 enabled Meta to build 100G data centers and maintain compatibility with existing 40G devices, ensuring a smooth transition to higher-speed networking. And about its color — from this point onward, all designs were just silver, just bare metal! We joked that the absence of blue was to cut costs to the bare minimum. 😉
4. Wedge 100S: Enhanced 100G Switch for Scalability (2017)
The Wedge 100S, introduced in 2017, was an enhanced version of the Wedge 100, designed to provide even more flexibility and scalability in Meta’s data centers.
Technical Highlights:
- Ports: 32 x 100G QSFP28 ports, similar to Wedge 100.
- Chipset: Broadcom Tomahawk+ ASIC with additional optimizations for better performance and power efficiency.
- Modular Design: Updated COM-Express module, allowing for easy upgrades and compatibility with future enhancements, mostly using CPUs from the Xeon family.
- Serviceability: Improved hot-swappable fan trays and tool-less access for easier maintenance.
- Software Compatibility: Continued support for FBOSS and OpenBMC.
The Wedge 100S built upon the Wedge 100’s success, offering improved scalability and performance to support Meta’s growing data center infrastructure needs. The 100S came with a small chipset upgrade and a more powerful control plane, but kept the same tool-less housing, PSUs, etc.
5. Backpack: The Next Generation Modular Switch (2017)
Backpack, introduced in 2017, was Meta’s second-generation modular switch platform designed to provide higher speeds and greater scalability compared to its predecessor, the 6-Pack.
Figure 4: Facebook / Meta Modular 100G Backpack – 128 x 100G.
Technical Highlights:
- Architecture: Fully disaggregated chassis with a two-stage Clos architecture (see the port-budget sketch after this list).
- Ports: 128 x 100G ports, equivalent to 12 Wedge 100S switches connected in a modular chassis.
- Chipset: Broadcom Tomahawk+ ASICs, supporting 3.2 Tbps per ASIC.
- Cooling Design: Advanced thermal design to support low-cost 55°C optics.
- Software Compatibility: FBOSS and OpenBMC with support for external BGP monitoring tools.
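As a worked example of the two-stage Clos, here is my own back-of-the-envelope port budget (not taken from the OCP spec): eight line-card ASICs give half of their 32 x 100G ports to the front panel and half to the fabric, while four fabric ASICs absorb exactly those uplinks.

```python
# My own back-of-the-envelope port budget for Backpack's two-stage Clos
# (not taken from the OCP spec): 12 Tomahawk ASICs, each 32 x 100G.
ASIC_PORTS_100G = 32
LINE_ASICS, FABRIC_ASICS = 8, 4

front_panel = LINE_ASICS * ASIC_PORTS_100G // 2     # half of each line ASIC faces out
fabric_uplinks = LINE_ASICS * ASIC_PORTS_100G // 2  # the other half feeds the fabric
fabric_capacity = FABRIC_ASICS * ASIC_PORTS_100G    # fabric ASICs are all internal

assert front_panel == 128                 # matches the 128 x 100G figure above
assert fabric_uplinks == fabric_capacity  # non-blocking: fabric absorbs every uplink
print(front_panel, fabric_uplinks, fabric_capacity)  # 128 128 128
```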
Backpack’s modular design allowed for high-density 100G connectivity with enhanced cooling and power efficiency, supporting the scaling needs of Meta’s data centers.
6. Minipack: Compact, Energy-Efficient Modular Switch (2019)
In 2019, Meta unveiled Minipack, a more compact, energy-efficient switch that could serve multiple roles within its data center topologies.
Figure 5: Facebook / Meta Modular Minipack – 128 x 100G / 32 x 400G.
Technical Highlights:
- Architecture: Modular switch with interface modules for up to 128 x 100G ports. Planned with two types of port interface modules, PIM-16Q with 16 x 100G and PIM-4DD with 4 x 400G, allowing mixing and matching (see the sketch after this list).
- Chipset: Broadcom Tomahawk 3 ASIC, providing 12.8 Tbps switching capacity.
- Design Features: Orthogonal-direct PIM (Port Interface Module) design for modularity and easy upgrades.
- Cooling Efficiency: Optimized airflow and thermal management to use low-power optics.
- Software Compatibility: FBOSS, with external compatibility for Cumulus Linux and SONiC from the OCP Networking Project.
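The mix-and-match claim is easy to verify: each PIM-16Q (16 x 100G) and each PIM-4DD (4 x 400G) carries the same 1.6 Tbps, so any combination of eight modules lands exactly on the Tomahawk 3’s 12.8 Tbps. A small illustrative sketch:

```python
# Illustrative sketch of Minipack's mix-and-match PIM slots. Each PIM-16Q
# (16 x 100G) and each PIM-4DD (4 x 400G) carries 1.6 Tbps, so any mix of
# eight modules lands exactly on the Tomahawk 3's 12.8 Tbps.
PIM_TBPS = {"PIM-16Q": 16 * 100 / 1000, "PIM-4DD": 4 * 400 / 1000}
SLOTS = 8

def total_bandwidth_tbps(pims):
    assert len(pims) == SLOTS, "Minipack has eight PIM slots"
    return sum(PIM_TBPS[p] for p in pims)

# Half 100G modules, half 400G modules still saturates the ASIC.
print(total_bandwidth_tbps(["PIM-16Q"] * 4 + ["PIM-4DD"] * 4))  # 12.8
```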
Minipack was designed to reduce power consumption by up to 50% compared to Backpack while maintaining flexibility and modularity. It also marked the first time other Network Operating Systems (NOS), like Cumulus Linux and SONiC, were supported. What was also critical was that Minipack featured a single-chip design — unlike the older 6-Pack or Backpack.
7. Minipack2: Next-Generation Modular Switch (2021)
Building on the success of Minipack, Meta introduced Minipack2 in 2021, a next-generation modular switch designed to handle even higher performance and greater flexibility.
Figure 6: Facebook / Meta Modular Minipack2 – 128 x 200G / 64 x 400G.
Technical Highlights:
- Ports: Up to 128 x 200G ports or 64 x 400G ports.
- Chipset: Broadcom Tomahawk 4 (25.6 Tbps) ASIC, supporting higher port densities and speeds.
- Modularity: Supports multiple interface modules across eight PIM slots (the 16-port PIM-16Q for 100G/200G and an 8-port 400G PIM, matching the 128 x 200G and 64 x 400G maximums above).
- Optics Compatibility: 200G FR4 QSFP56 optics and backward compatibility with 100G switches.
- Software Compatibility: FBOSS, with support for SONiC and other OCP-compliant operating systems.
Minipack2 is optimized for high-performance, flexible data center networks, supporting a wide range of speeds and port configurations to meet future networking demands. Again, they managed to double performance with a single-chip design.
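To put that remark in perspective, here is the single-chip capacity progression across the designs in this post, using only the figures quoted above:

```python
# Single-chip capacity progression across the designs in this post,
# using only the figures quoted above.
asics = [
    ("Wedge-16X", "Trident2",   1.28),
    ("Wedge 100", "Tomahawk",   3.2),
    ("Minipack",  "Tomahawk 3", 12.8),
    ("Minipack2", "Tomahawk 4", 25.6),
]
for (platform, chip, tbps), (_, _, prev) in zip(asics[1:], asics):
    print(f"{platform} ({chip}): {tbps} Tbps, {tbps / prev:.1f}x the previous chip")
```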
8. Wedge 400 and Wedge 400C: Pioneering High-Speed TOR Switches (2021)
The Wedge 400 and Wedge 400C represent Meta’s latest generation of TOR switches, designed for high-performance environments and data center modernization.
Figure 7: Facebook / Meta Wedge 400 – 48 ports: 16 x 400G QSFP-DD + 32 x 200G QSFP56.
Wedge 400: Powered by Broadcom Tomahawk 3
- Ports: 16 x QSFP-DD uplinks (400/200/100G) and 32 x QSFP56 downlinks (200/100/50/25/10G).
- Chipset: Broadcom Tomahawk 3 ASIC, offering 12.8 Tbps switching capacity.
- Performance: 4x the switching capacity of Wedge 100 and 8x the burst absorption performance.
- Serviceability: Field-replaceable CPU subsystem for easy upgrades and maintenance.
Wedge 400C: Featuring Cisco’s Silicon One
- Ports: 16 x QSFP-DD uplinks (400/200/100G) and 32 x QSFP56 downlinks (200/100/50/25/10G).
- Chipset: Cisco Silicon One Q200L ASIC.
- Design Flexibility: Optimized for high-density, high-speed environments with support for a variety of network operating systems.
- Energy Efficiency: Reduces power consumption by 20% compared to the previous models.
The interesting port combinations allow for direct 400G connections to spines and flexible 200G connections to NICs, accommodating various configurations from 10G to 400G. The Wedge series continues to serve the TOR / Leaf roles in Meta’s data centers, providing critical server connectivity at various speeds.
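As a summarizing sketch (the naming here is mine, not FBOSS), that front-panel flexibility boils down to which speeds each cage type can serve:

```python
# Summarizing sketch of the Wedge 400 / 400C front panel (naming is mine,
# not FBOSS): which speeds each cage type can serve, per the lists above.
FRONT_PANEL = {
    "QSFP-DD": {"count": 16, "speeds_g": (400, 200, 100)},         # uplinks
    "QSFP56":  {"count": 32, "speeds_g": (200, 100, 50, 25, 10)},  # downlinks
}

def supports(cage, speed_g):
    return speed_g in FRONT_PANEL[cage]["speeds_g"]

print(supports("QSFP-DD", 400))  # True: 400G straight to the spine
print(supports("QSFP56", 25))    # True: legacy 25G server NICs
print(supports("QSFP56", 400))   # False: downlinks top out at 200G
```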
Conclusion: Meta’s Commitment to Open Networking
From the Wedge-16X to the Minipack2, Meta’s innovations in networking hardware have consistently pushed the envelope of what is possible in open and disaggregated data center environments. Each iteration of Meta’s switches has contributed to a more scalable, efficient, and high-performance network infrastructure, aligning with the company’s vision of supporting its expanding global services and preparing for future applications like the metaverse.
The modular designs like 6-Pack, Backpack, and Minipack have been more tailored for Spine roles, supporting the backbone of the data center network. In contrast, the Wedge series has remained focused on the TOR / Leaf roles, providing dense server connectivity. This division of roles highlights Meta’s strategic approach to designing its networking stack.
And, of course, this journey does not end here. Later this year, we are excited to join the OCP Global Summit, and I’m sure we will see a lot of amazing designs. 800G networking will most likely be a top topic — stay tuned for more innovation!
Łukasz Łukowski is the Chief Sales and Marketing Officer for STORDIS. Working with channel partners, product management, business development and his marketing team, Łukasz is spearheading the effort to drive year-over-year revenue growth by more effectively leveraging STORDIS’ channel and alliance partners, particularly in the areas of open networking for data center, enterprise and telecom.
Prior to joining STORDIS, Łukasz was the Vice President of EMEA Channel Sales and Alliances for Edgecore Networks. Łukasz has over 15 years of experience in the networking industry and has served two additional roles: Active A-Team Ambassador of the Open Networking Foundation (ONF) and Regional Lead Manager for the Open Compute Project (OCP).