Compared with ordinary switches, core switches need to offer large buffers, high forwarding capacity, virtualization, FCoE, and Layer 2 TRILL technology, among other features. So what exactly are the advantages of core switches over ordinary switches?
The data center switch abandons the traditional switch's egress-port buffering in favor of a distributed buffer architecture. Its buffer is much larger than an ordinary switch's: capacity can exceed 1 GB, while a typical switch offers only 2–4 MB. Because each port can absorb 200 ms of burst traffic at 10G full line rate, the large buffer can still guarantee zero packet loss during traffic bursts, which suits data centers with large numbers of servers and bursty traffic.
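The buffer figures above can be checked with simple arithmetic: holding 200 ms of traffic arriving at 10 Gb/s takes far more memory than an ordinary switch carries. A minimal sketch, using only the rate and burst duration stated above:

```python
# Back-of-the-envelope buffer sizing for a 10G port absorbing a 200 ms burst.
# The figures (10 Gb/s line rate, 200 ms burst) come from the text above.

LINE_RATE_BPS = 10e9   # 10 Gb/s port speed
BURST_SECONDS = 0.2    # 200 ms burst the buffer must absorb

def required_buffer_bytes(rate_bps: float, seconds: float) -> float:
    """Bytes needed to hold `seconds` of traffic arriving at `rate_bps`."""
    return rate_bps * seconds / 8  # divide by 8: bits -> bytes

buf = required_buffer_bytes(LINE_RATE_BPS, BURST_SECONDS)
print(f"Per-port buffer for a 200 ms burst at 10G: {buf / 1e6:.0f} MB")
# A 2-4 MB buffer on an ordinary switch holds only a few milliseconds
# of the same burst before it must start dropping packets.
```

At 250 MB per port, it is clear why per-port egress buffers cannot scale and a shared, distributed buffer architecture is used instead.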
Data center network traffic is characterized by high-density application scheduling and sudden traffic bursts. Ordinary switches, however, are designed simply for interconnection: they cannot identify and control services, and under heavy load they cannot respond quickly with zero packet loss, so business continuity cannot be guaranteed.
Ordinary switches therefore cannot meet the needs of data centers. Data center switches need high-capacity forwarding and must support high-density 10G line cards, i.e. 48-port 10G cards. To let these 48-port 10G cards forward at full line rate, data center switches must use a Clos distributed switching architecture. In addition, as 40G and 100G become widespread, 8-port 40G cards and 4-port 100G cards are gradually being commercialized, and data center switches equipped with 40G and 100G cards have already entered the market to meet the high-density application needs of data centers.
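The capacity demand behind this can be illustrated with simple arithmetic. The sketch below assumes a hypothetical 8-slot chassis (the slot count is not from the text) and shows how much a non-blocking switching fabric must carry when every slot holds a 48-port 10G card:

```python
# Illustrative capacity math for a chassis of 48-port 10G line cards.
# The 8-slot chassis size is an assumed example, not a specific product.

PORTS_PER_CARD = 48
PORT_SPEED_GBPS = 10
SLOTS = 8  # hypothetical chassis size

per_card = PORTS_PER_CARD * PORT_SPEED_GBPS  # Gb/s each card can source
fabric_needed = per_card * SLOTS             # fabric capacity for non-blocking forwarding

print(f"Each line card needs {per_card} Gb/s to the fabric")
print(f"A non-blocking fabric for {SLOTS} such slots must switch {fabric_needed} Gb/s")
```

Nearly 4 Tb/s of fabric capacity for even a modest chassis is why a single centralized crossbar gives way to a multi-stage Clos fabric, which scales by adding parallel switching elements.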
Network equipment in the data center needs to be highly manageable, secure, and reliable, so data center switches must also support virtualization. Virtualization turns physical resources into logically manageable ones, breaking down the barriers between physical devices.
Through virtualization technology, multiple network devices can be managed in a unified manner, and services on one device can be completely isolated, thereby reducing data center management costs by 40% and increasing IT utilization by about 25%.
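The "manage many devices as one" idea can be shown with a toy model. This is purely illustrative Python, not any vendor's API; class and command names are invented for the sketch:

```python
# Toy model of switch virtualization as described above: several physical
# switches are stacked into one logical switch, so a single management
# operation fans out to every member. Names are illustrative only.

class PhysicalSwitch:
    def __init__(self, name: str):
        self.name = name

class LogicalSwitch:
    """Several physical switches managed as a single logical unit."""
    def __init__(self, members: list[PhysicalSwitch]):
        self.members = members

    def configure(self, setting: str) -> list[str]:
        # One command applied once is pushed to every physical member.
        return [f"{m.name}: applied {setting}" for m in self.members]

stack = LogicalSwitch([PhysicalSwitch("core-1"), PhysicalSwitch("core-2")])
print(stack.configure("vlan 100"))
```

The inverse direction also exists in practice (one physical switch partitioned into isolated virtual switches), which is what allows services on one device to be kept completely separate.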
For building the Layer 2 network in the data center, the original standard was STP (Spanning Tree Protocol), but it has inherent defects. STP works by blocking ports, so redundant links forward no data and bandwidth is wasted; and STP computes only a single spanning tree, so packets must pass through the root bridge to reach their destination, which hurts the forwarding efficiency of the entire network.
STP is therefore no longer suitable for the expansion of super-large data centers, and TRILL was created to remedy these defects. The TRILL protocol is regarded as a technology born for data center applications: it combines the simple configuration and flexibility of Layer 2 with the convergence and scalability of Layer 3, allowing loop-free forwarding across the entire network with almost no Layer 2 configuration. TRILL support is a basic feature of Layer 2 data center switches and is not available in ordinary switches.
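The bandwidth waste argument can be sketched numerically: with n equal-cost links between two switches, STP blocks all but one, while TRILL load-shares across all of them. The figures below are illustrative arithmetic, not measurements:

```python
# Illustrative comparison of usable bandwidth between two switches joined
# by several equal-cost links: STP blocks redundant links, while TRILL
# forwards over all of them in parallel (ECMP-style load sharing).

def usable_bandwidth_gbps(links: int, link_speed_gbps: float, protocol: str) -> float:
    if protocol == "stp":
        return link_speed_gbps          # one forwarding link; the rest are blocked
    if protocol == "trill":
        return links * link_speed_gbps  # all links forward simultaneously
    raise ValueError(f"unknown protocol: {protocol}")

print(usable_bandwidth_gbps(4, 10, "stp"))    # 10.0 Gb/s usable
print(usable_bandwidth_gbps(4, 10, "trill"))  # 40.0 Gb/s usable
```

With four 10G links, STP leaves 75% of the purchased capacity idle; TRILL's multipath forwarding is what recovers it.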
A traditional data center often runs a data network and a storage network side by side, but the new generation of data centers shows a clear trend toward network convergence, and the emergence of FCoE technology makes that convergence possible. FCoE encapsulates the storage network's Fibre Channel frames inside Ethernet frames for forwarding. Realizing this convergence depends on the data center switch; ordinary switches generally do not have this capability.
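The core of FCoE encapsulation can be sketched in a few lines. EtherType 0x8906 is the real IEEE-assigned value for FCoE; the MAC addresses and FC payload below are placeholder bytes, and the actual FCoE header fields (version, SOF/EOF delimiters, padding) are omitted for brevity:

```python
import struct

# Minimal sketch of FCoE's central idea: a Fibre Channel frame rides as the
# payload of an ordinary Ethernet frame. Only the Ethernet header is built;
# the full FCoE header (version, SOF/EOF, padding) is left out.

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Prepend an Ethernet header marking the payload as FCoE."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

frame = encapsulate(b"\x01" * 6, b"\x02" * 6, b"FC-PAYLOAD")
print(frame[12:14].hex())  # EtherType field of the resulting frame -> "8906"
```

Because storage traffic tolerates no loss, carrying it this way is what drives the lossless-Ethernet features (large buffers, zero packet loss under bursts) described earlier, and it is the data center switch that must implement them.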