Thursday, 13 January 2011

Datacentre Network Design

Datacentre networks connect the equipment inside to the outside world (a remote user or another datacentre).

Key Datacentre Requirements

  • Secure
  • High performance
  • Cost-efficient (especially regarding energy consumption).

There's no single off-the-shelf design for a datacentre network. Component selection will vary according to budget, business requirements, site location and capacity, and available power and cooling.

Datacentre networks are designed as a series of layers, with the stored data at the bottom. The outermost layer is the connection to the outside world - the Internet - and, if it's an enterprise's own datacentre, to the rest of the company. If the datacentre is owned by a service provider and serves a number of external clients, the Internet connection and any other connections linking clients directly also sit on this outer ring.

The second layer, commonly referred to as the edge or access layer, consists of IP-based Ethernet devices, such as firewalls, packet inspection appliances and switches, that route traffic between the core of the datacentre and the outside world. Here too sit many web servers, in a so-called demilitarised zone, or DMZ, hemmed in by firewalls: external visitors are allowed this far into the datacentre network but no further.
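As a rough sketch of how the DMZ is hemmed in, the rules on either side of the web servers amount to a default-deny policy with a few narrow exceptions. The zone names, port numbers and Rule structure below are purely illustrative, not taken from any particular firewall product:

```python
# Illustrative only: a DMZ expressed as two sets of firewall rules.
from dataclasses import dataclass

@dataclass
class Rule:
    src_zone: str
    dst_zone: str
    dst_port: int   # 0 is used here as a wildcard meaning "any port"
    action: str     # "allow" or "deny"

# Outer firewall: external visitors may reach the DMZ web servers, nothing else.
outer_firewall = [
    Rule("internet", "dmz", 443, "allow"),   # HTTPS to the web servers
    Rule("internet", "dmz", 80,  "allow"),   # HTTP to the web servers
    Rule("internet", "core", 0,  "deny"),    # no direct access to the core
]

# Inner firewall: only specific traffic from the DMZ may reach the core.
inner_firewall = [
    Rule("dmz", "core", 5432, "allow"),      # e.g. a database port (hypothetical)
    Rule("dmz", "core", 0,    "deny"),       # everything else is blocked
]

def permitted(rules, src, dst, port):
    """Return True if the first matching rule allows the traffic (default deny)."""
    for r in rules:
        if r.src_zone == src and r.dst_zone == dst and r.dst_port in (port, 0):
            return r.action == "allow"
    return False
```

The point of the two-firewall arrangement is that compromising a web server in the DMZ still leaves an attacker outside the core.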

Below this is the core: large, high-performance switches built from blades plugged into a chassis, each blade providing dozens of ports. The chassis is likely to be managed by a dedicated management blade, while further blades can provide features such as security and traffic shaping. All data passes through these devices.

Closer to the servers sits a further layer of switches, perhaps one per rack or per row of racks depending on density, tasked with distributing data to and between servers in order to minimise the load on the core.

Behind the servers, conceptually, is the main storage. This final layer consists of a series of high-performance storage arrays connected via a Fibre Channel network that's entirely separate from the main network. This means that only the servers can connect directly to the storage, although there's likely also to be a link from the storage to the IP network for management purposes.
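To pull the layers together, here is a minimal sketch of the structure described above as an ordered list, so the path from the outside world down to the storage is explicit. The Layer class and the one-line role summaries are my own shorthand, not terminology from any vendor:

```python
# A minimal model of the layered design: outermost layer first.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    role: str

LAYERS = [
    Layer("perimeter",    "Internet and WAN links to users, the enterprise or clients"),
    Layer("edge/access",  "firewalls, packet inspection, switches and the DMZ web servers"),
    Layer("core",         "chassis switches with port, management and security blades"),
    Layer("distribution", "per-rack or per-row switches feeding the servers"),
    Layer("storage",      "Fibre Channel arrays reachable directly only from the servers"),
]

def inbound_path():
    """The order in which an inbound request crosses the layers."""
    return " -> ".join(layer.name for layer in LAYERS)

print(inbound_path())
# perimeter -> edge/access -> core -> distribution -> storage
```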

The Fibre Channel network needs separate switches and management systems to configure it, adding to IT staff's workload, so this situation is slowly changing. Over the next ten years analysts expect that storage systems will be connected using the IP-based Ethernet network, probably running at either 40Gbps or 100Gbps.

Request flow through a datacentre

A typical flow through these layers proceeds as follows. The user clicks a link in their browser; this generates a request for data that arrives at our datacentre via the Internet connection. The incoming request is scanned for malware, and is re-assembled and decrypted if WAN optimisation and encryption are in use. It's then sent on to a switch in the access layer. This switch routes the request to a web server in the DMZ, which might be physical or virtual, and which might be fronted by a load balancer to allow a cluster of servers to handle high traffic levels.
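Those edge-layer steps can be sketched as a simple pipeline. The function names, the placeholder checks and the random load-balancing choice below are all illustrative assumptions; real appliances do this work in dedicated hardware:

```python
# Illustrative pipeline for an incoming request at the edge/access layer.
import random

def looks_malicious(request: str) -> bool:
    return "malware" in request            # placeholder for real packet inspection

def reassemble_and_decrypt(request: str) -> str:
    return request                         # placeholder for WAN optimisation / decryption

def handle_incoming(request: str, web_servers=("web1", "web2", "web3")):
    if looks_malicious(request):
        return None                        # dropped at the edge
    request = reassemble_and_decrypt(request)
    server = random.choice(web_servers)    # naive stand-in for a load balancer
    return server, request                 # handed to a web server in the DMZ

print(handle_incoming("GET /index.html"))
```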

The web server receives and processes the request. The response needs information from a database, so the web server requests data from a database server at the core of the network.

The data request is passed to a core switch, which routes it to a database server. The database server's read traverses the storage network: the data is pulled off the disks, arrives back from main storage, and is packaged up and sent back to the web server. There it's assembled into a web page and pushed back out through the Internet connection.
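The inner part of that round trip can be sketched as three small functions, one per hop. The in-memory STORAGE dictionary and every name here are stand-ins for the real arrays, database and web server, used only to show the order of the calls:

```python
# Illustrative round trip: web server -> database server -> storage and back.
STORAGE = {"user:42": {"name": "Alice"}}        # stands in for the disk arrays

def storage_read(key):
    """Pulled off the disks over the (separate) storage network."""
    return STORAGE.get(key)

def database_query(key):
    """The database server at the core answers the web server's request."""
    return storage_read(key)                    # crosses the Fibre Channel network

def web_server_handle(key):
    """The web server assembles the page and pushes it back out."""
    record = database_query(key)                # routed via a core switch
    return f"<html><body>Hello {record['name']}</body></html>"

print(web_server_handle("user:42"))
```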
