Cutting data centre downtime - why reliability is more important than ever

Today's data centres are evolving rapidly, as managers of these facilities are under growing pressure to cope with a huge increase in traffic. New applications such as big data mean that bandwidth demands are going through the roof, and key infrastructure will have to be upgraded to cope with this.

As more businesses come to depend on this data, the need for high reliability within the data centre will also grow. Even the shortest interruption to a service could have serious repercussions for a business, resulting in lost revenue and unhappy customers, as well as the financial cost of repairing or replacing equipment and getting back up and running.

Information Age noted that for larger companies, the cost can be enormous. It highlighted how a 20-minute outage for Amazon in 2016 was reported to have cost the retailer $3.75 million (£2.99 million) in lost revenue, while a days-long outage at Delta Air Lines cost the company $120 million - not to mention the huge travel disruption and reputational damage that also resulted.

One common issue that leads to downtime is that many data centres are still managed using manual processes, which are prone to error and mean operational data quickly becomes outdated. 

Therefore, many outages could be prevented with effective automated monitoring measures that are able to detect potential problems before they become reality.

Information Age said: "Monitoring the data centre environment in real-time enables data centre managers to better detect potential issues before they escalate. This includes leaks in cooling equipment, undercooled servers and lack of capacity."
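The kind of real-time monitoring described above often boils down to comparing live sensor readings against acceptable ranges and raising alerts before a limit is breached. Below is a minimal sketch of that idea; the metric names, threshold values, and `check_readings` helper are illustrative assumptions, not part of any particular DCIM product:

```python
# Hypothetical acceptable ranges per metric (low, high). The inlet
# temperature band loosely follows commonly cited recommended ranges,
# but real facilities should use their own validated limits.
THRESHOLDS = {
    "inlet_temp_c": (18.0, 27.0),
    "humidity_pct": (20.0, 80.0),
    "rack_load_pct": (0.0, 90.0),
}

def check_readings(readings):
    """Return a list of alert strings for any reading outside its range."""
    alerts = []
    for metric, value in readings.items():
        low, high = THRESHOLDS.get(metric, (float("-inf"), float("inf")))
        if value < low:
            alerts.append(f"{metric} low: {value} < {low}")
        elif value > high:
            alerts.append(f"{metric} high: {value} > {high}")
    return alerts

# Example: an overheating server inlet triggers an alert,
# while in-range readings produce none.
print(check_readings({"inlet_temp_c": 30.0, "humidity_pct": 50.0}))
```

In practice, a loop like this would run continuously against a feed of sensor data, with alerts routed to on-call staff long before undercooling or capacity problems escalate into an outage.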

It also noted that today's data centre is a constantly changing environment, with managers having to cope with both fluctuations in traffic and the need to move physical assets around. In fact, it observed that it is not uncommon for networking or storage assets to be left gathering dust because organisations lost track of them while they were in transit.

Dealing with the increase in traffic is also not only about having enough bandwidth to handle spikes. The physical infrastructure of the data centre must also be robust enough to cope with periods of high demand.

"Peak online shopping days cause a large increase of traffic to websites," Information Age stated. "This spike in data causes IT equipment to work overtime and to generate excess heat. If servers are not cooled effectively, the overheating could damage servers, or worse, cause them to fail."

But this increase in traffic does not have to cause problems for data centres. The publication observed that planning in advance for spikes, having the tools on hand to deal with short-term increases, and continually monitoring the environment in real time will all help prevent disasters, as well as reduce damage and downtime should an issue arise.
