Today, business operations have a growing reliance on data center and cloud services that are “always on”. To minimize outage costs and damage to customers, data center operators are working to digitalize their facilities and improve visibility into every stage of the digitalization journey.
A data center is commonly conceived as a network of highly automated, intelligent and connected technologies capable of tapping into the power of big data. That is not necessarily the case: many data centers still rely on reactive maintenance to resolve downtime issues. A low level of digitalization compromises overall user experience, satisfaction and confidence. Operators and customers are demanding prompt access to real-time data for better analysis and prediction, and there is growing urgency to digitalize operations to meet increasingly dynamic business needs.
Digital construction blind spot: Less than 20% of available data is explored
A data center is a complex ecosystem of systems, processes and stakeholders that together produce a large volume of data. A typical facility has 4,000 racks, encompassing approximately 140,000 sensing points, 1,500 CCTV cameras, 10,000 pieces of critical equipment and many other data sources. In theory, data from all of these devices and platforms should be connected to optimize the data center’s operating efficiency. Yet in reality, the share of data collated and analyzed for insight generation in a typical data center remained shockingly low in 2017 – less than 20 percent of what is available.
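To get a sense of the scale behind that figure, a rough back-of-the-envelope estimate helps. The facility counts below come from the paragraph above; the sampling interval is an illustrative assumption, not a figure from any specific operator:

```python
# Back-of-the-envelope estimate of the telemetry volume behind the
# "less than 20 percent" figure. Facility counts come from the article;
# the 10-second sampling interval is an illustrative assumption.

SENSING_POINTS = 140_000          # sensing points in a typical facility
SAMPLE_INTERVAL_S = 10            # assumed polling interval per sensor

readings_per_day = SENSING_POINTS * (24 * 3600 // SAMPLE_INTERVAL_S)
analyzed = int(readings_per_day * 0.20)   # upper bound cited above

print(f"Sensor readings generated per day: {readings_per_day:,}")
print(f"Readings actually analyzed (<20%): {analyzed:,}")
```

Even under this conservative assumption, a single facility generates well over a billion sensor readings a day, of which fewer than a fifth are ever examined.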
The health and operational efficiency of a data center depend on a wide range of tightly controlled environmental factors, including temperature, humidity, airflow, light, sound, door position and power. If a technical glitch in the data center health monitoring system goes undetected, for instance, the resulting downtime can do real damage to enterprises whose applications are mission-critical. To put this in perspective, the average cost of a data center outage rose to US$740,357 in 2016 – a seven percent increase from US$690,204 in 2013 – and downtime costs have climbed 38 percent since 2010, according to a market study.
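As a minimal sketch of the kind of continuous check such a monitoring system runs, consider the following. The factor names match those above; the operating bands and the sample reading are hypothetical placeholders:

```python
# A minimal sketch of the continuous health check described above.
# The factor names follow the article; the operating bands and the
# sample reading are hypothetical placeholders.

OPERATING_RANGES = {
    "temperature_c": (18.0, 27.0),   # assumed ASHRAE-style band
    "humidity_pct": (40.0, 60.0),
    "airflow_cfm": (400.0, 800.0),
}

def in_range(factor: str, value: float) -> bool:
    """Return True if the reading sits inside its allowed band."""
    low, high = OPERATING_RANGES[factor]
    return low <= value <= high

# A humidity excursion should raise an alert immediately rather than
# wait for a reactive maintenance call after something fails.
factor, value = "humidity_pct", 72.5
if not in_range(factor, value):
    print(f"ALERT: {factor} = {value} is outside its operating band")
```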
The next big thing: a predictive maintenance strategy
The future of data center health lies in predictive maintenance, which is set to become an integral component of the next-generation data center management system (DCMS) built on real-time data. But predictive maintenance will only reach its full potential with an effective platform that delivers end-to-end visibility across all stages of the data center digitalization journey – design, construction, commissioning, operation, service provision and performance assurance. By gathering and analyzing both historical patterns and real-time data, operators can promptly track a data center’s overall performance: its reliability, capacity, efficiency, response time and security.
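To make the predictive idea concrete, the sketch below fits a simple linear trend to a short run of temperature history and estimates when an alarm limit would be crossed, so maintenance can be scheduled before an outage. The readings and the 27°C limit are illustrative assumptions; a production DCMS would rely on far richer models than a straight-line fit:

```python
import numpy as np

# Illustrative sketch of the predictive idea: fit a trend to recent
# sensor history and estimate when a metric will cross its limit so
# maintenance can be scheduled before an outage. The readings and the
# 27 degC limit are assumptions, not data from any real facility.

hours = np.arange(12)                                    # last 12 hourly samples
temps = np.array([21.0, 21.2, 21.5, 21.9, 22.1, 22.6,
                  22.8, 23.3, 23.5, 24.0, 24.2, 24.7])   # slow upward drift

slope, intercept = np.polyfit(hours, temps, 1)           # linear trend
LIMIT_C = 27.0                                           # assumed alarm threshold

if slope > 0:
    hours_to_limit = (LIMIT_C - temps[-1]) / slope
    print(f"Trend: +{slope:.2f} degC/hour; "
          f"limit reached in ~{hours_to_limit:.1f} hours")
```

A steady 0.3°C-per-hour drift like this one gives roughly seven hours of warning – enough time to dispatch a technician before any threshold is breached.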
To walk the talk, we have tested this approach with a cloud-based, self-service portal – the Virtual Data Center (vDC) Customer Portal at the Financial Data Center in Hong Kong. The feedback has been promising: customers gain access to all the key data center performance parameters, and alerts can be configured against predetermined thresholds so that appropriate action is taken immediately.
The promise of predictive maintenance gives operators and customers the confidence to anticipate issues before an outage ever takes place. Thanks to these developments, we are one step closer to our goal of protecting corporate brands from the serious reputational and financial damage an outage can cause – and, more importantly, of making data center downtime history.
Steven So is VP, Data Center Business, NTT Com Asia