Today’s digital world is increasingly tech-smart, but a worrying chasm has emerged – one where a user’s idea of innovation isn’t matched by the provider’s ability to deliver it.
The situation is understandable, given how many enterprise data centers are slowed by aging equipment and by networks creaking under the strain of bandwidth and workload demands. At the core of the issue is an over-reliance on proprietary hardware and software, which places time and cost barriers on the path to upgrade.
International Data Corporation (IDC) predicts that by the end of 2018, 40% of businesses will confront a facilities mismatch. Many businesses have invested significantly in their existing data center facilities, yet many of those facilities are now unsuitable for third-platform environments: they lack the power, cooling, network connectivity, and security required to support traditional, hyper-converged, and software-defined infrastructure – qualities necessary for the new systems of engagement, insight, and action.
To address these concerns, meet the growing need for agility, and cater for present and future demands, network automation will be an essential component of the data center’s transformative evolution.
But therein lies the question for enterprises: “How do I innovate without disrupting my business?”
To truly transform or automate the network, enterprises first need to work through the following considerations.
1. Locked versus unlocked?
Network planning is a long-term commitment, which makes technology evolution a complex decision. Which standards should enterprises commit to? There’s a danger in “I’ll just buy a little more of this”: none of us has a perfect vision of the demands on the network five years from now.
To complicate matters, vendors prefer to differentiate, which can lock enterprises into a network architecture that raises costs significantly or constrains flexibility. From our point of view, open standards are the recommended direction, because we believe vendor lock-in is a serious threat to agility.
2. The ability to manage private cloud network performance
Diagnosing intermittent problems in the network subsystem often becomes complex once the network has been virtualized, because the path an application takes to reach the customer is abstracted.
Network data analysis, however, turns this problem into a capability that surpasses that of non-virtualized networks, simply because it instruments both underlay and overlay, showing performance in each – and even isolating problems to the underlay, the overlay, or the application itself. In short, it is a capability fundamental to reliable service delivery for cloud applications.
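The triage logic described above can be sketched in a few lines. This is a conceptual illustration only – the function name, measurement inputs, and thresholds are all invented assumptions, not any vendor’s API – but it shows how per-layer measurements let an operator attribute a slow transaction to the underlay, the overlay, or the application itself.

```python
# Hypothetical sketch: given round-trip measurements (ms) taken at each
# layer, decide which layer most likely explains a slow transaction.
# All names and threshold values here are illustrative assumptions.

def isolate_fault(underlay_rtt_ms, overlay_rtt_ms, app_response_ms,
                  rtt_budget_ms=5.0, app_budget_ms=200.0):
    """Return the layer that most likely explains a slow user transaction."""
    if underlay_rtt_ms > rtt_budget_ms:
        return "underlay"        # the physical fabric itself is slow
    if overlay_rtt_ms - underlay_rtt_ms > rtt_budget_ms:
        return "overlay"         # tunneling/encapsulation adds excess latency
    if app_response_ms > app_budget_ms:
        return "application"     # network layers are healthy; the app is slow
    return "healthy"

print(isolate_fault(1.2, 2.0, 450.0))  # → application
```

Without visibility into both underlay and overlay, the first two cases collapse into one indistinguishable “the network is slow” symptom – which is precisely the diagnostic gap instrumentation of both layers closes.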
The network needs to be intelligent, self-healing, proactive, and secure, with the ability to collate, correlate, and control data from across the network. It should draw on intelligence from network devices – network analytics, buffer inputs/outputs, and granular interface statistics – and allow businesses to act on it.
In addition, an intelligent network takes information from the network, firewall, or router and correlates that data with other technologies, software, or platforms. In essence, it should permit smarter decision-making based on the entire technology ecosystem, not just its own components.
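A minimal sketch of that cross-source correlation follows. The device names, record fields, and threshold are invented for illustration; the point is simply that joining interface statistics with events from another system (here, a firewall) turns two isolated data streams into an enriched finding.

```python
# Illustrative sketch: correlate interface drop counters with firewall
# events from the same device. All data, field names, and the threshold
# are hypothetical examples, not real telemetry formats.

interface_stats = [
    {"device": "leaf-01", "port": "xe-0/0/1", "drops": 12000},
    {"device": "leaf-02", "port": "xe-0/0/3", "drops": 40},
]
firewall_events = [
    {"device": "leaf-01", "event": "policy-deny spike"},
]

def correlate(stats, events, drop_threshold=1000):
    """Flag high-drop ports and attach any related firewall event."""
    event_by_device = {e["device"]: e["event"] for e in events}
    findings = []
    for s in stats:
        if s["drops"] > drop_threshold:
            findings.append({
                "device": s["device"],
                "port": s["port"],
                "related_event": event_by_device.get(s["device"], "none"),
            })
    return findings

print(correlate(interface_stats, firewall_events))
```

In this toy run, only leaf-01 exceeds the drop threshold, and the correlation step attaches the coincident firewall event – the kind of ecosystem-wide context a single device view cannot provide.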
3. Scaling your network
Historically, data centers have been hierarchical, with multiple tiers of switching built for North-South traffic. Today, as data centers become denser and applications become more distributed, we see heightened levels of traffic between hosts within the data center – what we call East-West traffic.
To support this shift – as well as the rapid increase or decrease of resources that need to be connected – networks must become flatter and more scalable.
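The flattening argument can be made concrete with a toy hop count. Assuming a classic three-tier (access/aggregation/core) design versus a two-tier leaf-spine fabric – a common flat architecture, though the article does not name one – worst-case East-West paths shorten noticeably:

```python
# Toy comparison of worst-case switch hops between two servers.
# Both topologies are simplified assumptions for illustration.

def three_tier_hops(same_access, same_aggregation):
    """Hierarchical design: access -> aggregation -> core -> back down."""
    if same_access:
        return 1   # both hosts on the same access switch
    if same_aggregation:
        return 3   # access -> aggregation -> access
    return 5       # worst case: traffic must traverse the core

def leaf_spine_hops(same_leaf):
    """Flat two-tier fabric: any two leaves are one spine apart."""
    return 1 if same_leaf else 3   # leaf -> spine -> leaf at most

print(three_tier_hops(False, False))  # 5 hops across the core
print(leaf_spine_hops(False))         # 3 hops between any pair of leaves
```

Beyond the shorter worst case, the leaf-spine path length is uniform for any pair of hosts, which is what makes East-West performance predictable and lets capacity scale by simply adding spines.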
Regardless of the architecture deployed today, it’s important that hardware can be remodeled to keep pace with business growth, as being locked in will become increasingly costly. Even if businesses are not yet ready to take the plunge, automation, centralized management, plug-and-play provisioning, and SDN integration are undeniably critical factors in boosting competitiveness.
These solutions might seem far-fetched today, but technology is evolving rapidly. With that in mind, it’s key to build a foundation – one with the logical and physical scale to support growth – so that network challenges can be eliminated and the evolution of the enterprise fast-tracked.
4. Having the right skills
As the underlying networks are automated, the next generation of engineers will emerge with new skillsets. Change creates opportunities to work across boundaries, but the transition will take time. It’s yet another reason why, at Juniper, we have embraced open standards: by allowing the widest range of skills to be tapped, enterprises will eventually reap the long-term returns of reskilling their employees.
5. Envisioning a future with SDN
As data center technology evolves, it is imperative that organizations ensure the investments they make today are aligned with the vision of Software-Defined Networking (SDN). According to IDC, the SDN market will surpass the $1 billion mark by 2018, by which time some 50% of enterprises will have transformed their networks with SDN-based, flexible networking to interconnect diversified environments and facilitate new data flows globally.
With upwards of 50 billion devices expected to be connected in the next five years, there’s no question the world needs both the performance to support that demand and the automation to unleash that performance at lower cost – all while giving customers greater agility and scalability.
Russell Skingsley, Vice-President, Systems Engineering and Centre of Excellence, APAC, Juniper Networks