Migrating to the cloud isn't as simple as signing up for a SaaS application or deploying applications on a cloud infrastructure or platform — there are other factors at play. And the move to the cloud shows no signs of slowing, with Gartner predicting that the worldwide public cloud services market will grow 17.3% in 2019. Getting cloud deployments right is more critical than ever.
As organizations shift more to the cloud, they will increasingly rely on networks and infrastructure they don't own and don't directly manage. Yet this infrastructure is just as critical to consuming and delivering applications and services as the data center once was. Being able to explore these networks to identify choke points and routing issues in advance informs better network investment and configuration decisions. It is essential both for a successful cloud deployment and for keeping the environment optimized on an ongoing basis.
There are six key network considerations IT managers should take into account before shifting to the cloud:
1) Baseline Performance Before Deployment
Getting a baseline measure of network performance when moving to the cloud requires, first and foremost, adopting a different set of data points than organizations have typically used in the past. The move to IaaS, SaaS, or any cloud service for that matter, means that organizations are beholden to those providers, as well as to the third-party service provider networks that application or service traffic traverses.
Prior to cloud adoption, the network was essentially under an organization’s control. And it’s not that networks are necessarily less complex; it’s just that network teams had access to all the network data, like packets, flows and device-level information, to monitor performance and security. And when issues arose, teams had the ability to troubleshoot and triage problems.
Traditional monitoring approaches worked in a world where the network was managed, contained and boundaries were known. However, these data sources are no longer available for monitoring the performance of the cloud network, where devices can’t be directly instrumented or polled. Visibility into cloud and third-party networks can only come from data sources such as synthetic and end user monitoring. These techniques can provide quality of experience metrics that network teams can rely upon to test configurations and baseline performance before moving to the cloud.
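As a concrete illustration, a pre-migration baseline can be as simple as repeatedly probing a service endpoint and summarizing the response times. The sketch below is a minimal, hedged example of this idea; the target URL, sample count, and choice of percentiles are illustrative assumptions, not a prescribed methodology.

```python
"""Minimal sketch: a synthetic HTTP probe for pre-migration baselining.
The URL and sample count below are illustrative placeholders."""
import statistics
import time
import urllib.request


def probe(url: str, timeout: float = 10.0) -> float:
    """Return wall-clock response time in seconds for one synthetic request."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so the timing covers the full transfer
    return time.monotonic() - start


def baseline(samples: list[float]) -> dict[str, float]:
    """Summarize probe samples into baseline metrics (seconds)."""
    ordered = sorted(samples)
    p95_index = max(0, int(0.95 * len(ordered)) - 1)
    return {
        "p50": statistics.median(ordered),
        "p95": ordered[p95_index],
        "mean": statistics.fmean(ordered),
    }


if __name__ == "__main__":
    # Hypothetical endpoint; replace with the service being baselined.
    times = [probe("https://example.com/") for _ in range(20)]
    print(baseline(times))
```

Running the same probes from each office or site before migration gives per-location numbers to compare against after cutover.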
2) See and Understand Bottlenecks in Your Infrastructure
It’s very possible that, based on existing configurations and routing policies, certain offices and sites are not optimized to consume applications or services over the Internet. For example, a branch office in India may have issues accessing Salesforce because of transcontinental latency and a bandwidth-constrained MPLS circuit. Similarly, a branch in Austin may have better and more reliable access to applications being served from El Paso rather than San Antonio.
But how do you know what the ideal configurations are before deploying? Time to have a look at your new data sources.
The first tool in your arsenal derives from a technique you may already know from monitoring application experience: page load and user transaction timings. Synthetic monitoring can be used to gain insight into infrastructure and network performance as well as your application. What’s great about synthetic monitoring is that it works in cloud environments and across the Internet, in addition to your data center, yielding in-depth information about how each portion of the network is performing.
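To see which portion of the network is the bottleneck, a synthetic probe can time each phase of a request separately (DNS resolution, TCP connect, TLS handshake, time to first byte). The sketch below is a hedged, stdlib-only illustration; the host is a placeholder and real synthetic monitoring tools collect far richer data.

```python
"""Sketch: per-phase timing of a synthetic HTTPS request, to attribute
latency to DNS, TCP, TLS, or the server (time to first byte)."""
import socket
import ssl
import time


def phase_timings(host: str, port: int = 443, path: str = "/") -> dict:
    """Return seconds spent in each phase of one HTTPS request."""
    t0 = time.monotonic()
    addr = socket.getaddrinfo(host, port)[0][-1]          # DNS resolution
    t_dns = time.monotonic()
    sock = socket.create_connection(addr[:2], timeout=10)  # TCP handshake
    t_tcp = time.monotonic()
    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=host)      # TLS handshake
    t_tls = time.monotonic()
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    tls.sendall(request.encode())
    tls.recv(1)                                            # first response byte
    t_ttfb = time.monotonic()
    tls.close()
    return {
        "dns": t_dns - t0,
        "tcp": t_tcp - t_dns,
        "tls": t_tls - t_tcp,
        "ttfb": t_ttfb - t_tls,
    }


def dominant_phase(timings: dict) -> str:
    """Name the phase contributing the most to total response time."""
    return max(timings, key=timings.get)


if __name__ == "__main__":
    # Hypothetical target; substitute the SaaS endpoint under evaluation.
    t = phase_timings("example.com")
    print(t, "dominant:", dominant_phase(t))
```

A site whose probes are dominated by the TCP or TLS phases is suffering from round-trip latency, which points at routing or circuit choices rather than the application itself.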
3) Map Out Real Traffic Paths
Having visibility into specific areas of a distributed network that are problematic, as well as pertinent details that can help determine a root cause, results in more tangible and actionable information for the network you own and those of your providers.
To achieve a more complete picture of how traffic is traversing the cloud network, it is essential to combine synthetic tests with ping and traceroute functionality. Network teams can then not only test the reachability of an endpoint or host through the cloud, but also use that data to determine the path of a packet through a distributed network while measuring transit delays.
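Once per-hop round-trip times are collected (for example, from `traceroute` output), a small amount of analysis can point at the hop where latency enters the path. The sketch below is an illustrative assumption about how that analysis might look; the hop RTT values are made up, and real paths are noisy enough that per-hop RTTs can decrease, which is why the deltas are clamped at zero.

```python
"""Sketch: locating where latency enters a path from per-hop RTTs,
e.g. as reported by traceroute. Sample data is illustrative."""


def latency_deltas(hop_rtts_ms: list[float]) -> list[float]:
    """Per-hop added delay: RTT increase over the previous hop, clamped
    at zero because per-hop RTTs on real paths are noisy and can dip."""
    deltas = []
    prev = 0.0
    for rtt in hop_rtts_ms:
        deltas.append(max(0.0, rtt - prev))
        prev = rtt
    return deltas


def worst_hop(hop_rtts_ms: list[float]) -> int:
    """Zero-based index of the hop that adds the most latency."""
    deltas = latency_deltas(hop_rtts_ms)
    return deltas.index(max(deltas))


if __name__ == "__main__":
    # Illustrative RTTs (ms) for a 6-hop path; the big jump at hop 5
    # would suggest a long-haul or congested segment there.
    hops = [1.2, 1.5, 8.0, 9.1, 95.0, 96.2]
    print("added delay per hop:", latency_deltas(hops))
    print("worst hop index:", worst_hop(hops))
```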
Another useful data point is Border Gateway Protocol (BGP) routing table information. BGP knits the Internet together by exchanging routing information between networks, and these routes determine how traffic flows to and from your apps and services. Layering this information into your diagnostic approach helps you understand how routing configurations and changes impact reachability, latency, loss, jitter and other performance metrics.
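The routing data itself is simple to work with: each BGP route carries an AS path, the sequence of autonomous systems traffic will cross. The sketch below, with made-up AS numbers in the looking-glass-style `AS_PATH` format, shows how comparing paths before and after a routing change reveals whether traffic now crosses more networks.

```python
"""Sketch: comparing BGP AS paths (e.g. from a route server or looking
glass) before and after a routing change. AS numbers are illustrative."""


def parse_as_path(as_path: str) -> list[int]:
    """Turn an AS_PATH string like '7018 3356 13335' into a list of ASNs."""
    return [int(asn) for asn in as_path.split()]


def path_changed(before: str, after: str) -> bool:
    """True if the route now traverses a different sequence of networks."""
    return parse_as_path(before) != parse_as_path(after)


def length_delta(before: str, after: str) -> int:
    """Positive means the new route traverses more networks than before."""
    return len(parse_as_path(after)) - len(parse_as_path(before))


if __name__ == "__main__":
    old = "7018 3356 13335"
    new = "7018 6453 1299 13335"
    print("changed:", path_changed(old, new), "extra hops:", length_delta(old, new))
```

Correlating a path change like this with a jump in the latency or loss metrics from synthetic tests is often what turns "the app feels slow" into an actionable finding.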
4) Collaborate with SaaS and Cloud Providers
So, when it comes time to work with your SaaS, ISP, and cloud providers, it helps to share this information, related visualizations, and diagnostic detail in an easily consumable way. That way, when a trouble ticket is generated, the vendor or provider can deal with it much more quickly. Knowing exactly where a problem exists, along with the related cause analysis, empowers your team to act immediately and enables the responsible party to do the same.
5) Get Things Right Before Deployment
With detailed visualizations and hop-by-hop metrics, it becomes possible to try out a routing change, plan for a new data center, or test the rollout of a new application or SaaS service. Synthetic testing layered with contextual data and a detailed visual network topology allows teams to test performance, gain insight into initial infrastructure configurations, plan changes, and understand their impact on cloud applications — getting things right before deployment.
6) Continually Monitor the Performance of Your Network and Its Impact on Applications
The same techniques and data sources that teams use to baseline performance and achieve optimal cloud configurations can also be used on an ongoing basis to ensure continued performance. Active monitoring with additional layered information allows organizations to constantly watch for performance degradation and optimize end user experiences across their network.
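In practice, "constantly watching for degradation" means comparing each new measurement against a rolling baseline. The sketch below is one hedged way to do that: a window of recent latency samples and a simple mean-plus-three-standard-deviations threshold. The window size, warm-up count, and sigma multiplier are illustrative assumptions to be tuned per service.

```python
"""Sketch: flagging latency degradation against a rolling baseline.
Window size, warm-up length, and threshold are illustrative choices."""
import statistics
from collections import deque


class DegradationDetector:
    def __init__(self, window: int = 50, sigmas: float = 3.0):
        self.samples = deque(maxlen=window)  # rolling baseline window
        self.sigmas = sigmas

    def observe(self, latency_ms: float) -> bool:
        """Record one sample; return True if it deviates from the baseline."""
        degraded = False
        if len(self.samples) >= 10:  # need some history before alerting
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            degraded = latency_ms > mean + self.sigmas * stdev
        self.samples.append(latency_ms)
        return degraded


if __name__ == "__main__":
    detector = DegradationDetector()
    for sample in [20.1, 19.8, 20.3, 19.9, 20.0] * 4 + [180.0]:
        if detector.observe(sample):
            print(f"degradation flagged at {sample} ms")
```

Feeding this detector from the same synthetic probes used for pre-migration baselining closes the loop between deployment testing and ongoing monitoring.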
And the additional inclusion of visual analysis reduces the mean time to troubleshoot and repair issues. As the Internet and cloud computing reshape the enterprise network, incorporating these new data sources and correlating pertinent data points within a unified visualization is critical to the successful deployment and management of cloud applications and services.
This virtuous cycle of better performance becomes even more important in the future as organizations increasingly rely on the Internet for their critical operations. Growing levels of software-based automation will only increase the number of different network paths traversed, and with it the variability of end user experience. Accurate insights into the entire cloud network enable organizations to troubleshoot and solve their cloud performance problems, alleviating potential cloud migraines and ensuring the business runs smoothly and painlessly.
About Alex Henthorn-Iwane
Alex Henthorn-Iwane leads Product Marketing at ThousandEyes, which delivers Network Intelligence solutions that enable companies to gain digital experience insights from every user to every app over any network. Prior to ThousandEyes, Alex worked with big data network analytics, DevOps orchestration and Internet routing monitoring technologies at Kentik, Quali and Packet Design.