The key advantage of network monitoring, at least according to those who sell the equipment and services, is that it allows the network manager to identify patterns, spot anomalies before they become problems, and respond to issues in a timely manner as they arise. All good goals, but as they say, “a watched pot never boils.” Does the same apply to networks and WANs? When monitoring is put in place, will it deliver the information required to proactively identify and eliminate problems that could affect the customer experience?
The fundamental issue, however, is not whether monitoring is useful. It is. The question, rather, is whether monitoring (which is inherently a passive task) will lead to an improved customer experience. Julian Palmer from Visualware offers a useful analogy by comparing passive monitoring to the process of monitoring motorway traffic. Town traffic planners will routinely set up a monitoring spot along a motorway to count the number of cars that go by in a given period of time. Doing so is a useful endeavour, but it won’t do anything to change the number of road accidents, at least not directly.
The primary task of monitoring is to observe and collect information—not to improve the customer experience. Indirectly, though, it may do just that, because it gives administrators and managers enough data to plan and build more efficient systems.
What is flawed is the notion that passive monitoring is, in itself, a solution for improving the customer experience. Because passive monitoring records data from individual network components, the information it gathers simply doesn’t go very far in describing what the customer actually experiences.
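To make the distinction concrete, here is a minimal sketch of the kind of component-level data that passive-style collection yields. It assumes a Linux host and reads per-interface byte counters from /proc/net/dev; the five-second interval is an illustrative choice. Note that the output describes the health of a device, not the experience of a user.

```python
# Sketch: sample per-interface byte counters on a Linux host and report
# throughput. This describes the component, not the customer's experience.
import time

def read_counters(path="/proc/net/dev"):
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:            # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            # field 0 = received bytes, field 8 = transmitted bytes
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

if __name__ == "__main__":
    before = read_counters()
    time.sleep(5)                                  # illustrative sampling interval
    after = read_counters()
    for iface, (rx1, tx1) in after.items():
        rx0, tx0 = before.get(iface, (rx1, tx1))
        print(f"{iface}: rx {(rx1 - rx0) / 5:.0f} B/s, tx {(tx1 - tx0) / 5:.0f} B/s")
```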
An active monitoring approach, on the other hand, comes closer to the real user experience, since it measures a broader range of services and can therefore be gauged more easily against an SLA. Going a step further, the ideal solution would not only monitor actively, so as to evaluate the true customer experience, but also identify the cause of whatever problems the system spots.
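By contrast, an active check exercises the service end to end, the way a user would, and can be compared directly with an SLA target. The sketch below times a request to a service and flags whether it met a response-time target; the URL and the two-second threshold are placeholders, not real values.

```python
# Sketch: an active probe that measures end-to-end response time and
# compares it against an assumed, illustrative SLA target.
import time
import urllib.request

SLA_SECONDS = 2.0                    # placeholder SLA response-time target
URL = "https://example.com/"         # hypothetical service endpoint

def probe(url, timeout=10):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            return resp.status, time.monotonic() - start
    except OSError:
        return None, time.monotonic() - start

if __name__ == "__main__":
    status, elapsed = probe(URL)
    within_sla = status == 200 and elapsed <= SLA_SECONDS
    print(f"status={status} time={elapsed:.2f}s within_SLA={within_sla}")
```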
When all factors are considered, though, it’s not a choice between passive and active monitoring, or a debate over which is more effective. According to a paper presented by the Stanford Linear Accelerator Center (SLAC), the two are complementary: the passive approach remains very useful in network troubleshooting, while the active approach supplements it with the ability to emulate error scenarios or isolate the location of a problem.
The trend is towards monitoring that is both active and application-aware, rather than simple passive collection alone. According to Frost & Sullivan and numerous other analyst firms, IT organisations are spending more on monitoring network performance, driven by the increasing complexity of networks, mobile convergence and the presence of ever more bandwidth-hungry applications.
Dan Blacharski is the author of several books on technology, finance, and business. He has been a freelance writer and editorial consultant for over 15 years and currently covers high-tech topics.