Is my network worth the money?

In the internet age corporate networks must work harder than ever before. The volume of data to be carried is climbing all the time, while requirements on reliability and security rise constantly – and in the medium term at least, there is no end in sight. Careful benchmarking allows companies to check whether their networks are up to the task and worth the money.

Falling prices and contract risks increase uncertainty

What was thought inexpensive three years ago can now seem extortionate.

Networks have been a top topic in the media for about a year – though not the traditional networks for linking machines, but rather the web-based channels and platforms between people. The significance attached to this subject shows how crucial it is for a company to be able to find and approach the right communication partner quickly and reliably. No communication means no business, and the faster a connection is and the further it reaches, the greater the marketplace. This has been known ever since the days of bartering, if not before, so it is sometimes a little surprising to see the euphoria that greets these allegedly new “social networks”.

With the lightning-fast and exciting recent developments on the web, traditional IT networks have faded from the public eye to a certain extent. And yet it would be a mistake to suppose that nothing major has changed in the “old IT”. Infrastructure is certainly not a commodity and is far from being an interchangeable off-the-shelf product that can be bought on any street corner at a stable cost/benefit ratio. It is precisely because of the increasing importance of the internet, social networks and the huge volumes of information exchanged across broadband connections that traditional networks play a key role: for better or for worse, they must be able to keep up with the competition.

Network technology is still advancing rapidly in many respects, and this has been accompanied by a collapse in prices. The best example is the development of DSL connections for private households, where competitive pressure is now so great that providers are struggling to protect their margins despite the continued growth of the market as a whole. There is an exception to every rule, however: demand for bandwidth in the LAN (Local Area Network) segment is rising only gradually, which means that the available technology is now well ahead of demand. Although gigabit switches can be bought in an electronics shop for 50 euros and just about every new PC has a 1-gigabit card, very few end users in business have gigabit capability on their desks. It simply would not be efficient for a company to give its end users gigabit capability, so economic considerations compel it to do without. Networking outside the home has not yet reached this point of technological saturation – ADSL (up to 16 Mbit/s) will be followed by VDSL (up to 52 Mbit/s) and, some years down the line, VDSL2 (up to 200 Mbit/s in theory), while some regional carriers such as the Munich supplier M-Net want to lay fibre-optic cables into the building as an alternative to copper. That brings the broadband community steadily towards the 100 Mbit/s mark. Appropriate content must and will follow, which means that demand will continue to climb. The phenomenon is familiar from traffic dynamics: more roads bring more traffic. Applied to businesses, growing possibilities in the private sector lead in the next step to greater requirements on corporate networks.

However, businesses can profit from the high demand for bandwidth from private households, because the cost of using the external network is falling. The price spiral is turning ever faster, and customers are adapting their behaviour accordingly: contracts with a term of three years, still regarded as “short-term” at the turn of the millennium, are now at the upper end of market statistics. Service providers, by contrast, are trying to tie customers into long-term deals that allow the financial concessions granted at the start to be recouped over the life of the contract. The customer can in turn exploit this as a bargaining tool, because higher service levels and lower prices can be pushed through if the term is longer. For the customer, this creates pressure to compare the market price at regular intervals with the price being demanded, in order not to overpay in the next round of negotiations. This can be done by means of a benchmark, although network benchmarking differs in some respects from traditional benchmarking.

A network benchmark does not relate only to the external connections, but is normally divided into six areas: Local Area Networks (LAN), Wide Area Networks (WAN), telephony, internet, Remote Access Services (RAS) and the whole issue of security. The boundaries between these areas are sometimes blurred by changes in technology, one example being the shift away from classical dial-up methods (analogue, ISDN) towards IP-VPN and MPLS connectivity.

LAN

LANs are predominantly operated by companies themselves, so the benchmarks created here are primarily focused on cost. Alongside the traditional office LAN (edge LAN) with PC workstations and printers, there are also data centre LANs (DC LANs, core LANs) and compartmentalised developer and production LANs, the latter found for instance in the production facilities of the manufacturing industry. The typical topologies of these networks differ, as do the requirements placed on them. Production LANs, for example, must be flexible enough to master rapid changeovers in production. Office LANs, in contrast, are largely unaffected by rapid change because usually only a few clients are added or removed. In the DC LAN, again, the routers and switches are subject to a different cost structure than the network appliances in the office, where switches are already a commodity. Data centres are increasingly populated by modular devices that can be managed and protected with corresponding efficiency. In addition, work on core switches and their operating systems demands a higher level of skill from the staff dealing with them. This increased expenditure is then reflected in the labour costs.

The differing requirements mean that it is not possible to apply a single measured quantity for the LAN benchmark across all segments. An active LAN port in the DC is by its very nature much more expensive than an active LAN port in the office network. The costs per active LAN port, and the ratio of active to installed ports, have generally established themselves as the central benchmark parameters or key performance indicators (KPIs) in the local network. If only the costs per installed LAN port were considered, the benchmark could easily be manipulated: an additional switch whose ports are not assigned would lower the expense per LAN port and improve the KPI. It is a similar story when calculating the costs per router or per switch: given the diversity of products, it would be like comparing apples with pears.
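
The effect can be illustrated with a short calculation; the sketch below is not from the article, and all figures are invented for demonstration purposes.

```python
# Illustrative sketch of the two LAN KPIs discussed above; all figures are invented.

def lan_kpis(total_cost_eur, installed_ports, active_ports):
    """Return cost per installed port, cost per active port and port utilisation."""
    return {
        "cost_per_installed_port": round(total_cost_eur / installed_ports, 2),
        "cost_per_active_port": round(total_cost_eur / active_ports, 2),
        "utilisation": round(active_ports / installed_ports, 2),
    }

# Office LAN before and after adding a cheap 48-port switch whose ports stay unassigned.
before = lan_kpis(total_cost_eur=120_000, installed_ports=1_000, active_ports=800)
after = lan_kpis(total_cost_eur=120_200, installed_ports=1_048, active_ports=800)

print(before)  # cost per installed port: 120.00, cost per active port: 150.00
print(after)   # cost per installed port: 114.69, cost per active port: 150.25

# The per-installed-port figure "improves" although nothing has changed for the users;
# the per-active-port figure and the utilisation ratio expose the idle capacity.
```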

WAN

Unlike the LAN, a WAN is not normally run in-house but is instead bought in from the market. Benchmarking here is thus primarily focused on price. Evaluating a WAN is difficult, however, because it is frequently a combination of external and internal services. The approach followed for the LAN, using active ports as a basis for measurement, is of no use here because it reveals nothing.

An attempt to benchmark the costs per connected location is similarly futile, because locations seldom offer valid comparability: is a home office that dials in via VPN a location, is a minimum number of employees needed as a criterion for a location, or are there other parameters altogether? Benchmarking the costs per line is also pointless – a local DSL line cannot be compared with a gigabit trunk across the Atlantic. The conclusion? Using KPIs in the WAN to simplify the comparison parameters is all the more difficult because a huge number of constraints relating to the clients’ different environments must be recorded and harmonised, and this takes far too much time.

As a consequence, the benchmark consultant must analyse the individual transmission links and try to compare them with similar lines in the Maturity database. Factors to be considered include the speed, the Service Level Agreements (SLAs), the installed backup options and their ownership, and the responsibility for configuration. Through this harmonisation the benchmark consultant ultimately obtains a price for the relevant line. Benchmarks are carried out frequently in the WAN segment because the market is undergoing rapid change and prices are falling overnight. It is therefore crucial, particularly for companies with a large infrastructure, to compare current prices on the market. For the benchmarker, this means that his or her comparative data has a much shorter half-life than in a traditional benchmark.
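
The harmonisation step can be pictured roughly as follows. This is a minimal sketch under stated assumptions: the attributes, tolerances and matching rule are illustrative and do not represent the actual structure of the Maturity database.

```python
# Illustrative sketch of line harmonisation: describe each transmission link by its
# main cost drivers and look up comparable lines in a reference data set.
from dataclasses import dataclass


@dataclass
class WanLine:
    bandwidth_mbit: int
    availability_sla: float    # e.g. 99.9 (per cent)
    has_backup: bool           # backup line installed?
    provider_managed: bool     # is the provider responsible for configuration?
    monthly_price_eur: float


def comparable(a: WanLine, b: WanLine) -> bool:
    """Treat two lines as comparable if their main cost drivers match."""
    return (a.bandwidth_mbit == b.bandwidth_mbit
            and abs(a.availability_sla - b.availability_sla) <= 0.05
            and a.has_backup == b.has_backup
            and a.provider_managed == b.provider_managed)


def reference_price(line: WanLine, reference: list[WanLine]) -> float | None:
    """Average price of all comparable reference lines, or None if nothing matches."""
    peers = [r.monthly_price_eur for r in reference if comparable(line, r)]
    return sum(peers) / len(peers) if peers else None
```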

Telephony

The telephony segment is changing primarily through the triumph of Voice over Internet Protocol technology, or VoIP. Many businesses already employ it, although by no means universally. VoIP is predominantly used on trunk lines or to connect call centres, because in both cases the switch makes financial sense. This is partly also related to developments in WAN technology and a technological transformation: where leased lines were once the first choice, the trend is now towards Multiprotocol Label Switching (MPLS) and DSL for networking locations and for access to the internet. MPLS allows data packets to be transferred along a previously established path over a connectionless (IP) protocol. This makes it possible to guarantee the service quality (prioritisation, latency, packet loss) required for a VoIP network, and providers are now set up for this. Benchmarking the classic Octopus system from Deutsche Telekom and its environment is still far more common than benchmarking VoIP services, but the trend towards VoIP among user companies is unmistakable.
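
To illustrate what “guaranteed service quality” means in practice, the sketch below checks measured link metrics against commonly cited VoIP planning targets; the thresholds are general rules of thumb (roughly in line with ITU-T guidance on one-way delay), not figures taken from this article.

```python
# Illustrative VoIP readiness check; the thresholds are commonly cited planning
# values and are assumptions, not figures from the article.

VOIP_TARGETS = {
    "one_way_latency_ms": 150.0,  # upper bound usually quoted for good speech quality
    "jitter_ms": 30.0,
    "packet_loss_pct": 1.0,
}

def voip_ready(latency_ms: float, jitter_ms: float, loss_pct: float) -> bool:
    """Return True if the measured metrics stay within the planning targets."""
    return (latency_ms <= VOIP_TARGETS["one_way_latency_ms"]
            and jitter_ms <= VOIP_TARGETS["jitter_ms"]
            and loss_pct <= VOIP_TARGETS["packet_loss_pct"])

# An MPLS path with a prioritised voice class will typically stay within these
# bounds, whereas a best-effort internet path may not.
print(voip_ready(latency_ms=40, jitter_ms=5, loss_pct=0.1))    # True
print(voip_ready(latency_ms=220, jitter_ms=60, loss_pct=2.5))  # False
```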

Benchmarks for call centres are also very attractive because they combine almost all aspects of network-related technology: telephony, WAN services for data transport and security in the retrieval of information from partner firms. This applies both to in-house and to outsourced call centres, in which one side may provide rooms and agents, for instance, while the partner provides telephones, computers and applications. The benchmark also takes into account typical call centre criteria such as the quality of support at the various levels, the often very specific interface problems with applications, and call-routing aspects or response times, because a comparison would otherwise have no validity. In a pure price benchmark of a service provider, however, the deciding factor is not the technical and labour expenditure behind the warranted service, but the price and the SLAs in conjunction with their degree of fulfilment.

Internet

A benchmarking process for internet services focuses on two areas: the “unspectacular” surfing component, and web hosting. To enable surfing, a provider offers a line with a certain bandwidth “into the ether” as well as a DNS service that translates the requests. Any expenditure over and above this results from the security requirements. In hosting, servers and applications are the object of classic benchmarks. A specifically network-related assessment, on the other hand, applies to businesses that, for instance, operate large portals with load balancers under their own management. Users here often face particular challenges because traffic peaks fluctuate enormously with the time of day and the season. In online shopping, for instance, the Christmas business is as important as it is for conventional retailers, so suppliers cannot afford any failures in their IT systems.

Remote Access Service (RAS)

Access to corporate networks by staff on the move puts the infrastructure and the people responsible for it to a severe test: ideally, remote access should enable everything that is permitted without allowing anything that is not. The use of DSL and VPNs is par for the course these days, although some companies are still using legacy technologies – in line with the old adage ‘never touch a running system’. Frequently, businesses manage their VPN servers in-house because the provider cannot take on this role. The company must decide which users are to be allowed to access which data by which means. The range of options, from simple solutions (e.g. via IPsec encryption) to sophisticated procedures (access via the allocation of tokens and certificates, for instance), is large. A typical measured quantity in a network benchmark in the RAS environment is the cost or price per user. The particular challenge for the benchmarker is to ensure a clear separation of the often intermeshed technology levels in the RAS environment so that the harmonisation effort in the analysis phase is kept to a minimum.
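
A hypothetical illustration of the price-per-user KPI, with the RAS costs first separated by access technology; the categories and figures below are invented for demonstration only.

```python
# Hypothetical figures: RAS costs separated by technology level before calculating
# the price per user, so that dissimilar access methods are not blended together.

ras_costs_eur_per_month = {
    "ipsec_vpn":         (18_000, 900),  # (total monthly cost in EUR, number of users)
    "token_certificate": (12_000, 300),
}

for tech, (cost, users) in ras_costs_eur_per_month.items():
    print(f"{tech}: {cost / users:.2f} EUR per user and month")

# ipsec_vpn: 20.00 EUR, token_certificate: 40.00 EUR – a single blended figure
# would hide the fact that the token/certificate users cost twice as much per head.
```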

Security

IT security comprises services usually provided internally and (where necessary) external services, so it is normally the costs that are evaluated by means of a benchmark. The share of security applications in the overall budget has risen over the last decade because the threats are becoming more sophisticated and the potential damage more serious. The security infrastructure, consisting of firewalls, demilitarised zones (DMZ), intrusion detection systems (IDS) and intrusion prevention systems (IPS), must offer high availability and redundancy.

The costs per external link have proved to be the best KPI for a security benchmark. These links connect the business to the outside world, e.g. a bank to a provider of financial information. Benchmarking the costs per firewall rule would not deliver a valid parameter, because the same protection may be achieved with a single rule in one system and with ten rules in another, so comparability would no longer be assured. An external link requires a certain set of firewall rules, which can also be defined as a “configuration per external link”. This is the main cost driver: the more complicated the link, the more complicated the software, the hardware used (e.g. with additional throughput) and the set of rules. An employee’s home office is another entry point that needs to be controlled accordingly.

Logically, if there are 100 home office workers using the same access technology, this still represents only one link, because a uniform set of rules is applied. This allows comparability to be achieved between the security applications of different companies. A further aspect of a security benchmark is personnel resources. Security specialists normally have a higher level of expertise than office LAN support staff, for instance, because very high SLAs are needed in the security system; after all, the security must be able to keep pace with the highest SLA of the external links. If, for example, the WAN connection has an availability of 99.92 per cent, it is not enough for the security to guarantee an availability of just 99.7 per cent: the higher SLA on the WAN would then not be worth paying for, because it would be negated by the directly integrated security.
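
A short worked example of the availability argument: for components connected in series without redundancy, the end-to-end availability is the product of the individual availabilities. The 99.7 per cent figure is the assumed security SLA from the example above.

```python
# Serial availability: the WAN SLA from the example and an assumed 99.7 per cent
# for the security components in front of it.

wan_availability = 0.9992       # 99.92 per cent
security_availability = 0.997   # 99.7 per cent

end_to_end = wan_availability * security_availability
print(f"End-to-end availability: {end_to_end:.4%}")  # about 99.62 per cent

# The chain is only as good as its weakest link: the premium paid for the
# 99.92 per cent WAN SLA is wasted if the security layer only reaches 99.7 per cent.
```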

However, benchmarking is rarely applied in order to reduce outgoings in security – most businesses would rather cut expenditure in other areas of IT than in security. Who would want to be responsible if something serious actually happened because they had put their foot on the cost brakes?

Joachim Hess, Consultant
