A word from… Andrea Faggiano of Arthur D. Little

Trends for new services on the Internet

So far, the history of the Internet has been one of remarkable success in organically developing a self-adapting complex of business relationships and technology breakthroughs.
Now the challenge is to enable the Internet to deliver an ever-higher quality of experience for the end-user and for the mission-critical applications that will revolutionize our lives in the future.

 

1 - The Quality of the Internet: a complex matter

The topic of Internet quality is difficult to explore, because the Internet we experience every day is a complex combination of many elements: data transport networks, user devices and applications, glued together by two main protocols that have served, and continue to serve, as universal languages: IP for carrying data across networks (together with TCP, its companion transport and congestion-control mechanism) and HTTP at the application layer. The former established the basis for the global interconnection of data networks, while the latter gave us the very first, and still today the most popular, application: the web browser.
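To make this layering concrete, the short Python sketch below performs a plain HTTP request over a TCP connection: TCP/IP carries the bytes between machines, while HTTP defines what those bytes mean to the application. The hostname used here is only an illustrative placeholder.

    # Minimal sketch: an HTTP/1.1 request carried over a TCP connection.
    # "example.com" is an illustrative placeholder, not a required endpoint.
    import socket

    host = "example.com"

    # TCP (running on top of IP) provides the reliable byte stream...
    with socket.create_connection((host, 80), timeout=5) as conn:
        # ...while HTTP is the application-layer "language" spoken over it.
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        conn.sendall(request.encode("ascii"))

        response = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"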
The challenge of making the Internet better encompasses multiple aspects. Among many others, these include the migration from IPv4 to IPv6 (to provide a number of IP addresses compatible with the Internet of Things scenario), the introduction of newer versions of HTML (HTML5 was finalized in October 2014), the optimization of TCP mechanisms (also referred to as Web Acceleration techniques) and, most importantly, the evolution of data access networks and related interworking solutions.
Telecom operators have put considerable effort and investment into improving the capacity of the Internet access network, or so-called “last mile”, over the past 15 years: in the fixed domain, from copper or coaxial cable to fiber; in the mobile domain, from GPRS to 3G+, and from today's 4G to tomorrow's 5G [note 1].
Less known is the evolution of the so-called “upstream” side of the Internet. This is where Internet access networks connect with (i) each other, (ii) bulk IP traffic transport networks and the undersea cables connecting continents, and (iii) server parks storing the most popular Internet content and applications, located at the edge of the access networks. This part of the Internet consists solely of “IP Interconnection” agreements, which determine the technical and economic conditions under which IP traffic is delivered from the originating party, via the interchangeable delivery networks of multiple Internet connectivity providers (often used in parallel), to the residential Internet access networks of terminating ISPs, and vice versa.
IP Interconnection is, and has been, an essential building block for the quality and functionality of the Internet as ultimately experienced by the end-user, despite the fact that the end-user is not a party to IP-Interconnection arrangements. 

 

Slowly but surely, the upstream side of the Internet is flattening towards an architecture where a myriad of data parks (hosted in so-called “proximity datacenters”) offer access to content and applications just a few milliseconds away from the end user, rather than tens or hundreds of milliseconds as today. In this architecture, the classical IP Interconnection point with the server park where content or applications are stored is much closer to the end user.
This second set of developments improves the average capacity and latency of the Internet, addressing a second-order bottleneck in Internet quality.
The Internet has today become mission critical for most Content & Application Providers (also called OTTs [note 2]).
Minor disturbances in the quality of delivery directly impact the willingness of end-users and advertisers to pay for online services.
The control of Internet quality is definitely the next challenge for the telecom community.

 

2 - IP Interconnection is robust and competitive, and it will continue to evolve

IP Interconnection is the glue of the Internet and must be preserved against breakdown scenarios – for example, in circumstances of extreme concern about national or regional cyber-security.
IP Interconnection has so far adapted well to support the changing nature of the Internet. The IP Interconnection value chain is converging, but remains dynamic and competitive. The proliferation of Content Delivery Networks and Internet Exchanges, the commoditization of IP transit and the pressure on CDN prices challenge existing interconnection models and enable new ones. This is evidence of healthy dynamism.
From the early days of “IP transit” and “peering”, the market has evolved to offer a genuine mix of viable application/content delivery strategies to all players seeking connectivity.
Content & Application Providers and ISPs are setting the pace and determining the nature of IP Interconnection innovation by vertically integrating and/or interconnecting directly, which disintermediates pure Internet connectivity providers to some extent.
It should be noted that changes in the IP Interconnection ecosystem have led to some tension between IP Interconnection players in recent years. However, disputes concern less than 1% of all IP Interconnection agreements worldwide and are resolved without regulatory intervention in more than half of these cases.
Finally, end-users have not been substantially or structurally affected by IP Interconnection disputes, and this segment of the Internet remains completely transparent to them.

 

Dynamism in the IP Interconnection platform - Source: Cisco, Informa, OECD, Arthur D. Little analysis

 

3 - The Application and Service landscape is changing: watch out for Future Trends

The future of the Internet will be dictated by future application and service scenarios, but the Internet is a living, continuously evolving system, and its future is difficult to predict.
From a consumer perspective, the Internet has been transformed into a new media platform, as the nature of Internet traffic has changed from static data and text file transfer to streaming and interactive media content (now more than 60% of total Internet traffic). Today the Internet delivers the new TV!
From an enterprise perspective, HTTP and HTTPS traffic continues to dominate, which indicates that enterprise applications are continuing to transition away from on-premises data centers towards web-based consumption and other new delivery models. This means that the migration to cloud-based models is really happening!
Moving forward, a relevant scenario has already been anticipated [note 3]: the rise of the Tactile Internet.
The term refers to a data network able to deliver data with a latency of about 1 millisecond, by analogy with the human tactile system, which operates at such extreme response speeds.
Networks with extremely low-latency specifications would enable previously unimaginable scenarios in automation and remote assistance.
In this respect, imagination becomes the only limit in defining potential applications: remote surgery, industrial control, high-precision agriculture, robotics, and so on.
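A back-of-the-envelope calculation helps explain why the 1-millisecond target forces content and computation so close to the user: light in optical fiber covers roughly 200 km per millisecond, so once part of the budget is reserved for switching and processing, the serving node can only be a few tens of kilometers away. The figures in the sketch below are rough, illustrative assumptions rather than measured values.

    # Back-of-the-envelope sketch: how far away can a server sit if the
    # round-trip latency budget is 1 ms? All figures are illustrative assumptions.
    SPEED_IN_FIBER_KM_PER_MS = 200.0   # light in fiber travels at roughly 2/3 of c
    ROUND_TRIP_BUDGET_MS = 1.0         # the Tactile Internet target
    PROCESSING_OVERHEAD_MS = 0.3       # assumed budget for switching/processing

    propagation_budget_ms = ROUND_TRIP_BUDGET_MS - PROCESSING_OVERHEAD_MS
    one_way_distance_km = SPEED_IN_FIBER_KM_PER_MS * propagation_budget_ms / 2

    print(f"Maximum one-way fiber distance: ~{one_way_distance_km:.0f} km")
    # -> roughly 70 km: content and applications must sit in proximity
    #    datacenters very close to the end user, as argued above.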

 

Communication: services provided by applications (Skype, WhatsApp, iMessage, FaceTime, etc.); Data: file sharing (BitTorrent, eDonkey, etc.), web browsing, social networking, email, etc.; Media: streamed or buffered audio and video (Netflix, non-linear TV services); (1) 2009-2012 CAGR; (2) interviews - Source: Cisco, Sandvine, Aryaka, Arthur D. Little analysis

4 - Enterprises may be interested in a special-purpose “mission critical” Internet

Interestingly, the Internet is attracting more and more attention from enterprises as an alternative to classical leased lines and VPN solutions.
As more companies decide to take advantage of the scale, flexibility and agility delivered by cloud services, they increasingly rely on Internet links for enterprise applications, including real-time collaboration tools. In other words, Internet links are becoming an extended part of the enterprise WAN, since they are much more convenient and cheaper than IP VPN solutions, especially those based on MPLS technology.
However, such reliance on cloud services has raised the question of whether the Internet can deliver the performance and reliability required by enterprises to access their cloud applications.
Based on recent findings [note 4], 25% of the time the Internet is reported to fail to deliver the performance and reliability required by enterprise applications, such as real-time collaboration and VoIP.
Best-effort Internet connectivity is indeed found to be highly unpredictable, with large variations across link types, ISPs, cities, and times of day.
Such “best-effort” performance is acceptable when use of the Internet is confined to free entertainment, but it is hard to accept for premium entertainment and enterprise applications, which increasingly dominate Internet usage today.
Notably, despite the unpredictability of the Internet, by combining multiple Internet links, Software-Defined WAN (SD-WAN) technology and special IP Interconnection solutions, enterprises can buy Internet-based connectivity services whose performance and reliability are much greater than those of standard best-effort Internet access.
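To give a flavor of how such a combination can work, the toy sketch below illustrates the basic idea behind SD-WAN path steering: continuously measure each parallel Internet link and move each application class onto the link that currently meets its quality targets. Link names, thresholds and measurements are hypothetical, and real SD-WAN products are considerably more sophisticated.

    # Toy sketch of SD-WAN path steering (not a real product API): pick the best
    # of several parallel Internet links for a given application class, based on
    # measured latency and packet loss. All values below are hypothetical.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Link:
        name: str
        latency_ms: float   # measured round-trip latency
        loss_pct: float     # measured packet loss

    # Hypothetical measurements over two parallel best-effort Internet links.
    links = [
        Link("ISP-A broadband", latency_ms=35.0, loss_pct=0.1),
        Link("ISP-B LTE backup", latency_ms=60.0, loss_pct=0.5),
    ]

    # Assumed per-application thresholds (e.g. for VoIP / real-time collaboration).
    MAX_LATENCY_MS = 150.0
    MAX_LOSS_PCT = 1.0

    def pick_link(candidates: List[Link]) -> Optional[Link]:
        """Return the lowest-latency link that meets the quality thresholds."""
        usable = [l for l in candidates
                  if l.latency_ms <= MAX_LATENCY_MS and l.loss_pct <= MAX_LOSS_PCT]
        return min(usable, key=lambda l: l.latency_ms) if usable else None

    best = pick_link(links)
    print("Steering real-time traffic over:", best.name if best else "no usable link")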
This is an area of investigation and innovation of high interest to telecom operators willing to offer more quality control and a lower TCO [note 5] to enterprises.
However, all this requires effort and investment, and it should be compatible with the current debate around net neutrality.

 

% of time Internet quality is below the acceptable level for enterprise applications - Source: VeloCloud, Arthur D. Little analysis

5 - The Open Internet debate can affect the speed of innovation, a new deal is necessary

In this evolutionary context, regulators face multiple challenges in adapting the regulatory framework to such a fast-evolving industry.
As of today, regulation is not ready for the competitive dynamics that are shaping the digital and telecom sectors, as demonstrated, for example, by the fragmented debate around net neutrality.
It is hoped that some form of regulation will be introduced to better favor cooperation among all players, whether Telcos, device players or Content and Application providers.
Indeed, a win-win relationship is envisaged by many, because the interests at stake can converge.
Telcos:

  • want premium content on their networks to support broadband services take-up and satisfy their core customers’ needs;
  • need to invest in developing the capacity of their networks and in innovation, and have looked into different options to monetize this asset;
  • wish to provide guaranteed access to certain services.

Application and Content providers:

  • need guaranteed quality services to successfully distribute their content/ applications;
  • desire the highest level of proximity to final customers;
  • expect other potential services, such as secure payments, billing services, etc.

More specifically, it is highlighted that greater legal certainty on net neutrality, together with regulation enabling a fairly balanced profit split between Telcos and Content and Application providers that partner for content distribution, would increase the predictability of returns on future investments, and therefore incentivize investment.
Nevertheless, it is also mentioned that regulation of managed traffic services could induce a situation in which telecom operators, given their strong bargaining positions in local access, would get to decide which services are discriminated against.
The debate is on.

 

6 - Network monetization passes through innovation and new business models

Content and Application providers demand quality delivery of their content, and hence quality networks; the Netflix and YouTube ISP speed indexes illustrate this interest.
Telecom operators must find a way to provide enhanced services for certain applications in a non-discriminatory way. This would enable them to capture value from the improved quality without infringing the net neutrality principle [note 6].
With large international players such as Apple, Google, Amazon and Facebook, there will always be considerations of size and footprint, even if the telecom operator is strong in its country.
Large Application and Content providers will not need more than network services from telecom operators, and in extreme cases this may reduce to the local loop alone. However, these players depend on telecom operators to provide high-quality access for a premium user experience, and telecom operators have a clear opportunity here.
The anticipated and progressive introduction of SDN (Software-Defined Networking) and NFV (Network Function Virtualization) technologies will provide further scope to carve out new functionalities to be offered for a fee to Application and Content providers.

 

Conclusions

The control of Internet quality is the next frontier in networking. However, best-effort Internet quality is far from the level required by enterprise applications.

Enabling the “millisecond Internet” will open up previously unimaginable scenarios for our societies and economies.
Multiple aspects must be taken into consideration for achieving a substantial improvement and quality control of the Internet, and IP Interconnection is an essential driver of Internet quality.
The technology outlook is favorable, and the sector will benefit from the introduction of SDN and NFV technologies.
Difficulties lie ahead: business, regulatory and political forces should work cohesively to achieve the best outcome in the shortest timeframe.
The opportunity is clear and ripe for the taking.

 

Notes

  1. Referring only to 3GPP technologies
  2. OTT: Over The Top
  3. See ITU: The Tactile Internet
  4. See Internet Quality Report, 2H 2014, by VeloCloud
  5. TCO: Total Cost of Ownership
  6. To some extent, the IPTV service already provides this, forming a separate virtual network carrying video with a higher level of quality of service.
 
