If any vestige of the last century's digital revolution remains with us, it is surely the set of data transport mechanisms in use throughout the internet. Perhaps we should speak in the plural, because there are exactly two mechanisms we have inherited from our digital prehistory: TCP and UDP.
Both transport mechanisms have been with us almost from the beginning, and they are so deeply embedded in our hardware and software that replacing them would be a virtually impossible task: they sit at the heart of all our communications.
However, attempts are being made to improve on last century's internet communication methods. And I say improve, because so far there has been no public attempt to develop an alternative protocol that replaces the existing ones one hundred percent. On the contrary, it seems the workaround for a new protocol that meets the needs of a rapidly growing internet is to recycle the old UDP protocol.
The basic difference between TCP and UDP is the amount of control each exerts over the data it transports. TCP requires constant communication between client and server to manage the connection, and it includes sophisticated mechanisms that track the sequence of the data flow and guarantee its integrity. UDP, by contrast, is far more carefree: it sends everything it is given, indiscriminately, which gives it a wide lead in raw performance but leaves much to be desired in quality control. For these reasons, among others, UDP is typically used for massive data flows such as video and audio streaming, where a reasonable loss of information carries little risk, while TCP is better suited to sensitive data. The most widespread use of TCP is the HTTP protocol, which lets us surf the internet and receive text and images flawlessly.
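The contrast is visible even at the programming-interface level. As a minimal sketch over the loopback interface: a UDP socket just fires a datagram with no handshake, while a TCP socket must `connect()` (the three-way handshake) before any data flows, after which the kernel handles sequencing and retransmission.

```python
import socket

# UDP: connectionless -- a datagram is sent with no handshake and no
# delivery guarantee; loss and reordering are the application's problem.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))       # let the OS pick a free port
udp_port = udp_recv.getsockname()[1]

udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"hello over UDP", ("127.0.0.1", udp_port))
udp_data, _ = udp_recv.recvfrom(1024)
print(udp_data)
udp_send.close()
udp_recv.close()

# TCP: connection-oriented -- connect() performs the three-way handshake,
# and the kernel then guarantees ordering, acknowledgement and integrity.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
tcp_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", tcp_port))   # handshake happens here
conn, _ = server.accept()
client.sendall(b"hello over TCP")
tcp_data = conn.recv(1024)
print(tcp_data)
for s in (client, conn, server):
    s.close()
```

Note how the UDP side needs no connection at all before sending; the cost of TCP's guarantees is precisely the extra round trips discussed below.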
However, TCP's control mechanisms can certainly affect the user experience by introducing waiting times as a result of the sequential processing of data. These delays went unnoticed twenty years ago, but not today, with transfer speeds that can exceed 50 Mbps. Another drawback of TCP is the delay incurred when initiating a connection, commonly measured in RTT (Round-Trip Time), during which client and server exchange information and "greetings" before delivering the data the user requested from the browser.
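Some illustrative arithmetic (the 50 ms figure is an assumption, not from the article) shows why this setup cost matters regardless of bandwidth: the handshakes are latency-bound, so a faster link does not shrink them.

```python
# Assumed figures for illustration: a link with a 50 ms round-trip time.
rtt_ms = 50

tcp_handshake = 1 * rtt_ms     # SYN -> SYN/ACK -> ACK: one full RTT
tls12_handshake = 2 * rtt_ms   # classic TLS 1.2 adds roughly two more RTTs
request_response = 1 * rtt_ms  # the HTTP request/response exchange itself

total_ms = tcp_handshake + tls12_handshake + request_response
print(f"time to first response: {total_ms} ms")
```

On such a link, roughly three of the four round trips are pure setup before any useful data arrives, which is exactly the overhead the proposals below try to shave off.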
In the effort to speed up the internet there have been several proposals, among them a formal IETF (Internet Engineering Task Force) initiative with HTTP/2, the new version of the protocol that web applications use. HTTP/2 still relies on the well-known TCP, but improves browsing speed through techniques such as header compression, multiplexing and asynchronous, pipelined connections.
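Multiplexing is the most visible of these techniques. As a conceptual sketch (not a real HTTP/2 implementation): each transfer is split into frames tagged with a stream id, so several requests can be interleaved on a single TCP connection instead of queueing behind one another.

```python
from itertools import zip_longest

def frames(stream_id, payload, size=4):
    """Split a payload into fixed-size frames tagged with their stream id."""
    return [(stream_id, payload[i:i + size])
            for i in range(0, len(payload), size)]

stream1 = frames(1, b"GET /index.html")
stream3 = frames(3, b"GET /logo.png")

# Interleave both streams on the "wire": neither blocks the other, which
# is the point of HTTP/2 multiplexing over one connection.
wire = [f for pair in zip_longest(stream1, stream3) for f in pair if f]

# The receiver reassembles each stream by its id.
reassembled = {}
for sid, chunk in wire:
    reassembled[sid] = reassembled.get(sid, b"") + chunk

print(reassembled[1])
print(reassembled[3])
```

The frame sizes and stream ids here are arbitrary; the real HTTP/2 framing layer is far richer, but the interleave-and-reassemble idea is the same.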
However, it is the private sector that has come to work with proposals promising to improve the transport layer as we know it today. One of these companies is Fujitsu, which has developed a new TCP-based technology that reduces RTT to one-sixth of the original time and increases TCP throughput by 30 percent over current performance. This new protocol is implemented in software and improves the time needed to detect, retrieve and resend lost packets. The protocol developed by Fujitsu, which still has no name, can also adjust data transfer to the available bandwidth in order to minimize information loss and avoid bottlenecks.
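Fujitsu has not published the internals, but as background it helps to see the kind of timer such a protocol tunes. Standard TCP already adapts its retransmission timeout (RTO) from measured round-trip times, following RFC 6298; detecting and resending lost packets faster means tightening exactly this sort of estimate.

```python
def rto_estimator(samples_ms, alpha=0.125, beta=0.25):
    """Feed successive RTT measurements and return the resulting RTO (ms),
    per the RFC 6298 smoothing formulas. (RFC 6298 also imposes a 1-second
    minimum RTO, omitted here so the adaptive behaviour is visible.)"""
    srtt = rttvar = None
    for r in samples_ms:
        if srtt is None:                  # first measurement seeds the state
            srtt, rttvar = r, r / 2
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
            srtt = (1 - alpha) * srtt + alpha * r
    return srtt + 4 * rttvar              # RTO = SRTT + 4 * RTTVAR

rto = round(rto_estimator([100, 110, 90, 105]))
print(rto)  # 204
```

With stable measurements around 100 ms, the timer settles near 200 ms; a connection that waits a full second (the conservative RFC floor) before retransmitting is exactly the kind of sluggishness a software-level protocol can attack.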
But the coming war of the protocols would not be a war without the presence of Google, which is still striving for internet supremacy. Google has not only tried to unseat the new, not-yet-released HTTP/2 with its own version, dubbed SPDY, but has also redesigned UDP with star elements of TCP, such as security and control over packet reception, while keeping UDP's simplicity and speed. Google's solution for a faster internet is called QUIC.
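A minimal sketch of QUIC's core idea (this is not actual QUIC, whose real framing and crypto are far more involved): keep UDP as the transport, but add TCP-like elements, such as sequence numbers and loss detection, in user space, so the protocol can evolve without kernel changes.

```python
def send_stream(payload, chunk=5):
    """Number each chunk so the receiver can detect loss and reorder."""
    return [(seq, payload[i:i + chunk])
            for seq, i in enumerate(range(0, len(payload), chunk))]

def receive_stream(packets):
    """Reassemble what arrived by sequence number and report the gaps
    (a real implementation would wait for retransmission of the gaps)."""
    got = dict(packets)
    missing = [seq for seq in range(max(got) + 1) if seq not in got]
    data = b"".join(got[seq] for seq in sorted(got))
    return data, missing

packets = send_stream(b"QUIC rides on top of UDP")
# Simulate the network dropping packet 2 and reordering the rest.
delivered = [p for p in reversed(packets) if p[0] != 2]

data, missing = receive_stream(delivered)
print(missing)  # the receiver would signal these so the sender retransmits
```

Because this bookkeeping lives above UDP rather than inside the kernel's TCP stack, it can ship with an application, which is precisely the deployment advantage discussed below.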
Implementing a new transport protocol can be a daunting task that may never come to fruition. An example of this is the IPv6 protocol: it has been around for years already and has still not been fully deployed, despite the fact that the need for it is pressing these days. IPv6 is a clear example of how complicated it can be to replace a well-established protocol, even when the children are worthy heirs of their father.
Moreover, the idea of a software-based "pseudo-protocol" such as the one Fujitsu suggests could be the key to successfully deploying new transport protocols (and even layers), since it could be implemented at the operating-system level. It is definitely a clean, non-invasive solution.
But could a company, free of corporate interest, give a new protocol the push it needs to be adopted by the rest of the digital community? If anyone has enough power to pull off such a task, it is Google. Google is assumed to be the trendsetter as far as technology is concerned, and both its Chrome browser and its Android operating system are definitely weapons of mass adoption for emerging protocols ready to take over the world market.