Today, connecting to the Internet is easy: you get an Ethernet driver, run the TCP/IP protocol stack on top of it, and dissimilar networks in remote locations can communicate with each other. Before TCP/IP, networks had to be connected together manually; with the TCP/IP stack, networks interconnect almost by themselves. That ease of interconnection helped the Internet explode, followed by the World Wide Web.
So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node over a point-to-point communication channel, with IP addresses identifying the source and destination. At its core, a network ships bits, and there are two ways to design its namespace: you can name the locations to ship the bits to, or you can name the bits themselves. Today’s TCP/IP architecture picked the first option. We’ll come back to the second option later in the article.
It essentially follows the communication model of the circuit-switched telephone network. We replaced phone numbers with IP addresses and circuit switching with packet switching and datagram delivery, but the point-to-point, location-based model stayed the same. That made sense decades ago; it makes far less sense now that computing and communication technologies, and our view of the world, have changed considerably.
New applications, such as securing the IoT, distributing vast amounts of video to a global audience, and viewing content on mobile devices, place new demands on the underlying technology. The Internet and how we use it have changed markedly since TCP/IP took hold in the 1980s. It was built as a location-based, point-to-point system, which doesn’t fit well in today’s environment: people look to the Internet for “what” it contains, but the communication pattern is still expressed in terms of the “where.”
The changing landscape
Originally, the goal of networking protocols was to let you share resources among computers. Forty years ago a resource such as a printer was expensive, sometimes costing as much as a house. Back then, networking had little to do with sharing data; the data lived on external tapes and card decks.
How we use networks today is very different from how we used them in the past. Data is now the core: we live in an information-centric world driven by mobile devices, digital media, social networking, and video streaming, to name a few.
The tools we use for networking today still have TCP/IP as their foundation, but TCP/IP was designed in the late 1970s, so the old tricks fall short in many ways. When we collide IP’s host-centric architecture with today’s information-centric world, we run into many challenges.
Networking today has created a brand-new world of content, and IP networking does not seem to fit it. IP does not work well with broadcast links or links that don’t have addresses, and it is ill-equipped for mobility because its model assumes two fixed communicating nodes. Yet today’s world is all about mobile, and mobile pushes IP networking out of its comfort zone. What we need today is different from what we needed 40 years ago.
While I sit in my coworking space – cboxworking – it’s easy to connect to the Internet and carry out my work; I’m online in a matter of seconds. Many moving parts under the hood of networking make that possible. We have accepted them as the norm, but those moving parts create complexity that has to be managed and troubleshot.
An example for more clarity
Let’s say you are on your home laptop and want to go to www.network-insight.net. IP doesn’t send to names; it sends to IP addresses. So something has to translate the name into an IP address, and that is the job of the Domain Name System (DNS).
Under the hood, a DNS request is sent to the configured DNS server and an IP address is returned. So you might ask: how does your laptop know about, and talk to, a DNS server in the first place?
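Seen from an application, that whole exchange collapses into a single resolver call. Here is a minimal Python sketch, using `localhost` so it works without a network round trip; a public name like www.network-insight.net would trigger a real query to the configured DNS server:

```python
import socket

def resolve(hostname: str) -> str:
    """Ask the OS resolver (which consults the configured DNS
    server) to turn a name into an IPv4 address."""
    return socket.gethostbyname(hostname)

# "localhost" resolves locally; a public hostname would cause
# an actual DNS request on the wire.
print(resolve("localhost"))  # typically 127.0.0.1
```

Everything the article describes next – DHCP, the gateway, ARP – happens beneath this one call.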
That is the job of the Dynamic Host Configuration Protocol (DHCP). Your laptop sends a DHCP Discover message and gets back its network configuration, such as the IP address of the default gateway and a couple of DNS server addresses.
Now the laptop needs to send the DNS request to a server that is not on the local network, which means sending it via the local default gateway. An IP address is a logical construct that can be assigned dynamically; it has no physical meaning whatsoever. As a result, it has to be bound to a Layer 2 link-level address.
So you need something that binds the gateway’s IP address to its Layer 2 link-level address, and the Address Resolution Protocol (ARP) does exactly that. ARP effectively asks, “I have this IP address, but what is the MAC address?”
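The whole DNS–DHCP–ARP dance above can be sketched as a toy simulation. Every address and table entry below is a hypothetical stand-in for a real protocol exchange, but the decision logic – resolve the name, check whether the destination is on-link, then ARP either the host or the gateway – is the real one:

```python
import ipaddress

# Values DHCP would hand the laptop (all hypothetical).
MY_IP      = ipaddress.ip_address("192.168.1.10")   # our own lease
SUBNET     = ipaddress.ip_network("192.168.1.0/24")
GATEWAY_IP = "192.168.1.1"

# Toy DNS and ARP tables standing in for live protocol traffic.
DNS_TABLE = {"www.network-insight.net": "203.0.113.50"}  # hypothetical IP
ARP_TABLE = {"192.168.1.1": "aa:bb:cc:dd:ee:01",
             "192.168.1.20": "aa:bb:cc:dd:ee:02"}

def next_hop_mac(hostname: str) -> str:
    """Resolve a name down to the Layer 2 address we actually frame to."""
    dest_ip = DNS_TABLE[hostname]                  # DNS: name -> IP
    if ipaddress.ip_address(dest_ip) in SUBNET:    # destination on-link?
        return ARP_TABLE[dest_ip]                  # ARP the host itself
    return ARP_TABLE[GATEWAY_IP]                   # off-link: ARP the gateway

print(next_hop_mac("www.network-insight.net"))  # off-link, so gateway's MAC
```

Three tables, three protocols, just to send one packet – which is exactly the complexity NDN sets out to remove.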
With the introduction of Named Data Networking (NDN), however, these complicated moving parts and IP addresses get thrown away. NDN uses a name as the identifier instead of an IP address, so there is no more need for IP address allocation, or for DNS to translate the names used by applications into the addresses used by IP for delivery.
Introducing named data networking
Named Data Networking (NDN) was triggered back in the early 2000s by a research direction called information-centric networking (ICN) that included work by Van Jacobson. It later became a National Science Foundation (NSF) project in 2010. The researchers wanted to create a new architecture for the future Internet. NDN takes the second option of network namespace design – naming the bits – unlike TCP/IP, which took the first option, naming locations.
Named Data Networking (NDN) is one of the five research projects funded by the U.S. National Science Foundation under its future Internet architecture program. The other projects are MobilityFirst, NEBULA, eXpressive Internet Architecture and ChoiceNet.
NDN proposes an evolution of the IP architecture in which packets can name objects other than communication endpoints. Instead of delivering a packet to a given destination address, the network fetches data identified by a given name at the network layer. Fundamentally, NDN doesn’t even have the concept of a destination.
NDN routes and forwards packets based on names, which eliminates the problems addresses cause in the IP architecture: address space exhaustion, network address translation (NAT) traversal, IP address management, and the upgrade to IPv6.
With NDN, the naming scheme at the application layer becomes the naming at the network layer. NDN names are opaque to the network. Significantly, this lets each application choose its own naming scheme, which can then evolve independently of the network.
NDN takes the metadata – the schema used to describe data at the application layer – and pushes it down into the network layer. This removes the need for IP addresses at the network layer because names are used instead: you route on a hierarchy of names, drawn from the application’s metadata, rather than on IP addresses.
In summary, the NDN network layer has no addresses; it uses application-defined namespaces, naming the data instead of the data’s locations. In NDN, consumers fetch data instead of senders pushing packets to destinations. And while IP has a finite address space, NDN’s namespace is unbounded.
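To make routing on a hierarchy of names concrete, here is a toy sketch of longest-prefix matching on hierarchical name components – the NDN analogue of an IP routing lookup. The names, prefixes, and “faces” (NDN’s term for interfaces) below are all hypothetical:

```python
# Toy NDN-style forwarding: lookups are done on hierarchical name
# components, not on addresses. All entries here are hypothetical.

FIB = {  # name prefix (as a tuple of components) -> outgoing face
    ("ndn", "edu"): "face-1",
    ("ndn", "edu", "ucla"): "face-2",
}

def forward(interest_name: str) -> str:
    """Pick the face with the longest matching name prefix."""
    components = tuple(interest_name.strip("/").split("/"))
    for length in range(len(components), 0, -1):
        face = FIB.get(components[:length])
        if face:
            return face
    raise LookupError("no route for " + interest_name)

print(forward("/ndn/edu/ucla/videos/demo.mp4"))  # longest match: face-2
print(forward("/ndn/edu/mit/papers/ndn.pdf"))    # falls back to: face-1
```

The lookup mechanics mirror IP’s longest-prefix match, but the keys are application-chosen name components rather than fixed-length addresses, which is why the namespace is unbounded.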
Named data networking and security
IP pushes packets to a destination address; NDN, by contrast, fetches data by name. With this approach, the security can travel with the data itself: you are securing the data, not the connections.
With TCP/IP, the need for security came later, so we bolted on Transport Layer Security (TLS) and encrypted point-to-point channels. TCP/IP leaves security to the endpoints of each channel, and that is never going to be true end-to-end security. NDN takes security down to the data itself, making it end-to-end rather than point-to-point.
NDN can use a cryptographic signature that binds the name to the content, so that neither can be altered without detection. It does this by requiring data producers to cryptographically sign every data packet, which ensures data integrity and forms a data-centric security model. Ultimately, the application now controls the security perimeter.
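A rough sketch of that name-to-content binding, using an HMAC from Python’s standard library as a stand-in for the public-key signature a real NDN producer would use (the key and names here are hypothetical):

```python
import hashlib
import hmac

PRODUCER_KEY = b"hypothetical-producer-key"  # stand-in for a real key pair

def sign_packet(name: str, content: bytes) -> dict:
    """Sign name and content together, so altering either one
    invalidates the signature (HMAC stands in for the producer's
    public-key signature in real NDN)."""
    tag = hmac.new(PRODUCER_KEY, name.encode() + content,
                   hashlib.sha256).hexdigest()
    return {"name": name, "content": content, "signature": tag}

def verify_packet(pkt: dict) -> bool:
    expected = hmac.new(PRODUCER_KEY,
                        pkt["name"].encode() + pkt["content"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, pkt["signature"])

pkt = sign_packet("/ndn/videos/demo.mp4", b"frame-data")
assert verify_packet(pkt)          # untouched packet verifies
pkt["name"] = "/evil/renamed"      # tamper with the name...
assert not verify_packet(pkt)      # ...and verification fails
```

Because the signature covers name plus content, a consumer can verify a packet no matter where it was fetched from – a cache, a neighbor, or the original producer.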
Applications can control access to data via encryption and distribute keys as encrypted NDN data. This limits the data security perimeter to the context of a single application.
Security and the old style of networking
When we examine security in our current world, it doesn’t really exist, does it? We can never be 100% secure, even though 100% security is what today demands. The problem is that the network has no visibility into what we are doing on the wire; its focus is connectivity, not data.
So when you talk about security at the network level, IP can only ensure that the bits in transit don’t get corrupted, and that does not solve the problem. Today’s networks cannot see the content. Essentially, we can only pretend that we are secure: we have built a perimeter, but that framework didn’t work well in the past and has not proved viable today.
The perimeter has become fluid, with no clear demarcation points, which makes matters even worse. We are making progress with zero trust, micro-segmentation, and the software-defined perimeter, but today’s perimeter model can only slow attackers down for a little while.
A persistent bad actor will eventually get past your guarded walls. Attackers are even finding new ways to exfiltrate data through social media accounts such as Twitter, and through DNS. Because DNS is not a file transfer mechanism, firewalls often don’t inspect it for that kind of abuse.
The network cannot look at the data; the payload is opaque to it. Traffic converges on the destination, and that is the basis of every DDoS attack. It’s not the network’s fault – the network is doing its job of delivering traffic to the destination – but this hands all the advantages to the attacker. If we change to a content model, however, that class of DDoS attack loses its footing.
With NDN, when traffic comes back, the first question is, “Did I ask for this data?” If you haven’t asked, it’s unsolicited, and you simply ignore it, which blunts DDoS. The current TCP/IP architecture struggles to meet this requirement.
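That “did I ask for this?” check is what NDN’s Pending Interest Table (PIT) provides in each forwarder. A toy sketch, with hypothetical names:

```python
# Toy Pending Interest Table (PIT): data is accepted only if a
# matching interest was expressed first. Names are hypothetical.

pit = set()

def express_interest(name: str) -> None:
    pit.add(name)                  # remember what we asked for

def on_data(name: str) -> bool:
    """Accept solicited data; drop everything else."""
    if name in pit:
        pit.discard(name)          # one interest satisfies one data packet
        return True
    return False                   # unsolicited: ignored, no DDoS leverage

express_interest("/ndn/news/today")
assert on_data("/ndn/news/today")       # we asked for it: accepted
assert not on_data("/attacker/flood")   # we didn't: silently dropped
```

Note that consuming the PIT entry also means a second copy of the same data is dropped, so flooding a consumer with replayed packets gains the attacker nothing either.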
Today we deploy many middleboxes for security because IP routers are, as a general definition, stateless. In practice, routers do hold state bolted on by technologies such as VPNs and MPLS, which creates its own conflicts, but statelessness remains the basic model.
As a result, an end-to-end TCP connection rarely exists, which makes TLS security questionable. When you secure the data itself with NDN, you get true end-to-end cryptography. We are facing real problems with IP networking, and we need to solve them with a different design that uproots these limitations. NDN is one of the most interesting and forward-thinking movements I see happening today.
Typically, everyone has multiple devices, and none of them stay in sync without the cloud. That is an IP architectural problem we need to solve. As Lixia Zhang noted in her closing comments in a recent Named Data Networking video, everything talks to the cloud today. When a large provider has an outage, it can affect millions.
That comment made me question where we are headed in the high-tech world of the Internet. Should we rely on the cloud as much as we do? Will NDN kill the cloud, just as content delivery networks (CDNs) killed latency?
This article is published as part of the IDG Contributor Network.