SDN and NFV
Published in Dijiang Huang, Ankur Chowdhary, Sandeep Pisharody, Software-Defined Networking and Security, 2018
The RouteFlow Controller interacts with the RouteFlow Server through the RouteFlow Protocol. The Virtual Environment (VE) consists of the RouteFlow Client and a Routing Engine. The RouteFlow Client collects the Forwarding Information Base (FIB) from the Routing Engine (Quagga, BIRD, XORP, etc.). The RouteFlow Client transforms these FIB entries into OpenFlow tuples, which are forwarded to the RouteFlow Server; the server is responsible for deriving the routing logic from these tuples. The routing logic is then transferred to the RouteFlow Controller, which defines the match fields and the action taken against each match. The VE is connected directly to the RouteFlow Controller through a virtual switch, such as OVS. This direct connection between the VE and the controller reduces delay by providing a direct mapping between the physical and virtual topologies. The first phase of RouteFlow development used no database, which could choke the RouteFlow Server under load. To overcome this issue, NoSQL databases (MongoDB, Redis, CouchDB) were introduced into the RouteFlow architecture to provide inter-process communication between its components. RouteFlow performs multiple operations in different scenarios: i) logical split, ii) multiplexing, and iii) aggregation. All routing tasks are performed by the virtual environment, which provides flexibility. The successive phases of RouteFlow development make it possible to integrate it with SDN, so much so that RouteFlow is considered the basic architecture for controlling routing in SDNs.
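The translation step the excerpt describes, taking a FIB entry learned from the routing engine and emitting an OpenFlow-style match/action tuple, can be sketched as follows. This is a minimal illustration, not RouteFlow's actual API: the `FibEntry` type, `to_openflow_tuple` function, and dictionary layout are all hypothetical.

```python
# Hypothetical sketch of the RouteFlow Client's FIB-to-OpenFlow translation.
# FibEntry and to_openflow_tuple are illustrative names, not RouteFlow's real API.
from dataclasses import dataclass

@dataclass
class FibEntry:
    prefix: str       # e.g. "10.0.0.0/24", learned from the routing engine (Quagga, BIRD, ...)
    next_hop: str     # next-hop IP address
    out_port: int     # outgoing interface/port

def to_openflow_tuple(entry: FibEntry) -> dict:
    """Translate one FIB entry into an OpenFlow-style match/action pair."""
    return {
        "match": {"eth_type": 0x0800, "ipv4_dst": entry.prefix},  # match IPv4 traffic to the prefix
        "actions": [{"set_next_hop": entry.next_hop},             # rewrite toward the next hop
                    {"output": entry.out_port}],                  # send out the FIB's interface
    }

flow = to_openflow_tuple(FibEntry("10.0.0.0/24", "192.168.1.1", 3))
```

The RouteFlow Server would then assemble routing logic out of many such tuples and hand it to the controller, which installs the corresponding match/action rules in the switches.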
Efficient name matching based on a fast two-dimensional filter in named data networking
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Second, NDN utilizes three distinct forwarding data structures [7,8]. The Pending Interest Table (PIT) stores unsatisfied Interests: a new entry is added when a new Interest packet arrives and is removed when it is satisfied by the corresponding Data packet. The Content Store (CS) is a buffer/cache memory that saves previously processed Data packets in case they are re-requested later. The Forwarding Information Base (FIB) is used for forwarding Interest packets based on the longest prefix match. In the NDN paradigm, as explained in Figure 1, all of these data structures are consulted at different points of the packet forwarding process. When an Interest packet arrives, a router first examines the Content Store for a match. If there is a match, the router responds by returning the Data packet through the interface from which the Interest packet arrived. Otherwise, the router examines its PIT for a matching entry; if such an entry exists, it simply registers the incoming interface of the Interest packet in that PIT entry. If there is no matching PIT entry, the router forwards the Interest packet toward the data container(s) based on the information in the FIB and the router's adaptive forwarding strategy. If the router receives the same Interest from multiple nodes, it forwards only the first incoming one toward the data container(s). When a Data packet arrives, a router finds the matching PIT entry and forwards the Data packet to every interface registered in that entry. It then deletes that PIT entry and stores the data in the Content Store [9,10].
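The CS → PIT → FIB lookup order described above can be sketched as a short pipeline. The data structures are deliberately simplified (plain dictionaries keyed by '/'-separated name strings), and the function and variable names are illustrative, not part of any NDN implementation.

```python
# Minimal sketch of the NDN Interest pipeline described above (CS -> PIT -> FIB).
# All names and data layouts are simplified illustrations.

content_store = {"/video/clip1": b"data-bytes"}   # CS: name -> cached Data
pit = {}                                          # PIT: name -> set of incoming faces
fib = {"/video": 2, "/": 1}                       # FIB: name prefix -> outgoing face

def longest_prefix_match(name):
    """Return the FIB face for the longest matching name prefix."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return fib.get("/")

def on_interest(name, in_face):
    if name in content_store:            # 1. CS hit: answer from cache
        return ("data", content_store[name], in_face)
    if name in pit:                      # 2. PIT hit: aggregate, do not re-forward
        pit[name].add(in_face)
        return ("aggregated", None, None)
    pit[name] = {in_face}                # 3. PIT miss: record it, forward via FIB
    return ("forwarded", None, longest_prefix_match(name))
```

For example, a second Interest for the same name from a different face is only aggregated into the existing PIT entry, which is why just the first incoming Interest travels upstream.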
A Survey on Packet Switching Networks
Published in IETE Journal of Research, 2022
First and foremost, Cisco Express Forwarding (CEF) must be enabled on a Cisco router before MPLS technology can be used. The CEF table is used to create the Forwarding Information Base (FIB) table, an optimized form of the routing table, or Routing Information Base (RIB). The FIB table is not very different from the RIB table, as both contain the information (next hop and outgoing interface) for specific routes. The advantage of the FIB table, however, is that more packets can be forwarded per second because the router locates the correct entry much faster. Another table, the Label Forwarding Information Base (LFIB), plays a significant role in forwarding packets in the MPLS network. This table is created from the FIB table and the Label Information Base (LIB) table. The LIB contains all labels and related information used by a Label Switch Router (LSR) in forwarding packets. An LSR is a router that is aware of MPLS technology; these are the providers (Ps) in the MPLS domain, while the first and last routers (PEs) in the MPLS domain are known as Label Edge Routers (LERs). All LSRs and LERs assign labels to packets independently. They also exchange label information with one another using the Label Distribution Protocol (LDP). After establishing LDP sessions, all routers in the MPLS domain build their MPLS forwarding tables, which are the LFIB tables, and the LSRs/LERs use these tables to perform three types of forwarding on incoming packets: IP-to-Label forwarding, Label-to-Label forwarding, and Label-to-IP forwarding. The type of forwarding applied depends on the information saved in the LFIB tables. IP-to-Label forwarding means that a label is injected into the incoming packet by the first LER in the MPLS domain; this injection of the label is also called pushing the label. Label-to-Label forwarding, also known as swapping/switching the label, is performed by the LSRs.
When a labelled packet reaches an LSR, the LSR forwards the packet to the next related node, but before doing so it swaps the current label with its own assigned label. The last forwarding type is Label-to-IP forwarding, also called removing the label. In this popping operation, the last P receives a labelled packet from the previous P, removes the label, and forwards the packet as a regular IP packet to the LER.
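The three LFIB operations, push at the ingress LER, swap at each core LSR, and pop before the egress LER, can be illustrated with a small sketch. The function names and the packet/label representation here are hypothetical, chosen only to mirror the terminology above.

```python
# Illustrative sketch of the three LFIB forwarding operations (all names hypothetical).
# A "packet" is a dict; its label stack is a list with the top label at index 0.

def push(packet, label):
    """IP-to-Label: the ingress LER pushes a label onto the packet."""
    packet.setdefault("labels", []).insert(0, label)
    return packet

def swap(packet, lfib):
    """Label-to-Label: an LSR replaces the top label using its LFIB mapping."""
    packet["labels"][0] = lfib[packet["labels"][0]]
    return packet

def pop(packet):
    """Label-to-IP: the last P pops the label; the packet continues as plain IP."""
    packet["labels"].pop(0)
    return packet

pkt = push({"dst": "10.1.1.1"}, 100)   # ingress LER: push label 100
pkt = swap(pkt, {100: 200})            # core LSR: swap 100 -> 200 per its LFIB
pkt = pop(pkt)                         # last P: pop, deliver plain IP to the LER
```

Each router's LFIB decides which of the three operations applies to a given incoming label, which is exactly the dependence on LFIB contents described in the excerpt.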