Mobile Medium Access Control Protocols for Wireless Sensor Networks
Published in Shafiullah Khan, Al-Sakib Khan Pathan, Nabil Ali Alrajeh, Wireless Sensor Networks, 2016
Bilal Muhammad Khan, Falah H. Ali
Node mobility in WSNs can be beneficial in certain scenarios, for example by increasing network lifetime and network coverage [1–3]. However, as discussed in Section 5.1, mobility also imposes new challenges and problems on the design of MAC protocols. As a node moves away from its communicating partner, frame loss and packet drops can occur due to significant signal strength variations [4]. Moreover, due to hidden nodes and the lack of synchronization with the new cluster, mobile nodes contribute to an increase in the number of collisions, which plays a significant role in degrading QoS in a wireless network. Especially in WSNs, where resources are limited and energy conservation is critical, collisions cause excessive energy loss. Packet loss increases the number of retransmissions, resulting in severe throughput degradation, energy loss, inefficient bandwidth utilization, and higher latency for the network. Designing a MAC protocol for mobile scenarios is a challenging task, especially for WSNs, which are inherently resource constrained. The prime objective of such protocols in mobile scenarios is to maintain connectivity and acceptable QoS while incurring fewer collisions in the network. The protocols should also be of low complexity and conserve energy. Moreover, the time required for connectivity, neighborhood discovery, and synchronization should be minimized, as these factors contribute significantly to latency.
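To make the link between collisions, retransmissions, and energy loss concrete, the short Python sketch below estimates how the expected number of transmission attempts and the energy spent per delivered frame grow with the per-attempt collision probability; the energy figures and retry limit are assumed for illustration and are not taken from the chapter.

```python
# Illustrative sketch: expected retransmissions and energy per delivered frame
# as the per-attempt collision probability rises. All numbers are assumed for
# illustration; they are not taken from the chapter.

TX_ENERGY_MJ = 0.3   # energy to transmit one frame (mJ), assumed
RX_ACK_MJ = 0.1      # energy to receive the ACK (mJ), assumed
MAX_RETRIES = 5      # retry limit before the frame is dropped, assumed

def expected_attempts(p_collision, max_retries=MAX_RETRIES):
    """Expected number of attempts until success or the retry limit is hit."""
    attempts = 0.0
    p_reach = 1.0                      # probability the k-th attempt happens
    for _ in range(max_retries + 1):
        attempts += p_reach
        p_reach *= p_collision         # next attempt only if this one collides
    return attempts

if __name__ == "__main__":
    for p in (0.05, 0.2, 0.4, 0.6):
        n = expected_attempts(p)
        energy = n * (TX_ENERGY_MJ + RX_ACK_MJ)
        print(f"collision prob {p:.2f}: {n:.2f} attempts, "
              f"~{energy:.2f} mJ per delivered frame")
```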
Network Theory
Published in Andy Bailey, Network Technology for Digital Audio, 2013
Layers 4 through 7 have less to do with the mechanisms for moving information around, since at this point the data have been created and moved from one node to another. These layers are collectively called the end-to-end layers because their services are required only in the end nodes and not in the intermediate nodes. The transport layer manages end-node to end-node communication, delivering units of data of whatever size from one device across the network to the receiving device. While the data link layer ensures that a message will not be damaged, the subnet layers may not necessarily guarantee that all messages will be delivered or, if they are delivered, in what order. The transport layer is therefore called upon to handle flow control, retransmission, and message sequencing.
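As a rough illustration of the retransmission and message-sequencing duties described above, the following sketch pairs a toy stop-and-wait sender with a receiver that acknowledges numbered messages over a lossy in-memory channel; the class names, loss rate, and retry limit are invented for the example.

```python
import random

# Toy illustration of transport-layer duties: sequencing, acknowledgement,
# and retransmission over an unreliable channel. Names and parameters are
# invented for the example.

LOSS_RATE = 0.3     # probability the channel drops a message, assumed
MAX_TRIES = 10      # retransmission limit per message, assumed

class Receiver:
    def __init__(self):
        self.expected = 0
        self.delivered = []

    def on_message(self, seq, payload):
        """Accept in-order messages; return a cumulative acknowledgement."""
        if seq == self.expected:
            self.delivered.append(payload)
            self.expected += 1
        return self.expected - 1      # last message received in order

def send(messages, receiver, loss_rate=LOSS_RATE):
    """Stop-and-wait sender: retransmit until each message is acknowledged."""
    for seq, payload in enumerate(messages):
        for _ in range(MAX_TRIES):
            if random.random() < loss_rate:
                continue              # message lost; try again (retransmission)
            ack = receiver.on_message(seq, payload)
            if ack >= seq:
                break                 # acknowledged; move to the next message
        else:
            raise RuntimeError(f"message {seq} not delivered after {MAX_TRIES} tries")

if __name__ == "__main__":
    rx = Receiver()
    send(["hello", "from", "the", "transport", "layer"], rx)
    print(rx.delivered)
```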
Digital Video Transmission
Published in Goff Hill, The Cable and Telecommunications Professionals' Reference, 2012
IP networks can carry Transmission Control Protocol (TCP) (Tanenbaum, 2003) packets. TCP provides a connection-oriented service in which packet delivery is assured, provided the network remains operational. This is achieved by keeping a record of the packet numbers at the sending and receiving sides. If packet delivery fails, a request is made for retransmission. The difficulty with this approach is the round-trip time: the time taken for the retransmission request (negative acknowledgement) and the retransmission itself. This delay means the video display must also be delayed, which impacts the feeling of interaction in real-time two-way communication.
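A minimal sketch of the delay argument above, with an assumed round-trip time and packet serialization time (not values from the chapter): the receiver must buffer at least one round trip per recovery before a retransmitted packet can be displayed.

```python
# Back-of-envelope sketch of the display delay implied by retransmission-based
# recovery: the receiver must buffer at least long enough for a negative
# acknowledgement to reach the sender and the retransmitted packet to come
# back. All figures below are assumed for illustration.

RTT_MS = 80            # round-trip time, assumed
PKT_SERIAL_MS = 2      # time to (re)send one packet, assumed

def min_playout_delay(rtt_ms, pkt_ms, recoveries=1):
    """Minimum extra buffering so 'recoveries' consecutive losses of the same
    packet can be repaired before its display deadline."""
    return recoveries * (rtt_ms + pkt_ms)

if __name__ == "__main__":
    for k in (1, 2, 3):
        d = min_playout_delay(RTT_MS, PKT_SERIAL_MS, recoveries=k)
        print(f"tolerating {k} loss(es) of the same packet needs "
              f"about {d} ms of extra playout delay")
```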
Event-based efficient filtering for wireless network control systems with User Datagram Protocol
Published in International Journal of Control, 2021
Jianhuai Dong, Zhixuan Dong, Wenlong Guo
WNCSs usually suffer from network-induced drawbacks, such as packet loss, latency, and capacity limitations (Hespanha et al., 2007; Shi & Fang, 2010; Yan et al., 2016), which can degrade the performance of control systems. Among them, packet dropout is the main factor in performance reduction, and it has attracted constant attention for decades (Lin et al., 2016; Plarre & Bullo, 2009; Sinopoli et al., 2004; Wang et al., 2016). To deal with packet loss, two transport protocols, TCP and UDP, are commonly adopted in practice. The corresponding systems are usually called TCP-like systems and UDP-like systems (Lin et al., 2017). The main difference between them is whether an acknowledgment (ACK) signal is sent by the actuator to inform the controller (estimator) that the control packet is missing. TCP uses an ACK-based retransmission mechanism to guarantee the eventual arrival of each control packet (Sinopoli et al., 2008), while UDP has no ACK signal and does not retransmit lost data. Compared with TCP, UDP offers faster transmission, less delay, and lower energy consumption, which makes it more suitable for WNCSs with high requirements on real-time performance and energy efficiency (Ploplys et al., 2004).
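A minimal simulation, under an assumed loss probability, retry limit, and per-attempt delay, contrasting TCP-like delivery (retransmission driven by ACKs) with UDP-like delivery (a single attempt, no ACK) for control packets; it illustrates the delivery/delay trade-off described above rather than any specific system from the paper.

```python
import random

# Minimal sketch contrasting TCP-like and UDP-like delivery of control packets
# over a lossy link. Loss probability, retry limit, and per-attempt delay are
# assumed purely for illustration.

LOSS_P = 0.3          # probability a single transmission is lost, assumed
SLOT_MS = 10          # delay added by each transmission attempt, assumed
MAX_RETX = 3          # TCP-like retry limit within one control period, assumed

def send_udp_like():
    """One shot, no ACK, no retransmission."""
    delivered = random.random() >= LOSS_P
    return delivered, SLOT_MS

def send_tcp_like():
    """Retransmit (up to MAX_RETX extra attempts) until an ACK is received."""
    delay = 0
    for _ in range(1 + MAX_RETX):
        delay += SLOT_MS
        if random.random() >= LOSS_P:
            return True, delay
    return False, delay

if __name__ == "__main__":
    random.seed(1)
    trials = 10_000
    for name, fn in (("UDP-like", send_udp_like), ("TCP-like", send_tcp_like)):
        ok, total_delay = 0, 0
        for _ in range(trials):
            delivered, t = fn()
            ok += delivered
            total_delay += t
        print(f"{name}: delivery {ok / trials:.1%}, "
              f"mean delay {total_delay / trials:.1f} ms")
```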
An investigation on adaptive HTTP media streaming Quality-of-Experience (QoE) and agility using cloud media services
Published in International Journal of Computers and Applications, 2021
Selvaraj Kesavan, E. Saravana Kumar, Abhishek Kumar, K. Vengatesan
Progressive download allows a file to be downloaded and rendered at the same time. The streaming file is downloaded from the web server to the client device. As soon as the file download starts, the client invokes the media player to begin playback once sufficient data is available in the client's playout buffer. A buffer overrun can occur when the download rate exceeds the playback rate. Progressive download uses HTTP (Hypertext Transfer Protocol) over TCP (Transmission Control Protocol). TCP is a reliable protocol optimized for guaranteed delivery, irrespective of file format or size, and it controls the actual packet transport over the IP network. Packet retransmission consumes extra bandwidth and time, which degrades the real-time end-user experience. Regardless of bandwidth drops or surges, the video representation remains the same for the entire duration. The HTTP web server keeps pushing data until the download is complete. Progressive download uses the existing web infrastructure and does not require any additional setup.
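The playout-buffer behaviour described above can be sketched with a toy model in which the buffer fills at the download rate and drains at the playback bitrate; the file size, start threshold, and rates below are assumed for illustration only.

```python
# Toy playout-buffer model for progressive download: the buffer fills at the
# download rate and drains at the playback bitrate once playback has started.
# All rates and thresholds below are assumed for illustration only.

START_THRESHOLD_KB = 2000   # data needed before playback starts, assumed
FILE_SIZE_KB = 30_000       # size of the media file, assumed

def simulate(download_kbps, playback_kbps, step_s=1.0):
    """Return total time (s) and number of stalled steps for one download."""
    buffered, downloaded, playing, stalls, t = 0.0, 0.0, False, 0, 0.0
    while downloaded < FILE_SIZE_KB or buffered > 0:
        t += step_s
        got = min(download_kbps / 8 * step_s, FILE_SIZE_KB - downloaded)
        downloaded += got
        buffered += got
        if not playing and buffered >= START_THRESHOLD_KB:
            playing = True
        if playing:
            need = playback_kbps / 8 * step_s
            if buffered >= need:
                buffered -= need              # normal playback
            elif downloaded >= FILE_SIZE_KB:
                buffered = 0.0                # play out the tail of the file
            else:
                stalls += 1                   # buffer underrun: playback stalls
    return t, stalls

if __name__ == "__main__":
    for dl in (2000, 4000):                   # below and above the video bitrate
        t, stalls = simulate(download_kbps=dl, playback_kbps=3000)
        print(f"download {dl} kbps: finished in {t:.0f} s, {stalls} stalled step(s)")
```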
Equilibrium balking strategies in the single-server retrial queue with constant retrial rate and catastrophes
Published in Quality Technology & Quantitative Management, 2021
In practice, catastrophes happen in various situations. For example, in telephone switching systems, customers who find the server busy leave their contact details, and the idle server selects a customer to serve among them according to those details; when network failures (such as cascading failures, software faults, configuration faults, or hardware failures) occur, those customers' information is lost, which can be regarded as a 'catastrophe'. Another example arises in the communication area. Under the ALOHA and CSMA/CD protocols, a back-off algorithm is usually used as the retransmission policy: if the channel is sensed to be busy, the server reschedules the transmission of the packet to a later time. In this case, requests for service can be modeled as retrial customers in a retrial orbit with a constant retrial policy, and request loss caused by outside attacks or system failures can be regarded as catastrophes. These catastrophes heavily affect network performance and customers' behavior.
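As a rough illustration of the back-off retransmission policy mentioned above, the sketch below reschedules a transmission after each sensed collision using binary exponential back-off; the slot time, retry limit, and collision probability are assumptions, not parameters from the paper.

```python
import random

# Sketch of a back-off retransmission policy of the kind mentioned above:
# after each collision the sender waits a random number of slots drawn from a
# window that doubles (binary exponential back-off), then tries again. Slot
# time, retry limit, and collision probability are assumed for illustration.

SLOT_TIME_US = 51.2     # slot time in microseconds, assumed
MAX_ATTEMPTS = 10       # give up after this many attempts, assumed

def backoff_delay(attempt):
    """Random delay (in slots) before the next retransmission attempt."""
    window = min(2 ** attempt, 1024)       # contention window doubles each time
    return random.randrange(window)

def transmit(collision_prob):
    """Return (success, total back-off waiting time in microseconds)."""
    waited = 0.0
    for attempt in range(MAX_ATTEMPTS):
        if random.random() >= collision_prob:
            return True, waited            # transmission got through
        waited += backoff_delay(attempt + 1) * SLOT_TIME_US
    return False, waited                   # give up after MAX_ATTEMPTS

if __name__ == "__main__":
    random.seed(0)
    runs, ok, wait = 10_000, 0, 0.0
    for _ in range(runs):
        success, w = transmit(collision_prob=0.4)
        ok += success
        wait += w
    print(f"success rate {ok / runs:.1%}, mean back-off wait {wait / runs:.1f} us")
```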