Overview
Published in Naoaki Yamanaka, High-Performance Backbone Network Technology, 2020
End-to-end latency in an IP network is composed of propagation delay, queuing delay, processing delay, and transmission delay. Packet processing delay is negligible in high-performance routers. The dominant delay in lightly to moderately loaded high-bandwidth wide-area networks is the propagation delay, which is uniquely determined by the length of the optical path connecting the routers over the physical WDM network. Queuing delay along packet paths is not significant (or at least should not be) until congestion is encountered. Indeed, in our simulation, while the improvement under light load is marginal, significant improvement is observed under heavy load. Figure 14 illustrates the utilization of all the links in the original topology and in the reconfigured topology. It clearly shows that the effect of reconfiguration is to disperse load from congested links to less congested links. The improvements in average delay result from a more efficient network resource allocation, which migrates the load on congested links over to less utilized ones.
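The four-component decomposition above can be sketched numerically. This is a minimal illustration with assumed figures (path length, link rate, packet size); none of the values come from the excerpt.

```python
# Sketch of the four components of end-to-end IP latency.
# All numbers below (path length, link rate, packet size) are
# illustrative assumptions, not values from the text.

SPEED_IN_FIBER_KM_S = 2.0e5  # light travels at roughly 2/3 c in optical fiber

def end_to_end_delay_s(path_km, link_rate_bps, packet_bits,
                       queuing_s=0.0, processing_s=0.0):
    propagation = path_km / SPEED_IN_FIBER_KM_S     # fixed by optical path length
    transmission = packet_bits / link_rate_bps      # serialization onto the link
    return propagation + transmission + queuing_s + processing_s

# 3000 km path, 10 Gb/s links, 1500-byte packet, negligible queuing/processing:
d = end_to_end_delay_s(3000, 10e9, 1500 * 8)
# propagation (15 ms) dwarfs transmission (1.2 us) at light load
```

With these assumptions, propagation contributes 15 ms while transmission contributes about 1.2 µs, which is why the text treats propagation as dominant until congestion makes queuing significant.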
Digital Video Interfaces
Published in Francis Rumsey, John Watkinson, Digital Interface Handbook, 2013
Francis Rumsey, John Watkinson
The audio control packet structure is shown in Figure 7.43. Following the usual header are symbols representing the audio frame number, the sampling rate, the active channels, the processing delay and some reserved symbols. The sampling rate parameter allows the two AES/EBU channel pairs in a group to have different sampling rates if required. The active channel parameter simply describes which channels in a group carry meaningful audio data. The processing delay parameter denotes the delay the audio has experienced measured in audio sample periods. The parameter is a 26-bit two’s complement number requiring three symbols for each channel. Since the four audio channels in a group are generally channel pairs, only two delay parameters are needed. However, if four independent channels are used, one parameter each will be required. The e bit denotes whether four individual channels or two pairs are being transmitted.
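The 26-bit two's-complement delay parameter spread over three symbols can be sketched as a pack/unpack pair. The 9+9+8 bit split below is an illustrative assumption; the real ancillary-data symbols carry additional flag and parity bits not modeled here.

```python
# Hedged sketch: a signed processing-delay value (in audio sample periods)
# packed as a 26-bit two's-complement field split over three symbols.
# The 9 + 9 + 8 bit split is an assumption for illustration; real SDI
# ancillary symbols also carry flag/parity bits.

def pack_delay(delay_samples):
    if not -(1 << 25) <= delay_samples < (1 << 25):
        raise ValueError("delay does not fit in 26-bit two's complement")
    raw = delay_samples & ((1 << 26) - 1)          # two's-complement encode
    # least-significant symbol first
    return (raw & 0x1FF, (raw >> 9) & 0x1FF, (raw >> 18) & 0xFF)

def unpack_delay(s0, s1, s2):
    raw = s0 | (s1 << 9) | (s2 << 18)
    if raw & (1 << 25):                            # sign-extend bit 25
        raw -= 1 << 26
    return raw
```

A round trip such as `unpack_delay(*pack_delay(-1234))` recovers the original signed value, including negative delays.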
Audio routing and transmission
Published in John Watkinson, Audio for Television, 1997
The audio control packet structure is shown in Figure 5.20. Following the usual header are symbols representing the audio frame number, the sampling rate, the active channels, the processing delay and some reserved symbols. The sampling rate parameter allows the two AES/EBU channel pairs in a group to have different sampling rates if required. The active channel parameter simply describes which channels in a group carry meaningful audio data. The processing delay parameter denotes the delay the audio has experienced measured in audio sample periods. The parameter is a 26-bit two’s complement number requiring three symbols for each channel. Since the four audio channels in a group are generally channel pairs, only two delay parameters are needed. However, if four independent channels are used, one parameter each will be required. The e bit denotes whether four individual channels or two pairs are being transmitted.
Resisting bad mouth attack in vehicular platoon using node-centric weight-based trust management algorithm (NC-WTM)
Published in Connection Science, 2022
It is defined as the total time taken for data to travel from the source vehicle to the destination vehicle in the platoon. End-to-end network delay is caused by various factors, including transmission delay, propagation delay, processing delay, and queuing delay. For TCP connections, application relays can minimise end-to-end delays and enhance performance. Additional features, such as application relays, media transcoding, and mixers, can be added to improve the performance of broadcasting between participants in the overlay.
Adaptive relay co-ordination scheme for radial microgrid
Published in International Journal of Ambient Energy, 2022
Belwin J. Brearley, R. Raja Prabu, K. Regin Bose, V. Sankaranarayanan
Processing delay is the time consumed in data decoding/encoding, switching data into/from the communication channel, sampling, running the routing algorithm and authenticating the data. Processing delay is taken as 100 µs (Padhi et al. 2010). Queuing delay is the time a packet waits in the transmitting device before being transmitted over the link. As a dedicated channel is provided for communication between the relay and the MCC, queuing delay is neglected. From Equation (1), the maximum possible communication delay = 0.01 ms + 0.833 µs + 100 µs = 0.110833 ms.
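Summing the three delay components quoted above, working in a single unit (µs), makes the arithmetic explicit:

```python
# Worked sum of the communication-delay components from the text, in microseconds.
transmission_us = 10.0     # 0.01 ms
propagation_us  = 0.833
processing_us   = 100.0    # value taken from Padhi et al. (2010)

total_us = transmission_us + propagation_us + processing_us
total_ms = total_us / 1000.0   # 110.833 us = 0.110833 ms
```

Converting everything to one unit before adding avoids the easy mistake of mixing ms and µs terms in the same sum.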
Nature-inspired cost optimisation for enterprise cloud systems using joint allocation of resources
Published in Enterprise Information Systems, 2021
Suchintan Mishra, Manmath Narayan Sahoo, Arun Kumar Sangaiah, Sambit Bakshi
Joint allocation in cloud means optimising the allocation of two or more physical resources at the same time. Most VM placement approaches in the literature optimise a single objective through the allocation of a single resource. Such optimisation approaches fail in cloud systems because of the potentially conflicting goals of cloud stakeholders. There are multiple stakeholders in the cloud, viz. the end-users and the providers, and they have different objectives. Providers aim to earn profit by maximising the utilisation of resources and reducing wastage. The end-users, on the other hand, have a very different set of objectives: they want cheaper and quicker solutions to their jobs, uninterrupted availability, fault tolerance, etc. Since the motivations for using the cloud differ in the two cases, the objectives also conflict. For example, shutting down excess resources may reduce power consumption; however, a large surge in unexpected traffic load can create hot-spots in the network that are costly to recover from. Thus, there is a need to optimise multiple resources simultaneously and find a trade-off among conflicting objectives that is optimal or nearly optimal. Moreover, such a solution must be found within a reasonable amount of time for it to be effective in an ever-changing cloud network. Such problems can be addressed with the mathematical model of multiobjective optimisation, in which multiple conflicting objectives are optimised simultaneously and combined into a single objective using scalarisation methods. In this work, we take compute and bandwidth allocation as our decision variables and design a relation between them that represents the cost incurred by the end-user. We choose these as decision variables because these resources affect the overall cost incurred by the end-user.
Cost is mainly incurred due to processing delay and communication delay; hence it is appropriate to take compute and bandwidth allocation as the decision variables. Compute cost is incurred when a task uses physical computing resources such as memory or CPU. In addition to the computational cost, the end-user's request must also travel across the cloud infrastructure before being submitted to a suitable physical machine for processing, and this transmission cost contributes considerably to the overall cost incurred by the end-user. The waiting time for a resource depends on how free the network is: a bottleneck forces the task to be queued, thereby increasing the cost incurred. Here, we optimise compute and bandwidth allocation simultaneously, which in turn optimises the cost incurred by the end-user.
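The weighted-sum scalarisation mentioned above can be sketched for the two decision variables. The cost functions, prices, and the weight `w` below are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of weighted-sum scalarisation over two resources:
# compute allocation and bandwidth allocation. All cost functions,
# prices, and the weight w are illustrative assumptions.

def compute_cost(cpu_units, price_per_unit=0.05):
    # cost of using physical computing resources (CPU/memory)
    return cpu_units * price_per_unit

def comm_cost(data_mb, bandwidth_mbps, price_per_s=0.01):
    # cost of moving the request/data across the cloud network
    transfer_s = data_mb * 8 / bandwidth_mbps
    return transfer_s * price_per_s

def scalarised_cost(cpu_units, data_mb, bandwidth_mbps, w=0.5):
    # w trades off compute cost against communication cost,
    # collapsing two objectives into a single scalar objective
    return w * compute_cost(cpu_units) + (1 - w) * comm_cost(data_mb, bandwidth_mbps)

# Sweep candidate (cpu, data, bandwidth) allocations and keep the cheapest
candidates = [(4, 100, 50), (8, 100, 100), (2, 100, 25)]
best = min(candidates, key=lambda a: scalarised_cost(*a))
```

Sweeping the weight `w` over (0, 1) traces out different trade-off points between the provider-side and user-side objectives, which is how scalarisation exposes the Pareto front of a multiobjective problem.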