Data Communication
Published in Sunit Kumar Sen, Fieldbus and Networking in Process Automation, 2017
Latency, or time delay, is another very important characteristic of message transmission over a transmission link. Latency refers to the time taken by an entire message to reach the destination via the various links along the way. Latency has several components: propagation time, transmission time, queuing time, and processing time; it is the sum of all these delays taken together. The smaller the delay in transmitting a message, the better the system. Propagation time is the time needed by a single bit to travel from the source to the destination. It is obtained by dividing the distance by the propagation speed of the medium. Transmission time is the time between the first bit leaving the sender and the last bit of the message arriving at the destination; it is defined as the message size divided by the bandwidth. Queuing time is variable and depends on the load, or traffic, through which the data/message has to pass. This is akin to a car taking more time to cover a distance during the day, when traffic is heavy, but much less time in the early morning, when traffic is expectedly light. A message from source to destination has to pass through different nodes, and the nodes themselves must also cater to traffic from other sources passing through them. Thus, depending on the en route traffic, there is always a variable delay before the message reaches the destination.
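The decomposition described above can be expressed directly as a sum of the four delay components. The following is a minimal sketch; the function name and the numeric figures in the example are illustrative assumptions, not values taken from the text.

def latency(distance_m, prop_speed_mps, message_bits, bandwidth_bps,
            queuing_s=0.0, processing_s=0.0):
    """Total latency as the sum of its four components (result in seconds)."""
    propagation = distance_m / prop_speed_mps      # time for one bit to cross the link
    transmission = message_bits / bandwidth_bps    # message size divided by bandwidth
    return propagation + transmission + queuing_s + processing_s

# Illustrative values (not from the text): a 1 km copper link (~2e8 m/s),
# a 1,000-byte message on a 10 Mbps link, 0.5 ms of queuing, negligible processing.
print(latency(1_000, 2e8, 8_000, 10e6, queuing_s=0.5e-3))   # ~0.001305 s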
Data Transmission
Published in Goff Hill, The Cable and Telecommunications Professionals' Reference, 2012
Stuart D. Walker, Emilio Hugues-Salas, Rouzbeh Razavi, Tahmina Ajmal
Latency is a measurement of the time it takes a signal to arrive completely at the destination from the moment the first symbol (bit) was delivered by the source. This parameter is very important when a signal must propagate through a transmission network. In a network, transmission is usually carried out by sending non-continuous messages, such as packets of information, each of which is a set of bits; in this way, the transmission is more bandwidth efficient. To calculate latency properly, parameters such as propagation time, transmission time, and queuing time have to be considered.

Propagation time refers to the time it takes a bit of information to travel from the source to the target receiver. This time corresponds to the ratio of the distance to the speed of propagation in the transmission network.

Transmission time relates to the time taken to transfer a message. The messages (packets) consist of a set of bits that establish a message size. The transmission time, therefore, is the ratio of the packet size to the bandwidth of the channel.

Queuing time is a parameter of the transmission network that arises from several factors. One is the need to keep a packet stored while its destination prepares to receive it. Another is the need to avoid internal collisions between packets sent to the same destination. Queuing in a transmission system is a topic by itself and is treated elsewhere.
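As a worked illustration of the two ratios just defined, the propagation and transmission times of a single packet can be computed directly. The link length, propagation speed, and packet size below are assumed purely for the example and do not come from the text.

# Illustrative figures only: a 1500-byte packet over a 100 km fibre link.
distance_m = 100_000          # link length
prop_speed = 2e8              # approximate speed of light in fibre (m/s)
packet_bits = 1500 * 8        # packet size in bits
bandwidth = 100e6             # channel bandwidth, 100 Mbps

propagation_time = distance_m / prop_speed     # distance / propagation speed = 0.5 ms
transmission_time = packet_bits / bandwidth    # packet size / bandwidth = 0.12 ms
print(propagation_time, transmission_time)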
Integrated application model for visual detection of welding quality based on visual neuron under edge-end collaboration
Published in International Journal of Computer Integrated Manufacturing, 2023
Liangrui Zhang, Xi Zhang, Jing Hu, Mingzhou Liu, Lin Ling
The bandwidth in the experiment was measured at 70 Mbps, and a single original image had a resolution of 3840 × 2160 pixels, corresponding to approximately 6.5 MB. The measured data sizes of the model transmission process and the data transmission times for each stage of the task are listed in Table 4. The total data transmission time of a single task is 384.2 ms, and the total execution time of the model transition is 887.5 ms. Therefore, the response time TRT of a single task is 1271.7 ms, which is less than the actual gluing cycle time and meets the speed requirement for gluing quality detection. The task data transmission time can be reduced further by increasing the network bandwidth.
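The reported response time is simply the sum of the two measured totals, and the closing remark about bandwidth follows from transmission time scaling inversely with bandwidth. The sketch below reproduces that arithmetic; the inverse-proportionality scaling is an illustrative assumption, not a figure taken from Table 4.

# Reproducing the reported arithmetic and the bandwidth/transmission-time relation.
data_transmission_ms = 384.2
model_execution_ms = 887.5
response_time_ms = data_transmission_ms + model_execution_ms
print(response_time_ms)            # 1271.7 ms, as reported

measured_bw_mbps = 70
for new_bw_mbps in (100, 200):     # transmission time scales inversely with bandwidth
    print(new_bw_mbps, data_transmission_ms * measured_bw_mbps / new_bw_mbps)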
Nature-inspired cost optimisation for enterprise cloud systems using joint allocation of resources
Published in Enterprise Information Systems, 2021
Suchintan Mishra, Manmath Narayan Sahoo, Arun Kumar Sangaiah, Sambit Bakshi
The response time of a task is the amount of time taken by the system to receive the task, process it, and return the results. It is the cumulative sum of the service time, waiting time, and transmission time (Wescott 2013). It serves as a measure of overall system performance and also determines the cost the end-user has to pay. Cloud end-users pay as they go, under a pay-per-use policy, so it is the average response time that decides how much cost is incurred by the end-user. Service time in the cloud depends mostly on the computing server to which the task is assigned. Similarly, waiting time is the amount of time the request has to wait for bottlenecks to be released. Finally, transmission time depends on the capacity of the networking elements and the traffic on the network. The overall response time is therefore a dependable measure of the cost incurred by the end-user. The average response time is calculated as shown in Equation 6.
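Since Equation 6 is not reproduced in the excerpt, the following is only a generic sketch of the decomposition described in the prose: per-task response time as the sum of service, waiting, and transmission times, averaged over all tasks. The Task type, field names, and sample values are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    service_s: float       # time on the assigned computing server
    waiting_s: float       # time spent waiting for bottlenecks to be released
    transmission_s: float  # time spent on the networking elements

def response_time(t: Task) -> float:
    # Response time as the cumulative sum of service, waiting, and transmission time.
    return t.service_s + t.waiting_s + t.transmission_s

def average_response_time(tasks: list[Task]) -> float:
    return sum(response_time(t) for t in tasks) / len(tasks)

tasks = [Task(0.8, 0.3, 0.1), Task(1.2, 0.5, 0.2)]
print(average_response_time(tasks))   # 1.55 s for these made-up tasks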