Design Principles and Privacy in Cloud Computing
Published in Gautam Kumar, Dinesh Kumar Saini, Nguyen Ha Huy Cuong, Cyber Defense Mechanisms, 2020
Mohammad Wazid, Ashok Kumar Das
An “intrusion detection system” (IDS) monitors and analyzes malicious traffic to protect devices (e.g., smart devices) from threats. In a cloud computing environment, an IDS inspects all inbound packets and searches for any symptom of intrusion. If a threat is identified, the deployed tools can take appropriate actions (e.g., notifying the administrators, blocking the source IP address from accessing other resources). In an “IoT-based cloud computing environment,” it is also possible that an adversary physically captures some of the smart devices. The adversary can then deploy his/her own malicious nodes (devices) using the information extracted from the captured devices. In addition, these malicious nodes may be pre-installed with malicious scripts to launch various attacks (e.g., routing attacks) [59,60,63,67]. Upon successful execution of these attacks, data packets may be lost, dropped, delayed, or modified, which degrades communication performance: “network throughput” and “packet delivery ratio” decrease, while “end-to-end delay” increases [63,67]. Therefore, it is essential to design an IDS for protecting communication over the cloud.
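As a rough illustration of the inspect-and-react workflow described above, the following Python sketch checks inbound packets against simple signatures and blocks the offending source IP. The signature set, threshold of one match, and the notify_admin/block_ip helpers are hypothetical stand-ins, not taken from the chapter.

# Minimal, illustrative IDS check for inbound packets (assumptions throughout).
MALICIOUS_SIGNATURES = {b"/etc/passwd", b"<script>", b"' OR 1=1"}
blocked_ips = set()

def notify_admin(message: str) -> None:
    print(f"[ALERT] {message}")          # stand-in for an email/SIEM notification

def block_ip(ip: str) -> None:
    blocked_ips.add(ip)                  # stand-in for pushing a firewall rule

def inspect_packet(src_ip: str, payload: bytes) -> bool:
    """Return True if the packet may be forwarded, False if it is dropped."""
    if src_ip in blocked_ips:
        return False
    if any(sig in payload for sig in MALICIOUS_SIGNATURES):
        notify_admin(f"intrusion symptom from {src_ip}")
        block_ip(src_ip)                 # keep the source from reaching other resources
        return False
    return True

# Example: a suspicious packet triggers an alert and blocks its source.
print(inspect_packet("10.0.0.7", b"GET /etc/passwd HTTP/1.1"))   # False
print(inspect_packet("10.0.0.7", b"GET /index.html HTTP/1.1"))   # False (source already blocked)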
An adaptive PI active queue management algorithm based on queue length
Published in Amir Hussain, Mirjana Ivanovic, Electronics, Communications and Networks IV, 2015
Hongcheng Huang*, Fan Yang, Shiwei Wang, Gaofei Xue
With the explosive growth of the Internet, traffic has increased rapidly, so network congestion occurs frequently. Network congestion directly degrades the performance of the entire network, for example by decreasing network throughput and increasing packet loss rate and end-to-end delay. Serious network congestion can even cause the network to crash.
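For context, a classic PI (proportional-integral) AQM controller adjusts an early-drop probability from the queue-length error, which is the family of algorithms the article's adaptive scheme builds on. The sketch below is a generic PI update, not the authors' adaptive variant; the gains a and b, the reference queue length q_ref, and the enqueue helper are assumed example values.

# Illustrative PI active queue management loop (generic form, assumed parameters).
import random

a, b = 0.002, 0.0018        # example controller gains (assumptions)
q_ref = 50                  # target queue length in packets (assumption)
p = 0.0                     # current early-drop probability
q_old = 0                   # queue length at the previous update

def pi_update(q: int) -> float:
    """Update the drop probability from the instantaneous queue length q."""
    global p, q_old
    p = p + a * (q - q_ref) - b * (q_old - q_ref)
    p = min(max(p, 0.0), 1.0)           # keep p a valid probability
    q_old = q
    return p

def enqueue(q: int) -> bool:
    """Return True if an arriving packet is accepted, False if it is dropped early."""
    return random.random() >= pi_update(q)

# Example: a queue growing beyond q_ref steadily raises the drop probability.
for q in (20, 60, 90, 120):
    pi_update(q)
    print(q, round(p, 4))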
An Energy-Efficient Protocol based on Recursive Geographic Forwarding Mechanisms for Improving Routing Performance in WSN
Published in IETE Journal of Research, 2023
Prasanta Pratim Bairagi, Mala Dutta, Kanojia Sindhuben Babulal
The criteria on which the entire analysis is conducted are as follows [31,32].

Network Delay (ND): the total time taken by a data packet to travel from its origin to its destination. Lower values indicate better performance.
ND = ((Receive time - Send time) / Total packets sent) * 1000 ms

Data Delivery Ratio (DDR): the proportion of packets actually delivered at the destination to all packets sent from the source. A greater value indicates better performance.
DDR = (Total no. of packets arrived / Total no. of packets sent from sources) * 100

Network Throughput (NT): the average rate of successful message transmission across a communication link. Higher values indicate better performance.
NT = (Received packet size / (End time - Start time)) * (8/1000)
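The three formulas translate directly into code. The following Python sketch computes them exactly as defined above; the argument names and the example values are illustrative assumptions, not simulation output from the paper.

# Metric helpers matching the ND, DDR, and NT definitions above (example values assumed).
def network_delay_ms(receive_time: float, send_time: float, packets_sent: int) -> float:
    # ND = ((Receive time - Send time) / Total packets sent) * 1000 ms
    return (receive_time - send_time) / packets_sent * 1000

def data_delivery_ratio(packets_arrived: int, packets_sent: int) -> float:
    # DDR = (packets arrived / packets sent) * 100
    return packets_arrived / packets_sent * 100

def network_throughput_kbps(received_bytes: int, end_time: float, start_time: float) -> float:
    # NT = (Received packet size / (End time - Start time)) * (8/1000), i.e. kilobits per second
    return received_bytes / (end_time - start_time) * 8 / 1000

# Hypothetical example values, for illustration only.
print(network_delay_ms(receive_time=12.4, send_time=2.4, packets_sent=500))            # ms
print(data_delivery_ratio(packets_arrived=480, packets_sent=500))                      # %
print(network_throughput_kbps(received_bytes=480_000, end_time=60.0, start_time=0.0))  # kbps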
Towards complex dynamic fog network orchestration using embedded neural switch
Published in International Journal of Computers and Applications, 2021
K. C. Okafor, G. C. Ononiwu, Sam Goundar, V. C. Chijindu, C. C. Udeze
Due to the massive communication patterns in the HDN, a smart Fog layer is needed. Without this layer, there will be frequent high link utilization and workload congestion at the aggregation or core layers [8]. This is because data center networks leverage multiple parallel paths connecting end-host pairs to offer high bisection bandwidth for cluster computing applications [9]. Congested heavy flows often lead to the breakdown of commodity servers, while some specialized links within the HDN observe a higher loss ratio than others [10]. In such networks, Equal Cost Multipath (ECMP) routing protocols are unaware of the traffic workload because of static flow-to-link assignments, which usually causes bandwidth loss arising from flow collisions. While high resource utilization is partly favorable to service providers, network congestion causes harmful queuing delay and packet loss, and thus reduces network throughput. These consequences can significantly degrade application performance and user experience. Figure 1 shows how the Fog Ethernet switch components are used for connections, providing latency and congestion management similar to traditional DCN network storage [11,12]. In this case, the switch separates storage, network, and HPC traffic into distinct virtual fabrics using shared memory partitions. This is useful in a well-managed Fog data center network for achieving overall performance efficiency in deployment contexts.
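The flow-collision problem mentioned above follows from ECMP's static hashing: a flow's 5-tuple is hashed to one of the equal-cost paths with no regard for current load. The Python sketch below illustrates that behavior under assumed values; the hash choice, path count, and flows are hypothetical and do not come from the article.

# Illustrative static ECMP flow-to-path assignment (assumed 5-tuple hashing).
import hashlib

NUM_PATHS = 4   # number of equal-cost parallel paths (assumption)

def ecmp_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int, proto: str) -> int:
    """Statically map a flow's 5-tuple to one of the equal-cost paths."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_PATHS

flows = [
    ("10.0.1.2", "10.0.9.5", 5001, 80, "tcp"),
    ("10.0.1.3", "10.0.9.6", 5002, 80, "tcp"),
    ("10.0.1.4", "10.0.9.7", 5003, 80, "tcp"),
]
for f in flows:
    print(f, "-> path", ecmp_path(*f))

# The mapping never considers link load, so two heavy flows that hash to the same
# index keep colliding on one link while other equal-cost paths stay underused.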
A new on the fly energy-efficient opportunistic routing in wireless multi-hop networks
Published in Journal of the Chinese Institute of Engineers, 2023
Samaneh Shabani, Neda Moghim, Ali Bohlooli
Network throughput is shown in Figure 5 for scenarios with various numbers of nodes in the network. The packets’ inter-arrival time is set to 0.1 seconds, and there are 7 source-destination pairs in the network. As the number of nodes increases, network congestion increases and more collisions occur; therefore, network throughput decreases. As shown in Figure 5, the new algorithm achieves higher throughput than the ROMER and CORP-M protocols for all numbers of nodes. This improvement arises because forwarders are restricted by the zone-based approach, so the broadcast storm of packets is avoided. In addition to the zoning approach, the proposed method uses threshold_credit for the selection of candidate forwarders; it therefore achieves better results than CORP-M, which has no threshold for forwarder selection. EEOPR also achieves better throughput than our previous algorithm, proposed in EOpR, because it reevaluates the candidate nodes and selects the best of them using the genetic algorithm. Figure 6 shows network throughput versus packet inter-arrival time in a 30-node scenario. When the packet inter-arrival time increases, less traffic is applied to the network. EEOPR’s throughput is higher than that of ROMER, CORP-M, and EOpR under high traffic loads; as Figure 6 shows, when network traffic grows, EEOPR performs much better than ROMER, CORP-M, and EOpR. Several factors lead to EEOPR’s higher throughput. First, because of the on-the-fly routing that EEOPR performs, candidate nodes are not selected in advance, so there is a chance to select a better forwarder for each packet. As the quality of wireless channels varies across the path and over time due to shadowing or short-term fading, most wireless routing algorithms suffer from packet loss; in EEOPR, however, a well-conditioned downstream node is selected at each hop according to the network condition. Second, using the genetic algorithm leads to fewer network collisions, which results in higher throughput.
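To make the zone restriction and threshold_credit filtering concrete, the sketch below shows one plausible way such a forwarder-selection step could look. The credit model, the distance-based zone test, the threshold value, and all node data are assumptions for illustration, not the paper's exact formulation.

# Rough sketch of zone-restricted, credit-thresholded candidate forwarder selection.
from dataclasses import dataclass
from typing import List

THRESHOLD_CREDIT = 0.6   # assumed cutoff for candidate forwarders

@dataclass
class Node:
    node_id: int
    credit: float          # e.g. combining residual energy and link quality (assumption)
    dist_to_dst: float     # distance to the destination

def candidate_forwarders(neighbors: List[Node], sender: Node) -> List[Node]:
    """Keep only in-zone neighbors (closer to the destination than the sender)
    whose credit meets threshold_credit, ranked best first."""
    in_zone = [n for n in neighbors if n.dist_to_dst < sender.dist_to_dst]
    eligible = [n for n in in_zone if n.credit >= THRESHOLD_CREDIT]
    return sorted(eligible, key=lambda n: n.credit, reverse=True)

# Example: only two of four neighbors qualify as forwarders.
sender = Node(0, credit=1.0, dist_to_dst=100.0)
neighbors = [Node(1, 0.9, 80.0), Node(2, 0.4, 70.0), Node(3, 0.7, 60.0), Node(4, 0.8, 120.0)]
print([n.node_id for n in candidate_forwarders(neighbors, sender)])   # [1, 3]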