A Model for and Inventory of Cybersecurity Values: Metrics and Best Practices
Published in Natalie M. Scala, James P. Howard, Handbook of Military and Defense Operations Research, 2020
Natalie M. Scala, Paul L. Goethals
In general, the security metrics problem is especially challenging. Systems are extremely complex, and small nuances may have significant impact. The challenge is to develop security metrics, models, and best practices that are capable of predicting whether, or confirming that, a cyber system preserves a given set of security properties. Metrics must be quantifiable, feasible, repeatable, and objective (Goethals, Farahani & Scala, 2015). A system having security properties, however, does not mean it is immune to an attack or potential breach; rather, the goal is to prevent these activities. In contrast, the analytics field considers a metric to be descriptive, defined as a variable of interest or as a unit of measurement that provides a means to objectively quantify performance (Evans, 2013). By definition, descriptive analytics use data from the past and present to understand current and historical performance; unlike predictive analytics, they do not make a forecast about future events. Thus, a disconnect in semantics exists, with the cybersecurity field attempting to utilize descriptive analytics (such as a metric on a static system) to make an assessment or prediction about future events. Such a disconnect may cause confusion among members of a project team or in client interactions. It is important to understand that the two fields view metrics with differing purposes, and a team must be clear as to how metrics will be managed and defined within the project. For this chapter, the SoS definition of metrics is used to examine both metrics and best practices that can augment the security of a cyber system.
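The descriptive-versus-predictive distinction above can be illustrated with a minimal, hypothetical sketch; the metric names and figures below are assumptions made only for illustration, not material from the chapter.

```python
# Hypothetical sketch of the semantic gap: a descriptive metric summarises a
# static snapshot of the system, whereas any claim about future breaches
# requires a separate forecasting step over historical outcomes.

hosts_total = 400
hosts_patched = 372

# Descriptive metric: objective, repeatable, quantifies the current state only
patch_compliance = hosts_patched / hosts_total
print(f"Patch compliance (descriptive): {patch_compliance:.1%}")

# Using that snapshot alone to assert "the system will not be breached"
# conflates description with prediction; a predictive statement needs history.
monthly_incidents = [3, 2, 4, 3, 5]               # hypothetical incident counts
naive_forecast = sum(monthly_incidents[-3:]) / 3  # e.g. a 3-month moving average
print(f"Naive incident forecast for next month: {naive_forecast:.1f}")
```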
Applications of IoT in Medical Technology and Healthcare
Published in Rajdeep Chowdhury, S. K. Niranjan, Advances in Applications of Computational Intelligence and the Internet of Things, 2022
Parshant Kumar Sharma, Shraddha Kaushik, Saurabh Pandey, Megha Gupta, Nishant Vats, Manu Gautam, Madhusudan Sharma
The term “big data” refers to enormous amounts of data that are collected, processed, and used for analytics. When the data are analyzed by experts, the resulting information has multiple benefits, such as reducing health-care costs, predicting epidemics, avoiding preventable deaths, improving quality of life, reducing health-care waste, developing new drugs and treatments, and improving the efficiency and quality of care.12 Healthcare generates huge amounts of data every second (a single research study can amount to 100 TB of data), so these facilities will require storage solutions that are safe, expandable, and cost-effective. The cloud then uses hardware and software to deliver these services across the Internet.12
Smart cities and buildings
Published in Pieter Pauwels, Kris McGlinn, Buildings and Semantics, 2023
Hendro Wicaksono, Baris Yuce, Kris McGlinn, Ozum Calli
In smart city applications, several data analytics models have been proposed. One of them is prescriptive analytics, which is associated with both descriptive and predictive analytics. A descriptive analytics approach focuses on what has happened, while predictive analytics focuses on what might happen. Prescriptive analytics receives data from both descriptive and predictive sources and searches for the best solution among the available choices [163]. The approach utilises the outcomes of optimisation and simulation systems to suggest possible options [100]. Hence, it is a reliable approach for solving complex problems in smart city applications.
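The flow just described can be sketched in a simplified way: descriptive and predictive inputs feed a search over candidate actions. The building scenario, cost model, and all figures below are illustrative assumptions, not a method from the cited works.

```python
# Toy prescriptive-analytics loop: combine descriptive and predictive inputs,
# then search candidate actions for the best option.

historical_avg_load_kw = 42.0    # descriptive input: what has happened
forecast_peak_load_kw = 55.0     # predictive input: what might happen

# Combine both sources into an expected cooling demand for the next period
expected_demand_kw = 0.5 * historical_avg_load_kw + 0.5 * forecast_peak_load_kw

candidate_setpoints_c = [21, 22, 23, 24]   # possible HVAC actions to choose from

def expected_cost(setpoint_c: float) -> float:
    """Toy cost model: higher setpoints cut cooling load but add a comfort penalty."""
    cooling_kw = max(expected_demand_kw - 2.5 * (setpoint_c - 21), 0.0)
    comfort_penalty = 4.0 * max(setpoint_c - 23, 0)
    return cooling_kw + comfort_penalty

# Prescriptive step: search the candidate actions for the best option
best = min(candidate_setpoints_c, key=expected_cost)
print(f"Recommended setpoint: {best} °C (expected cost {expected_cost(best):.1f})")
```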
Retinal blood vessels segmentation using Wald PDF and MSMO operator
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Sushil Kumar Saroj, Rakesh Kumar, Nagendra Pratap Singh
The Sobel, Canny, Gradient, and Prewitt operators are classical methods for edge detection. They work well when edges are distinct and sharp. However, retinal blood vessels are generally thin and entangled, their intensities change gradually, and the contrast between vessels and their background is generally very low. In these scenarios, the classical methods fail to detect retinal vessels. Machine learning methods are data analytics techniques that use computational procedures to learn information directly from data without depending on a pre-determined equation as a model. In machine learning, supervised methods learn the rules for vessel detection from a training set of manually segmented standard samples based on given characteristics. These methods need a basic understanding of labelling to identify non-vessel and vessel pixels, and they require pre-labelled training datasets. Unsupervised algorithms instead seek intrinsic features of vessels to detect non-vessel as well as vessel pixels, but they can take considerable time to extract vessels. Deep learning methods such as CNN, DNN, and RNN are multi-level representation learning methods composed of simple but non-linear modules. These methods require bulk data to perform better than other methods and are very costly to train due to their sophisticated data models.
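As a point of reference for the limitation noted above, a hedged sketch of classical Sobel edge detection on a fundus image is shown below, assuming scikit-image is installed; `data.retina()` is a bundled sample image in recent scikit-image versions. A plain gradient-magnitude response of this kind tends to miss thin, low-contrast vessels.

```python
# Classical edge detection on a sample retinal fundus image with scikit-image.
from skimage import data, color, filters

fundus = data.retina()                    # RGB fundus photograph (sample image)
gray = color.rgb2gray(fundus)             # vessels have low contrast in grayscale

edges = filters.sobel(gray)               # classical gradient-magnitude operator
mask = edges > filters.threshold_otsu(edges)  # crude binarisation of edge strength

print(f"Fraction of pixels flagged as edges: {mask.mean():.3%}")
```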
Analyse vehicle–pedestrian crash severity at intersection with data mining techniques
Published in International Journal of Crashworthiness, 2022
The majority of previous research on crash severity modelling applied statistical regression models, whose performance may not be good when handling large, complicated crash data with many discrete variables or variables with a large number of categories. These models also usually rely on strict statistical assumptions, such as the linearity assumption, which is difficult to satisfy in crash circumstances [28,29]. As such, non-parametric data mining techniques are a potential solution. The data mining analytic process is designed to explore large amounts of data, or big data, to search for structures, commonalities, and hidden patterns or rules in the dataset. It is able to handle large, complicated datasets, requires relatively short data preparation time, provides satisfactory accuracy, and addresses non-linear effects and interactions among predictors [30,31].
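A minimal illustration of the kind of non-parametric model referred to above is sketched below using scikit-learn; the variables and records are synthetic assumptions, not real crash data or the paper's actual modelling pipeline.

```python
# Tree-based data mining on toy crash records: no linearity assumption, and
# discrete predictors are handled after simple one-hot encoding.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

crashes = pd.DataFrame({
    "light":       ["day", "night", "night", "day", "dusk", "night"],
    "speed_limit": [30, 50, 60, 40, 50, 60],
    "crossing":    ["marked", "unmarked", "unmarked", "marked", "marked", "unmarked"],
    "severity":    ["minor", "severe", "severe", "minor", "minor", "severe"],
})

# One-hot encode discrete predictors; trees capture non-linear effects and
# interactions among predictors without distributional assumptions.
X = pd.get_dummies(crashes.drop(columns="severity"))
y = crashes["severity"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```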
Transportation data visualization with a focus on freight: a literature review
Published in Transportation Planning and Technology, 2022
Yunfei Ma, Amir Amiri, Elkafi Hassini, Saiedeh Razavi
Factories, warehouses, and business zones have become more integrated into urban spaces over time. Therefore, the success of supply chains significantly depends on how well in-land traffic and urban road congestion can be managed around logistics hubs, such as warehouses. Road-based freight movements are a critical component of the supply chain and transportation networks, especially for the middle and last-mile distribution stages. According to data from Transforma Insights, more than 67% of the Internet of Things (IoT) connected devices worldwide in 2019 belonged to supply chain sectors, including consumers, transportation and storage, retail and wholesale, and manufacturing (Morrish and Hatton 2020). According to Statista, these devices generated about 9.11 zettabytes of data, more than nine times the annual volume of internet traffic in 2016 as estimated by Cisco (Cisco 2016). The drive to invest in big data collection capabilities continues: in 2021, more than 74% of companies were planning a moderate to significant investment, yet 70% of enterprise leaders said that most of their data remains underutilized, according to HFS Research (Phil, Ryan, and Rajagopalan 2021). Thus, it is essential that data analytics procedures are developed to make use of the available freight data to improve the visibility of goods movements on roads, increase supply chain efficiency, and justify the investment in data collection capabilities. Data analytics comprises three types: descriptive, predictive, and prescriptive, as sketched below.
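The three analytics types can be illustrated on toy freight delivery-time data; the figures, routes, and delay costs below are hypothetical assumptions used only to show how the types differ.

```python
# Descriptive, predictive, and prescriptive steps over toy freight data.
import numpy as np

delivery_hours = np.array([5.2, 6.1, 5.8, 7.0, 6.4, 6.9, 7.3])  # past shipments

# Descriptive: summarise what has happened
print(f"Mean delivery time: {delivery_hours.mean():.1f} h")

# Predictive: simple linear trend to anticipate the next shipment's duration
t = np.arange(len(delivery_hours))
slope, intercept = np.polyfit(t, delivery_hours, 1)
next_eta = slope * len(delivery_hours) + intercept
print(f"Forecast for next shipment: {next_eta:.1f} h")

# Prescriptive: pick the dispatch option with the lowest expected duration
options = {"route_A": next_eta, "route_B": next_eta - 0.8, "route_C": next_eta + 0.5}
print("Recommended option:", min(options, key=options.get))
```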