IoT Security Frameworks and Countermeasures
Published in Stavros Shiaeles, Nicholas Kolokotronis, Internet of Things, Threats, Landscape, and Countermeasures, 2021
G. Bendiab, B. Saridou, L. Barlow, N. Savage, S. Shiaeles
Best practices for backup and recovery help mitigate or minimize the effect of various threats; for example, contamination of device software/firmware with malware can be addressed by restoring a “clean” backup. Backups can thus serve as the last line of defense and the most effective countermeasure when mitigating ransomware attacks. Today, ransomware is one of the top cybersecurity threats in all sectors, such as government, manufacturing, retail, finance, healthcare, etc., with attacks occurring even in countries with access to the most advanced security technologies. In addition, ransomware is not restricted to personal computers; it can also attack servers, mobile devices, and cloud systems. When it comes to costs, the monetary damage caused by hackers exceeds the amount of the ransom, as organizations have to cover costs resulting from downtime, data loss, network/system restoration, and reputational damage [100, 101]. Despite all efforts, there is no security measure that can truly protect systems from ransomware. Users need to combine a variety of measures, such as antivirus systems, firewalls, IDS, use of authorized software, visiting only reputable websites, etc., to help decrease the probability of a ransomware attack [100]. To keep systems secure, organizations can choose among three types of backup for their data: a full backup, an incremental backup, or a differential backup [19]. However, organizations need to constantly update their strategies, as hackers are able to encrypt or block access to backup data as well.
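Because attackers may also target the backups themselves, it helps to keep offline copies and to verify their integrity before restoring. The following is a minimal sketch of such a verification step, not taken from the chapter: it assumes file-based backups and a JSON manifest of SHA-256 digests recorded when the backup was made; the paths and manifest format are illustrative.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup_dir: str, manifest_file: str) -> list[str]:
    """Return backup files whose current digest no longer matches the manifest."""
    manifest = json.loads(Path(manifest_file).read_text())  # {"relative/path": "hex digest"}
    tampered = []
    for rel_path, expected in manifest.items():
        candidate = Path(backup_dir) / rel_path
        if not candidate.exists() or sha256_of(candidate) != expected:
            tampered.append(rel_path)
    return tampered

# Example: flag backup files that were encrypted, deleted, or otherwise altered.
# print(verify_backup("/mnt/backups/2021-06-01", "/mnt/backups/2021-06-01.manifest.json"))
```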
Basic IT for Radiographers
Published in Alexander Peck, Clark’s Essential PACS, RIS and Imaging Informatics, 2017
Backups can be either full, incremental, or differential, depending on the chosen backup plan and the requirements of the application: Full backups take the entire dataset (e.g. a RIS database, or a PACS repository) and create an identical copy, either by lossless compression or in raw 1:1 format; this is both time-consuming and resource-intensive. Incremental backups include only data that have changed since the previous backup. Differential backups are similar to incremental backups, but include all data changed since the last full backup (rather than since the previous incremental backup).
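The distinction can be made concrete with a small file-selection sketch (an illustration, not from the chapter): it assumes file-based backups and externally tracked timestamps for the last full backup and the last backup of any kind; all names are hypothetical.

```python
from pathlib import Path

def files_to_back_up(root: str, mode: str,
                     last_full_time: float,
                     last_backup_time: float) -> list[Path]:
    """Select files for a backup run.

    mode:
      "full"         -> every file under root
      "incremental"  -> files modified since the last backup of any kind
      "differential" -> files modified since the last full backup
    """
    selected = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        mtime = path.stat().st_mtime
        if mode == "full":
            selected.append(path)
        elif mode == "incremental" and mtime > last_backup_time:
            selected.append(path)
        elif mode == "differential" and mtime > last_full_time:
            selected.append(path)
    return selected
```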
Data Reporting and Analysis
Published in Ron S. Kenett, Emanuel R. Baker, Process Improvement and CMMI® for Systems and Software, 2010
Ron S. Kenett, Emanuel R. Baker
Also of importance is the architecture of the data collection system. If all the elements of the data collection and analysis system are located on one machine, together with other data being collected by the organization, such as personnel records, online document storage, financial data, purchase orders, order status, etc., the backup strategy and frequency will be dictated by the organizational data center policies. In such a case, the backup strategy needs to consider the needs of all the affected parties and ensure that the organization’s management structure is responsive to all the user requirements. Backups are necessary for a number of reasons: data re-creation, disaster recovery, “what-if” analyses, etc. In the case of data re-creation, it is sometimes necessary to go back and re-create data from some point in the past. For example, an error may have been noted for some data point in the past, and all subsequent data points that depended on that value are wrong. The backup strategy must consider how far back the organization wants to retain the capability to retrieve old data; consequently, how long old data will be archived and stored is a major consideration. Governmental and regulatory agencies may also set mandatory requirements on data archiving. Department of Defense (DoD) contracts, for example, will often specify data retention for a minimum period of time extending beyond delivery of the final item on the contract. Accordingly, data storage facilities must be adequate to accommodate documents and backup tapes or disks for the required period of time.
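As a rough illustration of how such a retention requirement might be encoded (a hypothetical sketch, not part of the text), the following checks whether archived data may yet be disposed of under a contractual retention window; the seven-year figure and the dates are invented for the example.

```python
from datetime import date, timedelta

# Hypothetical policy: retain data for at least 7 years beyond final contract delivery.
RETENTION_AFTER_DELIVERY = timedelta(days=7 * 365)

def earliest_disposal_date(final_delivery: date) -> date:
    """Earliest date on which archived data may be disposed of under the policy."""
    return final_delivery + RETENTION_AFTER_DELIVERY

def may_dispose(final_delivery: date, today: date) -> bool:
    """True once the retention window tied to final delivery has elapsed."""
    return today >= earliest_disposal_date(final_delivery)

# Example: data tied to a contract delivered on 2010-03-31 may not be
# disposed of before roughly 2017-03-29 under this hypothetical policy.
print(earliest_disposal_date(date(2010, 3, 31)))
```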
Prediction of Future Failures for Heterogeneous Reliability Field Data
Published in Technometrics, 2022
Colin Lewis-Beck, Qinglong Tian, William Q. Meeker
Backblaze is a company that provides cloud backup storage to protect against customer data loss. Since 2013, Backblaze (2020) has been collecting daily operational data on all of the hard drives operating at its facilities. Every quarter the company reports detailed operational data and summary statistics on the different drive-models in operation through its website (https://www.backblaze.com/b2/hard-drive-test-data.html, accessed June 1, 2020). The purpose is to provide consumers and businesses with reliability information on different drive-models. The hard drives continuously spin in controlled-environment storage pods. Drives are run until failure or until they are replaced with newer-technology drives. When a hard drive fails, it is removed and replaced. In addition, the number of storage pods is increasing as Backblaze expands its business and adds drives to its storage capacity. A subset of these data was analyzed by Mittman, Lewis-Beck, and Meeker (2019). However, their focus was on comparing the reliability of the different drive-model brands, whereas our interest is in predicting the number of future failures over a fixed future period of time for a current population of drives.
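For readers who wish to explore the raw data, a minimal sketch along the following lines (not part of the paper) tallies drive-days and recorded failures per drive-model; it assumes a quarterly release has been unpacked into a directory of daily CSV files containing `date`, `model`, and `failure` columns.

```python
import glob
import pandas as pd

def failure_counts(csv_dir: str) -> pd.DataFrame:
    """Count drive-days and recorded failures per drive-model across daily CSV files."""
    frames = []
    for csv_file in glob.glob(f"{csv_dir}/*.csv"):
        # Each daily file has one row per operating drive; 'failure' is 1 on the day a drive fails.
        frames.append(pd.read_csv(csv_file, usecols=["date", "model", "failure"]))
    daily = pd.concat(frames, ignore_index=True)
    summary = daily.groupby("model").agg(
        drive_days=("failure", "size"),
        failures=("failure", "sum"),
    )
    return summary.sort_values("failures", ascending=False)

# Example: summarize one quarter of daily snapshots.
# print(failure_counts("data_Q1_2020"))
```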
Human mobility data in the COVID-19 pandemic: characteristics, applications, and challenges
Published in International Journal of Digital Earth, 2021
Tao Hu, Siqin Wang, Bing She, Mengxi Zhang, Xiao Huang, Yunhe Cui, Jacob Khuri, Yaxin Hu, Xiaokang Fu, Xiaoyue Wang, Peixiao Wang, Xinyan Zhu, Shuming Bao, Wendy Guan, Zhenlong Li
Mobility data consists of location stamps of individuals. While it can help reveal the underlying patterns of human movement behaviors, it also poses a challenge to privacy protection, as human movements are highly unique and predictable (Song et al. 2010; De Montjoye et al. 2013). The risk is even higher when different mobility datasets are merged, even with every dataset being anonymized (Kondor et al. 2020). Therefore, it is crucial to establish standards for the deposit, storage, processing, and distribution of mobility data. Researchers need to ensure that any identifiers are removed from the datasets before depositing the data. The storage of mobility data must be secure and must disallow any unauthenticated and unauthorized access. Data security in transit is also critical, as this ensures that data are protected while being transferred between networks, such as during the upload, download, and data transmission steps of processing and backup. Due to the complexity of the technologies involved in ensuring data security and long-term preservation, it is often not practical for researchers to host the data on their own. After the data are processed, researchers may choose a trusted data repository in which their datasets can be deposited (Corrado 2019).
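As a simple illustration of the de-identification step described above (a sketch with assumed column names, not from the article), direct identifiers can be dropped and device IDs replaced with salted hashes before deposit; note that pseudonymization alone does not prevent re-identification of highly unique trajectories, which is why secure storage and controlled access remain necessary.

```python
import hashlib
import pandas as pd

# Hypothetical column names; real mobility datasets vary.
DIRECT_IDENTIFIERS = ["name", "phone_number", "home_address"]

def pseudonymize(df: pd.DataFrame, id_column: str = "device_id",
                 salt: str = "project-specific-secret") -> pd.DataFrame:
    """Drop direct identifiers and replace the device ID with a salted SHA-256 hash."""
    cleaned = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    cleaned[id_column] = cleaned[id_column].astype(str).map(
        lambda v: hashlib.sha256((salt + v).encode()).hexdigest()
    )
    return cleaned

# Example: pseudonymize location records before depositing them in a repository.
# deposit_ready = pseudonymize(pd.read_csv("raw_mobility.csv"))
```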
Anomaly Detection Model for Predicting Hard Disk Drive Failures
Published in Applied Artificial Intelligence, 2021
Sladjana M. Djurasevic, Uros M. Pesovic, Borislav S. Djordjevic
HDDs have been the primary technology for computer data storage for several decades. Newly emerging SSDs (Solid State Drives), based on semiconductor storage, surpass HDDs in terms of response time and throughput. On the other hand, HDDs are roughly a dozen times cheaper per stored byte than SSDs (Appuswamy et al. 2017), and they remain the predominant data storage medium in both the enterprise and consumer markets. The electromechanical design of the HDD renders it more susceptible to failures than other components of the computer system, with an average annual failure rate in the range of 0.3 to 3%. An HDD failure generally leads to permanent data loss, and typically the cost of the lost data exceeds that of the HDD itself. The reliability of data storage on HDDs is significantly improved by RAID (Redundant Array of Independent Disks) technology, which preserves data even when one or more HDDs in the array fail. RAID is, however, largely confined to enterprise computer systems given its considerable cost and the multiple HDDs required to form a redundant array; typical consumer systems rely on a single HDD. Predicting HDD failure can therefore be very useful in preventing data loss, as it allows data to be backed up when a warning of imminent failure is raised.
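To make the prediction task concrete, the sketch below (an illustration, not the authors' model) fits an unsupervised IsolationForest to SMART attributes of healthy drives and flags anomalous daily readings as candidate imminent-failure warnings; the Backblaze-style SMART column names and file names are assumptions.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# SMART attributes often associated with impending failure (reallocated sectors,
# pending sectors, uncorrectable errors, etc.); raw-value column names assumed.
SMART_FEATURES = ["smart_5_raw", "smart_187_raw", "smart_188_raw",
                  "smart_197_raw", "smart_198_raw"]

def train_detector(healthy: pd.DataFrame) -> IsolationForest:
    """Fit an anomaly detector on SMART readings from drives that did not fail."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(healthy[SMART_FEATURES].fillna(0))
    return model

def flag_anomalies(model: IsolationForest, daily: pd.DataFrame) -> pd.DataFrame:
    """Return daily readings scored as anomalous (candidate imminent-failure warnings)."""
    scores = model.predict(daily[SMART_FEATURES].fillna(0))  # -1 = anomaly, 1 = normal
    return daily[scores == -1]

# Example usage with hypothetical CSVs of daily SMART snapshots:
# detector = train_detector(pd.read_csv("healthy_drives.csv"))
# warnings = flag_anomalies(detector, pd.read_csv("today_snapshot.csv"))
```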