Virtual File System
Published in Yi Qiu, Puxiang Xiong, Tianlong Zhu, The Design and Implementation of the RT-Thread Operating System, 2020
A file system is a set of abstract data types that implements the storage, hierarchical organization, access, and retrieval of data. It is a mechanism for providing underlying data access to users. Files and folders are two basic concepts of a file system: a file is where data is stored, and folders keep files organized in a tree structure.
Real-Time Operating Systems
Published in Leanna Rierson, Developing Safety-Critical Software, 2017
A file system is a way of managing data storage.† Similar to the file system in a desktop environment, the RTOS file system manages and hides the details of the various forms of data storage on the hardware. The file system provides the ability to open, close, read, write, and delete files and directories.
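As a rough illustration of these operations, the sketch below exercises open, write, read, close, and delete calls for files and directories through Python's POSIX-style os module; the file and directory names are made up for the example, and a real RTOS file system would expose equivalent calls through its own C API.

```python
import os

# Open (creating if absent), write, read back, and close a file.
fd = os.open("sensor.log", os.O_CREAT | os.O_RDWR, 0o644)
os.write(fd, b"temperature=21.5\n")
os.lseek(fd, 0, os.SEEK_SET)           # rewind before reading back
data = os.read(fd, 64)
os.close(fd)

# Create a directory, move the file into it, then delete both.
os.mkdir("archive")
os.rename("sensor.log", "archive/sensor.log")
os.remove("archive/sensor.log")        # delete the file
os.rmdir("archive")                    # delete the directory
```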
Off-chain management and state-tracking of smart programs on blockchain for secure and efficient decentralized computation
Published in International Journal of Computers and Applications, 2022
Mahdi Mallaki, Babak Majidi, Amirhossein Peyvandi, Ali Movaghar
The final issue to consider is that storing information on a Blockchain network is costly. For example, storing each KB of data on the Ethereum network costs about 0.076 USD, which amounts to roughly 76,000 USD per gigabyte. It is therefore necessary to use another platform to store smart application data. One such data storage platform is the InterPlanetary File System (IPFS) [19], a peer-to-peer network for data storage that is structurally very similar to a Blockchain. The unique identifier of any data on the IPFS network is the hash address (SHA-256) of that file. Storage on this network is economically viable. The smart application data can therefore be stored on IPFS, and only the hash of that data is kept on the main Blockchain network.
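A minimal sketch of this off-chain pattern, under the simplifying assumption that a raw SHA-256 digest stands in for the content identifier (real IPFS identifiers are multihash-based CIDs): the bulky application data would go to IPFS, and only the digest would be written to the expensive main chain. The put_on_ipfs and record_on_chain names are hypothetical placeholders.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest used as the content identifier (simplified;
    IPFS actually wraps the digest in a multihash/CID encoding)."""
    return hashlib.sha256(data).hexdigest()

app_state = b'{"contract": "escrow-42", "balance": 1500}'  # illustrative payload
digest = content_hash(app_state)

# Hypothetical calls: the data itself is stored off-chain on IPFS,
# while only the 32-byte digest is recorded on the main Blockchain.
# put_on_ipfs(app_state)
# record_on_chain(digest)

print(digest)
```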
A feature-based intelligent deduplication compression system with extreme resemblance detection
Published in Connection Science, 2021
Xiaotong Wu, Jiaquan Gao, Genlin Ji, Taotao Wu, Yuan Tian, Najla Al-Nabhan
However, there is a very small probability of a hash collision, meaning that different chunks have the same fingerprint. For various hash algorithms, Wen et al. (2016) analysed the hash collision probability for different data sizes under an average chunk size of 8 KB. In the worst case considered, 1 YB of data, the collision probability with SHA-1 remains extremely small, and it decreases further with SHA-256 for the same size. On the other hand, SHA-256 requires more computation. Therefore, a deduplication system selects the most appropriate hash algorithm according to its storage and computational capability. For the hash algorithms SHA-1, SHA-256, and SHA-512, the output lengths are 160, 256, and 512 bits, respectively. In general, the system extracts the first ξ bits of the output as the fingerprint of a chunk. For example, Muthitacharoen et al. (2001) indexed each chunk by the first 64 bits of its SHA-1 hash value in their network file system.
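The chunk-fingerprinting step described above might look like the following sketch, which hashes a chunk with a chosen algorithm and keeps only the first ξ bits (here ξ = 64, matching the Muthitacharoen et al. example); the function and variable names are illustrative, not from the cited systems.

```python
import hashlib

def fingerprint(chunk: bytes, algorithm: str = "sha1", xi_bits: int = 64) -> int:
    """Return the first xi_bits bits of the chunk's hash as an integer fingerprint."""
    digest = hashlib.new(algorithm, chunk).digest()   # 20/32/64 bytes for SHA-1/256/512
    nbytes = (xi_bits + 7) // 8
    value = int.from_bytes(digest[:nbytes], "big")
    return value >> (nbytes * 8 - xi_bits)            # drop any extra low-order bits

# Example: index two chunks by 64-bit SHA-1 fingerprints, as in Muthitacharoen et al. (2001).
index = {fingerprint(c): c for c in (b"chunk A" * 1024, b"chunk B" * 1024)}
print([hex(fp) for fp in index])
```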
An architecture for synchronising cloud file storage and organisation repositories
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Gil Andriani, Eduardo Godoy, Guilherme Koslovski, Rafael Obelheiro, Mauricio Pillon
Storage systems based on the Network File System (NFS) protocol [18], the Common Internet File System (CIFS) standardisation effort [19], and the Server Message Block (SMB) protocol [20] are widely used in enterprise environments. Such solutions work well for servers replicated through virtual private networks. While these storage systems can be deployed in Paris and New York, collaborators in Joinville and Prague must use a different approach for sharing files, even if a virtual private network (VPN) over the Internet is available. Although a VPN offers a private communication channel, the traffic is routed over the Internet and no quality-of-service guarantees are provided (unlike a private network between Paris and New York). In short, the packet losses commonly observed on the Internet degrade the performance of the TCP congestion-control algorithm [21], and this degradation propagates to the final applications. Consequently, the synchronisation time increases, mainly for small files. Moreover, for collaborators within a single site (for instance, users at the Paris and New York facilities in Figure 1), the Internet access link can become a bottleneck that degrades the perceived quality-of-experience of file synchronisation with the cloud.
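To make the effect of loss on TCP concrete, the sketch below evaluates the classic Mathis et al. first-order throughput bound, throughput ≈ (MSS/RTT)·C/√p with C ≈ 1.22. It is only a back-of-the-envelope model (not necessarily the algorithm cited as [21]), and the MSS, RTT, and loss values are illustrative.

```python
from math import sqrt

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput in bytes/s (Mathis et al. model)."""
    return (mss_bytes / rtt_s) * (sqrt(1.5) / sqrt(loss_rate))

MSS = 1460   # bytes, typical Ethernet-sized segment
RTT = 0.09   # 90 ms round-trip, e.g. an intercontinental VPN path (illustrative)
for loss in (0.0001, 0.001, 0.01):
    mbps = mathis_throughput(MSS, RTT, loss) * 8 / 1e6
    print(f"loss={loss:.4%}: ~{mbps:.1f} Mbit/s")
```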