Security and Privacy in Mobile Cloud Computing
Published in Mukesh Kumar Awasthi, Ravi Tomar, Maanak Gupta, Mathematical Modeling for Intelligent Systems, 2023
The most serious data security issues arise because mobile devices' data is stored and processed in clouds located at network operators' data centers. The principal information concerns are data loss, data leakage, backup and recovery, data localization, and data protection. The loss and compromise of data violate two security properties: integrity and confidentiality. Data loss in this sense refers to user information that is damaged or destroyed by a physical fault during processing, transport, or storage. In a data breach incident, individuals' information is acquired, copied, or used by intruders. Both can be caused by malevolent insiders or by malicious external applications. Another concern is data recovery. Moreover, because users' data is kept on the service providers' premises in cloud service models, firms need to know where the information is hosted and stored; data localization is therefore also a challenge. One customer's information should also be kept separate from other customers': when one user's data is mixed, combined, or confounded with that of other users, it becomes considerably more vulnerable. Whenever data is offloaded to cloud servers to extend storage capacity, mobile devices simultaneously lose physical control of that data. As a result, in the cloud storage scenario, one of the central issues for mobile users is the integrity of their data. Even though data centers are considerably more reliable and efficient than portable devices, they face a plethora of threats and vulnerabilities to data security.
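Because the device gives up physical control once data is offloaded, integrity has to be checked end-to-end rather than trusted to the provider. As a minimal sketch of that idea (not from the chapter; the function names and the device-side manifest scheme are hypothetical), a mobile client can record a cryptographic digest before upload and re-verify it after retrieval:

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 digest of a file, streamed in 1 MiB chunks to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_before_upload(path: Path, manifest: dict) -> None:
    # The manifest stays on the device: the only trusted copy of the digest.
    manifest[path.name] = digest(path)

def verify_after_download(path: Path, manifest: dict) -> bool:
    # Silent corruption or tampering while in the cloud shows up as a mismatch.
    return manifest.get(path.name) == digest(path)
```

The essential design point is that the digest is computed and kept on the device, so verification does not depend on any claim made by the cloud provider.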
Selecting an Export Market
Published in Sarita D. Jackson, International Trade in Services, 2021
One tool that can serve as a starting point for examining trends in which services are bought and sold across countries is the Trade Map database. This database is published by the International Trade Centre and consists of data collected from the United Nations and the World Trade Organization. For instance, a U.S.-based financial services company may consider Japan as a potential export market because of the 2020 trade agreement between the United States and Japan and the fact that the trade deal includes provisions pertaining to digital trade. The U.S.-Japan digital trade provisions prohibit data localization requirements, including for foreign service suppliers. The same U.S.-based financial services company can use the Trade Map database to see the most recent trends in U.S.–Japan financial services trade (Table 6.1).
Distributed Systems
Published in Vivek Kale, Digital Transformation of Enterprise Architecture, 2019
Improved performance: A distributed DBMS fragments the database by keeping the data closer to where it is needed most. Data localization reduces the contention for CPU and I/O services and simultaneously reduces access delays involved in wide area networks. When a large database is distributed over multiple sites, smaller databases exist at each site. As a result, local queries and transactions accessing data at a single site have better performance because of the smaller local databases. In addition, each site has a smaller number of transactions executing than if all transactions are submitted to a single centralized database. Moreover, inter-query and intra-query parallelism can be achieved by executing multiple queries at different sites, or by breaking up a query into a number of subqueries that execute in parallel. This contributes to improved performance.
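As a rough illustration of these two forms of parallelism (a sketch over invented data, not the book's example; the site fragments and helper names are hypothetical), a coordinator can push a filter down to each site's fragment, run the subqueries concurrently, and merge the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical horizontal fragments: each "site" holds only its local rows.
SITES = {
    "site_a": [{"cust": 1, "region": "east", "total": 120}],
    "site_b": [{"cust": 2, "region": "west", "total": 340},
               {"cust": 3, "region": "west", "total": 85}],
}

def subquery(rows, min_total):
    """Per-site subquery: filter locally so only matching rows cross the network."""
    return [r for r in rows if r["total"] >= min_total]

def distributed_query(min_total):
    # Intra-query parallelism: every fragment is scanned at the same time.
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(subquery, SITES.values(),
                              [min_total] * len(SITES)))
    # The coordinator merges the partial results into one answer.
    return [row for part in parts for row in part]

print(distributed_query(100))  # rows from both fragments, filtered locally
```

The design choice the book describes shows up in `subquery`: filtering happens at the fragment, so wide-area traffic carries only the rows the query actually needs.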
Current status and future directions of geoportals
Published in International Journal of Digital Earth, 2020
Hao Jiang, John van Genderen, Paolo Mazzetti, Hyeongmo Koo, Min Chen
Geoportal common functionalities include a metadata registry, data discovery through a catalogue service, data visualization, and data access. A geospatial metadata catalogue provides data descriptions in terms of metadata (e.g. contributor, data type, language, contact point, keywords, and dataset identifiers for data localization and indexing). In addition, the metadata catalogue is often used for implementing harmonized data discovery. Since users are typically interested in finding datasets matching specific constraints, data discovery is one of the basic functions that geoportals offer. Specifically, geoportals providing data discovery generally allow searching datasets along the who, when, where and what axes, that is, by geo-location (where), data provider (who), time range (when), and thematic layer and keywords (what). The user interface provides graphical tools, such as a bounding box on a map, to set spatial and temporal constraints. Moreover, users can be directed to a gazetteer, a thesaurus, or other knowledge bases to scope their queries more precisely. Various approaches have been developed to enhance geoportal search capabilities, e.g. the use of thesauri, ontologies, and semantic text matching algorithms (Wang, Gong, and Wu 2007; Santoro et al. 2012).
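A simplified sketch of discovery along the where/who/when/what axes, assuming an invented record schema and function names (production geoportals implement this through a catalogue service rather than in-memory filtering), could look like:

```python
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    # Minimal catalogue entry covering the four search axes in the text.
    identifier: str
    provider: str                                  # who
    bbox: tuple                                    # where: (min_lon, min_lat, max_lon, max_lat)
    start_year: int                                # when
    end_year: int
    keywords: set = field(default_factory=set)     # what

def bbox_intersects(a, b):
    """True if two (min_lon, min_lat, max_lon, max_lat) boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def discover(catalogue, bbox=None, provider=None, years=None, keywords=None):
    """Filter records along the where/who/when/what axes; None means 'any'."""
    hits = []
    for rec in catalogue:
        if bbox and not bbox_intersects(rec.bbox, bbox):
            continue
        if provider and rec.provider != provider:
            continue
        if years and (rec.end_year < years[0] or rec.start_year > years[1]):
            continue
        if keywords and not keywords & rec.keywords:
            continue
        hits.append(rec)
    return hits
```

The bounding box parameter plays the role of the graphical map tool mentioned above: the interface translates the drawn box into a spatial constraint on the catalogue query.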
Population distribution modelling at fine spatio-temporal scale based on mobile phone data
Published in International Journal of Digital Earth, 2019
Petr Kubíček, Milan Konečný, Zdeněk Stachoň, Jie Shen, Lukáš Herman, Tomáš Řezník, Karel Staněk, Radim Štampach, Šimon Leitgeb
A review of existing research is presented in two separate discussions – one dealing with the technological background of mobile phone localization studies and the other analysing the potential of mobile phone data localization under emergency management.
Implementation of the parallel mean shift-based image segmentation algorithm on a GPU cluster
Published in International Journal of Digital Earth, 2019
Fang Huang, Yinjie Chen, Li Li, Ji Zhou, Jian Tao, Xicheng Tan, Guangsong Fan
Although parallelization of the serial mean shift algorithm achieved a high speedup ratio on heterogeneous platforms, the time consumption of this algorithm on a single-GPU system is still a major barrier to making it really useful (Li and Xiao 2009; Zhou, Zhao, and Ma 2010), especially when the data volume reaches a certain level. In particular, when detecting changes in multi-temporal RS images, a single GPU is simply not powerful enough to finish the task in a timely manner. Because of the excellent cost-to-performance ratio of GPU-based heterogeneous systems, many researchers have in recent years begun to carry out research on GPU clusters. For instance, Zhang et al. (2010) used an MPI + OpenMP + CUDA hybrid programming model to accelerate high-resolution molecular dynamics simulations of proteins on an eight-node GPU cluster and achieved very good speedup. To solve data localization problems in a large-scale parallel fast Fourier transform (FFT) algorithm, Chen, Cui, and Mei (2010) implemented the Peking University FFT (PKUFFT) algorithm, which transformed 512 GB of 3D data on a 16-node GPU cluster; in comparisons with the FFTW and Intel MKL libraries, they achieved speedups of 24.3 and 7, respectively. In a study on parallel programming interfaces for GPU clusters, Fan, Qiu, and Kaufman (2008) proposed a common programming framework for GPU clusters called Zippy. Zippy used the GA library (Nieplocha et al. 2006), Cg libraries, OpenGL libraries, and CUDA to achieve non-uniform memory access and a two-level parallel mechanism to resolve data inconsistencies in GPU memory across the cluster. In 2009, Lawlor developed the cudaMPI library for general-purpose computing using the MPI + CUDA hybrid programming model (Lawlor 2009). This library provided an application programming interface (API) similar to that of the MPI communication interface, but was available for NVIDIA GPU clusters only. Kim et al. (2012) proposed a common programming framework, SnuCL, for CPU/GPU heterogeneous clusters using MPI + OpenCL hybrid programming to reduce the complexity of GPU cluster programming and to address the poor maintainability and portability of GPU cluster applications. By packaging MPI communication functions behind an OpenCL API, SnuCL provides users an API that supports GPU cluster communication (Kim et al. 2012). SnuCL scales well in small and medium-sized GPU clusters.
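To make the pattern shared by these hybrid models concrete, the following is a minimal sketch (not code from any of the cited systems) of the scatter/compute/gather decomposition, written with mpi4py and NumPy; the per-node GPU kernel is replaced by a CPU stand-in, the tile layout and function names are assumptions, and halo exchange between strips is omitted for brevity:

```python
# Run with: mpiexec -n 4 python tile_demo.py
import numpy as np
from mpi4py import MPI

def local_kernel(tile):
    """Stand-in for the per-node GPU kernel (e.g. one mean-shift smoothing
    pass): a trivial 3x3 box filter implemented in NumPy."""
    padded = np.pad(tile, 1, mode="edge")
    out = np.zeros_like(tile, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + tile.shape[0],
                          1 + dx : 1 + dx + tile.shape[1]]
    return out / 9.0

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    image = np.random.rand(512, 512)              # toy stand-in for an RS image
    tiles = np.array_split(image, size, axis=0)   # row-strip decomposition
else:
    tiles = None

tile = comm.scatter(tiles, root=0)    # MPI level: one strip per node
result = local_kernel(tile)           # node level: "GPU" works independently
strips = comm.gather(result, root=0)  # coordinator reassembles the image

if rank == 0:
    full = np.vstack(strips)
    print("processed image shape:", full.shape)
```

In the real systems surveyed above, `local_kernel` would be a CUDA or OpenCL kernel and the strips would carry halo rows so that neighborhood operations remain correct at strip boundaries; the MPI-level structure, however, is the same.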