Big Data
Published in James William Martin, Operational Excellence, 2021
Big data is driving a global transformation of the ways we learn, work, and produce goods and services. First, there is the IoT, composed of interconnected devices and sensors that report status, predict performance, and control the devices to which they are connected. Currently there are more than twenty billion such connections. They control global production and services across supply chains, and they offer opportunities to improve efficiency while meeting customer expectations. Second, there is virtualization in the design of almost anything today. This enables physical objects to be created from models and algorithms, and a model can be tested in virtual environments to identify design flaws and correct them prior to production. Service system models can likewise be simulated to analyze how they respond to changes in incoming demand and to lost capacity when systems fail. Data virtualization promotes the use of Big Data because data can be organized and presented in easily consumable formats that provide insight into relationships and status for decision making. It also provides a single, trusted source of truth.
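The point about simulating service system models can be made concrete with a small sketch. The following is a minimal, illustrative single-server queue simulation in Python; the M/M/1 assumptions, rates, and names are ours, not the book's. It shows how a service model can be stress-tested against rising demand before any physical system is changed.

```python
import random

def simulate_queue(arrival_rate, service_rate, n_customers, seed=42):
    """Simulate a single-server (M/M/1) queue; return the mean waiting time.

    arrival_rate -- mean arrivals per unit time (lambda, assumed Poisson)
    service_rate -- mean services per unit time (mu, assumed exponential)
    """
    rng = random.Random(seed)
    t = 0.0                # current arrival time
    server_free_at = 0.0   # time the single server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)      # next customer arrives
        start = max(t, server_free_at)          # wait if the server is busy
        total_wait += start - t
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# How does waiting time respond as incoming demand approaches capacity?
for lam in (0.5, 0.7, 0.9):
    print(f"arrival rate {lam}: mean wait {simulate_queue(lam, 1.0, 100_000):.2f}")
```

Running the sketch shows waiting times growing sharply as demand nears capacity, the kind of design flaw a virtual test can expose before production.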
Predicting the Future of Augmented Intelligence
Published in Judith Hurwitz, Henry Morris, Candace Sidner, Daniel Kirsch, Augmented Intelligence, 2019
One of the issues that organizations have to grapple with is the need to move sensitive data outside of their organization in order to execute machine learning models. Techniques that enable a business to move the model to the data, rather than moving the data, provide more secure ways of protecting data during analytic processing. An emerging approach is data virtualization. Data virtualization allows organizations to manage data access and to manipulate and query data without having to move the data into a single repository or warehouse. In essence, data virtualization is a peer-to-peer architecture whereby queries are broken down and sent closer to the data sets. After all the subqueries are processed, results are combined along the way, thus eliminating the application entry point/service node as the bottleneck. Data virtualization allows organizations to analyze data where it resides rather than requiring that the data be moved to a different location.
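A toy sketch can illustrate the query decomposition the authors describe. Below, two in-memory SQLite databases stand in for data sets living at separate sites; the same aggregate subquery is pushed down to each source and only the partial results are combined on the way back. The schema, table names, and use of SQLite are our illustrative assumptions, not a description of any particular virtualization product.

```python
import sqlite3

# Two in-memory databases stand in for data sets at separate sites.
def make_source(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return conn

site_a = make_source([("east", 100.0), ("east", 50.0)])
site_b = make_source([("west", 75.0), ("east", 25.0)])

# Virtualization layer: send the same subquery closer to each data set ...
subquery = "SELECT region, SUM(amount) FROM sales GROUP BY region"
partials = [dict(conn.execute(subquery).fetchall()) for conn in (site_a, site_b)]

# ... then combine the partial aggregates along the way, so raw rows never
# leave their source and no single repository or warehouse is required.
combined = {}
for partial in partials:
    for region, total in partial.items():
        combined[region] = combined.get(region, 0.0) + total

print(combined)  # {'east': 175.0, 'west': 75.0}
```

Because each site returns only a small aggregate, the entry point combines results instead of shipping and scanning all the raw data itself, which is what removes it as the bottleneck.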
Schema on read modeling approach as a basis of big data analytics integration in EIS
Published in Enterprise Information Systems, 2018
Slađana Janković, Snežana Mladenović, Dušan Mladenović, Slavko Vesković, Draženko Glavić
Zdravković and Panetto (2017) highlighted that current challenges in EIS development are related to the growing need for flexibility caused by cooperation with other EISs. The EIS environment has become highly dynamic and variable, not only in terms of collaboration with other EISs but also in terms of the availability of data sources. The research aims to offer a solution that efficiently meets the following three key requirements: the frequent appearance of new Big Data sources (whether corporate or external), the application of new data processing, analysis, and visualization methods, and the integration of structured (i.e. relational), semi-structured, and unstructured data sources.
To solve these problems, the schema alignment method of data integration has been selected. The traditional schema alignment method has been adapted to Big Data sources and Big Data analysis methods by basing it on the schema on read data modeling approach and on data virtualization concepts. Schema on read means the schema is created only when the data is read: structure is applied at read time, which allows unstructured data to be stored in the database. Because the schema need not be defined before the data is stored, new data sources can be brought in on the fly. Data virtualization is any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted at the source or where it is physically located. The research also provides a technological framework for implementing the proposed integration model. It includes the following three technological environments: NoSQL databases, data virtualization servers, and data integration tools.
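To make the schema-on-read idea concrete, here is a minimal Python sketch; the JSON-lines store and the transport-flavored field names are our illustrative assumptions, not the paper's implementation. Heterogeneous records are stored with no schema enforced at write time, and a schema is projected onto them only when they are read, which is what allows new sources to be added on the fly.

```python
import io
import json

# Write side: raw records are stored as-is, no schema enforced up front.
raw_store = io.StringIO()
for record in (
    {"vehicle": "bus-12", "speed_kmh": 41},
    {"vehicle": "tram-3", "speed_kmh": 23, "line": "7"},  # extra field: fine
    {"sensor": "loop-9", "count": 118},                   # new shape: also fine
):
    raw_store.write(json.dumps(record) + "\n")

# Read side: the schema exists only here. Each analysis projects just the
# fields it needs, so new data sources require no migration of the store.
raw_store.seek(0)
speeds = [
    (rec["vehicle"], rec["speed_kmh"])
    for rec in map(json.loads, raw_store)
    if "speed_kmh" in rec
]
print(speeds)  # [('bus-12', 41), ('tram-3', 23)]
```

The design trade-off is the one the excerpt implies: write-time flexibility for ingesting unstructured and semi-structured sources, at the cost of pushing validation and interpretation into every reader.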