Data Analytics for the Smart Grid
Published in Stuart Borlase, Smart Grids, 2018
Greg Robinson, Jim Horstman, Mirrasoul J. Mousavi, Gowri Rajappan
Big Data Capture and Integration: A Big Data Appliance is a converged hardware and software platform for big data that can be used to capture and analyze data from a wide variety of sources. Big data integration and analysis should support all enterprise data, whether structured or unstructured; this collective repository is often referred to as the "Data Lake." Data access refers to software and activities related to storing, retrieving, or acting on data housed in the overall data repository system. Solutions are usually a hybrid of technologies, with each component using different methods and languages. It is noteworthy that many of these individual repositories will store their content in different and incompatible formats. Big data solutions provide data scientists with tools to access data from multiple heterogeneous (i.e., relational or nonrelational) data sources. These distributed data can then remain housed in the type of store best suited to their volume and variety. Big data integration usually occurs over a distributed file system that enables files to be divided into large blocks and distributed across a cluster of nodes. The Hadoop Distributed File System (HDFS) is an example of a storage solution that can address long-term data storage needs for files, including waveform, telemetry, and event data.
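As a concrete illustration of this kind of data access, the sketch below uses the standard Hadoop Java client to open and read a file stored on HDFS. The path /data/pmu/waveform.csv is a hypothetical example, not something named in the chapter; the cluster address is assumed to come from the usual core-site.xml configuration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS (the HDFS namenode address) from core-site.xml.
        Configuration conf = new Configuration();

        try (FileSystem fs = FileSystem.get(conf)) {
            // Hypothetical path to long-term waveform/event data kept on HDFS.
            Path file = new Path("/data/pmu/waveform.csv");

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                // Print the first few lines as a minimal smoke test of data access.
                reader.lines().limit(10).forEach(System.out::println);
            }
        }
    }
}
```

The same FileSystem API abstracts over other stores (local disk, object stores with suitable connectors), which is one way a hybrid solution can keep each dataset in the store best suited to it while exposing a common access layer.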
Big Data Optimization in Electric Power Systems: A Review
Published in Ahmed F. Zobaa, Trevor J. Bihl, Big Data Analytics in Future Power Systems, 2018
Iman Rahimi, Abdollah Ahmadi, Ahmed F. Zobaa, Ali Emrouznejad, Shady H.E. Abdel Aleem
To support a big data-based project, one first needs to analyze the data. There are specific data management tools for storing and analyzing large-scale data. Even in a simple project, several steps must be performed. Figure 4.1 shows these steps, which include data preparation, analysis, validation, collaboration, reporting, and access. Briefly:
- Data preparation is the process of collecting, cleaning, and consolidating data into one file or data table to be used in the analysis.
- Data analysis is the process of inspecting, cleansing, transforming, and modeling data to discover useful information, draw conclusions, and support decision-making.
- Data validation is the process of ensuring that data have undergone cleansing so that they are of acceptable quality, correct, and useful (a minimal sketch follows this list).
- Data collaboration means visualizing data from all available sources while getting the data from the right people, in the right format, to support effective decision-making.
- Data reporting is the process of collecting and submitting data, augmented with statistics, to the relevant authorities.
- Data access typically refers to software and activities related to storing, retrieving, or acting on data housed in a database or other repository.
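To make the validation step more concrete, the following sketch filters out incomplete or implausible records before analysis. The MeterReading record, its fields, and the plausibility bounds are hypothetical illustrations, not definitions from the chapter.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical measurement record, used only to illustrate validation.
record MeterReading(String meterId, double kwh, long timestampMillis) {}

public class DataValidation {
    // Keep only records that are complete and within assumed plausible ranges.
    static List<MeterReading> validate(List<MeterReading> readings) {
        return readings.stream()
                .filter(r -> r.meterId() != null && !r.meterId().isBlank())
                .filter(r -> r.kwh() >= 0.0 && r.kwh() < 1_000.0)   // assumed plausibility bound
                .filter(r -> r.timestampMillis() > 0)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<MeterReading> raw = List.of(
                new MeterReading("M-001", 12.4, 1_700_000_000_000L),
                new MeterReading("", -3.0, 0L));                    // fails every check
        System.out.println(validate(raw));                          // only M-001 survives
    }
}
```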
MarineMAS: A multi-agent framework to aid design, modelling, and evaluation of autonomous shipping systems
Published in Journal of International Maritime Safety, Environmental Affairs, and Shipping, 2019
Zhe Xiao, Xiuju Fu, Liye Zhang, Wanbing Zhang, Manu Agarwal, Rick Siow Mong Goh
The prototype system is developed using the J2EE framework, with integration of libraries supporting clustering, in-memory caching, and concurrent computing. It has a layered architecture, in keeping with J2EE conventions. The data access layer defines fundamental operations, such as query and update, that interface with the underlying database or in-memory cache. For performance, frequently used data are loaded into in-memory data structures when the system boots. The modelling data can be classified into core data for agent-based modelling, such as maritime traffic knowledge and GIS data, and supporting data that persist the modelling configuration parameters, indexes, and states for efficient operation. The function layer, i.e., the service layer, encapsulates higher-level processing logic (planning, SA, and actions) and invokes the basic DAO methods in the data access layer whenever a data operation is required. Beyond the core MAS modelling functionality, if the simulation runs over a cluster, the cluster features and the in-memory data grid (spanning the cluster) are also managed in this layer to achieve system scalability. The visualization layer provides the user interface for dynamic modelling animation, display of analytic results, and other system configuration information. The prototype system is implemented as a web-based B/S (browser/server) application; however, the modularized design makes it easier to transform into a desktop version.
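The split between the function layer and the data access layer can be illustrated with a small DAO sketch. The names below (VesselState, VesselStateDao, CachedVesselStateDao, CollisionAvoidanceService) are hypothetical and not taken from the paper; they only show how a service-layer component might invoke DAO methods backed by an in-memory cache, with the database fallback omitted.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical record held by the data access layer (not from the paper).
record VesselState(String mmsi, double lat, double lon, double speedKnots) {}

// Fundamental operations exposed by the data access layer.
interface VesselStateDao {
    Optional<VesselState> query(String mmsi);
    void update(VesselState state);
}

// DAO backed by an in-memory cache; frequently used data would be loaded at boot,
// and cache misses would fall through to the underlying database (omitted here).
class CachedVesselStateDao implements VesselStateDao {
    private final Map<String, VesselState> cache = new ConcurrentHashMap<>();

    @Override
    public Optional<VesselState> query(String mmsi) {
        return Optional.ofNullable(cache.get(mmsi)); // database fallback omitted
    }

    @Override
    public void update(VesselState state) {
        cache.put(state.mmsi(), state);              // write-through to DB omitted
    }
}

// A function-layer (service) component calls the DAO only when a data operation
// is needed, keeping planning logic separate from storage concerns.
class CollisionAvoidanceService {
    private final VesselStateDao dao;

    CollisionAvoidanceService(VesselStateDao dao) { this.dao = dao; }

    boolean isNearby(String mmsi, double lat, double lon, double radiusDeg) {
        return dao.query(mmsi)
                  .map(s -> Math.hypot(s.lat() - lat, s.lon() - lon) < radiusDeg)
                  .orElse(false);
    }
}
```

In a clustered deployment, the ConcurrentHashMap above would typically be replaced by a distributed in-memory data grid managed in the service layer, which is where the scalability described in the paper comes from.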