Developing an Energy Information System: An Efficient Methodology
Published in Barney L. Capehart, Lynne C. Capehart, Paul Allen, David Green, Web Based Enterprise Energy and Building Automation Systems, 2020
The database software is the basic building block of an EIS, so it should be one that works well with the web. A database that lets the programmer work with tables individually, rather than locking them inside a single database container, is preferable because it makes it easier to export and import data one table at a time and can simplify debugging. Almost all of today’s database engines support text indexing, that is, using non-numeric values as the key field of a table. This is a valuable feature because it eliminates the need to create fields whose only purpose is to act as indexes linking tables together. The database engine should, of course, be very fast: web data queries either take longer or we simply expect them to complete more quickly, so a fast engine is always desirable. The operating system and network software choices are most likely going to be the ones already in use at the organization; there is probably no advantage to using anything other than the systems already in place. The programming language should be a “higher-level” language rather than a scripting language designed specifically for the web, since the calculations and features of an EIS may well require the versatility of a more robust programming language. It, too, must be compatible with the web, widely used, and well supported. The choice of programming language may also affect where files are stored and how they are organized.
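As a rough illustration of the text-indexing point above, the sketch below (in Python with the standard sqlite3 module, purely for demonstration) keys a readings table on a textual meter identifier instead of a surrogate numeric index; the table and column names are invented for the example.

```python
# Minimal sketch (not from the chapter): a text-keyed table in SQLite,
# illustrating "text indexing" -- using a non-numeric value as the key
# field instead of adding a surrogate numeric index column.
# Table and column names (meter_readings, meter_id, ...) are hypothetical.
import sqlite3

conn = sqlite3.connect("eis_demo.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS meter_readings (
        meter_id   TEXT NOT NULL,   -- e.g. 'BLDG-12-ELEC' links directly to a meters table
        read_time  TEXT NOT NULL,   -- ISO 8601 timestamp
        kwh        REAL,
        PRIMARY KEY (meter_id, read_time)
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO meter_readings VALUES (?, ?, ?)",
    ("BLDG-12-ELEC", "2020-07-01T14:00:00", 153.2),
)
conn.commit()

# The same text value joins tables without an extra numeric key field.
rows = conn.execute(
    "SELECT read_time, kwh FROM meter_readings WHERE meter_id = ?",
    ("BLDG-12-ELEC",),
).fetchall()
print(rows)
conn.close()
```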
Tracking organ doses for patient safety in radiation therapy
Published in Jun Deng, Lei Xing, Big Data in Radiation Oncology, 2019
Wazir Muhammad, Ying Liang, Gregory R. Hart, Bradley J. Nartowt, David A. Roffman, Jun Deng
Tracking organ doses for all patients and for all radiation events over the entire course of treatment would produce a large amount of data, which would need to be carefully maintained by a database engine for data storage, update, and query. At the current stage, we choose to use the relational database management system SQLite as a demonstration implementation of the PODA database in our institution. SQLite is a highly efficient, open-source database engine suitable for small- to medium-sized data (<2 TB). Moreover, it is an embedded database engine, which means the database can be contained within the PODA system itself, with no need for a separate database server. The advantage of SQLite is the convenience it adds during PODA development and deployment, which compensates for its drawback in concurrency.
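The sketch below is not the authors' PODA code; it only illustrates, under an assumed schema and file name, how an embedded SQLite file can hold per-organ dose records and answer a cumulative-dose query without a separate database server.

```python
# Illustrative sketch only (not the authors' PODA implementation): storing
# per-organ dose records in an embedded SQLite file so no separate database
# server is needed. The schema and file name are assumptions for the example.
import sqlite3

db = sqlite3.connect("poda_demo.db")  # the whole database lives in one file
db.execute("""
    CREATE TABLE IF NOT EXISTS organ_dose (
        patient_id  TEXT NOT NULL,
        event_date  TEXT NOT NULL,   -- date of the radiation event
        modality    TEXT NOT NULL,   -- e.g. 'CT', 'EBRT'
        organ       TEXT NOT NULL,
        dose_gy     REAL NOT NULL,
        PRIMARY KEY (patient_id, event_date, modality, organ)
    )
""")
db.execute("INSERT OR REPLACE INTO organ_dose VALUES (?, ?, ?, ?, ?)",
           ("PT-0001", "2019-03-05", "CT", "liver", 0.012))
db.commit()

# Query the cumulative dose to one organ over the full course of care.
total, = db.execute(
    "SELECT COALESCE(SUM(dose_gy), 0) FROM organ_dose "
    "WHERE patient_id = ? AND organ = ?",
    ("PT-0001", "liver"),
).fetchone()
print(f"Cumulative liver dose: {total} Gy")
db.close()
```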
The Data Warehouse
Published in Richard J. Roiger, Data Mining, 2017
Broadly speaking, two general techniques have been adopted for implementing a data warehouse. One method is to structure the warehouse model as a multidimensional array. In this case, the data are stored in a form similar to the format used for presentation to the user. In Section 14.3, you will learn more about the advantages and disadvantages of the multidimensional database model. A more common approach stores the warehouse data using the relational model and invokes a relational database engine to present the data to the user in a multidimensional format. Here we discuss a popular relational modeling technique known as the star schema.
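As a concrete, illustrative example of the star schema (not taken from the chapter), the sketch below builds one fact table surrounded by three dimension tables in SQLite and answers a multidimensional-style question with an ordinary relational join; all table names and data are invented.

```python
# A minimal star-schema sketch: one fact table of sales measures surrounded
# by dimension tables, built with SQLite purely to show the relational layout.
import sqlite3

dw = sqlite3.connect(":memory:")
dw.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_store   (store_key INTEGER PRIMARY KEY, city TEXT, region TEXT);

    -- Fact table: foreign keys to each dimension plus the numeric measures.
    CREATE TABLE fact_sales (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        store_key   INTEGER REFERENCES dim_store(store_key),
        units_sold  INTEGER,
        revenue     REAL
    );

    INSERT INTO dim_date VALUES (1, '2017-01-15', 'January', 2017);
    INSERT INTO dim_product VALUES (10, 'Widget', 'Hardware');
    INSERT INTO dim_store VALUES (100, 'Denver', 'West');
    INSERT INTO fact_sales VALUES (1, 10, 100, 5, 49.95);
""")

# A typical "multidimensional" question answered with an ordinary SQL join:
# revenue by product category and year.
query = """
    SELECT p.category, d.year, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date d    ON d.date_key    = f.date_key
    GROUP BY p.category, d.year
"""
print(dw.execute(query).fetchall())
dw.close()
```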
JavaScript MEAN stack application approach for real-time nonconformity management in SMEs as a quality control aspect within Industry 4.0 concept
Published in International Journal of Computer Integrated Manufacturing, 2023
Aleksandar Đorđević, Miladin Stefanovic, Tijana Petrović, Milan Erić, Yury Klochkov, Milan Mišić
The authors believe that no previous research has presented such results and solutions, so modern NCM stands to benefit from the proposed approach. The use of technologies from the I4.0 toolset could contribute to better and more effective NCM, and QM generally, by tracing the path to Q4.0 and QMS 4.0. Consequently, new solutions and technologies could strengthen even the basic principles of QMS, such as employee involvement and innovation. Starting from the assumption that a universal solution applicable across multiple platforms is needed, this study set out to compare the relative strengths of several database systems in dealing with the increasingly complex and massive data streams emanating from I4.0 edge devices. Three databases, MySQL, MariaDB, and MongoDB, were analysed, and the advantages gravitated towards the document-oriented, schema-less MongoDB, which was adopted as the database engine for this research. Thus, the authors of this paper demonstrated that the implementation and use of affordable but advanced systems based on mobile platforms can improve a number of indicators connected with NCM.
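To make the document-oriented, schema-less point concrete, here is a hedged sketch of a nonconformity record stored in MongoDB via pymongo; the field names, database and collection names, and connection URI are assumptions for illustration and do not reflect the structure used in the study (whose implementation is JavaScript/MEAN-based).

```python
# Illustrative only: a schema-less nonconformity record of the kind a
# document store such as MongoDB can hold without a fixed table layout.
# All names and values below are invented for the example.
from pymongo import MongoClient  # requires a running MongoDB instance

client = MongoClient("mongodb://localhost:27017")
ncm = client["qms_demo"]["nonconformities"]

ncm.insert_one({
    "nc_id": "NC-2023-0042",
    "reported_at": "2023-02-14T09:30:00Z",
    "source": "edge-device-07",          # I4.0 edge device that raised it
    "severity": "major",
    "description": "Weld seam outside tolerance",
    "measurements": {"gap_mm": 1.8, "limit_mm": 1.2},  # nested data, no schema change needed
    "status": "open",
})

# Query open major nonconformities reported so far.
for doc in ncm.find({"status": "open", "severity": "major"}):
    print(doc["nc_id"], doc["description"])
```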
Improving big data governance in healthcare institutions: user experience research for honest broker based application to access healthcare big data
Published in Behaviour & Information Technology, 2023
Kanupriya Singh, Shangman Li, Isa Jahnke, Mauro Lemus Alarcon, Abu Mosa, Prasad Calyam
Regarding the computational complexity of the newly proposed system, Version 2.0 uses a trust model to compute two different trust values during the data brokering process (see Alarcon et al. 2021). The first trust value, conservative data identifier trust, is computed using a Dirichlet model that predicts the requestor’s long-term reputation, which is used later in the data approval process. The second trust value, optimistic data domain trust, is computed using a Beta model that predicts the requestor’s degree of responsibility in managing the requested data. Trustworthy, higher-ranked users are thus rewarded with the allocation of additional Computation and Analytics Workspace resources and tools, which encourages data usage best practices among the user community. The trust values are computed by comparing approved data identifier and domain items with the requested data identifier and domain items over the lifetime of a project. The computational complexity of these calculations is independent of the size of the requested dataset and of the parameters included; it corresponds to a constant time factor. The processing time of a data request instead depends on the underlying database engine used in a specific implementation and on the size of the output dataset, as is the case for any data management system.
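For intuition only, the snippet below shows a generic Beta-reputation calculation of the kind described; it is not the authors' exact trust formula (see Alarcon et al. 2021 for that), and the function name and counts are invented.

```python
# A generic Beta-reputation sketch to make the idea concrete; it is NOT the
# authors' exact trust formula. It takes counts of previously approved vs.
# rejected data-domain items and returns the posterior mean of a Beta
# distribution with a uniform prior.
def beta_trust(approved_items: int, rejected_items: int) -> float:
    """Expected trust in [0, 1]; constant-time in the size of the dataset."""
    alpha = approved_items + 1   # successes + prior
    beta = rejected_items + 1    # failures + prior
    return alpha / (alpha + beta)

# A requestor whose past domain requests were mostly approved scores higher
# and could be granted more Computation and Analytics Workspace resources.
print(beta_trust(approved_items=18, rejected_items=2))   # ~0.86
print(beta_trust(approved_items=3, rejected_items=7))    # ~0.33
```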
Air quality monitoring platform with multiple data source support
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2021
Dessislava Petrova-Antonova, Jelyazko Jelyazkov, Irena Pavlova
The architecture of the platform consists of the following layers:
Data storage layer – stores data in an SQLite database. Since SQLite does not require a separate server process or related configuration, it is used as an embedded SQL database engine, keeping the requirements on the deployment environment minimal.
Object Relational Mapping (ORM) layer – provides a level of abstraction between the database and the business layer, so the database can be replaced easily (see the sketch after this list). The mapping is implemented by a Hibernate ORM provider.
Business layer – processes data collected from the external APIs and provides data to the services called by the client’s interface.
Interface layer – implements services for communication with the external APIs and the client’s interface.
External APIs layer – delivers data from different sources, which the platform aggregates.
Client’s interface layer – implements the user interface for interaction with the platform.
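The platform’s ORM layer is Hibernate-based; purely to illustrate the abstraction it provides, the sketch below shows, in Python with invented names, a repository interface that the business layer depends on, backed here by embedded SQLite but replaceable without touching business code.

```python
# Language-agnostic sketch of the storage abstraction (the platform itself
# uses Hibernate): the business layer talks to an abstract repository, so the
# embedded SQLite store could be swapped for another database.
import sqlite3
from typing import Protocol


class MeasurementRepository(Protocol):
    def save(self, station: str, pollutant: str, value: float) -> None: ...
    def latest(self, station: str, pollutant: str) -> float | None: ...


class SqliteMeasurementRepository:
    """Embedded-SQLite implementation; no separate server process needed."""

    def __init__(self, path: str = "air_quality_demo.db") -> None:
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS measurement "
            "(id INTEGER PRIMARY KEY, station TEXT, pollutant TEXT, value REAL)"
        )

    def save(self, station: str, pollutant: str, value: float) -> None:
        self._db.execute(
            "INSERT INTO measurement (station, pollutant, value) VALUES (?, ?, ?)",
            (station, pollutant, value),
        )
        self._db.commit()

    def latest(self, station: str, pollutant: str) -> float | None:
        row = self._db.execute(
            "SELECT value FROM measurement WHERE station = ? AND pollutant = ? "
            "ORDER BY id DESC LIMIT 1",
            (station, pollutant),
        ).fetchone()
        return row[0] if row else None


# Business-layer code depends only on the repository interface.
repo: MeasurementRepository = SqliteMeasurementRepository()
repo.save("Sofia-Center", "PM2.5", 18.4)
print(repo.latest("Sofia-Center", "PM2.5"))
```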