General Reservoir Management Practices, Aberrations, and Consequences
Published in Ashok K. Pathak, Petroleum Reservoir Management, 2021
Data purging removes obsolete or superfluous data from the system to avoid unnecessary crowding of the active database. The purged data is not permanently deleted from the system; it is backed up and archived on a separate storage device so that it can be recalled later if required. At this point, let us differentiate between data backup and archiving. Database backup is usually a periodic, short-term measure, often mandated by both the organization and the government. It ensures that the operational database is always functional and that critical or essential business data is protected from accidental system failures, outages, or crashes. Database backup is performed with database management software that creates duplicate or multiple copies of the same data, stored locally or on a backup server.
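The purge-and-archive idea described above can be sketched as follows: records older than a cutoff are copied from the active database into a separate archive and only then deleted from the active store, so nothing is destroyed. The use of SQLite and the table and column names (`records`, `recorded_at`, `payload`) are assumptions for illustration, not details from the chapter.

```python
import sqlite3

def purge_to_archive(active, archive, cutoff):
    """Move records older than `cutoff` (ISO date string) from the active
    database into the archive, then delete them from the active store.
    Purged rows are archived rather than destroyed, so they can be
    recalled later if required."""
    rows = active.execute(
        "SELECT id, recorded_at, payload FROM records WHERE recorded_at < ?",
        (cutoff,)).fetchall()
    archive.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)
    active.execute("DELETE FROM records WHERE recorded_at < ?", (cutoff,))
    active.commit()
    archive.commit()
    return len(rows)

# In-memory databases stand in for the active database and the archive device.
active = sqlite3.connect(":memory:")
archive = sqlite3.connect(":memory:")
for db in (active, archive):
    db.execute("CREATE TABLE records (id INTEGER, recorded_at TEXT, payload TEXT)")
active.executemany("INSERT INTO records VALUES (?, ?, ?)",
                   [(1, "2015-03-01", "obsolete reading"),
                    (2, "2023-06-01", "recent reading")])
moved = purge_to_archive(active, archive, cutoff="2020-01-01")
```

In a real deployment the archive would live on separate storage and the deletion would run inside one transaction with the copy, but the division of roles is the same.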
A semi-automated approach to validation and error diagnostics of water network data
Published in Urban Water Journal, 2019
Jonas Kjeld Kirstein, Klavs Høgh, Martin Rygaard, Morten Borup
Flagged and missing data should be reconstructed and stored together with the validated data in an operational database. Such data are often stored in a uniform manner to account for differences in timestamp intervals between the various data streams. This application of the data is outside the scope of this article (Figure 1). However, our error analysis and visualization step can use the flags stored in the MAID to provide both short- and long-term diagnostics, as well as day-to-day visualizations of errors for use in daily operations. Having detected anomalies, operators can use this information to investigate whether the data are in fact erroneous and to improve future data collection processes (Figure 1).
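A minimal sketch of the reconstruction-and-flagging step described above: irregular readings from one stream are placed on a uniform timestamp grid, gaps are filled by linear interpolation, and each grid point carries a flag distinguishing observed from reconstructed values. The flag names and the interpolation choice are illustrative assumptions, not the authors' method.

```python
from datetime import datetime, timedelta

def to_uniform_grid(readings, start, end, step):
    """Place irregular (timestamp, value) readings on a uniform grid.
    Observed points are flagged 'validated'; gaps between observations
    are linearly interpolated and flagged 'reconstructed' (flag names
    are invented for this sketch)."""
    readings = sorted(readings)
    lookup = dict(readings)
    grid = []
    t = start
    while t <= end:
        if t in lookup:
            grid.append((t, lookup[t], "validated"))
        else:
            # nearest observed neighbours before and after the grid point
            before = max((r for r in readings if r[0] < t), default=None)
            after = min((r for r in readings if r[0] > t), default=None)
            if before and after:
                frac = (t - before[0]) / (after[0] - before[0])
                value = before[1] + frac * (after[1] - before[1])
                grid.append((t, value, "reconstructed"))
            else:
                grid.append((t, None, "missing"))
        t += step
    return grid

# two readings 10 minutes apart, resampled to a 5-minute grid
start = datetime(2019, 1, 1)
readings = [(start, 10.0), (start + timedelta(minutes=10), 14.0)]
grid = to_uniform_grid(readings, start,
                       start + timedelta(minutes=10), timedelta(minutes=5))
```

Storing the flag alongside the value is what lets a later diagnostics step separate genuine measurements from reconstructed ones.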
A survey on spatial, temporal, and spatio-temporal database research and an original example of relevant applications using SQL ecosystem and deep learning
Published in Journal of Information and Telecommunication, 2020
Kulsawasd Jitkajornwanich, Neelabh Pant, Mohammadhani Fouladgar, Ramez Elmasri
However, queries on these data in an operational database are complex and reduce the performance of the system. Data warehouse technology was therefore developed to increase query performance by pre-computing large groups of data at summarized and aggregated levels, so that complex queries are answered significantly faster. In other words, a data warehouse (a read-only repository) does not involve concurrency control or recovery mechanisms, so it can return query results with high throughput. As a consequence, decision support becomes easier and more robust, since routine logging and other tedious tasks are not required and aggregation/summarization is more focused.
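The pre-computation idea can be sketched with SQLite: row-level facts sit in an operational-style table, the warehouse step materializes a summarized table once, and analytical queries then read the small summary instead of scanning the raw rows. The table and column names are invented for this sketch.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# operational-style fact table holding row-level measurements
db.execute("CREATE TABLE readings (region TEXT, day TEXT, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?, ?)", [
    ("north", "2020-01-01", 3.0), ("north", "2020-01-01", 5.0),
    ("south", "2020-01-01", 2.0), ("south", "2020-01-02", 4.0)])

# warehouse-style step: pre-compute the aggregates once and store them
# in a read-only summary table
db.execute("""CREATE TABLE daily_summary AS
              SELECT region, day, SUM(value) AS total, COUNT(*) AS n
              FROM readings GROUP BY region, day""")

# analytical queries now hit the small summary instead of the raw rows
total_north = db.execute(
    "SELECT total FROM daily_summary WHERE region='north' AND day='2020-01-01'"
).fetchone()[0]
```

In a real warehouse the summary tables are refreshed in batch (e.g. nightly), which is why they can skip the concurrency control and recovery machinery an operational database needs.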
Identifying nonconformities in contributions to programming projects: from an engagement perspective in improving code quality
Published in Behaviour & Information Technology, 2023
Bao-An Nguyen, Hsi-Min Chen, Chyi-Ren Dow
Feature extraction was performed using the operational database and code repository. A total of 2687 submissions were made during the course. The average number of submissions per team was 65.5 (sd = 34.1; max = 194; min = 20), and the average number of submissions per student was 18.4 (sd = 16.4; max = 152; min = 1). The results of clustering analysis are highly susceptible to outliers (Hair et al. 1998); therefore, we conducted outlier detection based on the multivariate Mahalanobis distance prior to LPA. Two of the 46 teams (4.34%) were identified as outliers (Figure 5). These teams were excluded from the LPA; however, we addressed their engagement behavior in the discussion.
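For readers unfamiliar with the outlier screen mentioned above, the sketch below computes squared Mahalanobis distances for two-feature observations in plain Python (inverting the 2×2 sample covariance in closed form). The data, the feature pairing, and the chi-square-style cutoff are invented for illustration; the paper's actual features and tooling are not specified in this excerpt.

```python
def mahalanobis_sq(points):
    """Squared Mahalanobis distance of each 2-feature point from the
    sample mean, using the sample covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # sample covariance matrix entries
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det  # inverse covariance
    return [(x - mx) ** 2 * ixx
            + 2 * (x - mx) * (y - my) * ixy
            + (y - my) ** 2 * iyy
            for x, y in points]

# hypothetical (team submissions, student submissions) pairs; the last
# point is an extreme observation of the kind the screen removes
points = [(50, 10), (55, 15), (60, 20), (65, 12), (70, 25),
          (58, 18), (62, 14), (66, 22), (54, 16), (194, 152)]
d2 = mahalanobis_sq(points)
# flag points beyond a chi-square-style cutoff (df = 2, illustrative)
outlier_idx = [i for i, d in enumerate(d2) if d > 5.99]
```

Unlike a per-feature z-score, the Mahalanobis distance accounts for the correlation between features, which matters here because team- and student-level submission counts are strongly related.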