Federated data storage for the AEC industry
Published in Buildings and Semantics (Pieter Pauwels and Kris McGlinn, eds.), 2023
Jeroen Werbrouck, Madhumitha Senthilvel, Mads Holten Rasmussen
Web APIs provide an interface for communicating with a server over HTTP and triggering certain actions, given the right conditions. This ‘machine-based consumption of web content’ [401] happens through a structured system of requests (sent by the client to the server, opening the connection) and responses (sent by the server back to the client). An HTTP request is sent to a specific endpoint provided by the API (in the form of a URL), which tells the server what to do with the information contained in the request. A request can carry very specific information, the most visible of which is the ‘method’ (Table 6.1). For example, the GET method is used to retrieve data: when you visit a website through your web browser, a GET request is sent to the server, which then returns the actual content of the website to your browser. Other information carried in an HTTP request includes, for example, information needed to authenticate the user (using ‘headers’) or additional data necessary to process the request (i.e. the payload), mostly present in the ‘body’. While such a body can be structured in different formats, one of the most common ways to express structured data is to send it as JSON (JavaScript Object Notation).
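As an illustration, the following is a minimal sketch of this request/response cycle using Java's built-in HttpClient; the endpoint URL, the bearer token, and the JSON payload are placeholders, not part of the original chapter.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: retrieve data from an endpoint. Authentication travels in a header.
        HttpRequest get = HttpRequest.newBuilder(URI.create("https://example.org/api/projects"))
                .header("Authorization", "Bearer <token>")  // placeholder credential
                .GET()
                .build();
        HttpResponse<String> res = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(res.statusCode() + " " + res.body());

        // POST: send a JSON payload in the request body.
        String json = "{\"name\": \"Office building\", \"discipline\": \"architecture\"}";
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://example.org/api/projects"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```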
Intergovernmental and International Aquatic Ecological Programs: Approaches for Successful Implementation
Published in River Quality (Antonius Laenen and David A. Dunnette, eds.), 2018
Advances in scientific technology and knowledge of ecological principles have had significant impacts on ecological assessments, resource managers, and water quality regulators in the following ways:

- Advances in analytical chemistry have paralleled developments in ecology. Analytical instrumentation is now capable of producing lower detection levels, cheaper analyses, and a better awareness of new and complex findings in our environment.
- The development of the computer has been critical to our ability to store and analyze the large amounts of data collected in the last two decades. Computers allow us to store and retrieve large amounts of data efficiently and to respond to data requests in a timely manner. Computers have also allowed us to develop methods for analyzing data using models and statistics, which are effective tools for today’s scientists and policymakers. Information is now being shared beyond the dreams of our parents: the Internet gives the public, researchers, data managers, and policymakers access to very large data sets.
- The understanding, and misunderstanding, of ecology leads the public to demand that managers of our natural resources think, regulate, and manage ecologically, so that our children and grandchildren will be able to have water of good quality and experience the same vistas and habitats as we do now. An example of a program born of this concern is the President’s Forest Management Conference in Portland, OR, where President Clinton brought resource managers, environmental groups, industrialists, and scientists together to establish a policy of ecological management for our national forest lands. Carefully crafted, such a policy would provide the wood-products industry with its necessary resources, allow the public to enjoy the recreational attributes of forest land, and preserve natural habitats and clean water for future generations.
A machine learning approach for building an adaptive, real-time decision support system for emergency response to road traffic injuries
Published in International Journal of Injury Control and Safety Promotion, 2021
Salah Taamneh, Madhar M. Taamneh
Historical data about previously occurring accidents, as well as newly reported accidents, need to be stored in the system for further use. The original records are used to build the initial Predictor, while the newly stored ones are used to build updated versions of the Predictor. In the proposed system, the dataset is stored in a relational database management system called H2, which enables the system to store and retrieve the data efficiently. H2 supports three modes of operation: embedded, server, and in-memory; in this system, the embedded mode has been chosen. The database is very simple and consists of only two tables, each with as many attributes as are selected per accident (i.e., 16 attributes). The first table, called all_accidents, stores records with validated class values (i.e., severity degree); the initial dataset goes directly into this table. The second table, called new_accidents, stores reported accidents whose severities were generated by the Predictor but have not yet been validated. The Predictor’s outcomes are validated using information received from the medical units dispatched to the accident scene. Once the severity of such an accident has been confirmed by the system operator, the record is automatically moved to the first table.
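A minimal sketch of this two-table design using H2 in embedded mode (via JDBC, with the H2 driver on the classpath) is given below. The table names follow the text; the example columns are stand-ins for the 16 accident attributes, which are not listed in the excerpt.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class AccidentStore {
    public static void main(String[] args) throws SQLException {
        // Embedded mode: the database lives in a local file; no server process is needed.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./accidents");
             Statement st = conn.createStatement()) {

            // Validated records (initial dataset goes straight here).
            st.execute("CREATE TABLE IF NOT EXISTS all_accidents(" +
                       "id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, " +
                       "weather VARCHAR(32), road_type VARCHAR(32), severity VARCHAR(16))");

            // Predicted-but-unvalidated records.
            st.execute("CREATE TABLE IF NOT EXISTS new_accidents(" +
                       "id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, " +
                       "weather VARCHAR(32), road_type VARCHAR(32), severity VARCHAR(16))");

            // Once the operator confirms a predicted severity, move the record
            // from new_accidents into all_accidents (id 1 is illustrative).
            st.executeUpdate("INSERT INTO all_accidents(weather, road_type, severity) " +
                             "SELECT weather, road_type, severity FROM new_accidents WHERE id = 1");
            st.executeUpdate("DELETE FROM new_accidents WHERE id = 1");
        }
    }
}
```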
Circular interpolation and chronological-whale optimization based privacy preservation in cloud
Published in International Journal of Computers and Applications, 2021
S. Adhirai, Paramjit Singh, Rajendra Prasad Mahapatra
Xuyun Zhang et al. [21] proposed the Proximity-Aware Local-Recoding Anonymization algorithm for data privacy. The scheme handled large data growth in the cloud by incorporating the MapReduce framework, and it provided improved scalability and time efficiency. However, while it ensured privacy in the database, it increased data distortion. Yang Pan et al. [23] presented a retrievable data perturbation model for handling privacy issues in the cloud. The model preserved the privacy of the database without altering its mean and covariance, so the perturbed database could be retrieved more effectively. Because it used a large number of keys for privacy preservation, however, the model failed to tackle the challenges arising in key management. Gaofeng Zhang et al. [15] presented a noise generation strategy based on time-series patterns for privacy preservation in the cloud. The strategy improved the effectiveness of cloud privacy protection against the probability-fluctuation privacy risk.
Rainfall flood hazard at nuclear power plants in India
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2018
Rainfall data have been obtained from the IMD (India Meteorological Department), Pune, for the period 1901–2004 (National Climate Centre, Pune) (Rajeevan et al. 2006). The data are daily rainfall values on a high-resolution 1° × 1° longitude-latitude grid over the Indian region; one grid cell corresponds to approximately 111 km × 111 km at the equator. The rainfall data are arranged on a 35 × 33 array of grid points for the Indian region, with each cell representing a rectangular box in which a city of interest is located. Rainfall data from the various stations within a cell have been recorded. Figure 1 shows the grid cell in which Kalpakkam (*) is located; the total number of rainfall stations within this cell is 15. As the data were obtained from IMD Pune, it is not clear whether the rainfall values used in the analysis are averages over these stations or the maximum observed at any one station, nor is it clear which station is closest to Kalpakkam. The uncertainty arising from this issue is accounted for by an adjustment factor (IAEA NS-G-3.4 2003b). A FORTRAN program has been written to retrieve the data for a particular grid cell. The longitude-latitude locations of the various cities used in the analysis are shown in Table 1, and Figure 2 shows the locations of the various NPPs on a map of India (map courtesy: wikihow).
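For illustration, a minimal sketch of the grid lookup such a retrieval program performs is shown below (in Java rather than the authors’ FORTRAN). The grid extents (66.5°E–100.5°E, 6.5°N–38.5°N) are an assumption consistent with the 35 × 33 layout described above, and the Kalpakkam coordinates are approximate.

```java
public class GridLookup {
    // Assumed grid origin and spacing for the 35 x 33 IMD 1-degree grid.
    static final double LON0 = 66.5, LAT0 = 6.5, STEP = 1.0;
    static final int NLON = 35, NLAT = 33;

    // Returns {i, j} indices of the grid point nearest to (lon, lat),
    // or null if the point lies outside the grid.
    static int[] cellFor(double lon, double lat) {
        int i = (int) Math.round((lon - LON0) / STEP);
        int j = (int) Math.round((lat - LAT0) / STEP);
        if (i < 0 || i >= NLON || j < 0 || j >= NLAT) return null;
        return new int[] { i, j };
    }

    public static void main(String[] args) {
        // Kalpakkam, approx. 80.2 E, 12.5 N (illustrative coordinates).
        int[] cell = cellFor(80.2, 12.5);
        System.out.printf("Grid indices: i=%d, j=%d%n", cell[0], cell[1]);
        // Daily rainfall for 1901-2004 would then be read for these indices
        // from the gridded data file supplied by IMD.
    }
}
```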