Semantic Interoperability of Long-Tail Geoscience Resources over the Web
Published in Ashok N. Srivastava, Ramakrishna Nemani, Karsten Steinhaeuser, Large-Scale Machine Learning in the Earth Sciences, 2017
Mostafa M. Elag, Praveen Kumar, Luigi Marini, Scott D. Peckham, Rui Liu
We classified the semantic interoperability between resources over the Web into five classes based on the ability of one resource to programmatically reuse and understand the information model associated with another resource (Table 9.2). The Interoperable class includes resources that follow global metadata standards (e.g., Dublin Core); reusing this type of resource is straightforward and can usually be done programmatically. The Semi-Interoperable class covers the interoperability between a resource that follows a global standard and one that complies with domain-level standards, henceforth described as a partially-standardized resource; semantic mediation between the two standards is necessary to make the resources mutually interpretable. The Potential-Interoperable class describes the interoperability between two partially-standardized resources, where each resource is defined using its own domain concepts and vocabularies. The One-Sided Interoperable class covers the interoperability between a partially-standardized resource and a resource that is not supported with metadata, henceforth defined as a non-standardized resource; in this class, frequent scientist intervention is required to programmatically interpret and process the non-standardized resource. Finally, the Non-Interoperable class groups resources that are not supported with any metadata information. While it is difficult to quantify the cost that results from the lack of semantic interoperability, we believe it is necessary to elevate the partially-standardized and non-standardized resources to the standardized class (Table 9.2).
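To make the classification concrete, the following minimal sketch (ours, not the chapter's) expresses the five classes of Table 9.2 as a lookup over the metadata level of a pair of resources; the enum values and the treatment of a global-standard resource paired with a metadata-less one are assumptions, since the excerpt does not enumerate that pairing.

```python
from enum import Enum

class MetadataLevel(Enum):
    GLOBAL_STANDARD = "global"   # follows a global standard, e.g., Dublin Core
    DOMAIN_STANDARD = "domain"   # partially-standardized (domain-level standard)
    NONE = "none"                # non-standardized, no supporting metadata

def interoperability_class(a: MetadataLevel, b: MetadataLevel) -> str:
    levels = {a, b}
    if levels == {MetadataLevel.NONE}:
        return "Non-Interoperable"            # neither resource carries metadata
    if MetadataLevel.NONE in levels:
        return "One-Sided Interoperable"      # frequent scientist intervention needed
    if levels == {MetadataLevel.GLOBAL_STANDARD}:
        return "Interoperable"                # programmatic reuse is straightforward
    if levels == {MetadataLevel.DOMAIN_STANDARD}:
        return "Potential-Interoperable"      # two domain vocabularies, no shared standard
    return "Semi-Interoperable"               # global vs. domain: semantic mediation needed

print(interoperability_class(MetadataLevel.GLOBAL_STANDARD,
                             MetadataLevel.DOMAIN_STANDARD))
# -> Semi-Interoperable
```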
Media Systems Integration
Published in Al Kovalick, Video Systems in an IT Environment, 2013
The syntax is obvious: all the information is easily contained in a small file, e.g., London-text.xml. Importantly, XML is human readable. The labels may take many forms, and these are preferably standardized. Several groups have standardized the label fields (<scenes>), as described later. For example, one of the early standards (not A/V specific) is called the Dublin Core. The Dublin Core Metadata Initiative (DCMI) is an organization dedicated to promoting the widespread adoption of interoperable metadata standards and to developing specialized metadata vocabularies for describing resources, enabling more intelligent information-discovery systems (www.dublincore.org).
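As an illustration of how compact such a file can be, the sketch below writes a hypothetical London-text.xml using Dublin Core element names; the element values are invented for the example, and only the dc element-set namespace URI is taken from DCMI.

```python
import xml.etree.ElementTree as ET

# Dublin Core element set namespace (dublincore.org)
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# Illustrative values only -- not from the book's example file
record = ET.Element("metadata")
for term, value in [
    ("title", "London street scene, camera 2"),
    ("creator", "Example Productions"),
    ("date", "2013-05-01"),
    ("format", "video/mxf"),
    ("description", "Establishing shot used in scene 12"),
]:
    ET.SubElement(record, f"{{{DC}}}{term}").text = value

ET.ElementTree(record).write("London-text.xml",
                             encoding="utf-8", xml_declaration=True)
```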
Digital Cinema Distribution
Published in Charles S. Swartz, Understanding Digital Cinema, 2004
Of critical importance is the establishment of metadata standards. A significant amount of work has already taken place: metadata for describing the image itself, from the reference display system to the theater display system, has already been preliminarily defined through SMPTE. Elementary metadata identifying the values necessary to support interchange has been mapped and must be carried between systems to successfully display the original file.
Data Stream Management for CPS-based Healthcare: A Contemporary Review
Published in IETE Technical Review, 2022
Sadhana Tiwari, Sonali Agarwal
Data Management Plan Guidelines [61–63] for stream processing include the streaming nature, type, scope, and range of the high-speed sequence of infinite data streams. The data management plan emphasizes the following aspects of healthcare streaming data:
- The description of data type, sample size, and data acquisition software.
- The format of data and metadata, and standards as per the healthcare application.
- Design of policies for privacy protection, confidentiality, security, and other rights or requirements.
- Development of methods for re-use, re-distribution, and production of derivatives.
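As a rough illustration only, the four aspects above could be captured in a machine-readable plan record; every field name and value in this sketch is hypothetical and not taken from the cited guidelines.

```python
# Hypothetical machine-readable data management plan for a healthcare stream;
# all keys and values are illustrative placeholders.
streaming_dmp = {
    "data_description": {
        "data_type": "multichannel ECG stream",
        "sample_size": "512 Hz per channel, ~10k patients",
        "acquisition_software": "bedside-monitor gateway (vendor SDK)",
    },
    "formats_and_standards": {
        "data_format": "HL7 FHIR Observation resources",
        "metadata_standard": "Dublin Core plus domain extensions",
    },
    "privacy_and_security": {
        "policies": ["de-identification", "encryption in transit",
                     "role-based access control"],
    },
    "reuse_and_redistribution": {
        "license": "restricted research use",
        "derivative_products": ["aggregated trend reports"],
    },
}
```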
IoT based laundry services: an application of big data analytics, intelligent logistics management, and machine learning techniques
Published in International Journal of Production Research, 2020
Chang Liu, Yongfu Feng, Dongtao Lin, Liang Wu, Min Guo
The implementation of IoT-based big data analytics on enterprise-class architectures relies on big data and interdisciplinary technologies to handle enterprise-wide information flow and business operations. It has several layers, and each layer is designed to accomplish a different task using the output of the previous layer, except the first layer, the peripheral systems. Peripheral systems, which draw on third-party data sources, have one of the strongest influences on the accuracy and timeliness of the entire architecture; this is where most of the problems occur, because this layer is not built on internal data-processing modules. The second layer therefore deals mainly with data processing, the first step of internal data management. Based on the output of this layer, metadata standards, model management, and data processing begin; any data passing through this layer should be error-proof and ready for further use. The third layer focuses on data archiving at the enterprise level, and at this stage data can be shared with third parties. One of the key functions of this layer is to provide data to the next layer, the application-analysis layer, or to core applications; it also supports the entire query and reporting workload and all interdisciplinary applications such as the scheduling, ITIE, and RMDP modules. The last layer integrates enterprise-level information flow and provides detailed supporting processing for the entire business. In addition, to ensure the consistency of the information flow, skipping over a neighbouring layer is prohibited. Figure 4 shows the detail.
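The layer discipline described above, where each layer consumes only its predecessor's output and layer-skipping is prohibited, can be sketched as follows; the class names and payloads are illustrative and are not the paper's implementation.

```python
from typing import Any

class Layer:
    def process(self, upstream: Any) -> Any:
        raise NotImplementedError

class PeripheralSystems(Layer):      # third-party data sources (layer 1)
    def process(self, upstream):
        return {"raw": upstream}

class DataProcessing(Layer):         # validation and error-proofing (layer 2)
    def process(self, upstream):
        assert "raw" in upstream, "rejects malformed peripheral output"
        return {"clean": upstream["raw"], "metadata": {"validated": True}}

class EnterpriseArchive(Layer):      # enterprise-level archiving (layer 3)
    def process(self, upstream):
        return {"archived": upstream, "queryable": True}

class ApplicationAnalysis(Layer):    # reports, scheduling-style modules (layer 4)
    def process(self, upstream):
        return {"report": f"built from {len(upstream)} archive fields"}

def run_pipeline(source):
    data = source
    # Strict layer-to-layer hand-off: each layer sees only the previous output,
    # so skipping a layer is impossible by construction.
    for layer in [PeripheralSystems(), DataProcessing(),
                  EnterpriseArchive(), ApplicationAnalysis()]:
        data = layer.process(data)
    return data

print(run_pipeline("sensor readings from laundry machines"))
```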
Flood damage cost estimation in 3D based on an indicator modelling framework
Published in Geomatics, Natural Hazards and Risk, 2020
Mostafa Elfouly, Anna Labetski
The weaving process generates an XML document with derived and underived attributes and their types (XML 2). These indicate which attributes present within CityGML match the needs of the domain specialist, as well as which attributes are missing. This has a two-fold advantage: it generates metadata for the CityGML model indicating which attributes are present, and, as a result, it helps the domain specialist determine whether the model is appropriate for their analysis, a fitness-for-purpose analysis. A further advantage is that certain attributes, such as the number of buildings and the total floor area of the buildings in the study area, are themselves generated during the weaving process; these are easily calculated while weaving and can be recorded in the metadata. The domain-specific metadata, generated in XML, can then be easily integrated with other geospatial metadata standards; for example, integration with ISO 19115 is possible through the Metadata Extension Information class. Furthermore, this can be combined with the CityGML metadata ADE (which supports extendibility) to provide extensive metadata support for flood modelling specialists (Labetski et al. 2018).
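A hedged sketch of this derived-attribute step is given below: it matches the attributes present in a set of buildings against the specialist's required list and records the two weaving-time aggregates mentioned above. The element and attribute names are illustrative placeholders, not CityGML schema terms or the authors' XML 2 output.

```python
import xml.etree.ElementTree as ET

def weave_metadata(buildings, required_attributes):
    """Report which required attributes are present/missing and record
    aggregates computed during the weaving pass itself."""
    present = set().union(*(b.keys() for b in buildings)) if buildings else set()
    root = ET.Element("weavingMetadata")
    for attr in required_attributes:
        e = ET.SubElement(root, "attribute", name=attr)
        e.set("status", "derived" if attr in present else "missing")
    # Aggregates that fall out of the weaving pass, e.g., building count
    # and total floor area for the study area.
    ET.SubElement(root, "numberOfBuildings").text = str(len(buildings))
    total_area = sum(b.get("floorArea", 0.0) for b in buildings)
    ET.SubElement(root, "totalFloorArea", uom="m2").text = f"{total_area:.1f}"
    return ET.tostring(root, encoding="unicode")

print(weave_metadata(
    [{"floorArea": 120.0, "storeys": 2}, {"floorArea": 85.5}],
    ["floorArea", "storeys", "yearOfConstruction"],
))
```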