Using Transactional-Level Models in an SoC Design Flow
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC System Design, Verification, and Testing, 2018
Alain Clouard, Frank Ghenassia, Laurent Maillet-Contoz, Jean-Philippe Strassen
In a typical design flow, any independently verified IP must also be verified in the SoC environment. Functional integration tests are developed to verify the following features:
Memory map. To prevent registers from being mapped at the same addresses and to guarantee that the hardware and software teams use the same mapping.
Data consistency. To ensure that the data generated by one IP matches the input data (in terms of format and content) of another IP that will reuse the generated data.
Concurrency. To ensure that the concurrent execution of all IPs is controlled for deterministic expected behavior.
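As an illustration of the memory-map check above, the following minimal sketch looks for overlapping register regions in a shared address map; the IP names, base addresses, and sizes are hypothetical and are not taken from the chapter.

```python
# Minimal sketch of a memory-map integration check over a simple register
# table; all IP names, base addresses, and sizes are hypothetical.

def find_overlaps(register_map):
    """Return pairs of regions whose address ranges overlap."""
    regions = sorted(register_map, key=lambda r: r["base"])
    overlaps = []
    for prev, curr in zip(regions, regions[1:]):
        prev_end = prev["base"] + prev["size"]  # exclusive end address
        if curr["base"] < prev_end:
            overlaps.append((prev["name"], curr["name"]))
    return overlaps

if __name__ == "__main__":
    # Hypothetical SoC memory map shared by the hardware and software teams.
    soc_map = [
        {"name": "UART0_CTRL", "base": 0x4000_0000, "size": 0x100},
        {"name": "DMA_CFG",    "base": 0x4000_0080, "size": 0x100},  # overlaps UART0_CTRL
        {"name": "TIMER0",     "base": 0x4000_0200, "size": 0x40},
    ]
    for a, b in find_overlaps(soc_map):
        print(f"Memory-map violation: {a} and {b} share addresses")
```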
Dependable Automotive CAN Networks
Published in Nicolas Navet, Françoise Simonot-Lion, Automotive Embedded Systems Handbook, 2017
Juan Pimentel, Julian Proenza, Luis Almeida, Guillermo Rodriguez-Navas, Manuel Barranco, Joaquim Ferreira
Whenever a node in the error-active state detects an error through the previously mentioned mechanisms, it signals this situation to the rest of the nodes by sending what is called an active error flag. An active error flag consists of six consecutive dominant bits and starts at least one bit after the error was detected. This flag will eventually violate a CAN protocol rule; for example, it can destroy bit fields that require a fixed form and thus cause a form error. As a consequence, all the other nodes detect an error condition too and start transmitting an active error flag as well. After transmitting its active error flag, each node sends recessive bits and monitors the bus until it detects a recessive bit; it then transmits seven more recessive bits. The resulting sequence of eight recessive bits on the bus is called the error delimiter. This error delimiter, together with the superposition of the error flags from the different nodes, constitutes what is called an error frame. After the error frame has been transmitted, the frame that was being sent is automatically rejected by all receivers and retransmitted by the original transmitter. This simple mechanism globalizes local errors and provides tolerance to the transient fault that caused the error. In this way, data consistency is supposedly achieved. Nevertheless, it is not always the case that local errors can be globalized.
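To make the superposition of error flags concrete, the sketch below models the bus as a wired-AND of the node outputs, where a dominant bit overrides a recessive one. The bit timings and node behaviour are simplified assumptions for illustration, not a faithful model of the CAN protocol.

```python
# Minimal sketch of how active error flags from several nodes superpose on the
# CAN bus (dominant wins over recessive); timing offsets are illustrative only.

DOMINANT, RECESSIVE = "d", "r"

def node_output(flag_start, length):
    """Bit stream of one node: recessive except for its 6-bit active error flag."""
    return ([RECESSIVE] * flag_start
            + [DOMINANT] * 6
            + [RECESSIVE] * (length - flag_start - 6))

def bus_level(streams):
    """Wired-AND behaviour: the bus is dominant if any node drives dominant."""
    return [DOMINANT if DOMINANT in bits else RECESSIVE for bits in zip(*streams)]

if __name__ == "__main__":
    # Node A detects the error first; node B reacts one bit later to A's flag.
    length = 20
    bus = bus_level([node_output(0, length), node_output(1, length)])
    flag_len = bus.index(RECESSIVE)            # superposed error flag (6 to 12 bits)
    print("superposed flag bits:", flag_len)
    # Once a recessive bit is seen on the bus, seven more recessive bits follow,
    # giving the 8-bit error delimiter.
    print("error delimiter:", bus[flag_len:flag_len + 8])
```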
Leveraging Semantic Web Technologies for Veracity Assessment of Big Biodiversity Data
Published in Archana Patel, Narayan C. Debnath, Bharat Bhushan, Semantic Web Technologies, 2023
Zaenal Akbar, Yulia A. Kartika, Dadan R. Saleh, Hani F. Mustika, Lindung P. Manik, Foni A. Setiawan, Ika A. Satya
Three aspects of data consistency were investigated. First, data structure analysis, applied at the dataset level, was intended to measure how a defined vocabulary was utilized across multiple datasets. Second, data type analysis, applied at the data attribute level, was intended to measure how the data type of an attribute is used within a dataset or across multiple datasets. Third, data granularity analysis, applied at the data value level, was intended to measure how multiple concepts were used in the values of selected attributes. As our datasets, we collected publicly available biodiversity data: more than 60,000 records of species occurrences available from nine distributed data sources. Our analysis was conducted in several systematic steps:
Data collection, where biodiversity data from multiple sources were collected. In most cases, web-scraping techniques were used to extract the relevant (key, [values]) pairs for every data element presented on a website.
Data mapping, where every extracted key was mapped to the most suitable attribute in a selected vocabulary to produce a (key, attribute) pair. We used Darwin Core as our vocabulary due to its wide adoption in the biodiversity area.
Based on the mapping, we constructed a final collection of (attribute, [values]) pairs. After that, several statistics regarding data consistency were computed.
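The sketch below illustrates the mapping and statistics steps under simplified assumptions; the key-to-Darwin-Core mapping and the sample records are hypothetical stand-ins for the scraped datasets.

```python
# Minimal sketch of the mapping and consistency statistics described above;
# the key-to-Darwin-Core mapping and the sample records are hypothetical.

from collections import Counter

# Hypothetical mapping from scraped keys to Darwin Core terms.
DWC_MAPPING = {
    "species name": "scientificName",
    "latitude": "decimalLatitude",
    "longitude": "decimalLongitude",
    "collected on": "eventDate",
}

def map_record(raw):
    """Turn scraped (key, [values]) pairs into (Darwin Core attribute, [values])."""
    return {DWC_MAPPING[k]: v for k, v in raw.items() if k in DWC_MAPPING}

def attribute_usage(datasets):
    """Fraction of records in which each Darwin Core attribute is present,
    a simple data-structure consistency measure across datasets."""
    counts = Counter(attr for ds in datasets for rec in ds for attr in map_record(rec))
    total_records = sum(len(ds) for ds in datasets)
    return {attr: n / total_records for attr, n in counts.items()}

if __name__ == "__main__":
    dataset_a = [{"species name": ["Panthera tigris"], "latitude": ["-6.2"]}]
    dataset_b = [{"species name": ["Varanus komodoensis"], "collected on": ["2019-05-01"]}]
    print(attribute_usage([dataset_a, dataset_b]))
```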
Scientific, technical and institutional challenges towards next-generation operational flood risk management decision support systems
Published in International Journal of River Basin Management, 2018
D. Schwanenberg, M. Natschke, E. Todini, P. Reggiani
In the conceptual sub-system, we distinguish (i) data consistency analytics, (ii) data validation and (iii) data estimation. Therein, data consistency is a measure of the completeness and timeliness of either raw or processed data. In operational forecasting systems, these indicators have a direct impact on forecast skill. Whereas data consistency relates to data availability, data validation addresses data quality. This may include a wide range of primary and secondary validation steps, starting with checks of the physical range, rate-of-change and out-of-bounds values of individual time series. More complex data verification algorithms verify the internal consistency of several parameters at a single gauge or the spatial correlation of a parameter between neighbouring gauges. The main result of the data validation process is a set of quality flags used to define the downstream use of the data.
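As a rough illustration of these checks, the sketch below computes a completeness indicator and simple quality flags (physical range and rate-of-change) for a single time series; the thresholds and sample values are assumed for the example and are not taken from the article.

```python
# Minimal sketch of primary validation checks on a single time series;
# the thresholds and the sample series are illustrative assumptions.

def completeness(series):
    """Consistency in the sense above: fraction of non-missing values."""
    return sum(v is not None for v in series) / len(series)

def quality_flags(series, vmin, vmax, max_step):
    """Flag each value: 'ok', 'missing', 'range' (outside the physical range),
    or 'step' (rate-of-change violation against the previous valid value)."""
    flags, prev = [], None
    for v in series:
        if v is None:
            flags.append("missing")
        elif not (vmin <= v <= vmax):
            flags.append("range")
        elif prev is not None and abs(v - prev) > max_step:
            flags.append("step")
            prev = v
        else:
            flags.append("ok")
            prev = v
    return flags

if __name__ == "__main__":
    # Hypothetical hourly water levels in metres.
    levels = [2.1, 2.2, None, 2.3, 9.9, 2.4, 4.8]
    print("completeness:", round(completeness(levels), 2))
    print("flags:", quality_flags(levels, vmin=0.0, vmax=8.0, max_step=1.0))
```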
A framework for big data pre-processing and search optimization using HMGA-ACO: a hierarchical optimization approach
Published in International Journal of Computers and Applications, 2019
K. V. Rama Satish, N. P. Kavya
Data consistency refers to the validity and integrity of data representing real-world entities. It aims to detect errors in the data, typically identified as violations of data dependencies, and to help us repair the data by fixing those errors. The consistency of a dataset D is then defined by an equation over these dependency violations.
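Since the equation itself is not reproduced here, the sketch below shows one common, assumed formulation: consistency as the fraction of tuples that do not violate a functional dependency. It is illustrative only and not necessarily the authors' exact definition.

```python
# Illustrative sketch only: one common way to quantify the consistency of a
# dataset D as the fraction of tuples that do not violate a functional
# dependency (the concrete formula used by the authors is not reproduced here).

from collections import defaultdict

def fd_violations(tuples, lhs, rhs):
    """Tuples violating the functional dependency lhs -> rhs."""
    seen = defaultdict(set)
    for t in tuples:
        seen[t[lhs]].add(t[rhs])
    return [t for t in tuples if len(seen[t[lhs]]) > 1]

def consistency(tuples, lhs, rhs):
    """consistency(D) = 1 - |violations(D)| / |D| (assumed formulation)."""
    return 1 - len(fd_violations(tuples, lhs, rhs)) / len(tuples)

if __name__ == "__main__":
    # Hypothetical records: zip code should determine city (zip -> city).
    D = [
        {"zip": "10001", "city": "New York"},
        {"zip": "10001", "city": "Newark"},   # violates zip -> city
        {"zip": "94105", "city": "San Francisco"},
    ]
    print("consistency(D) =", round(consistency(D, "zip", "city"), 2))
```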