Functional Architecture for Knowledge Augmentation, Derivation, and Synthesis
Published in Denise Bedford, Knowledge Architectures, 2020
Many forms of annotation are in common use today. Document annotation is the most common. Mathematicians use annotations to translate symbols and formulae into natural-language meaning, to handle disambiguation, and to support recommendations. In computer science, annotation refers to the documentation and comments found in code to explain its expected functionality. In computational biology, annotations identify the locations of genes, define what those genes do, and eventually make sense of a sequenced gene. Digital imaging uses annotations to superimpose descriptions onto an image without changing the underlying image, much like a sticky note. Dramatic annotations identify the elements that characterize a drama, often leveraging a formal annotation scheme. Story annotation adds comments and notes to narratives (Schank, 1975). Legal annotations interpret legal statutes and are critical tools for legal research. Film annotations tend toward a critique of films and their presentation; they are in-context commentaries. They support the scholarly use of films and take the form of writing in a film. They differ from a film review or critique in that they are embedded in or closely attached to the film, often at the shot or frame level.
Challenges in Designing Software Architectures for Web-Based Biomedical Signal Analysis
Published in Aboul Ella Hassanien, Nilanjan Dey, Surekha Borra, Medical Big Data and Internet of Medical Things, 2018
Alan Jovic, Kresimir Jozic, Davor Kukolja, Kresimir Friganovic, Mario Cifrek
On the backend, we use the Java Persistence API (JPA). This is a mechanism that maps Java classes to database tables by using annotations. JPA defines the Java Persistence Query Language, a simplified version of SQL adapted to the object-oriented way of programming. Changes in the database are easily reflected in the backend: one only needs to change the variable definitions and annotations. In the following example, we provide the definition of the class Phase in the backend.
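The excerpt ends before the authors' actual `Phase` class, so the sketch below does not reproduce it. Instead, to show the mechanism the paragraph describes — deriving a table mapping purely from annotations on a class — it uses hand-rolled `@Table`/`@Column` stand-ins (real JPA supplies these from `jakarta.persistence`) and assumed fields `id` and `name`:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.StringJoiner;

public class MappingSketch {

    // Stand-ins for JPA's jakarta.persistence.Table / Column annotations,
    // defined here so the example compiles without a JPA provider.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Table { String name(); }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Column { String name(); }

    // Hypothetical entity: the class name Phase comes from the text,
    // but the fields are assumptions, not the authors' schema.
    @Table(name = "phase")
    static class Phase {
        @Column(name = "id")   long id;
        @Column(name = "name") String name;
    }

    // Derive a DDL-like statement from the annotations alone, mirroring
    // how a JPA provider maps an annotated class to a database table.
    static String ddlFor(Class<?> entity) {
        Table table = entity.getAnnotation(Table.class);
        StringJoiner cols = new StringJoiner(", ");
        for (Field f : entity.getDeclaredFields()) {
            Column c = f.getAnnotation(Column.class);
            if (c != null) cols.add(c.name());
        }
        return "CREATE TABLE " + table.name() + " (" + cols + ")";
    }

    public static void main(String[] args) {
        System.out.println(ddlFor(Phase.class));
    }
}
```

Because the mapping lives entirely in the annotations, a schema change only requires editing the field definitions and their annotations, which is the property the authors highlight.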
Using deep learning in an embedded system for real-time target detection based on images from an unmanned aerial vehicle: vehicle detection as a case study
Published in International Journal of Digital Earth, 2023
Fang Huang, Shengyi Chen, Qi Wang, Yingjie Chen, Dandan Zhang
We used the DJI Phantom 4 Pro four-rotor UAV to collect data on vehicles on the ground. The flight altitude was set to 40–50 m, and the flights covered five parking lots on the Qingshuihe Campus of the University of Electronic Science and Technology of China (UESTC). The collected images were 5472 × 3078 pixels and were divided into 1024 × 1024-pixel images for training. A total of 3,276 training images and 655 test images were obtained after manual screening. We used the open-source image annotation tool LabelImg to annotate each vehicle in each image and generated Extensible Markup Language (XML) files for YOLOv4. Further, we modified the category file, configuration file, header file, and other code corresponding to YOLOv4, and then downloaded the pre-training weights. Finally, we executed 10,000 iterations on the high-performance platform on the ground to obtain the final trained weight file. We trained our model using the AdamW optimizer (Loshchilov and Hutter 2018) with an initial learning rate of lr = 0.001, a weight decay of wd = 0.00001, cosine decay, and a batch size of 32 on a workstation equipped with an 8 GB RTX 2080 GPU. During training, the loss declined and the AP increased, which suggests that the K-YOLOv4 algorithm had learned to detect vehicles. The relationship between the number of iterations and the loss of K-YOLOv4 is shown in Figure 14.
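The cosine decay named above can be made concrete with the stated hyperparameters (initial lr = 0.001, 10,000 iterations). The exact schedule variant the authors used is not given, so this sketch assumes the common "cosine annealing to zero" form:

```java
public class CosineDecay {
    // Hyperparameters taken from the text.
    static final double INITIAL_LR = 0.001;
    static final int TOTAL_ITERS = 10_000;

    // lr(t) = 0.5 * lr0 * (1 + cos(pi * t / T)):
    // starts at lr0, halves at the midpoint, and decays smoothly to 0.
    static double lrAt(int iter) {
        return 0.5 * INITIAL_LR * (1.0 + Math.cos(Math.PI * iter / TOTAL_ITERS));
    }

    public static void main(String[] args) {
        System.out.printf("iter     0: %.6f%n", lrAt(0));      // 0.001000
        System.out.printf("iter  5000: %.6f%n", lrAt(5_000));  // 0.000500
        System.out.printf("iter 10000: %.6f%n", lrAt(10_000)); // 0.000000
    }
}
```

Compared with step decay, this keeps the learning rate high early on and tapers it gently, which pairs well with AdamW's decoupled weight decay.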
Efficient key frame extraction and hybrid wavelet convolutional manta ray foraging for sports video classification
Published in The Imaging Science Journal, 2023
Manual labelling of videos with text is done with the help of a text-annotation-based video retrieval process. The CNN model extracts the images' underlying feature information, from which indexes are built based on similarity-measure techniques. Xiaoping Guo [38] developed a histogram-difference method based on TL, and mutation detection based on a four-step block-matching method. The mutation detection technique determines shot boundaries by adaptive thresholding, which marks a candidate area for shots. Clustering and optical-flow analysis are performed to extract the key frames in the sports video. The extracted key frames are effective for classification because the key frame extraction algorithm removes repeated frames. An enhanced Deep Neural Network (DNN) algorithm is applied to perform ontology semantic expansion on the input data.
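The histogram-difference idea with adaptive thresholding can be sketched briefly. Frames are reduced to grey-level histograms; a frame is marked as a candidate shot boundary when the difference to its predecessor exceeds an adaptive threshold. The threshold rule used here (mean plus k standard deviations of the difference signal) is an assumption — one common adaptive choice, not necessarily the cited method's:

```java
import java.util.ArrayList;
import java.util.List;

public class ShotBoundarySketch {

    // L1 distance between two frame histograms.
    static double histDiff(int[] a, int[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }

    // Mark frame t+1 as a candidate boundary when the consecutive
    // difference at t exceeds mean + k * stddev of all differences.
    static List<Integer> detectBoundaries(int[][] histograms, double k) {
        int n = histograms.length - 1;
        double[] diffs = new double[n];
        double mean = 0;
        for (int t = 0; t < n; t++) {
            diffs[t] = histDiff(histograms[t], histograms[t + 1]);
            mean += diffs[t];
        }
        mean /= n;
        double var = 0;
        for (double d : diffs) var += (d - mean) * (d - mean);
        double std = Math.sqrt(var / n);
        List<Integer> boundaries = new ArrayList<>();
        for (int t = 0; t < n; t++) {
            if (diffs[t] > mean + k * std) boundaries.add(t + 1);
        }
        return boundaries;
    }

    public static void main(String[] args) {
        // Two synthetic "shots": frames 0-2 concentrated in low grey bins,
        // frames 3-5 in high bins, so a boundary is expected at frame 3.
        int[][] hists = {
            {90, 10, 0, 0}, {88, 12, 0, 0}, {91, 9, 0, 0},
            {0, 0, 11, 89}, {0, 0, 9, 91}, {0, 0, 12, 88},
        };
        System.out.println(detectBoundaries(hists, 1.5)); // [3]
    }
}
```

Within each detected shot, clustering and optical-flow analysis would then pick the representative key frames, as the excerpt describes.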
FEED2SEARCH: a framework for hybrid-molecule based semantic search
Published in International Journal of General Systems, 2023
Nathalie Charbel, Christian Sallaberry, Sebastien Laborie, Richard Chbeir
We choose the conciseness criterion, which is well known among ontology evaluation methods (Raad and Cruz 2015). In our context, we evaluate the conciseness of the annotations in terms of the following metrics: (i) the number of annotated documents, (ii) the number of resulting annotation files, (iii) the cumulative number of annotation elements (i.e. the number of XML tags in the XML annotation files and the number of RDF triples in the RDF annotation file) within the annotation files, (iv) the number of redundancies, i.e. the overlapping annotation elements, and (v) the percentage coverage of a pre-defined list of relevant criteria. This list comes down to categories of annotation elements aligned with the requirements we set in Section 1, such as inter- and intra-document links, general metadata of the documents, structural metadata for the text, structural metadata for the image, etc. We consider an annotation model the most concise if it is capable of (1) annotating all the documents in a given corpus with a minimum number of annotation elements, annotation files, and redundancies, while (2) covering a maximum number of relevant criteria from the pre-defined list.
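Metrics (iv) and (v) can be illustrated with a small sketch. The article does not fix concrete formulas for them, so this sketch assumes a set-based reading: a redundancy is an annotation element that appears in more than one annotation file, and coverage is the fraction of the pre-defined criteria list touched by the produced elements. The criterion names in the example are placeholders:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConcisenessSketch {

    // (iv) Number of redundancies: elements occurring in more than one file.
    static long countRedundancies(List<Set<String>> annotationFiles) {
        Set<String> seen = new HashSet<>();
        Set<String> redundant = new HashSet<>();
        for (Set<String> file : annotationFiles) {
            for (String element : file) {
                if (!seen.add(element)) redundant.add(element);
            }
        }
        return redundant.size();
    }

    // (v) Percentage of the pre-defined relevant criteria that are covered.
    static double coveragePercent(List<Set<String>> annotationFiles,
                                  Set<String> criteria) {
        Set<String> produced = new HashSet<>();
        annotationFiles.forEach(produced::addAll);
        long covered = criteria.stream().filter(produced::contains).count();
        return 100.0 * covered / criteria.size();
    }

    public static void main(String[] args) {
        List<Set<String>> files = List.of(
            Set.of("inter-doc-link", "general-metadata"),
            Set.of("general-metadata", "text-structure"));
        Set<String> criteria = Set.of(
            "inter-doc-link", "general-metadata",
            "text-structure", "image-structure");
        System.out.println(countRedundancies(files));      // 1
        System.out.println(coveragePercent(files, criteria)); // 75.0
    }
}
```

Under this reading, the most concise model minimizes the first count while maximizing the second percentage, matching conditions (1) and (2) above.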