Tracking and Calibration
Published in Terry M. Peters, Cristian A. Linte, Ziv Yaniv, Jacqueline Williams, Mixed and Augmented Reality in Medicine, 2018
The principle of pose tracking for all IR-based and videometric systems with multiple cameras is based on triangulation and registration. A set of fiducial markers is rigidly arranged in a known configuration and defines a DRF. During tracking, the location of each fiducial marker is determined via triangulation, and the pose of the DRF is determined by registering the fiducial locations to the known configuration. Both the number and the spatial distribution of the fiducial markers influence the tracking accuracy of the DRF [4]. At least three fiducial markers must be arranged in a noncollinear configuration to resolve the pose of the DRF in six degrees of freedom. Passive tracking systems, such as MicronTracker (ClaroNav, Toronto, Ontario, Canada) and Polaris (Northern Digital Inc., Waterloo, Ontario, Canada), are wireless. In active tracking systems, power supplied to the infrared-emitting diode (IRED) emitters can be wired (as in Certus (Northern Digital Inc., Waterloo, Ontario, Canada)) or wireless and battery-powered (as in FusionTrack (Atracsys, Switzerland)). Hybrid systems can track both passive and active fiducial markers.
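The registration step described above, fitting measured fiducial locations to the known DRF configuration, is a least-squares rigid registration problem. A minimal sketch using the Kabsch/SVD method (the excerpt does not specify which solver a given system uses, so this is illustrative; the function name is our own):

```python
import numpy as np

def register_fiducials(known, measured):
    """Least-squares rigid registration (Kabsch/SVD) of measured fiducial
    locations to the known DRF configuration.

    known, measured: (N, 3) arrays of corresponding 3-D points, N >= 3 and
    noncollinear. Returns rotation R (3x3) and translation t (3,) such that
    measured[i] ~= R @ known[i] + t.
    """
    c_known = known.mean(axis=0)
    c_meas = measured.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (known - c_known).T @ (measured - c_meas)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_meas - R @ c_known
    return R, t
```

With noise-free triangulated fiducials this recovers the DRF pose exactly; with measurement noise it returns the least-squares optimum, which is why marker count and spatial spread affect accuracy.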
Stability control algorithm for unmanned aerial vehicle based on dynamic feedback search
Published in Lin Liu, Automotive, Mechanical and Electrical Engineering, 2017
To improve the stability of unmanned aerial vehicles (UAVs), a stability control algorithm based on dynamic feedback search is proposed. First, a mathematical model of the UAV's longitudinal motion is constructed, and the control objective function and control constraint parameters are analysed. An adaptive inversion pitch-angle tracking method is used for integral-stability functional analysis, with the angular velocity taken as the virtual control input; adaptive dynamic feedback error tracking and search is realised under large attitude changes; and a PID adaptive control law is designed that eliminates the effect of swinging inertia force and moment on the stability of the system, completing the optimised design of the control algorithm. Finally, a simulation test is performed. The results show that control stability is optimised and that pose tracking and error compensation are achieved effectively. The method improves stability during flight, the attitude-angle tracking error converges to zero rapidly, and the controller exhibits good performance and control quality.
Interactive Multimedia Technology in Learning: Integrating Multimodality, Embodiment, and Composition for Mixed-Reality Learning Environments
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
David Birchfield, Harvey Thornburg, M. Colleen Megowan-Romanowicz, Sarah Hatton, Brandon Mechtley, Igor Dolgov, Winslow Burleson, Gang Qian
Physically, SMALLab consists of a 15′ × 15′ × 12′ portable, freestanding media environment (Birchfield et al. 2006). A cube-shaped trussing structure frames an open physical architecture and supports the following sensing and feedback equipment: a six-element array of Point Grey Dragonfly firewire cameras (three color, three infrared) for vision-based tracking, a low-cost eight-camera OptiTrack marker-based motion capture system for accurate position and pose tracking, a top-mounted video projector providing real-time visual feedback, four audio speakers for surround sound feedback, and an array of tracked physical objects (custom-made glowballs and marker-attached rigid objects). A networked computing cluster with custom software drives the interactive system.
Virtual reality for synergistic surgical training and data generation
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
Adnan Munawar, Zhaoshuo Li, Punit Kunjam, Nimesh Nagururu, Andy S. Ding, Peter Kazanzides, Thomas Looi, Francis X. Creighton, Russell H. Taylor, Mathias Unberath
Vision-based pose tracking is a task in which input images are processed to infer the pose of objects. It has recently found applications in Augmented Reality (AR) and can potentially improve the safety of skull-base surgeries when used as a navigation system (Mezger et al. (2013)). We demonstrate how the data generated from the simulator can be used for evaluating vision-based tracking algorithms. In the experiment, we use the state-of-the-art ORB SLAM V3 algorithm by Campos et al. (2021). The algorithm extracts ORB features (Rublee et al. (2011)) from stereo images and uses the matched features to compute the camera pose relative to the patient’s anatomy based on geometric consistency. We refer interested readers to the paper for more details. A video demonstration of ORB SLAM V3 running on our data can be found in the supplementary material.
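A stereo pipeline like the one described rests on triangulating matched features into 3-D landmarks before any pose estimation. A minimal sketch of that geometric step for a rectified stereo pair (this is textbook stereo geometry, not the ORB SLAM V3 implementation; the function name and parameters are our own):

```python
import numpy as np

def triangulate_stereo(uL, vL, uR, f, cx, cy, baseline):
    """Back-project a matched feature from a rectified stereo pair into a
    3-D point in the left-camera frame.

    (uL, vL): pixel coordinates in the left image; uR: column of the match
    in the right image (same row after rectification); f: focal length in
    pixels; (cx, cy): principal point; baseline: camera separation (m).
    """
    disparity = uL - uR               # rectified pair: match shifts along the row
    z = f * baseline / disparity      # depth from the stereo relation z = f*b/d
    x = (uL - cx) * z / f             # back-project through the pinhole model
    y = (vL - cy) * z / f
    return np.array([x, y, z])
```

Given such landmarks in two frames, the relative camera pose follows from a rigid alignment of the matched 3-D points, checked for geometric consistency as the excerpt notes.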
Enhancing smart shop floor management with ubiquitous augmented reality
Published in International Journal of Production Research, 2019
X. Wang, A.W.W. Yew, S.K. Ong, A.Y.C. Nee
User tracking serves three purposes, namely, to display AR user interfaces correctly to the users, to allow the cloud services to provide context-aware information to the users, and to trigger relevant AR guidance to users who are working with certain manufacturing resources in accordance with the production schedule. Pose tracking can be performed using sensors embedded in the environment, sensors attached to the users, or a combination of these techniques. While marker tracking, natural-feature tracking and motion capture technology have been employed in AR, viewing devices designed for indoor positioning, such as the Lenovo Phab 2 Pro and Microsoft Hololens, have recently become available, enabling portable and robust AR systems at a low cost and without significant environment preparation. For cloud services, information about the users, such as their position in the environment, focus and intent, is useful in providing context-aware AR interfaces. Sensor fusion would be required to combine data from sensors, such as cameras, accelerometers and gyroscopes, with other contextual cues, such as the time of day, historical behaviour of the users, etc.
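A common minimal form of the sensor fusion mentioned above is a complementary filter that blends an integrated gyroscope rate (accurate over short intervals but drifting) with an accelerometer-derived tilt angle (noisy but drift-free). A sketch under assumed sample rate and blend factor (the function name and parameter values are illustrative, not from the paper):

```python
def fuse_orientation(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Complementary-filter fusion of gyroscope rates (rad/s) with
    accelerometer-derived tilt angles (rad), sampled every dt seconds.

    The gyro integral tracks fast motion; the small (1 - alpha) weight on
    the accelerometer term slowly pulls out the gyro's accumulated drift.
    """
    angle = accel_angles[0]           # initialise from the drift-free sensor
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
    return angle
```

Full AR headsets use heavier machinery (e.g. visual-inertial odometry with Kalman filtering), but the principle is the same: weight each sensor by the timescale on which it is trustworthy.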