Biomedical Imaging and Sensor Fusion
Published in Suman Lata Tripathi, Kanav Dhir, Deepika Ghai, Shashikant Patil, Health Informatics and Technological Solutions for Coronavirus (COVID-19), 2021
Satyendra Pratap Singh, Shalini Soman
Sensor fusion has various applications, including robotics, biomedical imaging, equipment monitoring, remote sensing, transportation systems, micro and smart sensors, and military systems. Here we mainly discuss the application of sensor fusion in biomedical imaging. The field of multisensor fusion is evolving, and the need for it is significant: in the future, it could help us deal with problems that affect the whole world.
Context-Aware Computing for CPS
Published in G.R. Karpagam, B. Vinoth Kumar, J. Uma Maheswari, Xiao-Zhi Gao, Smart Cyber Physical Systems, 2020
Bhuvaneswari Arunagiri, Maheswari Subburaj
Sensor fusion is the process of combining readings from different types of sensors. For example, a patient’s blood pressure and heart rate should be sensed to monitor the functioning of the heart, and the patient’s location should be identified to direct the medical personnel. This is a good example of sensor fusion, which includes different sensors, specifically: (i) a pressure sensor, to sense the blood pressure of the patient, (ii) a heart rate sensor, to determine the number of heartbeats in a specific time period, and (iii) a location sensor, to identify the location of the patient. Depending on the scenario, sensor fusion can be carried out at either the hardware or the software level.
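A minimal sketch of software-level fusion for this scenario might look as follows; the sensor objects, field names, and alert thresholds are hypothetical, not taken from the chapter:

```python
from dataclasses import dataclass
import time

@dataclass
class PatientState:
    """Fused snapshot combining all three sensor readings."""
    systolic_mmhg: float   # from the pressure sensor (assumed units)
    heart_rate_bpm: float  # from the heart rate sensor
    location: tuple        # (latitude, longitude) from the location sensor
    timestamp: float

def fuse_readings(pressure_sensor, hr_sensor, location_sensor) -> PatientState:
    # Software-level fusion: sample each sensor and merge the readings
    # into a single, time-stamped patient state.
    return PatientState(
        systolic_mmhg=pressure_sensor.read(),
        heart_rate_bpm=hr_sensor.read(),
        location=location_sensor.read(),
        timestamp=time.time(),
    )

def needs_attention(state: PatientState) -> bool:
    # A decision that no single sensor supports alone: abnormal vitals
    # raise an alert that already carries the patient's location,
    # so medical personnel can be directed immediately.
    return state.systolic_mmhg > 180 or state.heart_rate_bpm > 120
```

The fused record is what makes the scenario work: the vitals decide whether to alert, and the location tells responders where to go.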
A Review on Internally Cooled Smart Cutting Tools
Published in P. C. Thomas, Vishal John Mathai, Geevarghese Titus, Emerging Technologies for Sustainability, 2020
Based on the key aspects proposed by Byrne and Scholta (Byrne and Scholta, 1993), as illustrated in Figure, and the issues highlighted by Deshayes et al. (2005), there is a need for smart cutting tools with the following features:
- Communicating all the information needed to fabricate a product that satisfies customer needs.
- Accommodating sensor fusion that can increase confidence in tool and process monitoring (a minimal sketch of this idea follows the list).
- Possessing plug-and-produce characteristics for easy re-configurability.
- Being environmentally friendly in the application of the tools in machining.
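As a sketch of how fusion can increase monitoring confidence, the fragment below requires two independent signals (say, spindle-motor current and tool-holder vibration) to agree before raising a tool-wear alarm; the signal choices, baselines, weights, and thresholds are all assumptions for illustration:

```python
def wear_score(current_amps: float, vibration_rms: float) -> float:
    # Normalise each signal against an assumed healthy baseline, clamp
    # to [0, 1], then average the evidence with equal (assumed) weights.
    current_evidence = min(max((current_amps - 4.0) / 2.0, 0.0), 1.0)
    vibration_evidence = min(max((vibration_rms - 0.5) / 0.5, 0.0), 1.0)
    return 0.5 * current_evidence + 0.5 * vibration_evidence

def tool_worn(current_amps: float, vibration_rms: float) -> bool:
    # With an alarm level of 0.7, neither signal can trip the alarm on
    # its own (each contributes at most 0.5), so a single noisy sensor
    # produces fewer false alarms than it would in isolation.
    return wear_score(current_amps, vibration_rms) > 0.7
```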
Influential variables impacting the reliability of building occupancy sensor systems: A systematic review and expert survey
Published in Science and Technology for the Built Environment, 2022
Yiyi Chu, Debrudra Mitra, Zheng O’neill, Kristen Cetin
Figure 1(a) and (b) summarize the number of papers in which each occupancy sensor technology appears. From Figure 1(a), the most commonly used technologies include both single-sensor systems, namely radiofrequency-, vision-, infrared-, and sound-wave-based sensors, and sensor fusion systems. Sensor fusion generally combines one or more of the single-sensor sensing modalities with one or more environmental sensors. Figure 1(b) further subdivides the sensor fusion systems by modality, into each of the single-sensor systems and each of the environmental sensors, and provides the number of papers in which each appears; a paper that uses a sensor fusion system is counted once for every single-sensor system and environmental sensor it includes. From Figure 1(b), it is also noted that some other sensor types were used in sensor fusion methods, such as door sensors, reed switches, and pressure mats; these are not discussed in detail as single-sensor systems in this research because, compared to the others, they were not commonly used on their own.
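To make the modality-plus-environmental-sensor pattern concrete, here is a hypothetical fusion of an infrared motion (PIR) sensor with a CO2 sensor; the weights, baseline, and decision threshold are assumptions, not values from the review:

```python
def occupancy_probability(pir_triggered: bool, co2_ppm: float) -> float:
    """Combine a fast motion cue with a slower environmental cue."""
    motion_evidence = 1.0 if pir_triggered else 0.0
    # CO2 rises above the ~420 ppm outdoor baseline while a space is
    # occupied; scale the excess into [0, 1] (assumed calibration).
    co2_evidence = min(max((co2_ppm - 420.0) / 400.0, 0.0), 1.0)
    # PIR reacts quickly but misses still occupants; CO2 responds
    # slowly but persists, so the two cues are complementary.
    return 0.6 * motion_evidence + 0.4 * co2_evidence

def occupied(pir_triggered: bool, co2_ppm: float) -> bool:
    return occupancy_probability(pir_triggered, co2_ppm) >= 0.5
```

The complementarity is the point: a still, seated occupant who never trips the PIR can still push the CO2 evidence high enough to register.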
The development of autonomous driving technology: perspectives from patent citation analysis
Published in Transport Reviews, 2021
Rico Lee-Ting Cho, John S. Liu, Mei Hsiu-Ching Ho
Aside from Martinez-Diaz et al. (2019), there are also research articles that report and discuss crucial technologies and their applications for autonomous vehicles. Multi-sensor data fusion can be applied to automated target recognition and control of autonomous vehicles (Hall & Llinas, 1997). Vision-based and sensor fusion technologies are used for autonomous navigation, path following, inspection, monitoring, or risky situation detection (Bonin-Font et al., 2008; Jung, Lee, Kang, & Kim, 2009). Several methods of motion planning for autonomous vehicles have also been reported (Frazzoli et al., 2002; Park, Lee, & Han, 2014; Shiller & Gwo, 1991), with robotics and dynamics control supporting trajectory prediction and motion planning (Goerzen, Kong, & Mettler, 2010). Vehicle connectivity improves the performance of autonomous vehicles and contributes to the enhancement of transportation systems (Montanaro et al., 2019). Vehicular cloud computing technology has been proposed for vehicular communication networks (Whaiduzzaman, Sookhak, Gani, & Buyya, 2014). Most of these studies focus on a single technology and do not relate the technology to the whole autonomous vehicle system.
Enhancing smart shop floor management with ubiquitous augmented reality
Published in International Journal of Production Research, 2019
X. Wang, A.W.W. Yew, S.K. Ong, A.Y.C. Nee
User tracking serves three purposes, namely, to display AR user interfaces correctly to the users, to allow the cloud services to provide context-aware information to the users, and to trigger relevant AR guidance to users who are working with certain manufacturing resources in accordance with the production schedule. Pose tracking can be performed using sensors embedded in the environment, sensors that are attached to the users, or a combination of these techniques. While marker tracking, natural features tracking and motion capture technology have been employed in AR, viewing devices that are designed for indoor positioning, such as the Lenovo Phab 2 Pro and Microsoft Hololens, have recently become available to enable portable and robust AR systems at a low cost and without significant environment preparation. For cloud services, information about the users, such as their position in the environment, focus and intent, is useful in providing context-aware AR interfaces. Sensor fusion would be required to combine data from sensors, such as cameras, accelerometers and gyroscopes, with other contextual cues, such as the time of day and the historical behaviour of the users.
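One standard way to fuse accelerometer and gyroscope data for pose tracking is a complementary filter; the sketch below estimates pitch under the usual assumptions (small accelerations other than gravity, a fixed blending weight alpha), and is illustrative rather than what the cited AR systems actually implement:

```python
import math

def fuse_pitch(prev_pitch_rad: float,
               gyro_rate_rad_s: float,
               accel_x: float, accel_z: float,
               dt: float, alpha: float = 0.98) -> float:
    """Complementary filter for one orientation angle.

    The gyroscope integral is accurate over short intervals but drifts;
    the accelerometer's gravity vector is noisy but drift-free. Blending
    them with weight alpha keeps the best property of each source.
    """
    gyro_pitch = prev_pitch_rad + gyro_rate_rad_s * dt  # fast, but drifts
    accel_pitch = math.atan2(accel_x, accel_z)          # noisy, drift-free
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```

Called once per IMU sample (e.g., every 10 ms), the filter continuously pulls the integrated gyroscope estimate back toward the gravity reference, which is the same trade-off any camera-plus-IMU fusion in an AR headset has to manage.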