Technologies for vision impairment
Published in John Ravenscroft, The Routledge Handbook of Visual Impairment, 2019
Lauren N. Ayton, Penelope J. Allen, Carla J. Abbott, Matthew A. Petoe
Last in this section are the Royal National Institute of Blind People (UK) RNIB Smart Glasses (www.rnib.org.uk/knowledge-and-research-hub-research-reports/technology-and-television-research/smartglasses), which provide visual augmentation to people with some remaining sight. Rather than substituting another sense for vision, the RNIB Smart Glasses convert the visual scene into a high-contrast representation. The distance from the wearer to nearby objects is represented by brightness, with nearer objects rendered more brightly, as determined by a special depth-sensing camera (Hicks et al., 2013).
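The depth-to-brightness idea can be sketched in a few lines. The inverse-linear mapping and the 3 m cut-off below are illustrative assumptions, not the published RNIB parameters:

```python
def depth_to_brightness(depth_m, max_range_m=3.0):
    """Map a depth reading (metres) to a display brightness in [0, 255].

    Nearer objects are rendered brighter; anything at or beyond
    max_range_m is rendered black. The inverse-linear mapping and the
    3 m cut-off are illustrative assumptions.
    """
    if depth_m <= 0 or depth_m >= max_range_m:
        return 0
    return int(255 * (1.0 - depth_m / max_range_m))

# An object at 0.5 m is rendered much brighter than one at 2.5 m.
near = depth_to_brightness(0.5)
far = depth_to_brightness(2.5)
```

In a real device this mapping would be applied per pixel to the depth camera's output; the sketch only shows the scalar transfer function.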
Envoi: Today and tomorrow
Published in Harold Ellis, Sala Abdalla, A History of Surgery, 2018
Other emerging technologies that have arisen from the rapid expansion of AI systems are Augmented Reality (AR) and Virtual Reality (VR). AR systems superimpose computer-generated information onto one or more of the senses to enhance perception. One example is the AccuVein (Figure 16.6), a device that projects the anatomy of veins onto the skin surface to assist with venous puncture. Google Glass is another device, worn as a head-mounted display, that presents anatomical or functional details over real-time images. The HoloLens is a brand of smart glasses available today that projects computer-generated holograms onto real objects. Current applications of AR in the operating room include optimal port placement in laparoscopic surgery and, in combination with near-infrared spectroscopy, visual guidance during lymph node dissection in cancer surgery.
Psychological principles and health behaviour change
Published in Lisette van Gemert-Pijnen, Saskia M. Kelders, Hanneke Kip, Robbert Sanderman, eHealth Research, Theory and Development, 2018
Traditionally, psychological research has relied heavily on self-report measures of health behaviour (e.g. self-reported smoking, food diaries, medication adherence) as well as observational methods. These, coupled with traditional clinical measures of health (e.g. BMI, cholesterol, BP), have helped to profile the links between health behaviour change and health outcomes. These assessments, however, are often collected at only a few distinct points in time, and whilst they provide a useful cross-sectional snapshot, they often fail to capture the personal, contextual and social factors that influence whether a health behaviour change intervention achieves the desired changes in behaviour. In recent years the rapid growth of technology in the health sector has brought with it a golden opportunity for new objective measurements of health behaviour. For example, wearable technologies such as Fitbit and smart glasses now provide automatic minute-by-minute monitoring of objective measures of behaviour (e.g., in the case of physical activity, step counts, heart rate and location). This gives researchers an excellent opportunity to measure health-related outcomes that were previously reliant on self-report or confined to a lab environment (e.g. stress, mood). Even better, these ‘time and place’-specific data open up the potential for researchers to implement context- and time-appropriate interventions. These can be considered ‘user-friendly’ interventions, as they can provide behavioural support at key times when a person has the opportunity to change and is receptive to such support (Moller et al., 2017; Naughton et al., 2016).
Remote mentoring in laparotomic and laparoscopic cancer surgery during Covid-19 pandemic: an experimental setup based on mixed reality
Published in Medical Education Online, 2021
Michele Simone, Rocco Galati, Graziana Barile, Emanuele Grasso, Raffaele De Luca, Carmine Cartanese, Rocco Lomonaco, Eustachio Ruggieri, Anna Albano, Antonello Rucci, Giuseppe Grassi
The members of the surgical team are equipped with the MR headset. They can visualize the results of medical screenings (i.e., radiography, magnetic resonance imaging, blood tests, etc.) and can add visual information on the patient’s body using MR tools. The data generated by the smartglasses are shared with the medical doctors under training via their laptops and Android-based smartphones. The conceived architecture includes several tools for aligning virtual objects with the physical world on both the smartglasses and the mobile devices. Note that the smartglasses, while sending scene information to the mobile devices, also update each device’s local application content, so that all the team’s members share the surgeon’s point of view [9]. This makes the training of medical doctors effective, because they can observe the real scene with all the augmented information placed by the surgeon, including virtual tooltips on the patient’s body that highlight regions of interest.
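The sharing of the surgeon’s annotations with the trainees’ devices can be pictured as a simple fan-out: every subscribed device receives the same anchored tooltip, so all views stay synchronized with the surgeon’s. The message schema and class names below are hypothetical illustrations, not the system’s actual protocol:

```python
import json

class AnnotationHub:
    """Fan-out model of the shared-annotation idea: the headset publishes
    anchored virtual tooltips, and every subscribed trainee device
    receives the same update."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, device):
        self.subscribers.append(device)

    def publish(self, annotation):
        # Serialize once, deliver the identical message to every device.
        message = json.dumps(annotation)
        for device in self.subscribers:
            device.receive(message)

class TraineeDevice:
    def __init__(self):
        self.annotations = []

    def receive(self, message):
        self.annotations.append(json.loads(message))

hub = AnnotationHub()
laptop, phone = TraineeDevice(), TraineeDevice()
hub.subscribe(laptop)
hub.subscribe(phone)
# The surgeon places a tooltip at a 3-D anchor on the patient's body.
hub.publish({"type": "tooltip", "anchor": [0.12, 0.40, 0.95],
             "label": "region of interest"})
```

In the real system the transport is a network link and the anchors are resolved against each device’s spatial map, but the broadcast pattern is the same.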
Device profile of the XVision-spine (XVS) augmented-reality surgical navigation system: overview of its safety and efficacy
Published in Expert Review of Medical Devices, 2021
Christopher F. Dibble, Camilo A. Molina
With regard to the adoption of this technology, we are certainly in the early phases of realizing and exploiting the potential of fully integrated neuronavigation headsets. Surgical navigation is becoming the norm rather than the exception for spine surgery, barring unforeseen barriers due to cost. Technologies such as smartglasses or the Microsoft HoloLens have obvious promise, along with alternative strategies such as direct projection onto the patient. Microscopic AR will continue to advance, and robotics will likely continue to play a greater role.
An independent shopping experience for wheelchair users through augmented reality and RFID
Published in Assistive Technology, 2019
Zulqarnain Rashid, Rafael Pous, Christopher S. Norrie
With the smart glasses interface, the user focuses on a particular location on the shelf for about 3 seconds. Information about the products at that location is then displayed on the screen of the smart glasses, and the user can interact with it through voice commands or gestures. This contextual information provides a rich experience for the end user.
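A dwell-based selection of this kind can be sketched as follows. The 3-second threshold comes from the text, while the sampling rate and gaze radius are illustrative assumptions:

```python
def detect_dwell(gaze_samples, sample_hz=30, dwell_s=3.0, radius_px=40):
    """Return the (x, y) shelf location selected by gaze dwell, or None.

    A selection fires once the gaze stays within `radius_px` of an
    anchor point for `dwell_s` seconds. The 30 Hz sampling rate and
    40 px radius are illustrative assumptions.
    """
    needed = int(dwell_s * sample_hz)
    anchor, count = None, 0
    for (x, y) in gaze_samples:
        if anchor is None:
            anchor, count = (x, y), 1
            continue
        ax, ay = anchor
        if (x - ax) ** 2 + (y - ay) ** 2 <= radius_px ** 2:
            count += 1
            if count >= needed:
                return anchor          # dwell complete: select this spot
        else:
            anchor, count = (x, y), 1  # gaze moved away: restart the timer
    return None
```

Once a selection fires, the application would look up the RFID-identified products at that shelf location and render them on the display.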