Consumer-Grade Cameras and Other Approaches to Surface Imaging
Published in Jeremy D. P. Hoisak, Adam B. Paxton, Benjamin Waghorn, Todd Pawlicki, Surface Guided Radiation Therapy, 2020
The Kinect was introduced in 2010 for the Xbox® gaming console (Microsoft Corporation, Redmond, WA, USA), and Microsoft followed with an application programming interface (API) in 2012. Designed primarily for gaming applications, the device had embedded computer vision algorithms that parsed depth data into human skeleton models in real time. These models provided “joint positions,” simplified bone-junction coordinates, which were exposed through the API. The API also gave access to the underlying data streams, including depth, color, and audio. In 2013, a new version of the Kinect (v2) was released with higher resolution, different depth-sensing technology, and more computer vision processing capability, allowing the joint positions of up to six people to be tracked in real time.
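As an illustration of how an application might consume such joint-position data, the sketch below models a skeleton as a set of named 3D joints. The class and joint names here are hypothetical simplifications, not the actual Kinect SDK types.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical structures for illustration only; the real Kinect SDK
# exposes comparable data through its body-frame and joint objects.

@dataclass
class Joint:
    name: str
    x: float  # camera-space coordinates, metres
    y: float
    z: float  # depth from the sensor

class Skeleton:
    """Simplified skeleton model: joints keyed by name."""
    def __init__(self, joints: Dict[str, Joint]):
        self.joints = joints

    def joint_position(self, name: str) -> Tuple[float, float, float]:
        j = self.joints[name]
        return (j.x, j.y, j.z)

# A minimal two-joint skeleton, as a depth-sensing API might report it.
skeleton = Skeleton({
    "head": Joint("head", 0.02, 0.55, 2.10),
    "spine_base": Joint("spine_base", 0.00, -0.30, 2.15),
})
```

A gaming or rehabilitation application would poll such joint positions every frame and derive gestures or clinical measures from them.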
Feedback-Based Technologies for Adult Physical Rehabilitation
Published in Christopher M. Hayre, Dave J. Muller, Marcia J. Scherer, Everyday Technologies in Healthcare, 2019
Leanne Hassett, Natalie Allen, Maayken van den Berg
The Xbox Kinect uses cameras and depth sensors to enable gesture recognition from full-body 3D motion data, and does not require handheld controllers or a balance board. The user interacts with the games through a virtual avatar created by the system. In addition, the system guides the user with audio and visual feedback (Türkbey et al., 2017). All games use whole-body movements and may require stepping, bending and jumping. Kinect Adventures and Kinect Sports are commonly used in the clinic. One example from the Adventures games is 20,000 Leaks, in which the player performs reaching and stepping movements to plug the holes created by various sea creatures in a glass ocean tank. Feedback is provided during the game (knowledge of performance, KP) and after it (knowledge of results, KR). Both auditory and visual feedback are provided when the player plugs a leak (the body part that plugged the leak is displayed visually). The score is shown during and at the end of the game.
Using expressive movement and haptics to explore kinaesthetic empathy, aesthetic and physical literacy
Published in John Ravenscroft, The Routledge Handbook of Visual Impairment, 2019
Wendy Timmons, John Ravenscroft
Haptic technology that supports tactile devices is increasingly employed to help persons with visual impairment negotiate, understand and investigate their immediate surroundings. Tactile devices engage users through their sense of touch, combining tactile perception with kinaesthetic sensing (i.e. the position, placement and orientation of the body) through appropriate haptic interfaces. This technology has not, however, been used to give visually impaired persons access to movement-related events and spectacles, such as dance performances or sports events. The concept behind the device was that it would provide people with visual impairments unique access to the dynamic feelings associated with expressive movement (dance). The prototype technology comprised three separate deliverable components: (i) tracking technology (Kinect sensors); (ii) mapping software (bespoke); and (iii) a haptic interface (vibro-tactile motors), which made it inexpensive, novel and accessible. The device would give audiences with visual impairments access to the affective phenomenon of kinaesthetic empathy (Reason and Reynolds, 2010) when attending expressive movement.
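A bespoke mapping layer like component (ii) might, for example, translate the speed of a Kinect-tracked joint into a vibration intensity for the motors. The sketch below assumes a simple linear mapping; the function name, the 0–255 intensity range, and the speed cap are illustrative assumptions, not details from the project.

```python
import math

def motion_to_intensity(prev_pos, curr_pos, dt, max_speed=2.0):
    """Map the speed of a tracked joint (m/s, from two consecutive
    3D positions sampled dt seconds apart) to a vibro-tactile motor
    intensity in 0-255, clipping at max_speed (an assumed cap)."""
    dx, dy, dz = (c - p for c, p in zip(curr_pos, prev_pos))
    speed = math.sqrt(dx * dx + dy * dy + dz * dz) / dt
    return min(255, int(255 * speed / max_speed))

# A joint moving 0.1 m in 0.1 s (1 m/s) maps to mid-range intensity;
# faster motion saturates at full intensity.
mid = motion_to_intensity((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), 0.1)
full = motion_to_intensity((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.1)
```

Running such a mapping per frame for each tracked joint would let a dancer's expansive, fast movements feel stronger to the audience member than small, slow ones.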
Virtual reality in physical rehabilitation: a narrative review and critical reflection
Published in Physical Therapy Reviews, 2022
Michael J. Lukacs, Shahan Salim, Michael J. Katchabaw, Euson Yeung, David M. Walton
The Wii led other developers to release their own video-capture devices [29]. In 2009, Sony released the PlayStation Move for the PlayStation 3 video game console, while Microsoft released the Kinect camera for the Xbox 360 video game console the following year [30]. The PlayStation Move was a video-capture system with a motion-sensing wand that used accelerometers and gyroscopes [30]. In contrast, the Kinect for the Xbox 360 used a camera designed for body tracking and depth detection in three-dimensional space, so users had their movements and gestures translated into in-game movements without additional equipment [29, 55]. While both systems have been employed for upper extremity rehabilitation in patients with burns, it appears that the PlayStation Move may not have been as effective as the Kinect system [30]. The Kinect also directly addressed a common criticism of video-capture VR remotes: participants had to produce full-body movements rather than “cheat” movements (e.g. a wrist flick) [53, 55] for the Kinect camera to register the correct movement. Unfortunately, the Microsoft Kinect was discontinued in 2017 due to low levels of commercial support, even though there was active research using the platform at the time [56].
Artificial Intelligence-Assisted motion capture for medical applications: a comparative study between markerless and passive marker motion capture
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2021
Iwori Takeda, Atsushi Yamada, Hiroshi Onodera
Previous studies have also attempted to analyze human motion without using markers; the Microsoft Kinect has been used to detect human body parts using an infrared depth sensor. Eltoukhy et al. (2017) compared the conventional PMMC system with the Kinect method for joint angle and step length determination during treadmill walking. In their study, the maximum ICC (2,1) values for the range of motion of the hip, knee, and ankle joints were 0.80, 0.80, and 0.05, respectively. Oh et al. (2018) studied stair ascent and descent motions; the maximum ICC (2,1) values for the same joints were 0.86, 0.90, and 0.49, respectively. These studies thus reported lower correlations between the markerless method and the conventional PMMC system; the analysis results of OpenPose, however, were significantly more accurate. The inaccurate ankle joint results of the Kinect may be owing to the difference in the definition of the ankle joint angle (Eltoukhy et al. 2017; Oh et al. 2018). As the ankle and toe are the only key points of the foot that the Kinect can recognize, many studies defined the ankle joint angle as the angle between the knee, ankle, and toe. However, the ankle angle should be defined as the angle between the line connecting the knee joint and ankle joint and the line connecting the toe and heel, as shown in Figure 2 (Grill and Mortimer 1996; Surer et al. 2011; Romain and Xuguang 2012; Chen et al. 2013).
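To make the two definitions concrete, the sketch below computes both angles from 2D keypoints in the sagittal plane: the Kinect-style angle at the ankle between the knee and toe, and the shank-versus-foot angle using the heel. The coordinates are invented for illustration; with a raised heel the two definitions give noticeably different values.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2D (sagittal-plane) vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def ankle_angle_kinect(knee, ankle, toe):
    """Kinect-style definition: angle at the ankle between the
    ankle->knee and ankle->toe segments (only keypoints available)."""
    v1 = (knee[0] - ankle[0], knee[1] - ankle[1])
    v2 = (toe[0] - ankle[0], toe[1] - ankle[1])
    return angle_between(v1, v2)

def ankle_angle_shank_foot(knee, ankle, heel, toe):
    """Preferred definition: angle between the shank line
    (knee->ankle) and the foot line (heel->toe)."""
    shank = (ankle[0] - knee[0], ankle[1] - knee[1])
    foot = (toe[0] - heel[0], toe[1] - heel[1])
    return angle_between(shank, foot)

# Illustrative sagittal-plane coordinates (metres), near-neutral stance.
knee, ankle, heel, toe = (0.0, 0.45), (0.0, 0.05), (-0.05, 0.0), (0.15, 0.02)
```

Because the Kinect definition ignores the heel, any foot-shape or toe-detection error feeds directly into the reported ankle angle, which is consistent with the low ankle ICC values reported above.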
Assistive technologies for severe and profound hearing loss: Beyond hearing aids and implants
Published in Assistive Technology, 2020
Setia Hermawati, Katerina Pieri
Lang et al. (2012), the study that evaluated the use of the Microsoft Kinect to recognise sign language (SL) vocabulary more extensively than any other, demonstrated “Dragonfly”, an open-source framework developed for German Sign Language recognition. The Kinect, together with Hidden Markov Models, was used for comprehensive recognition of various signs. Through their study, they showed that the detection of various gestures is achievable with a 3D depth camera alone, without using the CyberGlove (Kessler, Walker, & Hodges, 1995), “a whole hand input device”. However, they did not mention the context in which this system could be applied or how it could help deaf people communicate with hearing people. The numerous disadvantages of the Kinect, such as the fact that it tracks humanlike objects located in the background (Zhao, Naguib, & Lee, 2014), were also not discussed in the study, so these issues were not taken into consideration in the design of the framework. Nevertheless, the framework could be used for the development of educational games for deaf children.
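The role of Hidden Markov Models in such a recogniser can be illustrated with the standard forward algorithm, which scores how likely a sequence of quantized observations (e.g. discretized hand positions from the depth camera) is under a per-sign model. The two-state model, observation symbols, and probabilities below are toy values invented for illustration, not Dragonfly's actual models.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: likelihood of an observation sequence
    under a discrete HMM, summing over all state paths."""
    # Initialise with the start distribution and first emission.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Propagate probability mass through the transition matrix,
        # then weight by the emission probability of the new symbol.
        alpha = {
            s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
            for s in states
        }
    return sum(alpha.values())

# Toy model for one sign: the hand moves "low" -> "high" over the gesture.
states = ["start", "end"]
start_p = {"start": 1.0, "end": 0.0}
trans_p = {"start": {"start": 0.5, "end": 0.5},
           "end":   {"start": 0.0, "end": 1.0}}
emit_p = {"start": {"low": 0.9, "high": 0.1},
          "end":   {"low": 0.1, "high": 0.9}}

# The matching gesture scores far higher than a reversed one; a
# recogniser would pick the sign whose model yields the best score.
p_match = forward(["low", "high"], states, start_p, trans_p, emit_p)
p_wrong = forward(["high", "low"], states, start_p, trans_p, emit_p)
```

In a full system, one such model per sign would be trained from recorded examples, and classification reduces to evaluating each model on the observed sequence.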