Prosopagnosia
Alexander R. Toftness in Incredible Consequences of Brain Injury, 2023
Facial recognition is seemingly automatic in many people, but some struggle to recognize identities from faces. Prosopagnosia was named in 1947, when Joachim Bodamer wrote about patients who had problems identifying who was in a given photograph. For example, one of the patients, when shown a picture of a dog, thought that it was a picture of a person, but with “curious hair” (Bodamer, 1947, as discussed in Bornstein et al., 1969, p. 164). Bodamer named this disorder “prosopagnosia” from the Greek words for “face” and “ignorance.” In popular culture, prosopagnosia is often called face blindness, which is much easier to say. The disorder appeared in a few episodes of the television show Arrested Development and was depicted more or less accurately, except perhaps for the scene in which Tobias is mistaken for Lindsay (Lorey et al., 2013). People with prosopagnosia often complain about being unable to follow the storyline of television shows and films because they have trouble telling the actors apart, so they would probably find that face-blindness plot from Arrested Development quite confusing (e.g., Bowles et al., 2009).
Consumer-Grade Cameras and Other Approaches to Surface Imaging
Jeremy D. P. Hoisak, Adam B. Paxton, Benjamin Waghorn, Todd Pawlicki in Surface Guided Radiation Therapy, 2020
The Kinect API allows more than just raw point cloud data to be acquired. Computer vision capabilities embedded in both the hardware and the software allow estimated skeletal positions and facial features to be parsed. Both of these functions have been exploited and applied to radiation therapy needs. Silverstein et al. demonstrated a rudimentary facial recognition system for verifying patient identity prior to treatment.12 The system relied on the Kinect API to provide 31 unique facial points from the subject’s surface scan. From these points, the distance between each pair of points was calculated to create a vector array unique to each patient’s face. Using just the mean and median differences between two scanned vector arrays, identity could be discriminated (sensitivity 96.5%, specificity 96.7%). The authors did note that the results were highly dependent on lighting conditions. Another use of the Kinect was demonstrated by Mullaney et al., who allowed patients to participate in their own positioning.13 In their prototype, the depth sensors parsed the patient’s joint locations while the patient watched a display showing the desired and current positions of those joints; a feedback system, consisting of a simple user interface, confirmed when the correct position was reached.
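The pairwise-distance comparison described above can be sketched as follows. The function names, the random test landmarks, and the tolerance values are illustrative assumptions, not details from Silverstein et al.; the only elements taken from the text are the 31 landmark points per face and the mean/median comparison of two distance vectors.

```python
import numpy as np

def face_signature(points):
    """Pairwise Euclidean distances between facial landmarks.
    points: (N, 3) array of 3D landmark coordinates; the Kinect API
    supplies N = 31 such points per face scan, giving a
    31*30/2 = 465-element distance vector."""
    p = np.asarray(points, dtype=float)
    i, j = np.triu_indices(len(p), k=1)       # all unordered point pairs
    return np.linalg.norm(p[i] - p[j], axis=1)

def same_identity(sig_a, sig_b, mean_tol, median_tol):
    """Crude identity check: accept when both the mean and the median
    of the absolute element-wise differences between two signatures
    fall below tolerances (tolerance values are placeholders here)."""
    d = np.abs(sig_a - sig_b)
    return d.mean() < mean_tol and np.median(d) < median_tol
```

Because the signature is built from inter-point distances only, it is invariant to the head's position in the scanner, which is presumably why a distance vector rather than raw coordinates was used.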
Playing With Biofeedback
Lawrence C. Rubin in Handbook of Medical Play Therapy and Child Life, 2017
Technologically, Nevermind can function with inexpensive biotechnology. The game is compatible with an array of commercial heart rate monitors commonly used for exercise tracking. We use (and recommend) a chest strap HR monitor (which can be purchased for around $40–$60), as chest strap monitors are generally more reliable and accurate than wristwatch monitors for heart rate. Facial recognition can be accomplished with nearly any web camera. Recently (October 2016), a VR version of Nevermind was released for the Oculus Rift, a VR mask/interface similar to other consumer-grade VR devices that have recently entered the market. This version enables even more advanced technological capabilities, but VR devices require extra expense. For the clinician or consumer on a budget, the game works well with only an HR monitor and/or webcam.
Short-term effects of nicotine gum on facial detection in healthy nonsmokers: a pilot randomized controlled trial
Published in Journal of Addictive Diseases, 2020
Thiago P. Fernandes, Pamela D. Butler, Stephanye J. Rodrigues, Gabriella M. Silva, Marcos V. Anchieta, Jandirlly J. S. Souto, Giulliana H. V. Gomes, Natalia L. Almeida, Natanael A. Santos
Facial stimuli were randomly taken from the Ignatian Educational Foundation (FEI) database,33 an adaptation of the Face Recognition Technology (FERET) database,34 which is widely used in facial recognition research. To avoid visual cues, an oval mask was applied to the images to remove hair information. The image luminance was averaged to a grayscale value of 127, and the image and screen backgrounds were set to this same value. To produce the nonface images used in the facial detection task, the faces (200 × 240 pixels) were segmented into 30 squares, each 33 × 40 pixels, and each square was rotated, with equal probability, by one of the following angles: 45°, 90°, or 180° (Figure 1a). For further information about the stimuli used here, see Silva et al.35
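The block-scrambling procedure can be sketched roughly as below. This is an assumption-laden illustration, not the authors' code: Pillow is assumed for the rotations, the grid is chosen to tile the image exactly (the stated 30 tiles of 33 × 40 px do not divide a 200 × 240 image evenly), and uncovered corners after a 45° rotation are filled with the mean-gray value 127 mentioned in the text.

```python
import numpy as np
from PIL import Image

RNG = np.random.default_rng(0)
ANGLES = (45, 90, 180)  # rotation angles used with equal probability

def scramble_face(img, rows=6, cols=5):
    """Segment a grayscale face image into a rows x cols grid of tiles
    and rotate each tile by 45, 90, or 180 degrees with equal
    probability, producing a nonface stimulus with matched low-level
    content. Grid dimensions here are illustrative."""
    arr = np.asarray(img)
    h, w = arr.shape[:2]
    th, tw = h // rows, w // cols
    out = arr.copy()
    for r in range(rows):
        for c in range(cols):
            tile = Image.fromarray(arr[r*th:(r+1)*th, c*tw:(c+1)*tw])
            angle = int(RNG.choice(ANGLES))
            # expand=False keeps each tile's original dimensions;
            # corners exposed by a 45-degree turn get the gray background
            rot = tile.rotate(angle, expand=False, fillcolor=127)
            out[r*th:(r+1)*th, c*tw:(c+1)*tw] = np.asarray(rot)
    return out
```

Scrambling in place like this preserves the overall size, mean luminance, and local texture of the face while destroying its configural structure, which is what makes it a useful nonface foil in a detection task.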
Evaluation and prediction of polygon approximations of planar contours for shape analysis
Published in Journal of Applied Statistics, 2018
Chalani Prematilake, Leif Ellingson
The expansion of technology over the past few decades has brought with it tremendous amounts of digital imaging data arising in a variety of fields. Perhaps most visibly, even standard digital cameras are capable of producing images suitable for use in a number of applications related to computer vision, including scene recognition and facial recognition. Additionally, advances in health care have led to a rapid growth in medical imaging technology. Among the many types of medical images are X-rays and computed tomography scans. When working with such data, researchers are typically concerned with certain features of an image rather than the entire image itself. Often, the feature of interest is the shape of the outline of an object depicted in the image, as discussed in Osborne [34] and Qiu et al. [37].
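One standard way to reduce a densely sampled outline to a polygon approximation is the Ramer–Douglas–Peucker algorithm. The sketch below is a generic textbook illustration of that idea, not necessarily the approximation method evaluated in the paper; the function name and tolerance parameter are my own.

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker simplification of an open planar contour:
    keep a vertex only if it deviates from the current chord by more
    than eps, recursing on the two halves around the worst offender."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    chord = end - start
    d = pts - start
    norm = np.hypot(chord[0], chord[1])
    if norm == 0.0:                       # degenerate chord: use radial distance
        dists = np.hypot(d[:, 0], d[:, 1])
    else:                                 # perpendicular distance to the chord
        dists = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
    k = int(np.argmax(dists))
    if dists[k] <= eps:                   # all vertices within tolerance
        return np.array([start, end])
    left = rdp(pts[:k + 1], eps)          # split at the farthest vertex
    right = rdp(pts[k:], eps)
    return np.vstack([left[:-1], right])  # drop the duplicated split vertex
```

The tolerance `eps` controls the trade-off the paper's title alludes to: a larger value yields a coarser polygon with fewer vertices, a smaller one a closer but costlier approximation of the original contour.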
Black Boxes and Bias in AI Challenge Autonomy
Published in The American Journal of Bioethics, 2021
In 2016, Microsoft introduced a conversational AI named Tay to Twitter. Users started sending Tay all sorts of “misogynistic, racist, and Donald Trumpist remarks,” and within 24 hours Tay became a misogynistic, racist, xenophobic conspiracy theorist (Vincent 2016). Microsoft pulled the plug. A 2019 report found that facial recognition algorithms such as the ones that unlock a mobile phone or identify people in photos uploaded to social media are inherently racist and sexist: they misidentify people of color and often fail to recognize female faces (Grother, Ngan, and Hanaoka 2019). Why? Most programmers are male and white, and that is the population reflected in the datasets used to train these algorithms. These real-world examples demonstrate a reality of AI: systems tend to reflect the biases of the data they are fed.
Related Knowledge Centers
- Consciousness
- Neurological Disorder
- Perception
- Social Cognition
- Semantic Memory
- Face
- Emotion Recognition
- Brain Damage
- Thatcher Effect
- Prosopometamorphopsia