Microcontroller Hardware
Published in Syed R. Rizvi, Microcontroller Programming, 2016
A keypad is a set of buttons arranged in a block or “pad” that usually bears digits and other symbols, and sometimes a complete set of alphabetic letters. Switch-matrix keyboards and keypads are simply an extension of the push-button concept. Figure 3.12 shows a keypad with a connector that plugs directly into the decoder/driver chip or the starter kit. A keypad that contains only numbers is called a numeric keypad. Keypads appear on alphanumeric keyboards and on devices that require mainly numeric input, such as calculators, push-button telephones, combination locks, and digital door locks. In mobile phones, the keypad is often a replaceable part that sits on a sensor board of the phone; some multimedia phones add a small joystick with a cap that matches the keypad. Keypads are also a feature of some combination locks, a type of lock often used on doors such as the main entrance to some offices.
Circuit Components
Published in Julio Sanchez, Maria P. Canton, Microcontroller Programming, 2018
In the context of microcontroller-based circuits, a keypad (also called a numeric keypad) is a set of pushbutton switches labeled with digits, mathematical symbols, or letters of the alphabet. For example, a calculator keypad contains the decimal (occasionally hexadecimal) digits, the decimal point, and keys for the calculator's mathematical functions. Although in theory a computer keyboard is a keypad, the term is usually reserved for a smaller arrangement of buttons, or for the part of a computer keyboard consisting mainly of numeric keys.
Input/Output Bank Programming and Interfacing
Published in A. Arockia Bazil Raj, FPGA-Based Embedded System Developer's Guide, 2018
Matrix keypads are commonly used in calculators, telephones, weighing scales, vending machines, and other devices where a number of input switches are required. A matrix keypad is made by arranging push-button switches in rows and columns, as shown in Figure 5.12. Connecting 4 × 4 = 16 push buttons to the FPGA individually would require 16 input pins; by wiring the switches as a matrix, we need only eight pins (four row lines and four column lines) [39,57]. We can then read the status of each switch through an eight-pin port on the FPGA.
Q-bot: heavy object carriage robot for in-house logistics based on universal vacuum gripper
Published in Advanced Robotics, 2020
Isamu Matsuo, Toshihiko Shimizu, Yusuke Nakai, Masahiro Kakimoto, Yuki Sawasaki, Yoshiki Mori, Takamasa Sugano, Shuhei Ikemoto, Takeshi Miyamoto
The outline of the Q-bot system configuration is shown in Figure 12. Q-bot is equipped with a Ubuntu 14.04 PC and multiple Arduinos. A numeric keypad, speakers, and a monitor on the front of the robot serve as the external user interface. Communication between the elements is handled by the Robot Operating System (ROS) [21]; the Ubuntu PC communicates with each Arduino via rosserial. In this system, the Ubuntu PC receives external input through the numeric keypad. Based on the received input, it transmits preset motor command values, such as joint angles and the ON/OFF command for the vacuum pump, to each Arduino. At the same time, voice synthesis software announces the operation to be performed through the speaker, and a corresponding note is shown on the external monitor. For example, after the audio cue ‘I will carry,’ the monitor displays a note about the object being carried. Each Arduino compares the values sent from the Ubuntu PC against the readings of the sensors in the robot body and drives the motor drivers accordingly. By carrying out this series of operations, Q-bot grasps and transports the object. Two distance-measurement sensors at the center of the robot's torso (Figure 13) are used to detect objects and walls. An object in front of the robot is detected by monitoring changes in the distance values. The gyro sensor is also corrected on the basis of distance measurements from the robot to a wall (two points in front, one on the left, and one on the right) taken by sensors attached to the bogie. In addition, line sensors are mounted low on the sides of the robot (Figure 13).
Identifying Users by Their Hand Tracking Data in Augmented and Virtual Reality
Published in International Journal of Human–Computer Interaction, 2022
Jonathan Liebers, Sascha Brockel, Uwe Gruenefeld, Stefan Schneegass
Before commercial off-the-shelf HMDs for Augmented and Virtual Reality were equipped with hand and finger tracking technology, the “Leap Motion”1 was widely used to explore hand-related biometric features. The device senses hand gestures at an accuracy of 0.2 mm for static and 1.2 mm for dynamic setups (Weichert et al., 2013) and recognizes a hand model consisting of several bones via its sensors. On this basis, various identification systems have been developed. One example comes from the work of Maruyama et al. (2017), who explored user authentication based on the hands’ skeletal features, such as the distance between fingertips, as a biometric trait. Although their approach falls primarily into the domain of physiological biometrics, behavioral-biometrics-based approaches exist as well. Another example is the approach by Chan et al. (2015), which also includes hand gestures as a form of behavior; however, in their discussion they state that hand geometry matters most for identification. The Leap Motion has also been used to implement behavioral biometric authentication schemes, such as handwriting in the air (Kamaishi & Uda, 2016). Similarly, Xiao et al. (2016) developed a Leap-Motion-based behavioral signature verification. Musa (2017) further investigated hand-tremor-based biometric recognition, testing whether hand tremor is unique to each person; the approach was positively evaluated in a user study with five subjects. Manabe and Yamana (2019) developed a two-factor authentication system that used a numeric keypad and a Leap Motion to determine identities through physiological features, such as the length of phalanges and metacarpals, but also through behavioral finger speed.
Secure and Memorable Authentication Using Dynamic Combinations of 3D Objects in Virtual Reality
Published in International Journal of Human–Computer Interaction, 2023
Jiawei Wang, BoYu Gao, Huawei Tu, Hai-Ning Liang, Zitao Liu, Weiqi Luo, Jian Weng
The other category of knowledge-based authentication is to design the representation of passwords for VR authentication. For example, Yu et al. (2016) compared 3D-mode, 2D-sliding-mode, and 2D numeric keypad password systems in a virtual environment. Similarly, George et al. (2017) adapted existing mechanisms (i.e., PIN and 2D mode) for VR, and Olade, Liang, et al. (2020) proposed the SWIPE authentication system in VR. Although these works showed positive usability results with low authentication times (2.38 s in George et al. (2017); 1–1.8 s in Olade, Liang, et al. (2020)), they suffered on the security side: bystanders could observe up to 18% of the password information in George et al. (2017), and attackers aided by video recordings reached guessability rates of 20%–40% in Olade, Liang, et al. (2020). Correspondingly, some researchers utilized the 3D environment to resist observation attacks in the real world (Funk et al., 2019; George et al., 2019; Mathis et al., 2021). For example, in RoomLock (George et al., 2019) and LookUnlock (Funk et al., 2019), the user-selected 3D objects are scattered across a virtual environment, which effectively improves the scheme's resistance to shoulder-surfing attacks; the successful attack rates of RoomLock and LookUnlock were about 12.5% and 0%–5.9%, respectively. Mathis et al.’s RubikAuth (Mathis et al., 2021) used an environment-independent manipulable cube to verify users’ identity and showed high observation resistance (about 96%–99.55%). George et al. (2020) proposed the GazeRoomLock authentication system, which required users to select a number of 3D objects in a virtual room. They evaluated four multimodal techniques for entering passwords, and the best one was MultimodalGaze (authentication time of 5.94 s, input error rate of 1.32%, and attack success rates of 18% and 10% under real-world and offline observations, respectively). Abdelrahman et al. (2022) introduced the concept of cue-based authentication for VR with digit PIN codes; their CueVR was fully resistant to observation attacks by design, under the assumption that the attacker cannot know the random cues.