Smart Shopping Robot for Supermarkets using Dijkstra’s Algorithm with User Interface
Published in P. C. Thomas, Vishal John Mathai, Geevarghese Titus, Emerging Technologies for Sustainability, 2020
Stevens Johnson, V.M. Midhun, Nithin Issac, Ann Mariya Mathew, Shinosh Mathew
The Graphical User Interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators, instead of text-based user interfaces, typed command labels or text navigation. The actions in a GUI are usually performed through direct manipulation of the graphical elements.
Introduction of Remote Laboratory Technology
Published in Ning Wang, Qianlong Lan, Xuemin Chen, Gangbing Song, Hamid Parsaei, Development of a Remote Laboratory for Engineering Education, 2020
Ning Wang, Qianlong Lan, Xuemin Chen, Gangbing Song, Hamid Parsaei
A variety of technologies have been used in the development of RL systems. In past decades, NI LabVIEW and MATLAB®/Simulink® were the major software tools used to develop RL experimental environments. Servers mostly run the Apache web engine on the Linux operating system. The MySQL database system has been the most widely used for experimental-data database development since the late 1990s. Java, HTML, JavaScript, PHP (Hypertext Preprocessor), and Adobe Flash are all popular choices for developing graphical user interfaces (GUIs). The majority of real-time video is played by ActiveX components embedded in web pages. Figure 1.1 depicts a timeline of technology development in RL, based on public information and references [45, 53, 57, 65–70].
Graphical User Interface in Python
Published in Amartya Mukherjee, Nilanjan Dey, Smart Computing with Open Source Platforms, 2019
Amartya Mukherjee, Nilanjan Dey
Representation of software through a graphical user interface (GUI) is now common. Most modern programming platforms, such as C++, Java, and Microsoft Visual Studio, provide GUI components such as frames, forms, buttons, text boxes, combo boxes, and list boxes. The GUI makes software more readable and interactive, as well as more user-friendly. Nowadays, Android provides a more sophisticated and user-friendly GUI for smartphones and other smart gadgets. Python also has a large number of GUI frameworks and toolkits in the form of libraries and APIs. The primary, classical GUI component in Python is Tkinter, which is bundled with Python and built on Tk. There are also cross-platform and native solutions available for building platform-specific software.
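The Tkinter toolkit described above can be illustrated with a minimal sketch: a window containing a label and a button, where clicking the button updates the label. The widget texts and window title are illustrative only.

```python
import tkinter as tk


def build_ui(root):
    """Assemble a minimal GUI: a label plus a button that updates it."""
    root.title("Hello Tkinter")  # window title is illustrative

    # A label and a button are classic GUI components
    label = tk.Label(root, text="Click the button")
    label.pack(padx=20, pady=10)

    # The button's command callback changes the label text on click,
    # demonstrating direct manipulation of a graphical element
    button = tk.Button(root, text="Greet",
                       command=lambda: label.config(text="Hello, GUI!"))
    button.pack(padx=20, pady=10)
    return label, button


if __name__ == "__main__":
    # Tk() creates the root window; mainloop() runs the event loop
    window = tk.Tk()
    build_ui(window)
    window.mainloop()
```

The same pattern of creating widgets, packing them into a layout, and wiring callbacks carries over to the other Python GUI toolkits mentioned, though their APIs differ.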
Design and implementation of a VoIP PBX integrated Vietnamese virtual assistant: a case study
Published in Journal of Information and Telecommunication, 2023
Hai Son Hoang, Anh Khoa Tran, Thanh Phong Doan, Huu Khoa Tran, Ngoc Minh Duc Dang, Hoang Nam Nguyen
The front-end languages used to create the graphical user interface (GUI) in this article are HyperText Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript (JS). The Bootstrap framework and the jQuery library are also used for their customizability, speed of development, and ease of use. For back-end development, the Python and Perl languages are used. Asterisk's Monitor library records voice commands and encodes them into a .wav file before any analysis tasks are performed. Rasa's chatbot modules were chosen because the framework is free and supports Python. In addition, contacting Rasa support is easier and faster than consulting a worldwide community forum, and the tool has the advantage of supporting local languages, including Vietnamese, unlike similar products from AWS or Microsoft.
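Since the recorded voice commands are stored as .wav files before analysis, the back end needs to inspect those recordings. A hedged sketch of that step, using Python's standard `wave` module (the function name and file path are hypothetical, not from the article):

```python
import wave


def describe_recording(path):
    """Return basic properties of a recorded voice command (.wav file)."""
    with wave.open(path, "rb") as wav:
        frames = wav.getnframes()
        rate = wav.getframerate()
        return {
            "channels": wav.getnchannels(),      # 1 = mono, 2 = stereo
            "sample_rate_hz": rate,              # frames per second
            "duration_s": frames / rate,         # length of the command
        }


# Hypothetical usage: the path would come from Asterisk's recording step
# info = describe_recording("command.wav")
```

Checks like these (sample rate, channel count, duration) are a typical sanity step before handing audio to a speech-analysis pipeline.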
Depth and Breadth of Pie Menus for Mid-air Gesture Interaction
Published in International Journal of Human–Computer Interaction, 2020
Wenmin Li, Xueyi Wan, Yanwei Shi, Nailang Yao, Ci Wang, Zaifeng Gao
With advances in human–computer interaction technologies such as multi-touch, speech recognition, gesture interaction, and gaze control, emerging natural user interfaces (NUI) are changing the way users rely on control devices in graphical user interfaces (GUI). Among the various new methods of interaction in NUI, gesture interaction has become one of the most promising fields (e.g., Chen et al., 2018; Pereira et al., 2015; Zhao et al., 2014). Gesture interaction enables users to interact with a device in an intuitive and easy manner without traditional input devices such as a mouse or keyboard (e.g., Baudel & Beaudouin-Lafon, 1993; Chen et al., 2018; Davis et al., 2016; M. Lee et al., 2020; Norman, 2010; Pang et al., 2014). Compared with conventional interaction methods, gesture interaction is more natural for users (e.g., Baudel & Beaudouin-Lafon, 1993; Bolt, 1980; Chen et al., 2018; Pereira et al., 2015), and can improve users’ engagement and emotional experience (Bianchi-Berthouze et al., 2007). Therefore, gesture interaction shows great potential for applications in smart homes, virtual games, and intelligent driving (BMW Group, 2019; Kang et al., 2013; Liang, 2013). At the current stage, there are two typical categories of gestural interaction. In the first category, users physically touch a piece of equipment (e.g., hand-held wands, hand-shaped gloves) when performing a gesture interaction (e.g., Cao & Balakrishnan, 2003; M. Lee et al., 2020; Wilson & Shafer, 2003). In the second category, users perform the interaction with mid-air gestures, without touching any equipment. This mid-air gesture interaction is achieved by using body gestures in 3D space with the help of cameras (e.g., Kinect, Leap Motion; Brand et al., 2016; Ferron et al., 2019; Koutsabasis & Vogiatzidakis, 2019; M. Lee et al., 2020). Mid-air gestures can even be learned effectively via cross-modal training (e.g., Henderson et al., 2019). This study focused on mid-air gesture interaction.