Data Analysis and Manipulation
Published in Mathematica® Beyond Mathematics, 2017
José Guillermo Sánchez León
Here we use a figure known as the “Utah Teapot”, included in ExampleData, to make a 3D bar chart.

g = ExampleData[{"Geometry3D", "UtahTeapot"}];
BarChart3D[data3, ChartElements → g, BoxRatios → {4, 1, 1}]
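The excerpt does not show how data3 is defined. Below is a minimal, self-contained sketch assuming data3 is simply a small matrix of bar heights; the variable values are illustrative and not taken from the book.

(* hypothetical sample data: a 4 x 6 matrix of bar heights *)
data3 = RandomReal[{1, 10}, {4, 6}];
(* load the Utah Teapot mesh shipped with ExampleData *)
g = ExampleData[{"Geometry3D", "UtahTeapot"}];
(* draw each bar as a copy of the teapot scaled to the bar's value *)
BarChart3D[data3, ChartElements -> g, BoxRatios -> {4, 1, 1}]

ChartElements replaces the default cuboid bars with the supplied Graphics3D object, so each value in data3 is rendered as a teapot scaled to the corresponding bar height.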
Fast and cross-vendor OpenCL-based implementation for voxelization of triangular mesh models
Published in Computer-Aided Design and Applications, 2018
Mohammadreza Faieghi, O. Remus Tutunea-Fatan, Roy Eagleson
As can be seen, while the Utah teapot features a coarse mesh, both the Stanford bunny and the dragon are characterized by fine meshes. Returning to the general surgical orthopaedic context of the present study, the last two objects are the cutting end of a surgical tool used in glenoid reaming procedures and the geometry of a scapula obtained through reverse engineering of a patient-specific CT model.
Pointing It Out! Comparing Manual Segmentation of 3D Point Clouds between Desktop, Tablet, and Virtual Reality
Published in International Journal of Human–Computer Interaction, 2023
Carina Liebers, Marvin Prochazka, Niklas Pfützenreuter, Jonathan Liebers, Jonas Auda, Uwe Gruenefeld, Stefan Schneegass
We split the study into three blocks, one for each device. First, the participants entered a training scene where they could freely explore the application’s features. The scene included the Utah teapot standing on a table with a mug, a book, and a camera (see Figure 5). As in the following tasks, the participants were asked to segment the teapot. Participants were informed that the scenes might include artifacts, so some points could have incorrect colors and relying on color assignments could therefore introduce errors. The experimenter placed the printed teapot next to the participant to enable verification of its properties. In VR, the object was placed on a table behind the safety guard so that participants could view it using the see-through functionality introduced before the training started. During training, which lasted for a maximum of fifteen minutes, the experimenter answered all questions regarding the usage of the application. Furthermore, the experimenter ensured that all functionalities were applied at least once. If a participant did not use a functionality, the experimenter suggested it. In VR, the experimenter gave verbal instructions to ensure the use of all features before allowing the participant to explore the application on their own.

After familiarizing themselves with the application, the participants could begin the segmentation tasks. As in the training scene, the experimenter placed the object to be segmented next to the participant. The participants had five minutes to remove all pixels that did not belong to the target object, after which the scene automatically ended. If the participant finished before the time expired, they could end the scene themselves by clicking the Complete button (see Figure 3).

After all segmentation tasks on one device were finished, the participants were asked to complete the NASA TLX questionnaire from Hart (1986) and the SUS questionnaire proposed by Brooke (1996). They then answered custom Likert items and questions regarding their assessment of segmentation on the different devices. After the participants finished all tasks on all devices, we conducted a semi-structured interview. Each participant took approximately 1 hour and 15 minutes for the entire study.