Automatic Fitting of a 3D Character to a Computer Manikin
Published in Vincent G. Duffy, Advances in Applied Human Modeling and Simulation, 2012
Stefan Gustafsson, Sebastian Tafuri, Niclas Delfs, Robert Bohlin, Johan S. Carlson
A unit dual quaternion $\hat{r} = q_0$ with dual part $q_\varepsilon = 0$ represents a rotation, just like a regular unit quaternion. However, a unit dual quaternion $\hat{t} = 1 + \frac{\varepsilon}{2}(i\,t_0 + j\,t_1 + k\,t_2)$ represents a pure translation by the vector $(t_0, t_1, t_2)$.
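As a concrete illustration (a minimal sketch of my own, not code from the chapter), a dual quaternion can be stored as a pair of ordinary quaternions $(q_0, q_\varepsilon)$ and composed with the product $(a_0 + \varepsilon a_\varepsilon)(b_0 + \varepsilon b_\varepsilon) = a_0 b_0 + \varepsilon (a_0 b_\varepsilon + a_\varepsilon b_0)$, which follows from $\varepsilon^2 = 0$. The function names are mine:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions in [w, x, y, z] order
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_rotation(q0):
    # r_hat = q0 with zero dual part: a pure rotation
    return q0, np.zeros(4)

def dq_translation(t):
    # t_hat = 1 + (eps/2)(i t0 + j t1 + k t2): a pure translation
    return np.array([1.0, 0.0, 0.0, 0.0]), 0.5 * np.array([0.0, *t])

def dq_mul(a, b):
    # (a0 + eps ae)(b0 + eps be) = a0 b0 + eps (a0 be + ae b0)
    a0, ae = a
    b0, be = b
    return qmul(a0, b0), qmul(a0, be) + qmul(ae, b0)

def dq_translation_part(dq):
    # recover the translation vector of a unit dual quaternion: 2 qe q0*
    q0, qe = dq
    return 2.0 * qmul(qe, qconj(q0))[1:]
```

Composing a pure translation with a pure rotation, e.g. `dq_mul(dq_translation(t), dq_rotation(q0))`, yields a general rigid motion whose translation can be read back with `dq_translation_part`.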
Rigid-Body Motion
Published in Gregory S. Chirikjian, Alexander B. Kyatkin, Engineering Applications of Noncommutative Harmonic Analysis, 2021
Gregory S. Chirikjian, Alexander B. Kyatkin
By using the concept of dual numbers, we can describe a rigid-body motion in space using a 3 ×3 matrix of real and dual numbers, a 2 ×2 matrix of complex and dual numbers, or a dual quaternion. Each of these different ways of describing motion in space is examined in the following subsections.
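To make the dual-number idea concrete (a hypothetical sketch of my own, not code from the book), a dual number $a + \varepsilon b$ with $\varepsilon^2 = 0$ can be implemented directly; the nilpotency of $\varepsilon$ is what lets the dual part carry translational information alongside the real part:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    # dual number a + eps*b, where eps is nilpotent: eps^2 = 0
    a: float  # real part
    b: float  # dual part

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + eps b1)(a2 + eps b2) = a1 a2 + eps (a1 b2 + b1 a2);
        # the eps^2 b1 b2 term vanishes
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

EPS = Dual(0.0, 1.0)
```

Multiplying `EPS` by itself returns `Dual(0.0, 0.0)`, which is exactly the algebraic property the matrix and quaternion representations in the following subsections rely on.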
Application of SPH to Single and Multiphase Geophysical, Biophysical and Industrial Fluid Flows
Published in International Journal of Computational Fluid Dynamics, 2021
Paul W. Cleary, Simon M. Harrison, Matt D. Sinnott, Gerald G. Pereira, Mahesh Prakash, Raymond C. Z. Cohen, Murray Rudman, Nick Stokes
Figure 5 shows visualisations of the RP and I3.5 simulations in a plane through the centre of the athlete at five points in time. The initially still water (2 m wide, 5 m long and 5 m deep) is simulated using 13 million SPH particles with a mean spacing of 15 mm. The athlete’s body is simulated using a mesh with 15,000 nodes and an average spacing of 10 mm. The mesh representing the athlete’s body is deformed using a skeleton of 23 spherical joints and the dual quaternion skinning method (Kavan et al. 2008). Using this method, each node of the mesh is deformed in accordance with one or more of the skeleton joints in a manner that represents the true deformation of the skin (Harrison et al. 2016). The motion of the body is calculated using Newton’s second law of motion and the sum of fluid forces on the body in Equations (11) and (12).
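A minimal sketch of the dual quaternion blending idea behind such skinning (my own illustrative code with quaternions in [w, x, y, z] order; the actual method of Kavan et al. 2008 differs in detail): each mesh node's weighted joint transforms, stored as unit dual quaternions, are summed, renormalised, and applied to the node position.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product, quaternions in [w, x, y, z] order
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qrot(q, v):
    # rotate vector v by unit quaternion q
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def dlb_skin(vertex, weights, dqs):
    # Dual quaternion linear blending: weighted sum of unit dual quaternions
    # (with antipodal sign fix), renormalised, then applied to the vertex.
    ref = dqs[0][0]
    b0, be = np.zeros(4), np.zeros(4)
    for w, (q0, qe) in zip(weights, dqs):
        s = 1.0 if np.dot(q0, ref) >= 0.0 else -1.0
        b0 += w * s * q0
        be += w * s * qe
    n = np.linalg.norm(b0)
    b0, be = b0 / n, be / n
    t = 2.0 * qmul(be, qconj(b0))[1:]  # translation of the blended motion
    return qrot(b0, vertex) + t
```

With a single joint of weight one, the function reduces to applying that joint's rigid transform exactly, which is a useful sanity check.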
A novel robot hand-eye calibration method to enhance calibration accuracy based on the POE model
Published in Advanced Robotics, 2023
Zizhen Jiang, Jun Zhou, Hongqi Han
Hand-eye calibration determines the pose transformation between the camera's frame and the end-effector's frame; it comes in two configurations, eye-to-hand and eye-in-hand, depending on where the camera is mounted. Generally, it is reduced to the well-known AX = XB problem, where A and B represent the pose transformations of the end-effector's and camera's frames between two different robot movements, respectively, and X denotes the pose transformation from the camera's frame to the end-effector's frame, namely the hand-eye pose [5]. The solutions for AX = XB can be divided into separate and simultaneous methods. The separate methods split the homogeneous transformation into rotation and translation [6,7], which propagates errors and loses the inherent coupling between the rotation and translation parameters [8]. In contrast, Daniilidis introduced a simultaneous approach that converts the transformation into a dual-quaternion product and performs a singular value decomposition to obtain X directly [9]. Moreover, Zhuang et al. and Horaud et al. proposed simultaneous methods that treat hand-eye calibration as a nonlinear optimization problem [10,11]; however, these methods can fall into local optima without suitable initial values [12]. Zhao presented a hand-eye calibration algorithm employing convex optimization to resolve this sensitivity to initial values [12]; nevertheless, its rotation solution might not be a strictly orthogonal matrix [13]. Li et al. applied the Kronecker product with an additional orthogonalization step for hand-eye calibration [14], avoiding the limitation faced in [12]. Pan et al. proposed a hand-eye calibration method for an eye-to-hand configuration with a data rectification process that reduces the impact of data noise [15] and improves the accuracy of the hand-eye pose. Pachtrachai et al. also addressed the data-source problem and provided a method to handle non-synchronized data streams in hand-eye calibration [16].
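The Kronecker-product-plus-orthogonalization idea can be sketched as follows (my own illustrative code following the general approach attributed above to Li et al. [14], not their algorithm): the rotation constraint $R_A R_X = R_X R_B$ is rewritten as a linear system in the vectorised $R_X$, solved by SVD, projected back onto SO(3), and the translation then follows by least squares from $(R_A - I)\,t_X = R_X t_B - t_A$.

```python
import numpy as np

def rot(axis, angle):
    # Rodrigues' rotation matrix about a (normalised) axis
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def solve_ax_xb(As, Bs):
    # Each element of As/Bs is a (R, t) pair; at least two motions with
    # non-parallel rotation axes are required for a unique solution.
    # Rotation: RA RX = RX RB becomes (RA (x) I - I (x) RB^T) vec(RX) = 0
    # with row-major vec; stack all pairs and take the null vector via SVD.
    M = np.vstack([np.kron(RA, np.eye(3)) - np.kron(np.eye(3), RB.T)
                   for (RA, _), (RB, _) in zip(As, Bs)])
    RX = np.linalg.svd(M)[2][-1].reshape(3, 3)
    # Project onto SO(3) -- the additional orthogonalization step
    U, _, Vt = np.linalg.svd(RX)
    RX = U @ Vt
    if np.linalg.det(RX) < 0.0:
        RX = -RX
    # Translation: (RA - I) tX = RX tB - tA, solved jointly by least squares
    C = np.vstack([RA - np.eye(3) for RA, _ in As])
    d = np.concatenate([RX @ tB - tA for (_, tA), (_, tB) in zip(As, Bs)])
    tX = np.linalg.lstsq(C, d, rcond=None)[0]
    return RX, tX
```

Because the rotation and translation are recovered in two linear steps, noise in the rotation estimate propagates into the translation, which is the kind of coupling issue the simultaneous methods above aim to avoid.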
With advancements in point cloud techniques, Li et al. proposed a hand-eye calibration method based on 3D reconstruction for a line laser mounted on the robot end-effector [17]. These methods were implemented with a calibration marker, whereas Pachtrachai et al. proposed a marker-free method that obtains the hand-eye calibration equation by constraining the robot end-effector to move around a remote center [18].
Direct 3D coordinate transformation based on the affine invariance of barycentric coordinates
Published in Journal of Spatial Science, 2021
Currently, the methods for solving the parameters of the 3D coordinate transformation model can be roughly classified into two categories: iterative methods and closed-form methods. Since the 3D coordinate transformation model is a non-linear function model, iterative methods usually require approximate initial values of the transformation parameters; the function model is linearised and the parameters are then solved iteratively under the least-squares or total-least-squares criterion. The non-linearity of the model arises mainly from the rotation and scaling terms. Zhang et al. (2010) first obtained accurate initial values of the angle elements via singular value decomposition (Golub and Van Loan 2013), and then solved the transformation parameters iteratively. Chen et al. (2004) regarded the nine elements of the rotation matrix as independent parameters subject to six nonlinear constraints, and carried out an iterative solution with the conditional indirect adjustment; this approach is suitable for the 3D coordinate transformation model with arbitrary rotation angles but markedly increases the number of unknowns. Zeng and Huang (2008) combined a genetic algorithm with pattern search; however, different initial values and different stopping criteria affect the convergence speed and the accuracy of the solution. In general, for iterative methods, how close the initial values of the unknowns are to the true values influences both the convergence speed and the accuracy of the solution. To address these issues, closed-form methods have received much attention, because they require no initial values and solve for the optimal transformation parameters directly.
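To illustrate the iterative approach in general terms (a hypothetical sketch of my own, not any of the cited algorithms), a Gauss-Newton iteration for the seven-parameter model $y = sRx + t$ can linearise the rotation with a small-angle update at each step:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fit_iterative(src, dst, iters=50):
    # Gauss-Newton for y = s R x + t; the rotation is linearised with a
    # small-angle (left) update R <- exp([dw]x) R at each iteration.
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(iters):
        J_rows, r_rows = [], []
        for x, y in zip(src, dst):
            Rx = R @ x
            r_rows.append(y - (s * Rx + t))
            J_rows.append(np.hstack([(-Rx)[:, None],  # d r / d s
                                     s * skew(Rx),    # d r / d dw
                                     -np.eye(3)]))    # d r / d t
        J, r = np.vstack(J_rows), np.concatenate(r_rows)
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        s += delta[0]
        w = delta[1:4]
        th = np.linalg.norm(w)
        if th > 1e-12:
            K = skew(w / th)  # Rodrigues' formula for the rotation update
            R = (np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K) @ R
        t += delta[4:7]
    return s, R, t
```

Starting from the identity, this converges for moderate rotations, but it illustrates the dependence on initial values discussed above: for large rotation angles the identity initial guess may no longer lie in the basin of convergence.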
For closed-form methods, the translation parameters of the 3D coordinate transformation model can be expressed as the difference between the centroid of the target coordinate system and the rotated and scaled centroid of the source coordinate system; the optimal scale parameter is obtained from the ratio of the root-mean-square distances of the points from their respective centroids in the two coordinate systems. The differences between closed-form methods lie mainly in the expression used for the rotation matrix. Horn (1987) represented the rotation matrix by a unit quaternion, where the unit quaternion of the optimal rotation matrix is the eigenvector corresponding to the largest positive eigenvalue of a symmetric matrix. Arun et al. (1987) and Umeyama (1991) adopted singular value decomposition to calculate the rotation matrix directly. Based on the work of Walker et al. (1991), Wang et al. (2014) represented the rotation and translation via a dual quaternion and solved the seven-parameter similarity transformation simultaneously, which effectively avoids error accumulation.
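A compact sketch of the SVD-based closed-form solution in the style of Arun et al. (1987) and Umeyama (1991) (function names are mine; this is an illustration, not the authors' code): centre both point sets, extract the rotation from the SVD of the cross-covariance, then recover scale and translation exactly as described above.

```python
import numpy as np

def similarity_transform(src, dst):
    # Closed-form seven-parameter (scale, rotation, translation) solution
    # for dst ~ s R src + t, with src and dst as (n, 3) arrays.
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    Xs, Xd = src - mu_s, dst - mu_d
    H = Xs.T @ Xd / n                      # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0.0:        # guard against reflections
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T                     # optimal rotation
    var_s = (Xs ** 2).sum() / n            # source variance about centroid
    s = np.trace(np.diag(S) @ D) / var_s   # optimal scale
    t = mu_d - s * R @ mu_s                # translation via centroids
    return s, R, t
```

No initial values are needed, and the translation is exactly the centroid difference after rotation and scaling, matching the description of closed-form methods at the start of this excerpt.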