Methods of Spatial Visualisation
Published in Ken Morling, Stéphane Danjou, Geometric and Engineering Drawing, 2022
When we look at an object in space, the visible part of it is projected onto our eye's retina, the plane of projection. The type of projection that creates the mapped object in our eye is referred to as central projection. With this projection method, the projection centre, also known as the station point (SP), is placed at a finite distance from the projection plane, and all projectors converge. This kind of projection is called perspective projection and is used to represent a 3D object pictorially on 2D paper or a 2D computer screen with a single view. The reason this representation is sometimes used is that it mimics the human eye. While this may be suitable for conveying an impression of three dimensions, this kind of projection is rarely used in mechanical engineering.
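The converging projectors can be made concrete with a few lines of code. The sketch below is a minimal illustration (not taken from the book): the station point is placed at the origin and the projection plane at z = d, so each object point is mapped to the intersection of its projector with that plane.

```python
import numpy as np

def perspective_project(points, d=1.0):
    """Central (perspective) projection of 3D points onto the plane z = d.

    The station point (projection centre) sits at the origin, a finite
    distance d from the projection plane, so all projectors converge there.
    """
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # The projector through an object point meets the plane z = d where
    # the ray parameter t = d / z, scaling x and y accordingly.
    return np.column_stack((d * x / z, d * y / z))

# Two points at the same height but different depths: the farther one
# maps closer to the axis, which gives the pictorial, eye-like effect.
print(perspective_project([[1, 1, 2], [1, 1, 4]], d=1.0))
```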
Data modeling based on a 3D BIM standard and viewer system for the bridge inspections
Published in Hiroshi Yokota, Dan M. Frangopol, Bridge Maintenance, Safety, Management, Life-Cycle Sustainability and Innovations, 2021
Fumiki Tanaka, Yuta Nakajima, Eiji Egusa, Masahiko Onosato
From a defect sketch, the position of a defect is extracted using image recognition technologies. Leader lines of the defect are extracted from the defect sketch through line-segment detection, and the location of these lines gives the position of the defect in two dimensions. Text recognition techniques are used to identify the type of defect and the photographic image identifier. Next, 3D defect position data are obtained from the 2D defect position data on the projection plane, using the 3D shape data of the bridge element and its property information. The projection plane is the plane of the 3D shape model space on which the 2D defect sketch is placed. Finally, these data are integrated into the bridge inspection information model.
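As an illustration of the 2D-to-3D step, the sketch below assumes the projection plane is described by an origin point and two in-plane unit vectors taken from the bridge element's shape data; the function name and parameters are hypothetical and not the paper's data model.

```python
import numpy as np

def defect_2d_to_3d(uv, plane_origin, e_u, e_v):
    """Map a 2D defect position on the projection plane into 3D model space.

    A minimal sketch: the projection plane (the face of the 3D shape model
    on which the defect sketch is placed) is assumed to be given by an
    origin point and two orthonormal in-plane direction vectors e_u, e_v.
    """
    u, v = uv
    return np.asarray(plane_origin) + u * np.asarray(e_u) + v * np.asarray(e_v)

# Example: a defect 1.2 m along and 0.4 m up a vertical girder face.
p3d = defect_2d_to_3d((1.2, 0.4),
                      plane_origin=(10.0, 2.0, 5.0),
                      e_u=(1.0, 0.0, 0.0),
                      e_v=(0.0, 0.0, 1.0))
print(p3d)  # -> [11.2  2.   5.4]
```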
Computer-aided inverse panorama on a conical projection surface
Published in Inverse Problems in Science and Engineering, 2019
Comparing our method of perspective representation with the inverse perspective of Byzantine times, where straight lines diverge towards the horizon, here the curves that are the panoramic images of straight lines converge at the horizon. However, as in the case of Byzantine perspective onto a flat projection plane, depending on the object’s location relative to the projection surface, its panoramic image can show more than two of its sides. Such a case of the conical panorama from centres dispersed on a circle is presented in Figure 21. The characteristics of the projection apparatus are the same as in the previous examples, whereas the coordinates of the object’s vertices are as follows: A(−10,24,0), B(−10,34,0), C(30,34,0), D(30,24,0), E(−10,24,10), F(−10,34,0), G(30,34,10), H(30,24,10), I(−10,29,14), J(30,29,14), K(30,14,0), L(40,14,0), M(40,34,0), N(30,14,26), O(40,34,26), P(30,34,26), R(30,34,26).
A triangle mesh-based corner detection algorithm for catadioptric images
Published in The Imaging Science Journal, 2018
To avoid direct application of classical algorithms, catadioptric images are unwrapped into panoramic images using a polar-to-Cartesian coordinate conversion [16–19]. The advantage of this geometric transformation lies in reducing distortion and the non-uniformity of resolution. By treating the panoramic images as planar, conventional algorithms are then applied to detect visual features. However, this transformation does not remove image distortions completely [8, 10]. A similar approach is to generate perspective images from each region of the catadioptric image, which removes the nonlinear distortions caused by quadric mirrors [1, 20]. Nevertheless, distortions in the reconstructed perspective images strongly depend on the position and orientation of the projection plane, so the parameters of the transformation must be varied for different catadioptric cameras and object poses.
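As a rough illustration of the unwrapping step, the sketch below performs a nearest-neighbour polar-to-Cartesian conversion with NumPy; the mirror centre and the radial range are assumed to be known from a calibration of the specific catadioptric camera, and the function name is illustrative.

```python
import numpy as np

def unwrap_catadioptric(img, center, r_min, r_max, width=720):
    """Unwrap an omnidirectional (catadioptric) image into a panorama.

    A minimal polar-to-Cartesian sketch (nearest-neighbour sampling):
    each panorama column corresponds to an azimuth angle, each row to a
    radius between r_min and r_max around the mirror centre.
    """
    cx, cy = center
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    r = np.arange(r_min, r_max)
    # Grid of (radius, angle) pairs; rows index radius, columns index angle.
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    # Sample the source image at the computed Cartesian positions.
    return img[ys, xs]
```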
A clipping algorithm for real-scene 3D models
Published in International Journal of Digital Earth, 2023
Jianhua Chen, Xu Liu, Bingqian Wang, Jian Lu
Below, we compare our results with those of the SuperMap iDesktop 10i software. First, we draw a clipping boundary similar to Figure 14 (a1) in SuperMap iDesktop 10i. Second, we use this boundary to clip the holes in the model; the results are shown in Figure 17 (a). The results are not good: many parts of the clipped model lie outside the boundary line. This is because the clipping perspective of SuperMap iDesktop 10i is fixed: it always uses the horizontal plane as the projection plane. When the viewing direction of the model is adjusted to be perpendicular to the horizontal plane, the results shown in Figure 17 (b) are obtained; the clipped area of the model then roughly coincides with the red boundary line.
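To see why a fixed horizontal projection plane behaves this way, the sketch below (an illustration under stated assumptions, not SuperMap's implementation) drops the z coordinate of each mesh vertex and tests the resulting XY point against the boundary polygon with an even-odd ray-casting test; the vertices of a vertical facade collapse onto nearly the same footprint, so the clip only matches the drawn boundary when the model is viewed from directly above.

```python
import numpy as np

def inside_horizontal_clip(vertices, boundary_xy):
    """Test which mesh vertices fall inside a clipping boundary that has
    been projected onto the horizontal plane.

    A minimal sketch: the z coordinate is dropped, then an even-odd
    (ray-casting) point-in-polygon test is run in the XY plane.
    """
    pts = np.asarray(vertices, dtype=float)[:, :2]      # drop z
    poly = np.asarray(boundary_xy, dtype=float)
    inside = np.zeros(len(pts), dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edges straddling the point's y coordinate may cross the test ray.
        crosses = (y1 > pts[:, 1]) != (y2 > pts[:, 1])
        x_int = x1 + (pts[:, 1] - y1) * (x2 - x1) / (y2 - y1 + 1e-12)
        inside ^= crosses & (pts[:, 0] < x_int)
    return inside
```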