Methods of Digital Analysis and Interpretation
Published in Victor Raizer, Optical Remote Sensing of Ocean Hydrodynamics, 2019
Texture synthesis is a common computer graphics technique for creating large textures from small texture samples; it is used for texture mapping in surface and scene rendering applications. A synthesized texture should differ from the input texture samples while remaining perceptually identical in its texture characteristics. In computer vision, texture synthesis increases texture coherence, providing superior image quality and performance. Compared to texture classification and segmentation, texture synthesis poses a greater challenge for texture analysis because it requires a more detailed texture description. Applications of texture synthesis include image editing, image completion, video synthesis, and computer animation (Magnenat-Thalmann and Thalmann 1987). Texture synthesis algorithms use the following methods: (1) pixel-based sampling, (2) block sampling, (3) multiresolution sampling, (4) mosaic-based synthesis, (5) Markov-Gibbs random fields, (6) pyramid-based synthesis, and (7) Cut-Primed Smart Copying. Most synthesis approaches are based on Markov-Gibbs random field models of textures (Li 2009). A few recent algorithms are designed for texture synthesis on 3D surfaces.
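The pixel-based sampling of method (1) can be illustrated with a toy implementation that grows an output texture by copying, for each unfilled pixel, the centre of the best-matching neighbourhood found in the sample. This is a simplified sketch in the spirit of Efros-Leung synthesis; the function and parameter names are illustrative, not from the chapter.

```python
import numpy as np

def synthesize_pixel_based(sample, out_h, out_w, win=5, seed=0):
    """Grow an output texture pixel by pixel from a small sample
    (simplified pixel-based sampling sketch; names are illustrative)."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape
    out = np.zeros((out_h, out_w), dtype=sample.dtype)
    # Seed the output with a random 3x3 patch taken from the sample.
    sy, sx = rng.integers(0, h - 3), rng.integers(0, w - 3)
    out[:3, :3] = sample[sy:sy + 3, sx:sx + 3]
    known = np.zeros((out_h, out_w), dtype=bool)
    known[:3, :3] = True
    half = win // 2
    # Precompute every win x win candidate window from the sample.
    cand = np.lib.stride_tricks.sliding_window_view(sample, (win, win))
    cand = cand.reshape(-1, win, win).astype(float)
    # Fill in scanline order (a full implementation instead grows
    # along the boundary of the known region).
    for y in range(out_h):
        for x in range(out_w):
            if known[y, x]:
                continue
            # Collect the partially known neighbourhood around (y, x).
            nb = np.zeros((win, win))
            mask = np.zeros((win, win))
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < out_h and 0 <= xx < out_w and known[yy, xx]:
                        nb[dy + half, dx + half] = out[yy, xx]
                        mask[dy + half, dx + half] = 1.0
            # Masked SSD against all candidates; copy the best centre.
            d = ((cand - nb) ** 2 * mask).sum(axis=(1, 2))
            out[y, x] = cand[int(np.argmin(d)), half, half]
            known[y, x] = True
    return out
```

Practical implementations also randomize among near-best matches to avoid verbatim copying of the sample.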
Advances in Automated Restoration of Archived Video
Published in Filippo Stanco, Sebastiano Battiato, Giovanni Gallo, Digital Imaging for Cultural Heritage Preservation, 2017
However, removing line scratches is notoriously difficult. While convincing spatial interpolation can be achieved in a single frame, over several frames any error in reconstruction is clearly visible, since it is correlated with the same position in many frames. Example-based texture synthesis, famously introduced by Efros et al. [52], can achieve very good spatial reconstructions, but the temporal result is poor if the method is simply repeated on multiple frames. Most of the proposed approaches assume the absence of the original information in the degraded region, see for instance [7,44,51,53-57], and therefore propagate neighboring clean information into the degraded area. The neighboring information can be found in the same frame [7,44,53,54] or also in the preceding and successive frames by exploiting temporal coherency, as done in [51,55,56]. The propagation of information can be performed using inpainting methods, as in [53,54], or interpolation schemes. In [7], an autoregressive filter is used to predict the original image values within the degraded area. A cubic interpolation is used in [58], which also takes into account the texture near the degraded area (see also [57] for a similar approach), while in [44] different interpolation schemes are used for the low- and high-frequency components. Finally, in [55] each restored pixel is obtained by a linear regression using the block in the image that best matches the neighborhood of the degraded pixel. Figure 11.12 shows the problem of poor temporal consistency in a region of local motion; the autoregressive interpolator of Kokaram et al. [43] was used here.
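The autoregressive prediction idea of [7] and [43] can be sketched in a much-simplified 1-D form: fit AR coefficients on clean pixels beside a one-pixel vertical scratch, then predict across it from both sides. The function name, the row-wise model, and the left/right blend are illustrative assumptions, not the published interpolator, which is 2-D and far more careful.

```python
import numpy as np

def ar_fill_scratch(img, col, order=2):
    """Fill a one-pixel-wide vertical scratch at column `col` by a
    row-wise autoregressive prediction (toy sketch of the AR idea)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        row = out[y]
        # Least-squares AR fit on clean samples left of the scratch:
        # row[n] ~ coef[0]*row[n-1] + coef[1]*row[n-2] + ...
        xs = np.arange(order, col)
        A = np.column_stack([row[xs - k - 1] for k in range(order)])
        b = row[xs]
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Predict the scratch pixel from its left neighbours, then
        # blend with the mirror prediction from the right neighbours.
        left = sum(coef[k] * row[col - k - 1] for k in range(order))
        right = sum(coef[k] * row[col + k + 1] for k in range(order))
        out[y, col] = 0.5 * (left + right)
    return out
```

On smooth content the fitted coefficients reproduce the local trend, which is exactly why purely spatial fills look fine per frame yet flicker across frames when the prediction errors repeat at the same position.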
Image Descriptors and Features
Published in Manas Kamal Bhuyan, Computer Vision and Image Processing, 2019
In texture classification, the problem is to identify the given textured region from a given set of texture classes. Texture analysis algorithms extract discriminative features from each region to classify such texture patterns. Unlike texture classification, texture segmentation is concerned with automatically determining the boundaries between the various textured regions in an image. Both region-based and boundary-based methods can be employed to segment textured images.

Texture synthesis is the process of algorithmically constructing a large digital image from a small digital sample image by taking advantage of its structural content. Given a finite sample of some texture, the goal is to synthesize other samples of that texture.

As discussed in Chapter 1, shape determination from texture (shape from texture) is another important research area of computer vision. Texture pattern variations give cues for estimating the shape of a surface. For example, the texture gradient can be defined as the magnitude and direction of maximum change in the primitive size of the texture elements (texels), so texture gradient information can be used to determine the orientation of a surface in an image.

Three principal approaches are commonly used in image processing to describe the texture patterns of a region: statistical, structural, and spectral. Statistical approaches analyze the gray-level characteristics of textures as smooth, coarse, grainy, and so on. Structural techniques deal with the arrangement of image primitives; a particular texture region can be represented in terms of these primitives. Spectral techniques are based on the properties of the Fourier spectrum: the global periodicity occurring in an image is detected by examining the spectral characteristics of the Fourier spectrum. Let us now discuss some very important texture representation methods.
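The statistical approach can be made concrete with a grey-level co-occurrence matrix (GLCM), from which classic statistics such as contrast, energy, and homogeneity are computed. This is a minimal sketch; the quantisation scheme, offset, and feature set here are illustrative choices, not prescribed by the chapter.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Statistical texture description via a grey-level
    co-occurrence matrix (illustrative sketch)."""
    # Quantise the image to a small number of grey levels.
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    h, w = q.shape
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of grey levels at offset (dy, dx).
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()  # normalise to a joint probability
    i, j = np.indices((levels, levels))
    return {
        "contrast": float((p * (i - j) ** 2).sum()),      # high for coarse texture
        "energy": float((p ** 2).sum()),                  # high for uniform texture
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

A perfectly uniform region gives zero contrast and maximal energy, while a fine checkerboard drives contrast up, which is the discriminative behaviour a texture classifier relies on.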
Constructive Steganography Using Texture Synthesis
Published in IETE Technical Review, 2018
Zhenxing Qian, Nannan Huang, Sheng Li, Xinpeng Zhang
Texture synthesis is a technology that uses a texture pattern with limited content to generate a larger image with a similar appearance. Exemplar-based texture synthesis is a popular approach that resamples the source pattern to produce a synthesized image [14–16]. In [14] and [15], pixels in the source pattern are used to propagate the synthesized content; such pixel-by-pixel algorithms require a lot of time to synthesize each region. In [16], Efros and Freeman propose a fast synthesis algorithm named “image quilting”. This approach constructs the new texture by finding appropriate patches from the source pattern; the neighboring patches are then quilted along an optimal cut. An example of texture synthesis using the method in [16] is shown in Figure 1, in which (a) is a source pattern and (b) is a synthesized image of larger size.
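The "optimal cut" between overlapping patches in image quilting [16] is a minimum-cost seam through the overlap's squared-error surface, found by dynamic programming. The following sketch covers only that one step; the function name and the usual 8-connected transition rule are implementation choices, not code from the paper.

```python
import numpy as np

def min_cut_seam(overlap_err):
    """Return, per row, the column of the cheapest vertical seam
    through an overlap error surface (dynamic programming sketch)."""
    h, w = overlap_err.shape
    cost = overlap_err.astype(float).copy()
    # Accumulate minimal path cost from top to bottom; each pixel may
    # connect to the three pixels above it (8-connected seam).
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack the cheapest seam from the bottom row.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo = max(0, x - 1)
        seam.append(lo + int(np.argmin(cost[y, lo:min(w, x + 2)])))
    return seam[::-1]
```

Pixels on one side of the seam are kept from the existing texture and pixels on the other side from the new patch, so the transition follows locations where the two patches already agree.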
Structure constrained image completion by geometric transformation model and dynamic patches
Published in Journal of Modern Optics, 2019
Qiaochuan Chen, Guangyao Li, Li Xie, Qingguo Xiao, Mang Xiao
Exemplar-based methods sample pixels from a known region of the image and copy them into the damaged region. Efros and Leung (6) proposed a non-parametric method for texture synthesis in which the synthesis process grows a new image outward from an initial seed, one pixel at a time. Criminisi et al. (7) proposed a method using structure-based priority; however, owing to the greedy strategy in the searching process, this method can propagate incorrect texture into the damaged region. To address the filling-order issue, Drori et al. (8) used an iterative process that interleaves smooth reconstruction with the synthesis of image fragments by example, and Xu et al. (9) completed images using patch sparsity.
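The structure-based priority of Criminisi et al. (7) ranks patches on the fill front by P(p) = C(p) · D(p), a confidence term times a data term. The toy version below uses a simplified gradient-magnitude data term instead of the full isophote-normal product, and all names are illustrative.

```python
import numpy as np

def patch_priorities(known, img, half=4, alpha=255.0):
    """Criminisi-style fill priorities on the fill front
    (sketch with a simplified data term)."""
    h, w = known.shape
    conf = known.astype(float)
    # The fill front: unknown pixels with at least one known 4-neighbour.
    front = []
    for y in range(h):
        for x in range(w):
            if known[y, x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(0 <= yy < h and 0 <= xx < w and known[yy, xx]
                   for yy, xx in nbrs):
                front.append((y, x))
    gy, gx = np.gradient(img.astype(float))
    pri = {}
    for (y, x) in front:
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        C = conf[y0:y1, x0:x1].mean()              # confidence term
        D = np.hypot(gx[y, x], gy[y, x]) / alpha   # simplified data term
        pri[(y, x)] = C * D
    return pri
```

Filling the highest-priority patch first pulls strong edges into the hole before flat regions, which is what distinguishes this ordering from a purely greedy raster fill.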