WO2018176185A1 - Texture synthesis method and apparatus therefor - Google Patents

Texture synthesis method and apparatus therefor

Info

Publication number
WO2018176185A1
Authority
WO
WIPO (PCT)
Prior art keywords
texture
map
label map
label
target
Prior art date
Application number
PCT/CN2017/078248
Other languages
English (en)
French (fr)
Inventor
石华杰
周漾
黄惠
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Priority to US16/487,087 priority Critical patent/US10916022B2/en
Priority to PCT/CN2017/078248 priority patent/WO2018176185A1/zh
Publication of WO2018176185A1 publication Critical patent/WO2018176185A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Definitions

  • the present invention relates to the field of graphics and image processing, and in particular, to a texture synthesis method and apparatus therefor.
  • Texture synthesis technology aims to use a computer to synthesize texture images that meet users' requirements. It is widely used in photorealistic and non-photorealistic texture rendering and filling, and also has broad application prospects in image inpainting, artistic style transfer, fast transmission of compressed network data, and computer animation.
  • Sample-based texture synthesis can produce good results in many cases. However, when the original image contains structural information, consists of multiple materials, or exhibits complex textures such as non-uniform gradients, current sample-based texture synthesis techniques cannot synthesize the texture image well without control; more importantly, the automatically synthesized result cannot satisfy the specific needs of the user.
  • The technical problem to be solved by the present invention is to provide a texture synthesis method, and an apparatus therefor, that are easy to control during the texture synthesis process.
  • a technical solution adopted by the present invention is to provide a texture synthesis method, which includes the following sequential steps:
  • the step s2 specifically includes the following steps:
  • the step s3 specifically includes the following steps:
  • step s4 includes the following steps:
  • the step s5 further includes the following steps:
  • In step s5, if the texture feature distribution of the label map is determined to be inaccurate, the material textures are relabeled and the process returns to step s3.
  • the target label map in the step s6 includes texture distribution information of the target texture map.
  • step s6 includes the following steps:
  • the step s6 further includes the following steps:
  • the present invention also provides a texture synthesizing apparatus, comprising:
  • a label map generating unit for abstracting the input original image, extracting the feature vectors of the texture features, labeling the different material textures separately, selecting the texture features of the labeled regions to train the prediction algorithm and predict the unlabeled regions, and finally creating the label map according to the distribution of the texture features;
  • a feature judging unit, connected to the label map generating unit and configured to determine whether the texture feature distribution of the label map is accurate; if not, the material textures are relabeled and the label map generating unit is activated to retrain the prediction algorithm and regenerate the label map;
  • the synthesizing unit is connected to the label map generating unit, and is configured to synthesize the target texture map with the preset target label map based on the label map.
  • a color module for extracting a color histogram of the area, represented by a histogram
  • a filter bank response information module configured to extract filter group response information of the area, represented by a histogram
  • An edge information module configured to extract edge information of the area, represented by a histogram
  • the synthesizing unit includes:
  • the texture boundary optimization module extracts a distance offset map according to the label map and the target label map, and performs weight optimization on the edge of the target texture map according to the extracted distance offset map.
  • The beneficial effects of the invention are as follows. Compared with the prior art, during texture synthesis the present invention abstracts the original image to obtain the texture features of its material textures, labels the original image according to the differences in those texture features, classifies the original image with a trainable prediction algorithm, and obtains the label map of the original image through a graph cut model. It then determines whether the label map is accurate; if not, the above steps are repeated until the label map accurately reflects the distribution of the different material textures in the original image. Finally, the new texture is synthesized under the guidance of the target label map.
  • By judging the accuracy of the label map and iterating when it is inaccurate, the quality of texture synthesis can be effectively controlled, so that the final synthesized texture is controllable and meets the requirements. The method effectively saves labor cost, is efficient, and keeps the quality of texture synthesis under control.
  • the apparatus using the above method also has the same technical effect.
  • FIG. 1 is a block diagram of the basic steps of the texture synthesis method of the present invention;
  • FIG. 2 is a block diagram of the complete steps of the texture synthesis method of the present invention;
  • FIG. 3 is a block diagram of the basic structure of the texture synthesis apparatus of the present invention;
  • FIG. 4 is a block diagram of the complete structure of the texture synthesis apparatus of the present invention;
  • FIG. 5 is an original image input when implementing the texture synthesis method of the present invention;
  • FIG. 6 is a label map generated from FIG. 5;
  • FIG. 7 is a target label map input when implementing the texture synthesis method of the present invention;
  • FIG. 8 is a target texture map synthesized from FIG. 6 under the guidance of FIG. 7;
  • FIG. 9 is an original image input when implementing the texture synthesis method of the present invention;
  • FIG. 10 is a hand-drawn annotation map;
  • FIG. 11 is a label map generated from FIG. 9;
  • FIG. 12 is a target label map input when implementing the texture synthesis method of the present invention;
  • FIG. 13 is a target texture map synthesized from FIG. 10 under the guidance of FIG. 12;
  • FIG. 14 is a target texture map synthesized from FIG. 11 under the guidance of FIG. 12.
  • a texture synthesis method and an apparatus using the same according to the present invention are specifically described below with reference to FIGS. 1 through 14.
  • the texture synthesis method includes the following sequential steps:
  • The abstraction of the material textures is essentially an analysis of the material textures that extracts the corresponding texture features, so that the textures can be recognized and classified by a computer.
  • Intuitively, this step treats several neighboring pixels as one logical pixel and extracts features for each logical pixel of the original image (namely the three types of features described below: color histogram, filter bank response information, and edge information), so that each logical pixel is represented by a feature vector.
  • The regions of the different material textures are labeled to facilitate the subsequent training of the prediction algorithm and the judgment of whether the generated label map is accurate.
  • When labeling, the original image is used as the background, and different colors or numbers are marked in the corresponding regions according to the material textures, so as to associate the texture features of each material texture with the corresponding label.
  • The SLIC algorithm is used to segment the original image (as shown in FIG. 5 and FIG. 9) into a number of uniform superpixel blocks, and the texture features of each block are computed.
  • A superpixel segmentation turns an image that is originally pixel-level into a district-level map; from these regions, effective basic information such as color histograms and texture information can be conveniently extracted.
  • The advantage of using superpixels is that, on the one hand, the number of sample points is reduced and the algorithm is accelerated; on the other hand, a superpixel block reflects texture features more effectively than a single pixel.
  • The original image may also be abstracted and segmented into logical pixels with feature consistency using the SEEDS algorithm.
  • The texture synthesis method of the present invention can selectively set seed points (selected positions) in the original image and extract their texture features. Having the prediction algorithm refer to the texture rules of the seed points helps to quickly complete the classification and labeling of the unlabeled regions of the original image.
  • a random forest algorithm is used as the prediction algorithm.
  • As an ensemble of decision trees, a random forest classifies texture features with high accuracy, is fast and stable, produces predictable results, can handle very high-dimensional data, and requires no feature selection. For current texture image processing it has great advantages over other algorithms.
  • The significance of the label map is that it describes well the distribution of the different material textures in the texture image (referring to FIG. 6 and FIG. 11, the label map associates different colors or marks with the different material textures in the original image and presents them relatively intuitively).
  • According to the information in the label map, the computer system can accurately identify the distribution of the material textures in the original image, which facilitates classifying and applying them, for example for filling, blurring, or texture transformation.
  • The object of the present invention is to control the texture synthesis process based on the label map, so the quality of the generated label map directly determines the effect of the final texture synthesis.
  • The judgment of step s5 makes the generation of the label map interactive: the accuracy of the produced label map is judged, keeping the label map in a controlled state, and the iterative step of repeating the classification and refinement on the existing basis further improves the accuracy of the texture feature distribution in the label map, so that the finally generated label map satisfies the user's needs.
  • In this embodiment, the accuracy of the label map is checked automatically by the computer based on the texture features.
  • Alternatively, manual intervention may be performed: an operator compares the label map with the original image and decides whether to iterate (classify and optimize on the existing basis) to regenerate the label map, or to use the current label map for the next step of texture synthesis.
  • the next step is texture synthesis.
  • The target label map (as shown in FIG. 7 and FIG. 12) contains the expected type information and structure information of the texture feature distribution of the synthesis target; on this basis, the corresponding regions are filled with the material textures corresponding to the texture features of the label map, finally synthesizing the target texture map (as shown in FIG. 8 and FIG. 14).
  • The use of the target label map further strengthens the control over texture synthesis, so that the target texture map better matches the user's expectation and thus better satisfies the user's needs.
  • step s2 specifically includes the following steps:
  • Extracting the feature vectors of the above types helps the computer system to recognize, analyze and edit the texture features, and can improve the accuracy of texture data processing.
  • In the interactive iterative segmentation method, the texture features extracted for each superpixel block (logical pixel) of the texture image include the following three types, so that the computer can recognize the material textures: color histogram, filter bank response information, and edge information.
  • Color histogram: the color values of an image are its most basic information. A color histogram describes the proportion of each color in the whole image without caring about its spatial position, and is particularly suitable for describing images that are difficult to segment automatically.
  • Filter bank response information is obtained using the MR8 filter bank.
  • The MR8 filter bank contains both isotropic and anisotropic filters, overcoming the weak response of traditional rotation-invariant filter banks, and its response is only 8-dimensional, which greatly reduces the complexity of data processing.
  • Edge information is obtained using the gPb detector.
  • The gPb detector takes image brightness, texture, and color into account and combines local and global image information, making it a high-performance contour detector. Moreover, it treats edge detection as a per-pixel classification problem and trains a classifier (i.e. a prediction algorithm, such as a random forest) on natural images with manually annotated boundaries, which weakens short noisy edges and yields longer, more salient image boundaries (large grayscale values).
  • The above three types of features are all represented as histograms, and finally the three histograms are concatenated into one long vector as the final feature vector of each superpixel block.
  • step s3 further comprises the following steps:
  • the marked material texture is used as a seed point; by selecting the seed point, the subsequent generated label map can more accurately reflect the distribution information of the texture feature and improve the controllability of the texture synthesis.
  • step s4 comprises the following steps:
  • the classifier (prediction algorithm) is trained according to the selected super pixel block (labeled area), and the classifier is preferably a random forest integrated learning method, and the prediction algorithm is mature and reliable. Finally, the classifier is used to classify other unselected super pixel blocks.
  • The classification result is optimized using the Graph Cut algorithm, with an energy function of the form E(L) = Σ_p D_p(l_p) + λ Σ_(p,q) w_pq · V_pq(l_p, l_q).
  • L_adj(p, q) denotes the number of adjacent pixel pairs between neighboring superpixels p and q.
  • The threshold a = 10 in this embodiment.
  • the prediction algorithm may also be trained using a Gradient Boost Decision Tree and predict (classify) the unselected regions. At the same time, more kinds of features can be extracted into the final feature vector.
  • step s5 further includes the following steps:
  • In step s51, if the texture feature distribution of the label map is determined to be inaccurate, the material textures are relabeled and the process returns to step s3.
  • This step allows editing or adjusting the result obtained in step s5.
  • When the result of step s4 is inaccurate, it may mean that the labeling of the material textures in step s25 was inaccurate, and relabeling can effectively correct the problem.
  • the target label map in step s6 contains texture distribution information for the target texture map.
  • step s6 includes the following steps:
  • a self-adjusting texture optimization method is used to generate a target texture map.
  • the label map is added as an additional channel to the texture image synthesis, and the difference between the label map and the target label map is calculated as an additional penalty item added to the texture optimization, as follows:
  • Texture optimization (originally proposed by Kwatra et al.) measures the similarity between the target image T and the sample S by minimizing the distance between the target and all corresponding overlapping local patches of the sample, i.e. E_t(T; S) = Σ_i d(t_i, s_i), where:
  • t_i denotes an N×N block in the target image T,
  • the upper-left corner of the block corresponds to the i-th pixel of the texture, and
  • s_i is the block in the sample S most similar to t_i.
  • N = 10 is set in the program.
  • The distance between blocks is the sum of squared differences of color values: d(t_i, s_i) = Σ_j (t_i(j) − s_i(j))².
  • The distance metric above is modified as follows: given the label map L_S corresponding to the original image, the metric is modified to constrain the texture synthesis with the target label map L_T provided by the user.
  • The first part of the modified formula is the sum of squared color differences.
  • The second part is the penalty term, which measures the difference between the label map L_S of the source image and the target label map L_T over the corresponding local block.
  • step s6 further comprises the following steps:
  • The advantage of a discrete label map is that it is clear and intuitive.
  • However, for textures such as weathering and rust there is no precise edge between different materials, whereas the label map obtained by segmentation does have precise edges.
  • For texture images whose boundaries have gradual transitions, the present invention therefore
  • uses a distance offset map to weight the optimization at the edges, lowering the weight of the penalty term near the boundary so as to reduce the constraint near the boundary.
  • The boundary is first extracted as a feature line from the original image and the generated label map.
  • A distance transform map is then generated from the feature line.
  • The "continuous" grayscale image obtained by the distance transform is added to the synthesis control as the weight map of the constraint, and the distance metric is redefined accordingly.
  • w_S and w_T are the source weight and the target weight, respectively. When the image blocks t_i and s_i lie near a boundary, w_T and w_S become very small (approximately 0 on the boundary), so the penalty term becomes very small and the constraint is greatly reduced; blocks at boundaries in the target image
  • are then more likely to find color-consistent pixel blocks from the gradual regions near the boundaries of the source image, making the synthesis closer to the original image and more natural.
  • FIG. 10 is an annotation map hand-drawn by the user with the existing tool software Photoshop, taking about 10 minutes.
  • FIG. 11 was generated by the aforementioned interactive iterative segmentation algorithm, with the produced label map edited over a total of 5 iterations, taking about 3 minutes. Judging from the classification results, the label map generated by the method of the present invention (FIG. 11) is close to the manually annotated one (FIG. 10). More importantly, it can likewise synthesize a very satisfactory result in the end, as shown in FIG. 12 and FIG. 13.
  • The label map production in the texture synthesis method of the present invention can thus generate label maps simply and efficiently.
  • The interactive iterative image segmentation method avoids the drawback that judging whether a texture is suitable requires the support of a huge database.
  • The method is well suited for users to obtain label maps quickly and accurately; even for complex texture images, label maps can be generated efficiently and intuitively.
  • For edge-gradient textures (such as weathering and rust), the present invention lowers the weight of the penalty term near the boundary to achieve control over the edge-gradient texture.
  • The application of the texture synthesis technology of the present invention is easy to extend: simply by replacing the target label map, it can be applied to scenarios such as image inpainting, background reconstruction, and 3D model texture synthesis.
  • the present invention also provides a texture synthesizing apparatus, comprising:
  • The label map generating unit is configured to abstract the input original image, extract the feature vectors of the texture features, label the different material textures separately, select the texture features of the labeled regions to train the prediction algorithm, predict the unlabeled regions, and finally create a label map according to the distribution of the texture features.
  • The feature judging unit is connected to the label map generating unit and determines whether the texture feature distribution of the label map is accurate; if not, the material textures are relabeled and the label map generating unit is activated to retrain the prediction algorithm and regenerate the label map.
  • The synthesizing unit is connected to the label map generating unit and is configured to synthesize the target texture map based on the label map and the preset target label map.
  • the label map generating unit further includes the following feature extracting module:
  • a color module that extracts the color histogram of a region, represented as a histogram;
  • a filter bank response information module that extracts the filter bank response information of a region, represented as a histogram;
  • an edge information module that extracts the edge information of a region, represented as a histogram.
  • the synthesis unit includes:
  • the texture boundary optimization module extracts the distance offset map according to the label map and the target label map, and performs weight optimization on the edge of the target texture map according to the extracted distance offset map.
  • Different from the prior art, the invention uses the label map of the original image to guide the texture synthesis process. Since the process of producing the label map is controllable, the texture synthesis is also in a controlled state, which effectively improves the accuracy and efficiency of the computer when processing complex texture information such as images composed of multiple materials or containing non-uniform gradients. Meanwhile, during the production of the label map, the accuracy of its texture features is judged, and label maps of low accuracy are re-abstracted to make the classification of the texture features more accurate; this interactive, iterative approach improves the controllability of the label map generation process, so that the finally generated texture synthesis image accurately meets the user's requirements, achieving the purpose of precisely controlling sample-based texture synthesis.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A texture synthesis method and an apparatus using the method. Because the method uses a source label map and a target label map to jointly guide the texture synthesis process, the texture synthesis remains in a controlled state, effectively improving the accuracy and efficiency of the computer when processing complex textures composed of multiple materials or containing non-uniform gradients. Meanwhile, during the production of the label map, the accuracy of its texture features is judged, and label maps of low accuracy are re-abstracted and re-segmented, making the classification of the texture features more accurate and improving the accuracy of the label map generation process. An apparatus using the method has the same technical effect.

Description

TEXTURE SYNTHESIS METHOD AND APPARATUS THEREFOR
TECHNICAL FIELD
The present invention relates to the field of graphics and image processing, and in particular to a texture synthesis method and an apparatus therefor.
BACKGROUND
With the progress of computer graphics and image processing technology, texture synthesis is increasingly applied in computer image processing. Texture synthesis technology aims to use a computer to synthesize texture images that meet users' requirements, and is widely used in photorealistic and non-photorealistic texture rendering and filling. It also has broad application prospects in image inpainting, artistic style transfer, fast transmission of compressed network data, and computer animation.
Sample-based texture synthesis can produce good results in many cases. However, when the original image contains structural information, consists of multiple materials, or exhibits complex textures such as non-uniform gradients, current sample-based techniques cannot synthesize the texture image well without control; more importantly, the automatically synthesized result cannot satisfy the specific needs of the user.
SUMMARY
The main technical problem to be solved by the present invention is to provide a texture synthesis method, and an apparatus therefor, that is easy to control during the texture synthesis process.
To solve the above technical problem, one technical solution adopted by the present invention is to provide a texture synthesis method comprising the following sequential steps:
s1. Abstract the material textures in the original image and analyze their texture features;
s2. Extract feature vectors of the texture features, and label the different material textures separately;
s3. Select the texture features corresponding to the labeled regions of the original image to train a prediction algorithm;
s4. Use the trained prediction algorithm to predict the unlabeled regions, and create a label map according to the predicted distribution of the texture features;
s5. Judge whether the texture feature distribution of the label map is accurate; if accurate, proceed to step s6; if not, return to step s3;
s6. Based on the label map, synthesize the target texture map under the guidance of a preset target label map.
Wherein, step s2 specifically comprises the following steps:
s21. Extract a color histogram of the texture features, represented as a histogram;
s22. Extract filter bank response information of the texture features, represented as a histogram;
s23. Extract edge information of the texture features, represented as a histogram;
s24. Concatenate the histograms of the above steps to obtain the feature vector;
s25. Label the different material textures separately according to the differences in their texture features.
Wherein, step s3 specifically comprises the following steps:
s31. Take the labeled material textures as seed points;
s32. Select the labeled regions corresponding to the seed points to form a training set;
s33. Extract the texture features of the training set to train a random forest model.
Wherein, step s4 comprises the following steps:
s41. Use the random forest model to predict the unlabeled regions;
s42. Use a graph cut model to optimize the predicted preliminary label map and generate the label map.
Wherein, step s5 further comprises the following step:
s51. If the texture feature distribution of the label map is judged to be inaccurate, relabel the material textures and return to step s3.
Wherein, the target label map in step s6 contains the texture distribution information of the target texture map.
Wherein, step s6 comprises the following steps:
s61. Add the label map and the target label map as an additional channel;
s62. Generate the target texture map using a self-tuning texture optimization method.
Wherein, step s6 further comprises the following step:
s63. Use a distance offset map to perform weighted optimization on the edges of the target texture map.
To solve the above technical problem, the present invention further provides a texture synthesis apparatus, comprising:
a label map generating unit, configured to abstract and analyze the input original image, extract feature vectors of the texture features, label the different material textures separately, select the texture features of the labeled regions to train a prediction algorithm and predict the unlabeled regions, and finally create a label map according to the distribution of the texture features;
a feature judging unit, connected to the label map generating unit and configured to judge whether the texture feature distribution of the label map is accurate; if not, relabel the material textures and activate the label map generating unit to retrain the prediction algorithm and regenerate the label map;
a synthesizing unit, connected to the label map generating unit and configured to synthesize the target texture map based on the label map under the guidance of a preset target label map.
Wherein, the label map generating unit comprises:
a color module, configured to extract a color histogram of a region, represented as a histogram;
a filter bank response information module, configured to extract filter bank response information of a region, represented as a histogram;
an edge information module, configured to extract edge information of a region, represented as a histogram;
and the synthesizing unit comprises:
an additional channel, for loading the label map and the target label map;
a texture boundary optimization module, which extracts distance offset maps from the label map and the target label map respectively, and performs weighted optimization on the edges of the target texture map according to the extracted distance offset maps.
The beneficial effects of the present invention are as follows. Compared with the prior art, during texture synthesis the present invention abstracts the original image to obtain the texture features of its material textures, labels the original image according to the differences in those texture features, classifies the original image with a trainable prediction algorithm, and obtains the label map of the original image through a graph cut model. It then judges whether the label map is accurate; if not, the above steps are repeated until the label map accurately reflects the distribution of the different material textures in the original image. Finally, a new texture is synthesized under the guidance of the target label map. By judging the accuracy of the label map and iterating when it is inaccurate, the quality of texture synthesis can be effectively controlled, so that the final synthesized texture is controllable and meets the requirements. The method effectively saves labor cost, is efficient, and keeps the quality of texture synthesis under control. An apparatus using the above method has the same technical effect.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of the basic steps of the texture synthesis method of the present invention;
FIG. 2 is a block diagram of the complete steps of the texture synthesis method of the present invention;
FIG. 3 is a block diagram of the basic structure of the texture synthesis apparatus of the present invention;
FIG. 4 is a block diagram of the complete structure of the texture synthesis apparatus of the present invention;
FIG. 5 is an original image input when implementing the texture synthesis method of the present invention;
FIG. 6 is a label map generated from FIG. 5;
FIG. 7 is a target label map input when implementing the texture synthesis method of the present invention;
FIG. 8 is a target texture map synthesized from FIG. 6 under the guidance of FIG. 7;
FIG. 9 is an original image input when implementing the texture synthesis method of the present invention;
FIG. 10 is a hand-drawn annotation map;
FIG. 11 is a label map generated from FIG. 9;
FIG. 12 is a target label map input when implementing the texture synthesis method of the present invention;
FIG. 13 is a target texture map synthesized from FIG. 10 under the guidance of FIG. 12;
FIG. 14 is a target texture map synthesized from FIG. 11 under the guidance of FIG. 12.
DETAILED DESCRIPTION
A texture synthesis method provided by the present invention, and an apparatus using the method, are described below with reference to FIG. 1 to FIG. 14.
As shown in FIG. 1, the texture synthesis method comprises the following sequential steps:
s1. Abstract the material textures in the original image and analyze their texture features.
In this step, the abstraction of the material textures is essentially an analysis of the material textures that extracts the corresponding texture features, so that the material textures can be recognized and classified by a computer.
s2. Extract feature vectors of the texture features, and label the different material textures separately.
Intuitively, this step treats several neighboring pixels as one logical pixel and extracts features for each logical pixel of the original image (namely the three types of features described below: color histogram, filter bank response information, and edge information), so that each logical pixel is represented by a feature vector. Meanwhile, labeling the regions of the different material textures facilitates the subsequent training of the prediction algorithm and the judgment of whether the generated label map is accurate.
When labeling the material textures of the original image, the original image is generally used as the background, and different colors or numbers are marked in the corresponding regions according to the material textures, so as to associate the texture features of each material texture with the corresponding label.
In this embodiment, the SLIC algorithm is used to segment the original image (as shown in FIG. 5 and FIG. 9) into a number of uniform superpixel blocks, and the texture features of each block are computed. A superpixel segmentation turns an image that is originally pixel-level into a district-level map; from these regions, effective basic information such as color histograms and texture information can be conveniently extracted. The advantage of using superpixels is that, on the one hand, the number of sample points is reduced and the algorithm is accelerated; on the other hand, a superpixel block reflects texture features more effectively than a single pixel.
In other embodiments, the SEEDS algorithm may also be used to abstract the original image and segment it into logical pixels with feature consistency.
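The "logical pixel" abstraction above can be illustrated with a minimal, stdlib-only sketch. For brevity it replaces SLIC/SEEDS with uniform square blocks (an assumption made purely for illustration; real superpixels follow image boundaries) and summarizes each block by its mean value, the kind of per-region statistic the method computes:

```python
# Hedged sketch: approximate the "logical pixel" abstraction with uniform
# square blocks instead of SLIC/SEEDS superpixels. Each block is summarized
# by its mean value, standing in for the per-superpixel statistics.

def block_abstract(image, block):
    """image: 2D list of grayscale values; block: side length of a block.
    Returns a 2D list of per-block mean values ("logical pixels")."""
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

# 4x4 image with a bright left half and a dark right half
img = [[200, 200, 10, 10]] * 4
logical = block_abstract(img, 2)   # two "materials" survive the abstraction
```

The reduction from 16 pixels to 4 logical pixels mirrors the stated advantage of superpixels: fewer sample points while still reflecting the texture regions.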
s3. Select the texture features corresponding to the labeled regions of the original image to train a prediction algorithm.
s4. Use the trained prediction algorithm to predict the unlabeled regions, and create a label map according to the predicted distribution of the texture features.
By training the prediction algorithm on selected labeled regions and then using it to predict the unlabeled regions, the texture synthesis method of the present invention can selectively set seed points (selected positions) in the original image and extract their texture features. Having the prediction algorithm refer to the texture rules of the seed points helps to quickly complete the classification and labeling of the unlabeled regions of the original image.
In this embodiment, a random forest algorithm is used as the prediction algorithm. As an ensemble of decision trees, a random forest classifies texture features with high accuracy, is fast and stable, produces predictable results, can handle very high-dimensional data, and requires no feature selection. For current texture image processing it has great advantages over other algorithms.
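The role of the random forest here is that many weak classifiers vote on the texture label of each superpixel and the vote share serves as a confidence. The toy sketch below is a hypothetical stand-in (threshold stumps over invented feature values), not the actual trained forest; a real implementation would use a library random forest trained on the concatenated histogram features:

```python
# Hedged toy sketch: a "forest" of decision stumps voting on a label.
# The stumps, dimensions, and thresholds are invented for illustration.
from collections import Counter

def stump(dim, thresh):
    # A weak classifier: label 1 if feature `dim` exceeds `thresh`, else 0.
    return lambda fv: 1 if fv[dim] > thresh else 0

forest = [stump(0, 0.5), stump(1, 0.3), stump(2, 0.9)]

def predict(forest, fv):
    votes = Counter(tree(fv) for tree in forest)
    label, count = votes.most_common(1)[0]
    confidence = count / len(forest)   # analogous to P(l_p | f_p) used later
    return label, confidence

label, conf = predict(forest, [0.8, 0.6, 0.2])  # two of three stumps vote 1
```

The per-superpixel confidence is exactly what the subsequent graph cut step consumes as its data term.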
After the trained random forest predicts the unlabeled regions and records, for each superpixel p, the confidence P(l_p|f_p) that it belongs to label l_p, a Graph Cut model is used to optimize the classification result, generating the label map for this round (as shown in FIG. 6 and FIG. 11).
s5. Judge whether the texture feature distribution of the label map is accurate; if accurate, proceed to step s6; if not, return to step s3.
Specifically, the significance of the label map is that it describes well the distribution of the different material textures in the texture image (referring to FIG. 6 and FIG. 11, the label map associates different colors or marks with the different material textures in the original image and presents them relatively intuitively). According to the information in the label map, a computer system can accurately identify the distribution of the material textures in the original image, which facilitates classifying and applying them, for example for filling, blurring, or texture transformation. The object of the present invention is to control the texture synthesis process based on the label map, so the quality of the generated label map directly determines the effect of the final texture synthesis.
The judgment in step s5 makes the generation of the label map interactive: the accuracy of the produced label map is judged, keeping it in a controlled state, and the iterative step of repeating the classification and refinement on the existing basis further improves the accuracy of the texture feature distribution in the label map, so that the finally generated label map satisfies the user's needs. In this embodiment, the accuracy of the label map is checked automatically by the computer based on the texture features. In other embodiments, manual intervention is also possible: an operator compares the label map with the original image and decides whether to iterate (classify and optimize on the existing basis) to regenerate the label map, or to use the current label map for the next step of texture synthesis.
s6. Based on the label map, synthesize the target texture map under the guidance of the preset target label map.
In the synthesis step, the target label map (as shown in FIG. 7 and FIG. 12) contains the expected type information and structure information of the texture feature distribution of the synthesis target; on this basis, the corresponding regions are filled with the material textures corresponding to the texture features of the label map, finally synthesizing the target texture map (as shown in FIG. 8 and FIG. 14). The use of the target label map further strengthens the control over texture synthesis, so that the target texture map better matches the user's expectation and thus better satisfies the user's needs.
As shown in FIG. 2, in a preferred embodiment, step s2 specifically comprises the following steps:
s21. Extract a color histogram of a region, represented as a histogram;
s22. Extract filter bank response information of a region, represented as a histogram;
s23. Extract edge information of a region, represented as a histogram;
s24. Concatenate the histograms of the above steps to obtain the feature vector;
s25. Label the different material textures separately according to the differences in their texture features.
Extracting feature vectors of the above types helps the computer system recognize, analyze, and edit the texture features, and can improve the accuracy of texture data processing.
Specifically, in the interactive iterative segmentation method of the present invention, the texture features extracted for each superpixel block (logical pixel) of the texture image include the following three types, so that the computer can recognize the material textures: color histogram, filter bank response information, and edge information.
Color histogram: the color values of an image are its most basic information. A color histogram describes the proportion of each color in the whole image without caring about its spatial position, and is particularly suitable for describing images that are difficult to segment automatically.
Filter bank response information is obtained with the MR8 filter bank. The MR8 filter bank contains both isotropic and anisotropic filters, overcoming the weak response of traditional rotation-invariant filter banks, and its response is only 8-dimensional, which greatly reduces the complexity of data processing.
Edge information is obtained with the gPb detector. The gPb detector takes image brightness, texture, and color into account and combines local and global image information, making it a high-performance contour detector. Moreover, it treats edge detection as a per-pixel classification problem and trains a classifier (i.e. a prediction algorithm, such as a random forest) on natural images with manually annotated boundaries, which weakens short noisy edges and yields longer, more salient image boundaries (large grayscale values).
All three types of features are represented as histograms, and finally the three histograms are concatenated into one long vector as the final feature vector of each superpixel block.
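The histogram concatenation can be sketched in a few lines. The bin counts and the stand-in values for the MR8 and gPb channels below are invented for illustration; only the structure (three normalized histograms joined into one long vector) follows the text:

```python
# Hedged sketch of the per-superpixel feature vector: three normalized
# histograms (color, filter responses, edge strength) concatenated into
# one long vector. Bin counts and values are illustrative only.

def histogram(values, bins, lo, hi):
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

color_hist  = histogram([10, 20, 200, 210], bins=4, lo=0, hi=256)
filter_hist = histogram([0.1, 0.9], bins=2, lo=0, hi=1)  # stand-in for MR8 responses
edge_hist   = histogram([0.05, 0.95], bins=2, lo=0, hi=1)  # stand-in for gPb output

feature_vector = color_hist + filter_hist + edge_hist  # one long vector
```

The resulting vector is what the random forest is trained on and what the EMD smoothness term later compares between adjacent superpixels.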
In a preferred embodiment, step s3 further comprises the following steps:
s31. Take the labeled material textures as seed points; selecting the seed points allows the subsequently generated label map to reflect the distribution of the texture features more accurately, improving the controllability of texture synthesis.
s32. Select the labeled regions corresponding to the seed points to form a training set;
s33. Extract the texture features of the training set to train a random forest model.
Preferably, step s4 comprises the following steps:
s41. Use the random forest model to predict the unselected regions;
s42. Use a graph cut model to optimize the predicted preliminary label map and generate the label map.
Based on the above three feature vectors, a classifier (prediction algorithm) is trained on the selected, already labeled superpixel blocks (labeled regions). The classifier is preferably the Random Forest ensemble learning method, a mature and reliable prediction algorithm. Finally, the classifier is used to classify the remaining, unselected superpixel blocks.
Specifically, after the random forest classification is complete, the Graph Cut algorithm is used to optimize the classification result, with the following energy function:
E(L) = Σ_p D_p(l_p) + λ Σ_(p,q) w_pq · V_pq(l_p, l_q)
where the data term is D_p(l_p) = 1 − P(l_p|f_p), with P(l_p|f_p) the probability (confidence) that superpixel p is classified as l_p; the smoothness term V_pq(l_p, l_q) = D_EMD(f_p, f_q) is the cost incurred when adjacent superpixels are assigned different labels, using the EMD (Earth Mover's Distance) between the feature vectors of superpixels p and q. λ = 1 is set in all experiments of the present invention. The weight coefficient w_pq is related to the length of the adjacent boundary between superpixels, chosen by the formula:
[equation image: formula for w_pq]
where L_adj(p, q) is the number of adjacent pixel pairs between neighboring superpixels p and q, and the threshold a = 10 in this embodiment.
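The graph-cut energy can be evaluated directly for a toy labeling. The sketch below assumes invented confidences and edge weights, and uses a plain 0/1 label-disagreement cost as a stand-in for the EMD smoothness term:

```python
# Hedged, stdlib-only sketch of evaluating
#   E(L) = sum_p D_p(l_p) + lambda * sum_(p,q) w_pq * V_pq(l_p, l_q)
# with D_p(l_p) = 1 - P(l_p | f_p) as in the text. The smoothness cost is a
# 0/1 disagreement term standing in for the EMD between feature histograms.

def energy(labels, probs, edges, lam=1.0):
    """labels[p]: label of superpixel p; probs[p][l]: confidence P(l | f_p);
    edges: list of (p, q, w_pq) superpixel adjacencies."""
    data = sum(1.0 - probs[p][labels[p]] for p in range(len(labels)))
    smooth = sum(w for p, q, w in edges if labels[p] != labels[q])
    return data + lam * smooth

probs = [{0: 0.9, 1: 0.1},   # superpixel 0: confidently label 0
         {0: 0.2, 1: 0.8},   # superpixel 1: confidently label 1
         {0: 0.6, 1: 0.4}]   # superpixel 2: leaning label 0
edges = [(0, 1, 0.5), (1, 2, 0.5)]

e_mixed = energy([0, 1, 0], probs, edges)  # low data cost, two cut edges
e_flat  = energy([0, 0, 0], probs, edges)  # no cut edges, higher data cost
```

A graph-cut solver searches for the labeling minimizing this energy; here the mixed labeling pays for its two cut edges while the flat one pays in data cost.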
In other embodiments, the prediction algorithm may also be a Gradient Boost Decision Tree, trained and then used to predict (classify) the unselected regions. More kinds of features may also be extracted and added to the final feature vector.
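The EMD used as the smoothness cost between superpixel feature histograms has a simple closed form in one dimension: for normalized 1D histograms with unit ground distance, it reduces to the L1 distance between cumulative distributions. A minimal sketch, with invented toy histograms:

```python
# Hedged sketch of the EMD smoothness cost: for 1D normalized histograms
# with unit ground distance, the Earth Mover's Distance equals the L1
# distance between the cumulative distributions.
from itertools import accumulate

def emd_1d(h1, h2):
    c1, c2 = accumulate(h1), accumulate(h2)
    return sum(abs(a - b) for a, b in zip(c1, c2))

h_a = [0.7, 0.2, 0.1]   # feature histogram of superpixel p (toy values)
h_b = [0.1, 0.2, 0.7]   # feature histogram of superpixel q (toy values)
d = emd_1d(h_a, h_b)    # cost V_pq paid when p and q get different labels
```

Dissimilar histograms yield a large cost, so the graph cut resists splitting visually similar neighbors into different labels.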
As shown in FIG. 1, step s5 further comprises the following step:
s51. If the texture feature distribution of the label map is judged to be inaccurate, relabel the material textures and return to step s3.
This step allows editing or adjusting the result obtained in step s5: when the result of step s4 is inaccurate, it may mean that the labeling of the material textures in step s25 was inaccurate, and relabeling can effectively correct the problem.
In a preferred embodiment, the target label map in step s6 contains the texture distribution information of the target texture map.
Wherein, step s6 comprises the following steps:
s61. Add the label map and the target label map as an additional channel;
s62. Generate the target texture map using a self-tuning texture optimization method.
In concrete operation, based on the self-tuning texture optimization method, the label map is added as an additional channel to the synthesis of the texture image, and the difference between the label map and the target label map is added to the texture optimization as an extra penalty term, as follows.
Texture optimization (originally proposed by Kwatra et al.) measures the similarity between the target image T and the sample S by minimizing the distance between the target and all corresponding overlapping local patches of the sample, i.e.:
E_t(T; S) = Σ_i d(t_i, s_i)
where t_i is an N×N block in the target image T whose upper-left corner corresponds to the i-th pixel of the texture, and s_i is the block in the sample S most similar to t_i. N = 10 is set in the program. The distance between blocks is the sum of squared differences of color values:
d(t_i, s_i) = ‖t_i − s_i‖² = Σ_j (t_i(j) − s_i(j))²
The distance metric above is modified as follows: given the label map L_S corresponding to the original image, the metric is modified to constrain the texture synthesis with the user-provided target label map L_T:
d(t_i, s_i) = ‖t_i − s_i‖² + λ Σ_j C · B(L_T(t_i(j)), L_S(s_i(j)))
The first part of the formula is the sum of squared color differences. The second part is the penalty term, which measures the difference between the label map L_S of the source image and the target label map L_T over the corresponding local block. λ adjusts the weight between the color term and the penalty term, and λ = 0.9 is used in all experiments of the present invention; C is a constant, set to C = 100; B(x, y) is a binary function that takes 0 only when x = y and 1 otherwise.
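The modified patch distance can be sketched directly. The assumption below, following the text, is that B(x, y) is 0 when the target and source labels agree and 1 otherwise, with λ = 0.9 and C = 100 as reported; patches and labels are toy values:

```python
# Hedged sketch of the modified patch distance: squared color difference
# plus a label-mismatch penalty, with lambda = 0.9 and C = 100 per the text.

def patch_distance(t, s, lt, ls, lam=0.9, C=100):
    """t, s: flattened color patches; lt, ls: their per-pixel labels."""
    color = sum((a - b) ** 2 for a, b in zip(t, s))
    penalty = sum(C for x, y in zip(lt, ls) if x != y)  # sum of C * B(x, y)
    return color + lam * penalty

t, s = [10, 10, 20, 20], [10, 12, 20, 18]
same   = patch_distance(t, s, [1, 1, 2, 2], [1, 1, 2, 2])  # labels agree everywhere
differ = patch_distance(t, s, [1, 1, 2, 2], [1, 2, 2, 2])  # one label mismatch
```

A single disagreeing label pixel dwarfs the color term, which is what steers the synthesis toward the layout of the target label map.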
In a preferred embodiment, step s6 further comprises the following step:
s63. Use a distance offset map to apply weighted optimization to the edges of the target texture map.
In concrete operation, the advantage of a discrete label map is that it is clear and intuitive. For textures such as weathering or rust, however, there is no truly precise edge between different materials, or between different degrees of weathering of the same material, whereas the label map obtained by segmentation does have precise edges. To resolve this contradiction, for texture images whose boundaries exhibit gradual transitions, the present invention uses a distance offset map to apply weighted optimization at the edges when synthesizing the target texture map, lowering the weight of the penalty term near boundaries and thereby reducing the control exerted near them.
First, boundaries are extracted from the original image and the generated label map as feature lines. A distance transform map is then generated from the feature lines. Finally, the "continuous" gray-scale map obtained from the distance transform is used as the weight map of the constraint term and added to the synthesis control. The new distance measure is then defined as follows:
$$D'(t_i, s_i) = \sum_{u} \lVert t_i(u) - s_i(u) \rVert^2 + \lambda \sum_{u} w_T(u)\, w_S(u)\, C \cdot B\big(L_T(t_i, u),\, L_S(s_i, u)\big)$$
where w_S and w_T are the source weight and the target weight, respectively, read from the distance transform maps of the source and target label maps. Consequently, when the patches t_i and s_i lie near a boundary, w_T and w_S become very small (approximately 0 on the boundary itself), the penalty term becomes very small and the constraint is greatly relaxed; patches at boundaries in the target image then tend to draw color-consistent pixel blocks from the gradual-transition regions near boundaries in the source image, making the synthesis result closer to the original and more natural.
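The weight-map construction described above might be sketched as follows, assuming SciPy's Euclidean distance transform and an illustrative linear falloff; the patent does not specify the boundary-detection rule or the normalization:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weight_map(label_map, falloff=8.0):
    """Turn a discrete label map into a continuous weight map that is ~0
    on label boundaries and rises toward 1 away from them.

    label_map: (H, W) integer labels
    falloff:   distance (in pixels) over which the weight saturates at 1
               (an assumed parameter of this sketch)
    """
    # Feature lines: pixels whose right or lower neighbor carries a
    # different label.
    boundary = np.zeros(label_map.shape, dtype=bool)
    boundary[:, :-1] |= label_map[:, :-1] != label_map[:, 1:]
    boundary[:-1, :] |= label_map[:-1, :] != label_map[1:, :]
    # Distance transform: distance of every pixel to the nearest boundary.
    dist = distance_transform_edt(~boundary)
    # Continuous gray-scale weight map used to scale the penalty term.
    return np.clip(dist / falloff, 0.0, 1.0)
```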
Figs. 9 to 14 compare the label maps generated by the method of the present invention with manually annotated label maps, together with the corresponding guided synthesis results. Compared with manual annotation, the present method is simpler and more efficient. Fig. 10 is an annotation map hand-drawn by a user with the existing tool software Photoshop, taking roughly 10 minutes. Fig. 11 was generated by the aforementioned interactive iterative segmentation algorithm in 5 iterations of editing, taking about 3 minutes. Judging from the classification results, the label map generated by the present method (Fig. 11) is close to the manually annotated label map (Fig. 10). More importantly, it can likewise synthesize highly satisfactory final results, as shown in Figs. 12 and 13.
It can be seen that the label map generation method within the texture synthesis method of the present invention produces label maps simply and efficiently. By adopting an interactive, iterative image segmentation approach, it avoids the drawback that judging whether a texture is suitable would otherwise require the support of a huge database. The method lets users obtain label maps quickly and accurately; even for complex texture images, label maps can be generated efficiently and intuitively.
Meanwhile, for textures with gradual edges (such as weathering and rust), the present invention achieves control over such textures by lowering the weight of the penalty term near boundaries.
Furthermore, the texture synthesis technique of the present invention is easy to extend to other applications: simply replacing the target label map allows it to be applied to scenarios such as image inpainting, background reconstruction and 3D model texture synthesis.
To solve the above technical problem, the present invention further provides a texture synthesis device, comprising:
a label map generation unit, configured to abstract and analyze an input original image, extract feature vectors of texture features, annotate the different material textures separately, select the texture features of the annotated regions to train a prediction algorithm, predict the unannotated regions, and finally create a label map according to the distribution of the texture features;
a feature judgment unit, connected to the label map generation unit and configured to judge whether the texture feature distribution of the label map is accurate; if it is not, the material textures are re-annotated and the label map generation unit is activated to retrain the prediction algorithm and regenerate the label map;
a synthesis unit, connected to the label map generation unit and configured to synthesize a target texture map based on the label map and a preset target label map.
As shown in Fig. 4, in the texture synthesis device the label map generation unit further includes the following feature extraction modules:
a color module, configured to extract the color histogram of a region, represented as a histogram;
a filter bank response module, configured to extract the filter bank response information of a region, represented as a histogram;
an edge information module, configured to extract the edge information of a region, represented as a histogram.
The synthesis unit includes:
an additional channel, used for the loaded label map and target label map;
a texture boundary optimization module, which extracts distance offset maps from the label map and the target label map respectively, and applies weighted optimization to the edges of the target texture map according to the extracted distance offset maps.
In contrast to the prior art, the present invention uses the label map of the original image to guide the texture synthesis process. Since the process of generating the label map is controllable, the texture synthesis is also kept under control, which effectively improves the accuracy and efficiency of the computer when handling complex texture information composed of multiple materials or containing non-uniform gradients. Meanwhile, a judgment of the accuracy of the texture features of the label map is introduced into the label map generation process, and label maps of low accuracy are re-abstracted and re-segmented so that the classification of their texture features becomes more accurate. This interactive, iterative approach improves the controllability of the label map generation process, so that the final synthesized texture image accurately meets the user's requirements, achieving precise control of exemplar-based texture synthesis.
The above are merely embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

  1. A texture synthesis method, characterized by comprising the following steps in sequence:
    s1. abstracting the material textures in an original image, and analyzing the texture features of the material textures;
    s2. extracting feature vectors of the texture features, and annotating the different material textures separately;
    s3. selecting the texture features corresponding to the annotated regions in the original image to train a prediction algorithm;
    s4. using the trained prediction algorithm to predict the unannotated regions, and creating a label map according to the predicted distribution of the texture features;
    s5. judging whether the texture feature distribution of the label map is accurate; if accurate, executing step s6; if not, returning to step s3;
    s6. synthesizing, based on the label map, a target texture map with a preset target label map.
  2. The texture synthesis method according to claim 1, characterized in that step s2 specifically comprises the following steps:
    s21. extracting the color histogram of the texture features, represented as a histogram;
    s22. extracting the filter bank response information of the texture features, represented as a histogram;
    s23. extracting the edge information of the texture features, represented as a histogram;
    s24. concatenating the histograms of the above steps to obtain the feature vector;
    s25. annotating the different material textures separately according to the differences in the texture features.
  3. The texture synthesis method according to claim 2, characterized in that step s3 specifically comprises the following steps:
    s31. taking the annotated material textures as seed points;
    s32. selecting the annotated regions corresponding to the seed points to form a training set;
    s33. extracting the texture features from the training set to train a random forest model.
  4. The texture synthesis method according to claim 3, characterized in that step s4 comprises the following steps:
    s41. using the random forest model to predict the unannotated regions;
    s42. using a graph cut model to optimize the preliminary label map obtained from prediction, and generating the label map.
  5. The texture synthesis method according to claim 4, characterized in that step s5 further comprises the following step:
    s51. if the texture feature distribution of the label map is judged to be inaccurate, re-annotating the material textures and then proceeding to step s3.
  6. The texture synthesis method according to claim 1, characterized in that the target label map in step s6 contains the texture distribution information of the target texture map.
  7. The texture synthesis method according to claim 6, characterized in that step s6 comprises the following steps:
    s61. adding the label map and the target label map as additional channels;
    s62. generating the target texture map using a self-tuning texture optimization method.
  8. The texture synthesis method according to claim 7, characterized in that step s6 further comprises the following step:
    s63. using a distance offset map to apply weighted optimization to the edges of the target texture map.
  9. A texture synthesis device, characterized by comprising:
    a label map generation unit, configured to abstract and analyze an input original image, extract feature vectors of texture features, annotate the different material textures separately, select the texture features of the annotated regions to train a prediction algorithm, predict the unannotated regions, and finally create a label map according to the distribution of the texture features;
    a feature judgment unit, connected to the label map generation unit and configured to judge whether the texture feature distribution of the label map is accurate; if it is not, the material textures are re-annotated and the label map generation unit is activated to retrain the prediction algorithm and regenerate the label map;
    a synthesis unit, connected to the label map generation unit and configured to synthesize a target texture map based on the label map and a preset target label map.
  10. The texture synthesis device according to claim 9, characterized in that the label map generation unit comprises:
    a color module, configured to extract the color histogram of the region, represented as a histogram;
    a filter bank response module, configured to extract the filter bank response information of the region, represented as a histogram;
    an edge information module, configured to extract the edge information of the region, represented as a histogram;
    and the synthesis unit comprises:
    an additional channel, used for the loaded label map and the target label map;
    a texture boundary optimization module, which extracts distance offset maps from the label map and the target label map respectively, and applies weighted optimization to the edges of the target texture map according to the extracted distance offset maps.
PCT/CN2017/078248 2017-03-27 2017-03-27 A texture synthesis method and device therefor WO2018176185A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/487,087 US10916022B2 (en) 2017-03-27 2017-03-27 Texture synthesis method, and device for same
PCT/CN2017/078248 WO2018176185A1 (zh) 2017-03-27 2017-03-27 A texture synthesis method and device therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/078248 WO2018176185A1 (zh) 2017-03-27 2017-03-27 A texture synthesis method and device therefor

Publications (1)

Publication Number Publication Date
WO2018176185A1 true WO2018176185A1 (zh) 2018-10-04

Family

ID=63673901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/078248 WO2018176185A1 (zh) 2017-03-27 2017-03-27 A texture synthesis method and device therefor

Country Status (2)

Country Link
US (1) US10916022B2 (zh)
WO (1) WO2018176185A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695552A (zh) * 2020-05-28 2020-09-22 河海大学 Multi-feature-fusion underwater target modeling and optimization method
CN112138378A (zh) * 2020-09-22 2020-12-29 网易(杭州)网络有限公司 Method, apparatus, device and storage medium for implementing flash effects in 2D games
CN117725942A (zh) * 2024-02-06 2024-03-19 浙江码尚科技股份有限公司 Recognition and early-warning method and system for label-texture anti-counterfeiting

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN112766347A (zh) * 2021-01-12 2021-05-07 合肥黎曼信息科技有限公司 An active learning method combining annotation quality control
WO2023044896A1 (zh) * 2021-09-27 2023-03-30 京东方科技集团股份有限公司 Water ripple simulation method and apparatus, electronic device, and storage medium
CN115082502B (zh) * 2022-06-30 2024-05-10 温州医科大学 An image segmentation method based on a distance-guided deep learning strategy
CN115914634A (zh) * 2022-12-16 2023-04-04 苏州迈创信息技术有限公司 A method and system for managing environmental security engineering monitoring data
CN116958135B (zh) * 2023-09-18 2024-03-08 支付宝(杭州)信息技术有限公司 Texture detection processing method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
US20110292064A1 (en) * 2010-05-28 2011-12-01 Microsoft Corporation Discrete Element Texture Synthesis
CN102324102A (zh) * 2011-10-08 2012-01-18 北京航空航天大学 An automatic filling method for structure and texture information in hole regions of image scenes
CN102426708A (zh) * 2011-11-08 2012-04-25 上海交通大学 Texture design and synthesis method based on primitive recombination
CN102521869A (zh) * 2011-09-30 2012-06-27 北京航空航天大学 A geometry-feature-guided method for filling texture holes on 3D model surfaces
CN103839271A (zh) * 2014-03-25 2014-06-04 天津理工大学 An image texture synthesis method based on best matching

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
WO2013078822A1 (en) * 2011-11-29 2013-06-06 Thomson Licensing Texture masking for video quality measurement
US9307252B2 (en) * 2012-06-04 2016-04-05 City University Of Hong Kong View synthesis distortion model for multiview depth video coding
CN103218785B (zh) * 2013-04-19 2015-10-28 中国科学院深圳先进技术研究院 Image inpainting method and device
US9727802B2 (en) * 2014-10-23 2017-08-08 The Penn State Research Foundation Automatic, computer-based detection of triangular compositions in digital photographic images
GB201512278D0 (en) * 2015-07-14 2015-08-19 Apical Ltd Hybrid neural network
CN109643125B (zh) * 2016-06-28 2022-11-15 柯尼亚塔有限公司 用于训练自动驾驶系统的逼真的3d虚拟世界创造与模拟
US10229533B2 (en) * 2016-11-03 2019-03-12 Mitsubishi Electric Research Laboratories, Inc. Methods and systems for fast resampling method and apparatus for point cloud data
US10816981B2 (en) * 2018-04-09 2020-10-27 Diveplane Corporation Feature analysis in computer-based reasoning models


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN111695552A (zh) * 2020-05-28 2020-09-22 河海大学 Multi-feature-fusion underwater target modeling and optimization method
CN111695552B (zh) * 2020-05-28 2022-07-26 河海大学 Multi-feature-fusion underwater target modeling and optimization method
CN112138378A (zh) * 2020-09-22 2020-12-29 网易(杭州)网络有限公司 Method, apparatus, device and storage medium for implementing flash effects in 2D games
CN117725942A (zh) * 2024-02-06 2024-03-19 浙江码尚科技股份有限公司 Recognition and early-warning method and system for label-texture anti-counterfeiting

Also Published As

Publication number Publication date
US10916022B2 (en) 2021-02-09
US20190370987A1 (en) 2019-12-05

Similar Documents

Publication Publication Date Title
WO2018176185A1 (zh) A texture synthesis method and device therefor
Rother et al. " GrabCut" interactive foreground extraction using iterated graph cuts
CN109685067A (zh) An image semantic segmentation method based on regions and a deep residual network
CN104899877A (zh) Image foreground extraction method based on superpixels and a fast trimap
US11049044B1 (en) Visual image annotation utilizing machine learning for in-time feedback
Lin et al. Painterly animation using video semantics and feature correspondence
Dong et al. Fast multi-operator image resizing and evaluation
CN104239855B (zh) An image style transfer synthesis method based on stroke synthesis
CN111325661B (zh) A seasonal style transfer model and method for images, named MSGAN
CN112418134A (zh) Multi-stream multi-label pedestrian re-identification method based on pedestrian parsing
Qin et al. Automatic skin and hair masking using fully convolutional networks
CN105956995A (zh) A face appearance editing method based on real-time video intrinsic decomposition
CN106780701A (zh) Synthesis control method, apparatus, storage medium and device for non-uniform texture images
CN113705579A (zh) A visual-saliency-driven automatic image annotation method
Penhouët et al. Automated deep photo style transfer
CN107045727B (zh) A texture synthesis method and device therefor
CN110084821B (zh) A multi-instance interactive image segmentation method
Liu An overview of color transfer and style transfer for images and videos
Musat et al. Depth-sims: Semi-parametric image and depth synthesis
CN108269298A (zh) A new method for facial expression editing in a nonlinear binding space
Liu et al. Anime Sketch Coloring with Swish-gated Residual U-net and Spectrally Normalized GAN.
Zhang et al. New image processing: VGG image style transfer with gram matrix style features
CN106296740A (zh) A fine target contour tracking method based on low-rank sparse representation
Li et al. Superpixels with contour adherence via label expansion for image decomposition
Wang et al. Image Extraction of Mural Line Drawing Based on Color Image Segmentation Algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17904016

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17904016

Country of ref document: EP

Kind code of ref document: A1