CN103198495B - Importance-driven texture compression method - Google Patents
Importance-driven texture compression method
- Publication number
- CN103198495B (application number CN201310076875.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- importance
- texture
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The importance-driven texture compression method consists of two parts: texture image compression and decompression. During compression, a control map and an importance map are first computed for the input texture; the importance-driven texture compression algorithm then compresses the original texture to obtain the compressed texture and compressed control map. During decompression, in the manner of image analogies, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy. The method comprises four steps: control map computation, importance map computation, importance-driven texture compression, and decompression based on image analogy.
Description
Technical Field
The present invention belongs to the field of information technology.
Background Art
Large-scale network-based scene rendering (e.g., Google Earth and street-view services) requires large numbers of realistic texture images. Two challenges arise in practice: (1) limited transmission bandwidth on the Internet; and (2) limited storage space on the client for the textures to be rendered. Techniques are therefore needed to reduce both bandwidth and storage.
Traditional texture compression algorithms usually compress textures with image-coding techniques such as vector quantization; their compression ratios are limited and they do not exploit the GPU for accelerated decompression and rendering. Recent texture compression techniques compress texture images using ideas from texture synthesis, extracting the texel patterns that are reused within a texture. Such techniques compress not only local redundancy but also globally repeated information, and they lend themselves to GPU-based decompression and rendering.
The inverse texture synthesis algorithm, built on the idea of texture synthesis, compresses a large input texture into a small texture summary image by optimizing an energy function. One important problem with inverse texture synthesis is that all regions are treated equally during compression. For some textures not all regions are visually equivalent: they can be divided into a foreground part and a background part. Visual saliency models show that the foreground of a texture is usually what the human eye attends to, so it should be preserved as faithfully as possible during compression. Different parts of a texture therefore need to be compressed adaptively. Finally, the control map plays a key role in the inverse-synthesis framework: the original texture can only be reconstructed when inverse synthesis is guided by a suitable control map. In a networked setting, decompressing a texture on the client likewise requires a suitable control map. Inverse synthesis offers no general way to generate such control maps, which limits its applicability to texture compression.
To date, no adaptive, fast texture compression algorithm aimed at real-time decompression and rendering has been proposed to address these problems of inverse synthesis.
The present invention modifies the energy function of inverse synthesis so that it is suitable for adaptive texture compression, i.e., important and unimportant regions of a texture are compressed to different degrees. In the computation stage, the importance map of the texture is first computed from a visual saliency model; the importance map, the texture, and its control map are then taken as input and the new energy function is optimized iteratively.
For the control-map problem of inverse synthesis, we propose using the grayscale image, which reflects the brightness variation of the image pixels, as the control map.
Summary of the Invention
The present invention overcomes the above shortcomings of the prior art and provides an importance-driven adaptive texture compression method that compresses textures, effectively reduces the texture memory occupied by large-scale texture images, enables fast transmission over the network and real-time rendering on the GPU, and keeps the image quality of important regions high after the texture is decompressed.
The importance-driven texture compression method of the present invention consists of two parts: texture image compression and decompression. During compression, the control map and the importance map of the input texture are computed first, and the importance-driven texture compression algorithm then compresses the original texture to obtain the compressed texture and compressed control map. During decompression, in the manner of image analogies, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy. The specific steps are as follows:
Step 1: Control map computation
All control maps in the present invention are grayscale versions of the original image; the grayscale image preserves the brightness detail of the original well. The YIQ color model is used to convert the original (color) image into a grayscale image, where Y represents luminance (the grayscale information required), I represents hue, and Q represents saturation. According to the corresponding model conversion matrix, RGB is converted to Y by formula (1):
Y = 0.299R + 0.587G + 0.114B    (1)
where R, G, and B are the red, green, and blue color components, respectively.
Step 2: Importance map computation
The importance information of the texture directly affects the final result of the algorithm. To obtain better importance information, we first compute a saliency map of the image at the same resolution as the original. The saliency map is computed with the Saliency Filters algorithm: the image is first abstracted, i.e., its relevant structural features are kept while unneeded details are removed; low-level cues such as element uniqueness and the spatial distribution of colors are then computed; finally, the two are combined into the saliency map.
Assuming the input image is X, the present invention computes the importance value w(x, y) of each pixel of the importance map by formula (2):

w(x, y) = k·saliency(x, y) + |∂I(x, y)/∂x| + |∂I(x, y)/∂y|    (2)

where saliency(x, y) is the saliency value at pixel (x, y), k is the weight of the saliency term (k = 1 usually gives good results), I(x, y) is the image brightness at pixel (x, y), and the last two terms are the absolute values of the x- and y-direction components of the gradient at pixel (x, y). The importance values computed by formula (2) are then normalized so that every pixel of the importance map lies in [0, 1]. The larger w(x, y) is, the more important the corresponding pixel.
Step 3: Importance-driven texture compression
After the importance map and the control map are obtained, the original image and the control map are compressed.
In this method texture compression is cast as an energy-function optimization problem; the energy function measures the bidirectional similarity between all pixel neighborhoods of the original image and those of the compressed image. For an input image X (either the original image or the control map) and the compression target image Z, the energy function that takes the importance information into account is given by formula (3):

E(X, Z) = Σ_{p∈X⁺} w(x_p)·|x_p − z_p|² + α·Σ_{q∈Z⁺} w(x_q)·|z_q − x_q|²    (3)

where z and x are sample values of Z and X; q and p are pixel positions in the subsets Z⁺ and X⁺ of Z and X, respectively; x_p and z_q denote the spatial neighborhoods centered at p and q; z_p and x_q are the neighborhoods of Z and X most similar to x_p and z_q; and α is a user-adjustable weight (α = 0.01 suits the vast majority of textures). w(x_p) is the importance weight of the neighborhood x_p around p. There are many ways to compute the importance of an image region, such as taking the minimum, maximum, median, or mean of the pixel weights inside the neighborhood; we take the mean pixel weight as the importance of x_p, and the importance w(x_q) of the region x_q is computed the same way. |·|² measures the distance between two neighborhoods as the sum of squared color differences of corresponding pixels.
The energy function of formula (3) is the sum of two terms that are similar in form but serve different purposes. The first term, the inverse term, ensures that every neighborhood x_p of the input image X can find a similar neighborhood z_p in Z; the second term, the forward term, ensures that Z contains no new neighborhood z_q that is dissimilar to every neighborhood x_q of X.
Based on formula (3), the method for computing the color value at each pixel q of the target texture Z is derived below. For every pixel q ∈ Z of the target texture, its contribution to the overall energy consists of a forward part and an inverse part, obtained as follows:
(1) Contribution of pixel q to the forward term
Let z_{q_1}, …, z_{q_m} denote all neighborhoods of the target image Z that contain the pixel q (q_1, …, q_m are the neighborhood centers; note that q is obtained from q_1, …, q_m by the corresponding offsets). The value m is the number of such neighborhoods and depends on the chosen neighborhood size: for a 5×5 neighborhood, m = 25 (in the present invention each level of the Gaussian pyramid uses two neighborhood sizes, 17×17 and 9×9). The nearest neighbors of these neighborhoods in X are x_{q_1}, …, x_{q_m}, and q̂_1, …, q̂_m denote the pixel positions in x_{q_1}, …, x_{q_m} that correspond to q in z_{q_1}, …, z_{q_m} (see Fig. 2). The contribution of q to the forward term is therefore

α·Σ_{i=1}^{m} w(x_{q_i})·(I_Z(q) − I_X(q̂_i))²
(2) Contribution of pixel q to the inverse term
Let x_{p_1}, …, x_{p_n} be the neighborhoods of X whose nearest neighborhoods in Z contain the pixel q, where n is the number of such neighborhoods; unlike m above, n is not fixed and varies with q. The pixels p_1, …, p_n are the neighborhood centers, and applying the corresponding offsets gives the pixels p̂_1, …, p̂_n of X that correspond to the pixel q of Z (see Fig. 2). The contribution of q to the inverse term is therefore

Σ_{j=1}^{n} w(x_{p_j})·(I_Z(q) − I_X(p̂_j))²
In summary, the energy of a single pixel q is the sum of the forward and inverse contributions above:

Energy(I_Z(q)) = Σ_{j=1}^{n} w(x_{p_j})·(I_Z(q) − I_X(p̂_j))² + α·Σ_{i=1}^{m} w(x_{q_i})·(I_Z(q) − I_X(q̂_i))²    (4)

The color value at pixel q is obtained by minimizing Energy(I_Z(q)). Differentiating the equation with respect to I_Z(q) and setting the derivative to zero gives the solution

I_Z(q) = ( Σ_{j=1}^{n} w(x_{p_j})·I_X(p̂_j) + α·Σ_{i=1}^{m} w(x_{q_i})·I_X(q̂_i) ) / ( Σ_{j=1}^{n} w(x_{p_j}) + α·Σ_{i=1}^{m} w(x_{q_i}) )    (5)
(3) Energy optimization method
The iterative energy-optimization procedure for solving I_Z(q) based on formula (5) is as follows:
step 1: For each target pixel q ∈ Z, search the input image X for the neighborhood x_q most similar to its neighborhood z_q. Multiply the color value of each pixel of x_q by that pixel's importance weight and by the forward weight α, then vote the weighted value to the corresponding pixel of z_q.
step 2: Conversely to step 1, for each pixel p ∈ X, search Z for the neighborhood z_p most similar to its neighborhood x_p. Multiply the color value of each pixel of x_p by that pixel's importance weight, then vote the weighted value to the corresponding pixel of z_p.
step 3: For each target pixel q ∈ Z, average the sum of all votes it has received, i.e., divide by the sum of the corresponding vote weights.
Repeat step 1, step 2, and step 3 until convergence.
Step 4: Decompression based on image analogy
Given a pair of images A and A' (the original image and the image generated from it by some processing, respectively) and another, unprocessed target image B, the image analogy method computes a new image B' such that:
A : A' :: B : B'
that is, B' relates to B as A' relates to A. The image analogy algorithm learns the relationship between A and A' and applies what it has learned to generate B'. Here we apply it to the decompression of the compressed data.
We let the compressed control map and the compressed texture correspond to A and A' of the image analogy, take the original control map as B, and synthesize a new B' by image analogy; B' is the decompressed result image.
The original image is thus recovered by the image analogy algorithm, yielding the decompressed result.
The present invention describes an importance-driven texture compression algorithm that maps importance-driven texture compression to an energy-function optimization problem. The importance information is fused into the energy function, the solver of the energy function is re-derived, and the grayscale image is used as the texture's control map. The importance map is computed with formula (2). When compressing a texture, the method preserves its important regions well and achieves high compression efficiency.
The advantages of the present invention are as follows:
(1) Adaptive compression that preserves important information. The texture image is compressed adaptively using the notion of importance, so important regions of the image are compressed at higher quality. Unlike previous compression algorithms, the compression result is itself a meaningful texture that can be used to synthesize new textures.
(2) A novel, non-classical approach. The method compresses the texture by energy optimization, obtaining the compressed result by iteratively minimizing the energy function; the idea is novel and of high innovative value.
(3) Simple to implement and efficient to run. The energy optimization is computationally simple: an EM-like algorithm iteratively minimizes the energy, and only step 1, step 2, and step 3 need to be designed. The search computations in the energy optimization are highly parallel and well suited to GPU acceleration, which greatly improves the running efficiency of the algorithm.
(4) Convenient and easy to use. The user only needs to provide an image; the algorithm automatically computes its control map, computes the importance map semi-automatically with user interaction, and then performs importance-driven texture compression. In the decompression stage, only the control map of the original image is needed to re-synthesize the original image.
(5) Good controllability. The user controls the algorithm without complex input parameters: by simply editing the importance map, the user determines the important regions that the algorithm compresses with high fidelity, and thereby controls the adaptive compression of the image.
Brief Description of the Drawings
Fig. 1 is a diagram of the technical framework of the present invention.
Fig. 2 is a diagram labeling the pixel correspondences between the input image and the target image used in the derivation.
Detailed Description of the Embodiments
As shown in the figure, the present invention consists of two parts: texture image compression and decompression. During compression, the control map and the importance map of the input texture are computed first, and the importance-driven texture compression algorithm then compresses the original texture to obtain the compressed texture and compressed control map. During decompression, in the manner of image analogies, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy. The four computational steps of the method are described below.
Step 1: Control map computation
All control maps in the present invention are grayscale versions of the original image; the grayscale image preserves the brightness detail of the original well. The YIQ color model is used to convert the original (color) image into a grayscale image, where Y represents luminance (the grayscale information required), I represents hue, and Q represents saturation. According to the corresponding model conversion matrix, RGB is converted to Y by formula (1):
Y = 0.299R + 0.587G + 0.114B    (1)
where R, G, and B are the red, green, and blue color components, respectively.
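As an illustrative sketch only (not part of the patent disclosure), the control-map computation of formula (1) could be written as follows in Python/NumPy; the function name and array layout are assumptions:

```python
import numpy as np

def control_map(rgb):
    """Grayscale control map: the Y (luminance) channel of the YIQ model,
    formula (1): Y = 0.299 R + 0.587 G + 0.114 B.
    `rgb` is an H x W x 3 array; the result is an H x W luminance image."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return 0.299 * r + 0.587 * g + 0.114 * b
```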
Step 2: Importance map computation
The importance information of the texture directly affects the final result of the algorithm. To obtain better importance information, we first compute a saliency map of the image at the same resolution as the original. The saliency map is computed with the Saliency Filters algorithm: the image is first abstracted, i.e., its relevant structural features are kept while unneeded details are removed; low-level cues such as element uniqueness and the spatial distribution of colors are then computed; finally, the two are combined into the saliency map.
Assuming the input image is X, the present invention computes the importance value w(x, y) of each pixel of the importance map by formula (2):

w(x, y) = k·saliency(x, y) + |∂I(x, y)/∂x| + |∂I(x, y)/∂y|    (2)

where saliency(x, y) is the saliency value at pixel (x, y), k is the weight of the saliency term (k = 1 usually gives good results), I(x, y) is the image brightness at pixel (x, y), and the last two terms are the absolute values of the x- and y-direction components of the gradient at pixel (x, y). The importance values computed by formula (2) are then normalized so that every pixel of the importance map lies in [0, 1]. The larger w(x, y) is, the more important the corresponding pixel.
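A minimal sketch of the importance-map computation of formula (2), assuming a saliency map (e.g., produced by a Saliency Filters implementation) is already available as an array of the same resolution; the function name and normalization details are assumptions:

```python
import numpy as np

def importance_map(gray, saliency, k=1.0):
    """Importance map of formula (2): k * saliency(x, y) plus the absolute
    x- and y-components of the brightness gradient, normalized to [0, 1].
    `gray` is the brightness image I(x, y); `saliency` has the same shape."""
    gy, gx = np.gradient(gray.astype(np.float64))  # gradients along y (rows) and x (columns)
    w = k * saliency.astype(np.float64) + np.abs(gx) + np.abs(gy)
    w -= w.min()
    if w.max() > 0:
        w /= w.max()
    return w
```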
Step 3: Importance-driven texture compression
After the importance map and the control map are obtained, the original image and the control map are compressed.
In this method texture compression is cast as an energy-function optimization problem; the energy function measures the bidirectional similarity between all pixel neighborhoods of the original image and those of the compressed image. For an input image X (either the original image or the control map) and the compression target image Z, the energy function that takes the importance information into account is given by formula (3):

E(X, Z) = Σ_{p∈X⁺} w(x_p)·|x_p − z_p|² + α·Σ_{q∈Z⁺} w(x_q)·|z_q − x_q|²    (3)

where z and x are sample values of Z and X; q and p are pixel positions in the subsets Z⁺ and X⁺ of Z and X, respectively; x_p and z_q denote the spatial neighborhoods centered at p and q; z_p and x_q are the neighborhoods of Z and X most similar to x_p and z_q; and α is a user-adjustable weight (α = 0.01 suits the vast majority of textures). w(x_p) is the importance weight of the neighborhood x_p around p. There are many ways to compute the importance of an image region, such as taking the minimum, maximum, median, or mean of the pixel weights inside the neighborhood; we take the mean pixel weight as the importance of x_p, and the importance w(x_q) of the region x_q is computed the same way. |·|² measures the distance between two neighborhoods as the sum of squared color differences of corresponding pixels.
The energy function of formula (3) is the sum of two terms that are similar in form but serve different purposes. The first term, the inverse term, ensures that every neighborhood x_p of the input image X can find a similar neighborhood z_p in Z; the second term, the forward term, ensures that Z contains no new neighborhood z_q that is dissimilar to every neighborhood x_q of X.
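The neighborhood distance |·|² and the neighborhood importance used in formula (3) can be sketched as below; the helper names are assumptions, and the neighborhood importance is taken as the mean pixel weight, as described above:

```python
import numpy as np

def neighborhood(img, cy, cx, half):
    """Square neighborhood of half-width `half` centered at (cy, cx)."""
    return img[cy - half:cy + half + 1, cx - half:cx + half + 1]

def neighborhood_distance(nbr_a, nbr_b):
    """|.|^2 of formula (3): sum of squared differences of corresponding pixels."""
    return float(np.sum((nbr_a.astype(np.float64) - nbr_b.astype(np.float64)) ** 2))

def neighborhood_importance(weight_map, cy, cx, half):
    """Importance of a neighborhood: mean of the pixel importance weights w(x, y)."""
    return float(np.mean(neighborhood(weight_map, cy, cx, half)))
```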
Based on formula (3), the method for computing the color value at each pixel q of the target texture Z is derived below. For every pixel q ∈ Z of the target texture, its contribution to the overall energy consists of a forward part and an inverse part, obtained as follows:
(1) Contribution of pixel q to the forward term
Let z_{q_1}, …, z_{q_m} denote all neighborhoods of the target image Z that contain the pixel q (q_1, …, q_m are the neighborhood centers; note that q is obtained from q_1, …, q_m by the corresponding offsets). The value m is the number of such neighborhoods and depends on the chosen neighborhood size: for a 5×5 neighborhood, m = 25 (in the present invention each level of the Gaussian pyramid uses two neighborhood sizes, 17×17 and 9×9). The nearest neighbors of these neighborhoods in X are x_{q_1}, …, x_{q_m}, and q̂_1, …, q̂_m denote the pixel positions in x_{q_1}, …, x_{q_m} that correspond to q in z_{q_1}, …, z_{q_m} (see Fig. 2). The contribution of q to the forward term is therefore

α·Σ_{i=1}^{m} w(x_{q_i})·(I_Z(q) − I_X(q̂_i))²
(2) Contribution of pixel q to the inverse term
Let x_{p_1}, …, x_{p_n} be the neighborhoods of X whose nearest neighborhoods in Z contain the pixel q, where n is the number of such neighborhoods; unlike m above, n is not fixed and varies with q. The pixels p_1, …, p_n are the neighborhood centers, and applying the corresponding offsets gives the pixels p̂_1, …, p̂_n of X that correspond to the pixel q of Z (see Fig. 2). The contribution of q to the inverse term is therefore

Σ_{j=1}^{n} w(x_{p_j})·(I_Z(q) − I_X(p̂_j))²
In summary, the energy of a single pixel q is the sum of the forward and inverse contributions above:

Energy(I_Z(q)) = Σ_{j=1}^{n} w(x_{p_j})·(I_Z(q) − I_X(p̂_j))² + α·Σ_{i=1}^{m} w(x_{q_i})·(I_Z(q) − I_X(q̂_i))²    (4)

The color value at pixel q is obtained by minimizing Energy(I_Z(q)). Differentiating the equation with respect to I_Z(q) and setting the derivative to zero gives the solution

I_Z(q) = ( Σ_{j=1}^{n} w(x_{p_j})·I_X(p̂_j) + α·Σ_{i=1}^{m} w(x_{q_i})·I_X(q̂_i) ) / ( Σ_{j=1}^{n} w(x_{p_j}) + α·Σ_{i=1}^{m} w(x_{q_i}) )    (5)
(3) Energy optimization method
The iterative energy-optimization procedure for solving I_Z(q) based on formula (5) is as follows:
step 1: For each target pixel q ∈ Z, search the input image X for the neighborhood x_q most similar to its neighborhood z_q. Multiply the color value of each pixel of x_q by that pixel's importance weight and by the forward weight α, then vote the weighted value to the corresponding pixel of z_q.
step 2: Conversely to step 1, for each pixel p ∈ X, search Z for the neighborhood z_p most similar to its neighborhood x_p. Multiply the color value of each pixel of x_p by that pixel's importance weight, then vote the weighted value to the corresponding pixel of z_p.
step 3: For each target pixel q ∈ Z, average the sum of all votes it has received, i.e., divide by the sum of the corresponding vote weights.
Repeat step 1, step 2, and step 3 until convergence; a simplified sketch of this voting loop is given below.
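The following is a deliberately simplified, single-scale sketch of the voting loop for single-channel images, reusing the hypothetical helpers from the sketch above; it uses brute-force nearest-neighbor search and omits the Gaussian pyramid and the two neighborhood sizes of the actual method, so it illustrates the structure of steps 1–3 rather than the claimed implementation:

```python
import numpy as np

def compress_iteration(X, w, Z, half=2, alpha=0.01):
    """One EM-like iteration of importance-driven compression.
    X: input image (original or control map), w: importance map of X,
    Z: current estimate of the compressed image. All single-channel arrays."""
    votes = np.zeros(Z.shape, dtype=np.float64)
    weights = np.zeros(Z.shape, dtype=np.float64)

    def best_match(nbr, img):
        """Brute-force search for the most similar neighborhood center in img."""
        best, best_d = None, np.inf
        h, wd = img.shape
        for cy in range(half, h - half):
            for cx in range(half, wd - half):
                d = neighborhood_distance(nbr, neighborhood(img, cy, cx, half))
                if d < best_d:
                    best, best_d = (cy, cx), d
        return best

    h_z, w_z = Z.shape
    h_x, w_x = X.shape

    # Step 1 (forward direction): for every target pixel, find the most similar
    # neighborhood in X and vote its pixels into Z, weighted by the pixel
    # importance and by the forward weight alpha.
    for qy in range(half, h_z - half):
        for qx in range(half, w_z - half):
            my, mx = best_match(neighborhood(Z, qy, qx, half), X)
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    wt = alpha * w[my + dy, mx + dx]
                    votes[qy + dy, qx + dx] += wt * X[my + dy, mx + dx]
                    weights[qy + dy, qx + dx] += wt

    # Step 2 (inverse direction): for every input pixel, find the most similar
    # neighborhood in Z and vote the input pixels into it, weighted by importance.
    for py in range(half, h_x - half):
        for px in range(half, w_x - half):
            my, mx = best_match(neighborhood(X, py, px, half), Z)
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    wt = w[py + dy, px + dx]
                    votes[my + dy, mx + dx] += wt * X[py + dy, px + dx]
                    weights[my + dy, mx + dx] += wt

    # Step 3: average all votes received by each target pixel.
    Z_new = Z.astype(np.float64).copy()
    mask = weights > 0
    Z_new[mask] = votes[mask] / weights[mask]
    return Z_new
```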
Step 4: Decompression based on image analogy
Given a pair of images A and A' (the original image and the image generated from it by some processing, respectively) and another, unprocessed target image B, the image analogy method computes a new image B' such that:
A : A' :: B : B'
that is, B' relates to B as A' relates to A. The image analogy algorithm learns the relationship between A and A' and applies what it has learned to generate B'. Here we apply it to the decompression of the compressed data.
We let the compressed control map and the compressed texture correspond to A and A' of the image analogy, take the original control map as B, and synthesize a new B' by image analogy; B' is the decompressed result image.
The original image is thus recovered by the image analogy algorithm, yielding the decompressed result.
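A self-contained, deliberately simplified sketch of the decompression step for single-channel images follows; it is not the full multi-scale image-analogies algorithm (no pyramid, no coherence search), and the function name and brute-force matching are assumptions:

```python
import numpy as np

def image_analogy_decompress(A, A_prime, B, half=2):
    """Decompression by simplified image analogy.
    A: compressed control map, A_prime: compressed texture,
    B: original control map. Returns B_prime, the decompressed texture,
    so that B_prime relates to B as A_prime relates to A."""
    A = A.astype(np.float64)
    B = B.astype(np.float64)
    h_b, w_b = B.shape
    h_a, w_a = A.shape
    B_prime = np.zeros((h_b, w_b), dtype=np.float64)
    for by in range(half, h_b - half):
        for bx in range(half, w_b - half):
            target = B[by - half:by + half + 1, bx - half:bx + half + 1]
            best_d, best = np.inf, (half, half)
            for ay in range(half, h_a - half):
                for ax in range(half, w_a - half):
                    cand = A[ay - half:ay + half + 1, ax - half:ax + half + 1]
                    d = np.sum((target - cand) ** 2)
                    if d < best_d:
                        best_d, best = d, (ay, ax)
            # Copy the texture value whose control-map context in A best matches B here.
            B_prime[by, bx] = A_prime[best[0], best[1]]
    return B_prime
```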
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310076875.5A CN103198495B (en) | 2013-03-11 | 2013-03-11 | Importance-driven texture compression method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310076875.5A CN103198495B (en) | 2013-03-11 | 2013-03-11 | Importance-driven texture compression method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103198495A CN103198495A (en) | 2013-07-10 |
CN103198495B true CN103198495B (en) | 2015-10-07 |
Family
ID=48721000
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310076875.5A Active CN103198495B (en) | 2013-03-11 | 2013-03-11 | Importance-driven texture compression method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103198495B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117292003B (en) * | 2023-11-27 | 2024-03-19 | 深圳对对科技有限公司 | Picture cloud data storage method for computer network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102393966B (en) * | 2011-06-15 | 2013-02-27 | 西安电子科技大学 | Adaptive image compression sampling method based on multi-scale saliency map |
CN102867290B (en) * | 2012-08-28 | 2015-04-22 | 浙江工业大学 | Texture optimization-based non-homogeneous image synthesis method |
- 2013
- 2013-03-11 CN CN201310076875.5A patent/CN103198495B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN103198495A (en) | 2013-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921786B (en) | Image super-resolution reconstruction method based on residual convolutional neural network | |
CN110738605B (en) | Image denoising method, system, equipment and medium based on transfer learning | |
CN106683067B (en) | Deep learning super-resolution reconstruction method based on residual sub-images | |
Feng et al. | LKASR: Large kernel attention for lightweight image super-resolution | |
CN105631807B (en) | The single-frame image super-resolution reconstruction method chosen based on sparse domain | |
CN111899176B (en) | A video image enhancement method | |
CN103871041B (en) | The image super-resolution reconstructing method built based on cognitive regularization parameter | |
CN106204447A (en) | The super resolution ratio reconstruction method with convolutional neural networks is divided based on total variance | |
CN107123089A (en) | Remote sensing images super-resolution reconstruction method and system based on depth convolutional network | |
CN102968772A (en) | Image defogging method based on dark channel information | |
CN103295192B (en) | The image real-time super-resolution method for reconstructing accelerating based on GPU | |
CN110675462A (en) | A Colorization Method of Grayscale Image Based on Convolutional Neural Network | |
CN106127818A (en) | A kind of material appearance based on single image obtains system and method | |
CN106127688A (en) | A kind of super-resolution image reconstruction method and system thereof | |
CN112200724A (en) | Single-image super-resolution reconstruction system and method based on feedback mechanism | |
CN111833261A (en) | An Attention-Based Generative Adversarial Network for Image Super-Resolution Restoration | |
CN104657962A (en) | Image super-resolution reconstruction method based on cascading linear regression | |
CN110838089B (en) | Fast image denoising method based on OctBlock dense block | |
CN115018708A (en) | Airborne remote sensing image super-resolution reconstruction method based on multi-scale feature fusion | |
CN117745541A (en) | Image super-resolution reconstruction method based on lightweight mixed attention network | |
CN111626943B (en) | Total variation image denoising method based on first-order forward and backward algorithm | |
Zhao et al. | A method of degradation mechanism-based unsupervised remote sensing image super-resolution | |
CN107170032A (en) | Power rendering intent based on linear sketch image | |
CN118096978B (en) | A method for rapid generation of 3D art content based on arbitrary stylization | |
CN103530860B (en) | Adaptive autoregressive model-based hyper-spectral imagery super-resolution method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |