CN103198495B - Importance-driven texture compression method - Google Patents


Info

Publication number
CN103198495B
CN103198495B (application CN201310076875.5A)
Authority
CN
China
Prior art keywords
pixel
neighborhood
texture
importance degree
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310076875.5A
Other languages
Chinese (zh)
Other versions
CN103198495A (en)
Inventor
汤颖 (Tang Ying)
周展 (Zhou Zhan)
范菁 (Fan Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201310076875.5A priority Critical patent/CN103198495B/en
Publication of CN103198495A publication Critical patent/CN103198495A/en
Application granted granted Critical
Publication of CN103198495B publication Critical patent/CN103198495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The importance-driven texture compression method consists of two parts: texture image compression and decompression. During compression, a control map and an importance map are first computed for the input texture; the importance-driven texture compression algorithm then compresses the original texture to obtain the compressed texture and the compressed control map. During decompression, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy. The method specifically comprises: control-map computation, importance-map computation, importance-driven texture compression, and image-analogy-based decompression.

Description

Importance-driven texture compression method
Technical field
The invention belongs to the field of information technology.
Background technology
Today, scene rendering in large-scale network environments (e.g. Google Earth, street-view technology) requires a large number of realistic texture images. Two challenges arise in practice: (1) the limited transmission bandwidth of the Internet; and (2) the limited storage space on clients for the textures to be rendered. Techniques are therefore needed to reduce both bandwidth and storage.
Traditional texture compression algorithms usually compress textures with image-coding techniques such as vector quantization; their compression ratios are limited, and they do not use the GPU to accelerate decompression and rendering. The latest texture compression techniques apply the idea of texture synthesis, compressing a texture image by extracting its reusable texel patterns. Such techniques compress not only the local redundancy of an image but also its global repetition, and they lend themselves to GPU-accelerated decompression and rendering.
Inverse texture synthesis, built on the idea of texture synthesis, markedly compresses an input texture into a small texture summary image by optimizing an energy function. A major problem of inverse texture synthesis is that it treats all regions equally during compression. Not all regions of a texture are visually equivalent: some textures can be divided into a foreground part and a background part. According to visual saliency models, the foreground of a texture is usually the part the human eye attends to, and it should be preserved as far as possible during compression. Different parts of a texture therefore need to be compressed adaptively. Finally, the control map plays a key role in the inverse-synthesis framework: only with a suitable control map can inverse synthesis reconstruct the original texture. In a network environment, the texture needs a suitable control map for decompression on the client. Inverse texture synthesis offers no general solution for control-map generation, which limits its range of application in the field of texture compression.
For the above problems of inverse texture synthesis, no researcher has yet proposed an adaptive, fast texture compression algorithm oriented toward real-time decompression and rendering.
The present invention modifies the energy function of inverse texture synthesis to make it suitable for adaptive texture compression, i.e. compressing the important and unimportant regions of a texture to different degrees. In the computation stage, the importance map of the texture is first computed from a visual saliency model; the importance map, the texture, and its control map are then taken as input, and the new energy function is iteratively optimized.
For the control-map problem of inverse texture synthesis, we propose using the gray-scale map, which reflects changes in pixel intensity, as the control map.
Summary of the invention
The present invention overcomes the above shortcomings of the prior art by providing an adaptive texture compression method that compresses textures, effectively reduces the texture memory occupied by large texture images, and enables fast network transmission and real-time rendering on the GPU; being importance-driven, it preserves the image quality of important regions better after decompression.
The importance-driven texture compression method of the present invention consists of two parts: texture image compression and decompression. During compression, a control map and an importance map are first computed for the input texture; the importance-driven texture compression algorithm then compresses the original texture to obtain the compressed texture and the compressed control map. During decompression, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy. The concrete steps are as follows:
Step one: control-map computation
All control maps of the present invention are gray-scale maps of the original image, and the gray-scale map preserves the luminance detail of the original image well. The YIQ computational model is adopted to convert the original (color) image to a gray-scale map; in this model Y represents luminance, i.e. the required gray-scale information, I represents hue, and Q represents saturation. According to the corresponding model conversion matrix, RGB is converted to Y by formula (1):
Y = 0.299R + 0.587G + 0.114B    (1)
where R, G, and B are the red, green, and blue components respectively.
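As an illustration, the luminance conversion of formula (1) can be sketched in a few lines of NumPy; the function name rgb_to_luminance is ours, not part of the patent.

```python
import numpy as np

def rgb_to_luminance(rgb):
    """Convert an RGB image (H x W x 3) to the Y (luminance) channel
    of the YIQ model, per formula (1): Y = 0.299 R + 0.587 G + 0.114 B."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```

Since the coefficients sum to 1.000, pure white maps to full luminance and pure red to 0.299 of it.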
Step two: importance-map computation
The importance information of the texture directly affects the final result of the algorithm. To obtain better importance information, we first compute the saliency map of the image; this saliency map has the same resolution as the original image. The saliency map is computed with the Saliency Filters algorithm: first the image is abstracted, i.e. the relevant structural features are retained and some unneeded details are removed; then the distribution of bottom-up cues such as uniqueness of elements and colors is computed in the image; finally both kinds of information are combined into the saliency map saliency.
Suppose the input image is X; the present invention computes the importance value w(x, y) of each pixel of the importance map with formula (2):
$w(x, y) = k \cdot saliency(x, y) + \left| \frac{\partial}{\partial x} I(x, y) \right| + \left| \frac{\partial}{\partial y} I(x, y) \right|$    (2)
where saliency(x, y) is the saliency value at pixel (x, y), and k is the weight of the saliency information; usually k = 1 gives satisfactory results. I(x, y) is the luminance at pixel (x, y), and the last two terms of the equation are the absolute values of the x- and y-direction components of the gradient at pixel (x, y). Furthermore, we normalize the importance values computed by formula (2) so that every pixel of the importance map has a value in the range [0, 1]. The larger w(x, y), the higher the importance of the corresponding pixel.
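The computation of formula (2), including the final normalization to [0, 1], can be sketched as follows; the saliency input is assumed to be precomputed (e.g. by the Saliency Filters algorithm), and the function name is ours.

```python
import numpy as np

def importance_map(luminance, saliency, k=1.0):
    """Formula (2): w = k*saliency + |dI/dx| + |dI/dy|, then normalize
    to [0, 1]. `luminance` and `saliency` are H x W arrays of the same
    resolution as the original image."""
    I = np.asarray(luminance, dtype=np.float64)
    dy, dx = np.gradient(I)  # axis-0 (y) and axis-1 (x) derivatives
    w = k * np.asarray(saliency, dtype=np.float64) + np.abs(dx) + np.abs(dy)
    # Normalize so every importance value lies in [0, 1].
    w -= w.min()
    if w.max() > 0:
        w /= w.max()
    return w
```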
Step three: importance-driven texture compression
After the importance map and the control map have been obtained, the original image and the control map are compressed.
In this method, texture compression is formulated as an energy-function optimization problem; the energy function measures the bidirectional similarity between all pixel neighborhoods of the original image and those of the compressed image. For an input image X (which can be the original image or its control map), the energy function between X and the compression target Z, taking importance information into account, is given by formula (3):
$E(Z) = \sum_{p \in X^+} w_{x_p} \| x_p - z_p \|^2 + \alpha \sum_{q \in Z^+} w_{x_q} \| z_q - x_q \|^2$    (3)
Here z/x are sample values of Z/X; q and p are pixel positions in the subsets $Z^+$ and $X^+$ of Z and X respectively; $x_p$ / $z_q$ denote the spatial neighborhoods centered at points p/q; $z_p$ / $x_q$ are the neighborhoods of Z/X most similar to $x_p$ / $z_q$; α is a user-adjustable weight, and α = 0.01 suits most textures. $w_{x_p}$ is the importance weight of the neighborhood $x_p$ centered at p. There are many ways to compute the importance of an image region, such as taking the minimum/maximum, median, or mean of the pixel weights in the neighborhood; we take the mean pixel weight as the importance $w_{x_p}$ of neighborhood $x_p$, and the importance $w_{x_q}$ of region $x_q$ is computed the same way. $\|\cdot\|^2$ measures the distance between two neighborhoods, computed as the sum of squared color differences of corresponding pixels of the neighborhoods.
The energy function of formula (3) is the sum of two terms; the two are similar in form but computed differently.
The first term, called the inverse term, guarantees that every neighborhood $x_p$ of the input image X can find a similar $z_p$ in Z; the second term, called the forward term, guarantees that no new neighborhood $z_q$ in Z is dissimilar to every $x_q$ in X.
Based on formula (3), we now derive the method for computing the color value at each pixel q of the target texture Z. The contribution of each pixel q ∈ Z to the total energy comprises contributions to both the forward and the inverse term; the concrete contribution values are obtained by the following steps:
(1) Contribution of pixel q to the forward term
Let $\hat{z}_{q_1}, \dots, \hat{z}_{q_m}$ denote all neighborhoods of the target image Z that contain pixel q ($q_1, \dots, q_m$ are the neighborhood center points; note that $q_1, \dots, q_m$ yield q through the corresponding offsets). The value m is the number of such neighborhoods and depends on the chosen neighborhood size; for a 5×5 neighborhood, m = 25 (in the present invention, every level of the Gaussian pyramid has two groups of neighborhoods of different sizes, 17×17 and 9×9 respectively). The nearest neighbors of these neighborhoods in X are $\hat{x}_{q_1}, \dots, \hat{x}_{q_m}$, and $\hat{p}_i$ denotes the pixel of $\hat{x}_{q_i}$ at the location corresponding to q (as shown in Fig. 2). The contribution of q to the forward term is thus
$\alpha \sum_{q \in Z^+} w_{x_q} \sum_{i=1}^{m} w(\hat{p}_i)\,(I_X(\hat{p}_i) - I_Z(q))^2$
where w(·) is the pixel importance value and $I_X(\cdot)$, $I_Z(\cdot)$ give the pixel color values of the input image X and the target image Z respectively (likewise below).
(2) Contribution of pixel q to the inverse term
Let $\bar{x}_{p_1}, \dots, \bar{x}_{p_n}$ be the neighborhoods of X whose nearest neighbors in Z are neighborhoods containing q, where n is the number of such neighborhoods; unlike the value m above, n is not fixed and varies with q. The pixels $p_1, \dots, p_n$ are the neighborhood center points; through the corresponding offsets they likewise yield the pixels $\bar{p}_1, \dots, \bar{p}_n$, which correspond to pixel q in Z (as in Fig. 2). The contribution of q to the inverse term is thus
$\sum_{p \in X^+} w_{x_p} \sum_{j=1}^{n} w(\bar{p}_j)\,(I_X(\bar{p}_j) - I_Z(q))^2$.
In summary, the energy of a single pixel q is the sum of the forward and inverse terms above, computed concretely as
$Energy(I_Z(q)) = \sum_{p \in X^+} w_{x_p} \sum_{j=1}^{n} w(\bar{p}_j)(I_X(\bar{p}_j) - I_Z(q))^2 + \alpha \sum_{q \in Z^+} w_{x_q} \sum_{i=1}^{m} w(\hat{p}_i)(I_X(\hat{p}_i) - I_Z(q))^2$    (4)
The color value at pixel q can be found by minimizing $Energy(I_Z(q))$. Differentiating the equation with respect to $I_Z(q)$ and setting the derivative to zero gives the solution
$I_Z(q) = \dfrac{\sum_{p \in X^+} w_{x_p} \sum_{j=1}^{n} w(\bar{p}_j)\, I_X(\bar{p}_j) + \alpha \sum_{q \in Z^+} w_{x_q} \sum_{i=1}^{m} w(\hat{p}_i)\, I_X(\hat{p}_i)}{\sum_{p \in X^+} w_{x_p} \sum_{j=1}^{n} w(\bar{p}_j) + \alpha \sum_{q \in Z^+} w_{x_q} \sum_{i=1}^{m} w(\hat{p}_i)}$    (5)
(3) Energy optimization method
The iterative energy-optimization steps for solving $I_Z(q)$ based on formula (5) are:
Step 1: for each target pixel q ∈ Z, search the input image X for the neighborhood $x_q$ most similar to its neighborhood $z_q$. Multiply the color value of each pixel of $x_q$ by that pixel's importance weight and by the weight $\alpha\, w_{x_q}$, and vote the computed value to the corresponding pixel of $z_q$.
Step 2: conversely, for each pixel p ∈ X, search Z for the neighborhood $z_p$ most similar to its neighborhood $x_p$. Multiply each pixel color value of $x_p$ by that pixel's importance weight and by $w_{x_p}$, and vote the computed value to the corresponding pixel of $z_p$.
Step 3: average all votes for each target pixel q ∈ Z, i.e. divide by $\sum_{p \in X^+} w_{x_p} \sum_{j=1}^{n} w(\bar{p}_j) + \alpha \sum_{q \in Z^+} w_{x_q} \sum_{i=1}^{m} w(\hat{p}_i)$, obtaining the new $I_Z(q)$.
Repeat Steps 1, 2, and 3 until convergence.
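A much-simplified, single-scale sketch of the voting iteration above (grayscale images, brute-force neighborhood search, no Gaussian pyramid) might look like the following; all names and the simplifications are ours, not the patent's. Forward votes are weighted by α times the neighborhood importance, inverse votes by the neighborhood importance alone, matching formula (5).

```python
import numpy as np

def patches(img, s):
    """All s x s patches of a 2-D image as an (N, s*s) array, with coords."""
    H, W = img.shape
    out, pos = [], []
    for y in range(H - s + 1):
        for x in range(W - s + 1):
            out.append(img[y:y + s, x:x + s].ravel())
            pos.append((y, x))
    return np.array(out), pos

def compress(X, w, out_shape, s=3, alpha=0.01, iters=5):
    """Single-scale sketch of the importance-driven EM iteration (Steps 1-3):
    forward and inverse patch searches vote importance-weighted colors into
    the target Z, which is then re-estimated as the vote average."""
    rng = np.random.default_rng(0)
    Z = rng.random(out_shape)            # random initial compressed texture
    Xp, _ = patches(X, s)
    Wp, _ = patches(w, s)                # importance weights, same layout
    for _ in range(iters):
        acc = np.zeros(out_shape)        # weighted color votes
        norm = np.zeros(out_shape)       # accumulated weights
        Zp, Zpos = patches(Z, s)
        # Step 1 (forward): each target patch finds its nearest source patch.
        for (zy, zx), zpatch in zip(Zpos, Zp):
            i = np.argmin(((Xp - zpatch) ** 2).sum(axis=1))
            wq = Wp[i].mean()            # patch importance = mean pixel weight
            v = alpha * wq * Wp[i]
            acc[zy:zy + s, zx:zx + s] += (v * Xp[i]).reshape(s, s)
            norm[zy:zy + s, zx:zx + s] += v.reshape(s, s)
        # Step 2 (inverse): each source patch finds its nearest target patch.
        for xi, xpatch in enumerate(Xp):
            j = np.argmin(((Zp - xpatch) ** 2).sum(axis=1))
            zy, zx = Zpos[j]
            v = Wp[xi].mean() * Wp[xi]
            acc[zy:zy + s, zx:zx + s] += (v * Xp[xi]).reshape(s, s)
            norm[zy:zy + s, zx:zx + s] += v.reshape(s, s)
        # Step 3: average all votes per target pixel.
        Z = np.where(norm > 0, acc / np.maximum(norm, 1e-12), Z)
    return Z
```

A full implementation would run this coarse-to-fine over a Gaussian pyramid and use an approximate nearest-neighbor search, as the brute-force loops here are far too slow for real textures.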
Step four: decompression based on image analogy
Given a pair of images A and A′ (the original image and the image generated from it by some processing) and another unprocessed target image B, the image analogy method computes a new image B′ such that:
A : A′ :: B : B′
i.e. B′ relates to B as A′ relates to A. The image analogy algorithm learns the relation between A and A′ and applies the learned result to generate B′. Here we apply it to the decompression of the compressed data.
We let the compressed control map and the compressed texture correspond respectively to A and A′ of the image analogy, take the original control map as B, and use image analogy to synthesize the new B′; B′ is the decompression result image.
The original image is thus recovered by the image analogy algorithm, yielding the decompressed result.
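A naive nearest-neighbor sketch of the A : A′ :: B : B′ step (far simpler than a full image-analogies implementation) is given below; the function name and the single-channel simplification are ours.

```python
import numpy as np

def analogy_decompress(A, Ap, B, s=3):
    """Naive image-analogy sketch (A : A' :: B : B'): for each s x s
    neighborhood of B, find the most similar neighborhood of A and copy
    the corresponding center pixel of A' into B'."""
    r = s // 2
    Apad = np.pad(A, r, mode='edge')
    Bpad = np.pad(B, r, mode='edge')
    Ha, Wa = A.shape
    Hb, Wb = B.shape
    # Flatten every neighborhood of A once, keyed by its center pixel.
    feats = np.array([Apad[y:y + s, x:x + s].ravel()
                      for y in range(Ha) for x in range(Wa)])
    centers = [(y, x) for y in range(Ha) for x in range(Wa)]
    Bp = np.zeros_like(B, dtype=np.float64)
    for y in range(Hb):
        for x in range(Wb):
            f = Bpad[y:y + s, x:x + s].ravel()
            i = np.argmin(((feats - f) ** 2).sum(axis=1))
            cy, cx = centers[i]
            Bp[y, x] = Ap[cy, cx]       # copy the analogous A' pixel
    return Bp
```

If B equals A exactly, every neighborhood matches itself and B′ reproduces A′, which is the sanity check one would expect of the analogy.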
The invention describes an importance-driven texture compression algorithm: importance-driven texture compression is mapped to the optimization of an energy function. Importance information is fused into the energy function and the solver of the energy function is re-derived; the gray-scale map is adopted as the control map of the texture, and formula (2) is used to compute the importance map. The method preserves the important regions of a texture very well during compression, with high compression efficiency.
The advantages of the present invention are as follows:
(1) Adaptive compression that preserves important information. The importance concept is used to compress the texture image adaptively, so that important regions of the image are compressed at higher quality. Unlike previous compression algorithms, the compression result of the algorithm is itself a meaningful texture, and this result can in turn be used to synthesize new textures.
(2) A novel, non-classical approach. The method compresses textures by energy optimization, obtaining the compressed texture from the minimum of an iteratively solved energy function; the approach is novel and of significant originality.
(3) Simple implementation and high running efficiency. The energy optimization is simple to compute: an EM-like algorithm iteratively solves for the energy minimum, requiring only a simple design of Steps 1, 2, and 3. The searches in the energy optimization are massively parallel, lending themselves to GPU-accelerated parallel computation, which greatly improves the running efficiency of the algorithm.
(4) Convenience. The user only needs to provide an image; the algorithm computes its control map automatically and computes the importance map semi-automatically through user interaction, then performs the importance-driven texture compression. For decompression, only the control map of the original image is needed to re-synthesize it.
(5) Good controllability. The user needs no complex input parameters to control the algorithm; it suffices to modify the importance map, and the algorithm compresses the important regions designated by the importance map with high fidelity. The user can control the adaptive compression of the image by editing the importance map.
Brief description of the drawings
Fig. 1 is the technical framework diagram of the invention.
Fig. 2 is a schematic diagram of the pixel correspondences between the input image and the target image used in the derivation.
Embodiment
As shown in the figures, the present invention consists of two parts: texture image compression and decompression. During compression, a control map and an importance map are first computed for the input texture, and the importance-driven texture compression algorithm compresses the original texture to obtain the compressed texture and the compressed control map. During decompression, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy. The four calculation steps of the method are carried out as follows.
Steps one through four are identical to Steps one through four described above in the Summary of the Invention.

Claims (1)

1. An importance-driven texture compression method, consisting of two parts: texture image compression and decompression; during compression, a control map and an importance map are first computed for the input texture, and the importance-driven texture compression algorithm compresses the original texture to obtain the compressed texture and the compressed control map; during decompression, the compressed control map, the compressed texture, and the original control map are taken as input, and the decompressed output texture is obtained by image analogy; the concrete steps are as follows:
Step one: control-map computation
All control maps are gray-scale maps of the original image, and the gray-scale map preserves the luminance detail of the original image well; the YIQ computational model is adopted to convert the original (color) image to a gray-scale map, in which Y represents luminance; according to the corresponding model conversion matrix, RGB is converted to Y by formula (1):
Y = 0.299R + 0.587G + 0.114B    (1)
where R, G, and B are the red, green, and blue components respectively;
Step two: importance-map computation
The importance information of the texture directly affects the final result of the algorithm; to obtain better importance information, the saliency map of the image is computed first, with the same resolution as the original image; the saliency map is computed with the Saliency Filters algorithm: first the image is abstracted, i.e. the relevant structural features are retained and some unneeded details are removed; then the distribution of bottom-up cues such as uniqueness of elements and colors is computed in the image; finally both kinds of information are combined into the saliency map saliency;
Suppose the input image is X; formula (2) is adopted to compute the importance value w(x, y) of each pixel of the importance map:
$w(x, y) = k \cdot saliency(x, y) + \left| \frac{\partial}{\partial x} I(x, y) \right| + \left| \frac{\partial}{\partial y} I(x, y) \right|$    (2)
where saliency(x, y) is the saliency value at pixel (x, y), and k is the weight of the saliency information; usually k = 1 gives satisfactory results; I(x, y) is the luminance at pixel (x, y), and the last two terms of the equation are the absolute values of the x- and y-direction components of the gradient at pixel (x, y); furthermore, the importance values computed by formula (2) are normalized so that every pixel of the importance map has a value in [0, 1]; the larger w(x, y), the higher the importance of the corresponding pixel;
Step three: importance-driven texture compression
After the importance map and the control map have been obtained, the original image and the control map are compressed;
In this method texture compression is formulated as an energy-function optimization problem; the energy function measures the bidirectional similarity between all pixel neighborhoods of the original image and those of the compressed image; for an input image X, which is the original image or the control map, the energy function between X and the compression target Z, taking importance information into account, is given by formula (3):
$E(Z) = \sum_{p \in X^+} w_{x_p} \| x_p - z_p \|^2 + \alpha \sum_{q \in Z^+} w_{x_q} \| z_q - x_q \|^2$    (3)
where z/x are sample values of Z/X; q and p are pixel positions in the subsets $Z^+$ and $X^+$ of Z and X respectively; $x_p$ / $z_q$ denote the spatial neighborhoods centered at points p/q; $z_p$ / $x_q$ are the neighborhoods of Z/X most similar to $x_p$ / $z_q$; α is a user-adjustable weight; $w_{x_p}$ is the importance weight of the neighborhood $x_p$ centered at p, taken as the mean of the pixel weights of the neighborhood; the importance $w_{x_q}$ of region $x_q$ is computed the same way; $\|\cdot\|^2$ measures the distance between two neighborhoods, computed as the sum of squared color differences of corresponding pixels of the neighborhoods;
The energy function of formula (3) is the sum of two terms; the two are similar in form but computed differently;
the first term, called the inverse term, guarantees that every neighborhood $x_p$ of the input image X can find a similar $z_p$ in Z; the second term, called the forward term, guarantees that no new neighborhood $z_q$ in Z is dissimilar to every $x_q$ in X;
based on formula (3), the method for computing the color value at each pixel q of the target texture Z is derived below; the contribution of each pixel q ∈ Z to the total energy comprises contributions to both the forward and the inverse term, and the concrete contribution values are obtained by the following steps:
(1) Contribution of pixel q to the forward term
Let z_{q_1}, …, z_{q_m} be all neighborhoods of the target image Z that contain pixel q, where q_1, …, q_m are the neighborhood centers (each of which recovers q through the corresponding offset) and m is the number of such neighborhoods; m depends on the neighborhood size, e.g. m = 25 for a 5×5 neighborhood. Let x_{q_1}, …, x_{q_m} be their nearest neighborhoods in X, and write x_{q_i}(q) for the pixel of x_{q_i} at the location corresponding to q. The contribution of q to the forward term is then

α·Σ_{i=1}^{m} w(x_{q_i}(q))·(I_X(x_{q_i}(q)) − I_Z(q))²

where w(·) is the pixel importance value and I_X(·), I_Z(·) give the pixel color values of the input image X and the target image Z respectively (likewise below).
(2) Contribution of pixel q to the inverse term
Let x_{p_1}, …, x_{p_n} be the neighborhoods of X whose nearest neighborhoods z_{p_1}, …, z_{p_n} in Z contain pixel q, where p_1, …, p_n are the neighborhood centers and n is the number of such neighborhoods. Through the corresponding offsets they yield pixels x_{p_1}(q), …, x_{p_n}(q) that correspond to pixel q of Z, so the contribution of q to the inverse term is

Σ_{j=1}^{n} w(x_{p_j}(q))·(I_X(x_{p_j}(q)) − I_Z(q))²
In summary, the energy attributed to a single pixel q is the sum of the forward and inverse contributions above:

E(q) = Σ_{j=1}^{n} w(x_{p_j}(q))·(I_X(x_{p_j}(q)) − I_Z(q))² + α·Σ_{i=1}^{m} w(x_{q_i}(q))·(I_X(x_{q_i}(q)) − I_Z(q))² (4)

Differentiating this equation with respect to I_Z(q) and setting the derivative to zero yields the solution

I_Z(q) = (Σ_{j=1}^{n} w(x_{p_j}(q))·I_X(x_{p_j}(q)) + α·Σ_{i=1}^{m} w(x_{q_i}(q))·I_X(x_{q_i}(q))) / (Σ_{j=1}^{n} w(x_{p_j}(q)) + α·Σ_{i=1}^{m} w(x_{q_i}(q))) (5)
(3) Energy-optimization method
Based on formula (5), I_Z(q) is solved by the following iterative energy-optimization steps:
Step 1: For each target pixel q ∈ Z, find in the input image X the neighborhood x_q most similar to its neighborhood z_q; multiply the color value of every pixel of x_q by that pixel's importance weight and by the weight α, and vote the resulting values to the corresponding pixels of z_q.
Step 2: Conversely, for each pixel p ∈ X, find in Z the neighborhood z_p most similar to its neighborhood x_p; multiply the color value of every pixel of x_p by that pixel's importance weight, and vote the resulting values to the corresponding pixels of z_p.
Step 3: For each target pixel q ∈ Z, average all the votes it has received, i.e. divide their sum by the sum of the corresponding weights, obtaining the new I_Z(q).
Repeat steps 1 to 3 until convergence.
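The three voting steps can be sketched as one loop, again brute-force and single-channel for clarity; the names `optimize_target` and `_patches` are illustrative, not from the patent:

```python
import numpy as np

def _patches(img, s):
    """All s-by-s neighborhoods of img, flattened, with top-left coordinates."""
    h, w = img.shape
    P, pos = [], []
    for i in range(h - s + 1):
        for j in range(w - s + 1):
            P.append(img[i:i + s, j:j + s].ravel())
            pos.append((i, j))
    return np.asarray(P, float), pos

def optimize_target(X, wX, Z0, s=3, alpha=1.0, iters=5):
    """Steps 1-3 as a voting loop: matched input neighborhoods cast
    importance-weighted color votes into the target, and each target pixel
    becomes its vote sum divided by its weight sum, as in formula (5)."""
    Z = Z0.astype(float).copy()
    PX, posX = _patches(X, s)
    PW, _ = _patches(wX, s)
    for _ in range(iters):
        PZ, posZ = _patches(Z, s)
        d = ((PX[:, None, :] - PZ[None, :, :]) ** 2).sum(-1)
        votes = np.zeros_like(Z)
        wsum = np.zeros_like(Z)
        # step 1: each target neighborhood z_q pulls its nearest x_q (weight alpha * w)
        for qi, (i, j) in enumerate(posZ):
            pi = int(d[:, qi].argmin())
            w = alpha * PW[pi].reshape(s, s)
            votes[i:i + s, j:j + s] += w * PX[pi].reshape(s, s)
            wsum[i:i + s, j:j + s] += w
        # step 2: each input neighborhood x_p votes into its nearest z_p (weight w)
        for pi in range(len(posX)):
            i, j = posZ[int(d[pi].argmin())]
            w = PW[pi].reshape(s, s)
            votes[i:i + s, j:j + s] += w * PX[pi].reshape(s, s)
            wsum[i:i + s, j:j + s] += w
        # step 3: normalize the vote sums into the new target colors
        Z = np.where(wsum > 0, votes / np.maximum(wsum, 1e-12), Z)
    return Z
```

In practice convergence would be tested by monitoring the energy of formula (3) rather than running a fixed number of iterations.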
Step 4: decompression based on image analogy
Given a pair of images A and A', where A' is the result of applying some processing to A, and an unprocessed target image B, the image-analogy method computes a new image B' such that the relation of B' to B matches that of A' to A; the algorithm learns the relation between A and A' and applies the learned result to generate B'. Here it is applied to decompressing the compressed data.
The compressed control map and the compressed texture are taken as A and A' of the image analogy respectively, and the original control map as B; image-analogy synthesis then produces the new image B', which is the decompression result. The original texture is thus recovered by the image-analogy algorithm, yielding the decompressed output.
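A minimal brute-force sketch of this analogy-based decompression, assuming single-channel images (the names `analogy` and `_patches` are illustrative): for every neighborhood of B, the best-matching neighborhood of A is found and the co-located pixel of A' is copied into B'.

```python
import numpy as np

def _patches(img, s):
    """All s-by-s neighborhoods of img, flattened, with top-left coordinates."""
    h, w = img.shape
    P, pos = [], []
    for i in range(h - s + 1):
        for j in range(w - s + 1):
            P.append(img[i:i + s, j:j + s].ravel())
            pos.append((i, j))
    return np.asarray(P, float), pos

def analogy(A, Ap, B, s=3):
    """A : A' :: B : B'. A is the compressed control map, Ap the compressed
    texture, B the original control map; B' is the decompressed texture.
    Border pixels not covered by a neighborhood center remain zero in this
    sketch."""
    PA, posA = _patches(A, s)
    PB, posB = _patches(B, s)
    Bp = np.zeros_like(B, dtype=float)
    c = s // 2
    for qi, (i, j) in enumerate(posB):
        pi = int(((PA - PB[qi]) ** 2).sum(-1).argmin())  # best match in A
        ai, aj = posA[pi]
        Bp[i + c, j + c] = Ap[ai + c, aj + c]            # transfer the A -> A' relation
    return Bp
```

The full image-analogies algorithm additionally matches against the partially synthesized B' and works coarse-to-fine over a pyramid; the single-level matching above only illustrates the A : A' :: B : B' transfer.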
Publications (2)

Publication Number Publication Date
CN103198495A CN103198495A (en) 2013-07-10
CN103198495B (en) 2015-10-07
