CN104021523A - Novel method for image super-resolution amplification based on edge classification - Google Patents


Info

Publication number
CN104021523A
CN104021523A (application CN201410193840.4A)
Authority
CN
China
Prior art keywords: image, clip, beta, alpha, interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410193840.4A
Other languages
Chinese (zh)
Other versions
CN104021523B (en)
Inventor
端木春江
王泽思
李林伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN201410193840.4A
Publication of CN104021523A
Application granted
Publication of CN104021523B
Active legal status
Anticipated expiration


Abstract

The invention discloses a novel method for image super-resolution amplification based on edge classification. First, edge detection is performed on the low-resolution image to obtain a binarized edge-indicator image. Next, an initial interpolation of the low-resolution image produces an initial high-resolution image. Then, 3 × 3 blocks are extracted from the high-resolution image, each block is classified according to the direction of the edges in the corresponding region of the binarized image, and certain pixels in each block are re-interpolated according to the block's class. Because the method takes the direction of the edges into account when interpolating, the details of the magnified image, especially the edges and the regions near them, remain sharp. Experiments show that the reconstructed image closely resembles the original high-resolution image.

Description

A novel method for image super-resolution amplification based on edge classification
Technical field
The present invention relates to a novel image super-resolution amplification method in image processing. In particular, small blocks of the initially magnified high-resolution image are classified according to the edge points in the corresponding region of the low-resolution image, and certain pixels within each block are then re-interpolated according to this classification.
Background technology
With the rapid development of digital cameras and Internet technology, the demand for high-resolution digital images grows daily. However, limited by network bandwidth and storage space, storing and transmitting high-resolution images is costly and consumes considerable system resources, requiring large storage space or large bandwidth. For this reason, digital image compression techniques have been developed and international image compression standards such as JPEG and JPEG2000 have been produced. Current lossless image compression methods generally achieve a compression ratio below 4, while lossy compression methods introduce distortion such as block artifacts and ringing artifacts. Consequently, image super-resolution amplification has attracted wide attention and has become a hot research topic in image processing in recent years.
The goal of image super-resolution amplification is to obtain a high-resolution image from a low-resolution one. In this way, only the low-resolution image needs to be transmitted or stored, and the receiver or display side applies super-resolution amplification to recover a high-resolution image. Super-resolution amplification can also be combined with existing image compression techniques to further reduce the system resources consumed in storing and transmitting high-resolution images. Likewise, the storage and transmission of high-definition video can use image super-resolution methods to reduce resource consumption.
At present, image super-resolution methods fall into interpolation-based methods and example-based methods. Example-based methods must first build, by training, a sample database of low-resolution image blocks paired with their corresponding high-resolution blocks; their computational cost is very large, making real-time application difficult. Interpolation-based methods have low computational complexity but easily blur the image, leaving its edges indistinct. Since the human eye is especially sensitive to the edges of an image, such distortion greatly degrades its visual quality.
For this reason, the present invention proposes a method that, after the initial magnification, revises the values of the edge pixels and nearby pixels in the magnified image according to the edges of the image, so that the interpolation direction agrees with the direction of the edges and a sharp-edged magnified image is obtained.
Summary of the invention
In view of the above defects of the prior art, the present invention classifies the edge situations in the image and then applies a corresponding re-interpolation to each class. Using the edge information of the low-resolution image as heuristic guidance, interpolation follows the direction of the edges, yielding a magnified image with clear edges and overcoming the blurred-edge shortcoming of existing super-resolution interpolation methods.
To achieve this goal, the invention provides a method that, after edge detection, sorts the edges into 11 different classes and re-interpolates accordingly. The magnification factor discussed here is 2 × 2, that is, 2× horizontally and 2× vertically, although the invention is not restricted to 2 × 2 magnification. The invention comprises:
Step 1: perform an initial interpolation-based super-resolution amplification to obtain an initial high-resolution image;
Step 2: perform edge detection and extraction on the low-resolution image;
Step 3: binarize the edge-extracted image, so that the value at every non-edge point becomes 0 and the value at every edge point becomes 1. That is, in the binarized image, pixels on image edges have value 1 and all other pixels have value 0;
Step 4: set x = 0, y = 0;
Step 5: taking (x, y) as the upper-left corner, extract a 3 × 3 block from the high-resolution image, and taking (x/2, y/2) as the upper-left corner, extract the corresponding 2 × 2 block from the binarized edge image of the low-resolution image. According to this 2 × 2 binarized edge block, classify the extracted high-resolution block into one of the following 11 classes: top edge, bottom edge, left edge, right edge, lower-left diagonal edge, lower-right diagonal edge, upper-left corner edge, lower-left corner edge, upper-right corner edge, lower-right corner edge, and other. Then, according to the edge class, re-interpolate the edge pixels of the extracted 3 × 3 block of the initial magnified image and the pixels near them;
Step 6: set x = x + 2; if x ≤ 2W − 5 (W is the width of the low-resolution image), jump to Step 5 to process the next block;
Step 7: set y = y + 2; if y ≤ 2H − 5 (H is the height of the low-resolution image), jump to Step 5 to process the next block;
Step 8: the super-resolution amplification of the current low-resolution image is complete, yielding a high-resolution image of size (2W) × (2H).
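Read as a raster scan, Steps 4-8 can be sketched as follows. This is a minimal Python sketch under the assumption that Step 6 loops over x within each row and Step 7 advances to the next row; the four callables stand in for the operations the patent defines elsewhere and are parameters here, not names from the patent:

```python
def super_resolve(low_img, upscale, edge_binarize, classify_block, reinterpolate):
    """Skeleton of Steps 1-8: initial interpolation, edge detection and
    binarization, then a raster scan of 3x3 high-resolution blocks in
    steps of 2 pixels, each classified and re-interpolated."""
    H, W = len(low_img), len(low_img[0])
    high = upscale(low_img)                    # Step 1
    edges = edge_binarize(low_img)             # Steps 2-3
    y = 0                                      # Step 4
    while y <= 2 * H - 5:                      # Step 7 loop bound
        x = 0
        while x <= 2 * W - 5:                  # Step 6 loop bound
            cls = classify_block(edges, x // 2, y // 2)   # Step 5
            reinterpolate(high, x, y, cls)
            x += 2
        y += 2
    return high                                # Step 8: (2W) x (2H) image
```

With a 4 × 4 low-resolution input this scan visits four 3 × 3 blocks, whose 2 × 2 edge blocks start at (0,0), (1,0), (0,1) and (1,1).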
Further, in Step 1 the bilinear interpolation method is adopted. That is, for the following image block:
A  a  B
b  c  d
C  e  D
where A, B, C, D are the pixel values of pixels from the low-resolution image, and a, b, c, d, e are the pixels to be interpolated on the high-resolution image. In bilinear interpolation, their values are obtained as follows:
a = clip((A + B)/2)
b = clip((A + C)/2)
c = clip((A + B + C + D)/4)
d = clip((B + D)/2)
e = clip((C + D)/2)
where the clip(x) function restricts the value of x to the valid range of a pixel value, that is, clip(x) = max(I_min, min(x, I_max)). Here, I_min and I_max are the minimum and maximum values a pixel may take.
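As a concrete illustration, the bilinear interpolation of one 2× cell together with the clip function might be written as follows (a sketch; the function names are illustrative, not from the patent):

```python
def clip(x, i_min=0, i_max=255):
    """Restrict x to the valid pixel range: clip(x) = max(I_min, min(x, I_max))."""
    return max(i_min, min(int(round(x)), i_max))

def bilinear_block(A, B, C, D):
    """Fill the five unknown pixels a, b, c, d, e of one 2x magnification
    cell from the four known low-resolution pixels A, B, C, D."""
    a = clip((A + B) / 2)
    b = clip((A + C) / 2)
    c = clip((A + B + C + D) / 4)
    d = clip((B + D) / 2)
    e = clip((C + D) / 2)
    return a, b, c, d, e
```

For example, bilinear_block(10, 20, 30, 40) gives (15, 20, 25, 30, 35), and out-of-range averages are clipped to [0, 255].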
Further, in Step 2 the Canny operator is adopted for edge extraction, computed as follows.
First, smooth the image with a Gaussian filter; that is, choose a Gaussian smoothing function whose frequency-domain transfer function is:
H(u, v) = e^(−D²(u, v)/(2σ²))
where D(u, v) denotes the distance from the origin of the Fourier transform, σ is the standard deviation controlling the spread of the Gaussian curve, and H(u, v) is the resulting transfer function. To avoid blurring the edges excessively, the invention uses a convolution template of small extent. Specifically, the following 5 × 5 template is convolved with the low-resolution image:
H(m, n) = (1/256) ×
1   4   6   4   1
4  16  24  16   4
6  24  36  24   6
4  16  24  16   4
1   4   6   4   1
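A minimal sketch of the smoothing step with this template; border handling is an assumption here, since the patent does not specify it:

```python
# 5x5 Gaussian smoothing template from the patent, applied with a 1/256 scale.
KERNEL = [
    [1, 4, 6, 4, 1],
    [4, 16, 24, 16, 4],
    [6, 24, 36, 24, 6],
    [4, 16, 24, 16, 4],
    [1, 4, 6, 4, 1],
]

def gaussian_smooth(img):
    """Convolve img (list of lists of gray values) with the 5x5 template.
    Border pixels are left unchanged for simplicity (an assumption; the
    patent does not state its border handling)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            acc = 0
            for di in range(-2, 3):
                for dj in range(-2, 3):
                    acc += KERNEL[di + 2][dj + 2] * img[i + di][j + dj]
            out[i][j] = acc // 256
    return out
```

Since the template's coefficients sum to 256, a constant image passes through unchanged, which is a quick sanity check on the normalization.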
Second, compute the gradient magnitude and direction. First, choose a pair of first-order difference convolution templates:
H1 = | −1  −1 |        H2 = | 1  −1 |
     |  1   1 |             | 1  −1 |
Then the gradients Ψ1(m, n) and Ψ2(m, n) of the low-resolution image f(m, n) (m is the vertical coordinate and n the horizontal coordinate) along the two orthogonal directions of H1 and H2 are defined as:
Ψ1(m, n) = f(m, n) * H1(m, n)
Ψ2(m, n) = f(m, n) * H2(m, n)
Further computation then yields the edge magnitude and direction:
Ψ(m, n) = √(Ψ1²(m, n) + Ψ2²(m, n))
θΨ = tan⁻¹(Ψ2(m, n) / Ψ1(m, n))
In the method for original Canny operator, the gradient magnitude with having calculated is carried out to non-maximum value (NMS) convergence; The present invention improves this step.First quantize edge angle θ Ψfor θ 1, θ 1∈ 0 °, and 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 ° }.Then, if θ 1=0 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m, n+1); If θ 1=45 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m-1, n+1); If θ 1=90 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m-1, n); If θ 1=135 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m-1, n-1); If θ 1=180 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m, n-1); If θ 1=225 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m+1, n-1); If θ 1=270 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m+1, n), if θ 1=315 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m+1, n+1).Like this, can reduce calculated amount, and obtain good effect.
Finally, a dual-threshold algorithm is used to detect and connect the image edges. That is, when the gradient magnitude of an initial edge point detected in the previous step satisfies Ψ(m, n) > T_h, the point is declared an image edge point. Then, taking these edge points as seeds, their neighboring points are scanned, and any neighbor with Ψ(m, n) > T_l is added to the set of edge points. Based on extensive experiments, the invention sets the two thresholds to T_h = 200 and T_l = 100.
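The dual-threshold linking step could be sketched as a seed-and-grow pass; 8-connectivity is an assumption here, since the patent only says "adjacent points around":

```python
from collections import deque

def hysteresis(mag, t_h=200, t_l=100):
    """Dual-threshold edge linking: points with magnitude > T_h seed the
    edge set; 8-connected neighbors with magnitude > T_l are grown in."""
    h, w = len(mag), len(mag[0])
    edge = [[0] * w for _ in range(h)]
    queue = deque((i, j) for i in range(h) for j in range(w) if mag[i][j] > t_h)
    for i, j in queue:
        edge[i][j] = 1
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not edge[ni][nj] \
                        and mag[ni][nj] > t_l:
                    edge[ni][nj] = 1
                    queue.append((ni, nj))
    return edge
```

A point with magnitude between T_l and T_h is kept only if it can be reached from some strong seed, which is what connects broken edge segments.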
Further, in Step 5 the classification of an image block according to the edges in its 3 × 3 region proceeds as follows. For a block extracted from the high-resolution image
a11  a12  a13
a21  a22  a23
a31  a32  a33
the pixels a11, a13, a31, a33 belong to the low-resolution image, while the remaining pixels of the block are obtained by interpolation. The block is classified according to the positions at which edge points appear within it, as given by the binarized edge map of the corresponding low-resolution region. The classes are as follows:
(a) Top edge class: a11 and a13 are image edge points. The pixels to be re-interpolated are a21, a22, a23:
a21 = clip(α × a11 + β × a31)
a22 = clip(α × a12 + β × a32)
a23 = clip(α × a13 + β × a33)
where clip is as defined above, and α and β are two interpolation weights satisfying α + β = 1.
(b) Bottom edge class: a31 and a33 are image edge points. The pixels to be re-interpolated are a21, a22, a23:
a21 = clip(β × a11 + α × a31)
a22 = clip(β × a12 + α × a32)
a23 = clip(β × a13 + α × a33)
(c) Left edge class: a11 and a31 are image edge points. The pixels to be re-interpolated are a12, a22, a32:
a12 = clip(α × a11 + β × a13)
a22 = clip(α × a21 + β × a23)
a32 = clip(α × a31 + β × a33)
(d) Right edge class: a13 and a33 are image edge points. The pixels to be re-interpolated are a12, a22, a32:
a12 = clip(β × a11 + α × a13)
a22 = clip(β × a21 + α × a23)
a32 = clip(β × a31 + α × a33)
(e) Lower-left diagonal edge class: a11 and a33 are image edge points. The pixels to be re-interpolated are a22, a12, a21, a23, a32:
a22 = clip((a11 + a33)/2)
a12 = clip(α × a11 + β × a13)
a23 = clip(α × a33 + β × a13)
a21 = clip(α × a11 + β × a31)
a32 = clip(α × a33 + β × a31)
(f) Lower-right diagonal edge class: a13 and a31 are image edge points. The pixels to be re-interpolated are a22, a12, a21, a23, a32:
a22 = clip((a13 + a31)/2)
a12 = clip(β × a11 + α × a13)
a23 = clip(β × a33 + α × a13)
a21 = clip(β × a11 + α × a31)
a32 = clip(β × a33 + α × a31)
(g) Upper-left corner edge class: a11, a13 and a31 are image edge points. The pixels to be re-interpolated are a22, a23, a32:
a22 = clip(α × (a12 + a21)/2 + β × (a11 + a31 + a13)/3)
a23 = clip(α × (a22 + a13)/2 + β × a33)
a32 = clip(α × (a22 + a31)/2 + β × a33)
(h) Lower-left corner edge class: a11, a31 and a33 are image edge points. The pixels to be re-interpolated are a22, a12, a23:
a22 = clip(α × (a21 + a32)/2 + β × (a11 + a31 + a33)/3)
a12 = clip(α × (a11 + a22)/2 + β × a13)
a23 = clip(α × (a22 + a33)/2 + β × a13)
(i) Upper-right corner edge class: a11, a13 and a33 are image edge points. The pixels to be re-interpolated are a22, a21, a32:
a22 = clip(α × (a12 + a23)/2 + β × (a11 + a13 + a33)/3)
a21 = clip(α × (a11 + a22)/2 + β × a31)
a32 = clip(α × (a22 + a33)/2 + β × a31)
(j) Lower-right corner edge class: a31, a13 and a33 are image edge points. The pixels to be re-interpolated are a22, a21, a12:
a22 = clip(α × (a32 + a23)/2 + β × (a13 + a31 + a33)/3)
a21 = clip(α × (a31 + a22)/2 + β × a11)
a12 = clip(α × (a13 + a22)/2 + β × a11)
(k) Other class: all remaining situations belong to this class. For blocks of this class, the invention does not re-interpolate the extracted high-resolution block.
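One way to read the classification (a)-(k) is as a mapping from the set of corner edge points among a11, a13, a31, a33 to a class name. The following sketch infers that mapping from the edge-point lists above; the mapping itself is a reading of the text, not a table from the patent:

```python
# Map from the set of edge corners (TL, TR, BL, BR, i.e. a11, a13, a31, a33
# of the 3x3 block) to the classes described above.
CLASSES = {
    frozenset({'TL', 'TR'}): 'top edge',
    frozenset({'BL', 'BR'}): 'bottom edge',
    frozenset({'TL', 'BL'}): 'left edge',
    frozenset({'TR', 'BR'}): 'right edge',
    frozenset({'TL', 'BR'}): 'lower-left diagonal edge',
    frozenset({'TR', 'BL'}): 'lower-right diagonal edge',
    frozenset({'TL', 'TR', 'BL'}): 'upper-left corner edge',
    frozenset({'TL', 'BL', 'BR'}): 'lower-left corner edge',
    frozenset({'TL', 'TR', 'BR'}): 'upper-right corner edge',
    frozenset({'TR', 'BL', 'BR'}): 'lower-right corner edge',
}

def classify(edge_block):
    """Classify a 2x2 binarized edge block [[e11, e13], [e31, e33]] into one
    of the 11 classes; any pattern not listed falls into 'other'."""
    names = ('TL', 'TR', 'BL', 'BR')
    flat = (edge_block[0][0], edge_block[0][1],
            edge_block[1][0], edge_block[1][1])
    corners = frozenset(n for n, e in zip(names, flat) if e)
    return CLASSES.get(corners, 'other')
```

Patterns with zero, one, or all four edge corners have no listed class, so they fall into 'other' and keep the original interpolation, consistent with class (k).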
In summary, the present invention first performs an initial interpolation magnification; any existing interpolation method may be chosen here, and the present method then improves its result. Because bilinear interpolation has low computational complexity and good performance, the invention adopts it for the initial magnification. The invention then applies the Canny operator to extract the edges of the low-resolution image. Next, 3 × 3 blocks are extracted from the high-resolution image and classified by the position and direction of the edges in the low-resolution image: when a block belongs to one of the first ten classes, it is re-interpolated according to its edge class; when it belongs to the other class, the original interpolation is retained. The next block is then extracted and processed in the same way, until every block of the high-resolution image has been processed, finally yielding a sharp-edged high-resolution image. The innovation lies in extracting 3 × 3 blocks of the high-resolution image, classifying them with the edge information of the low-resolution image, and either re-interpolating them in a targeted way according to their class or retaining their original values, so that the edges of the image stand out more clearly and the quality of the magnified image improves.
The design, concrete structure, and technical effects of the present invention are further described below with reference to the accompanying drawings, so that its objects, features, and effects can be fully understood.
Brief description of the drawings
Fig. 1 is the flow chart of the edge-classification-based super-resolution image reconstruction algorithm of the present invention;
Fig. 2 shows experimental results of the edge-classification-based super-resolution image reconstruction method of the present invention.
Embodiment
An embodiment of the invention is described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and concrete operating process, but the protection scope of the invention is not limited to the following embodiment.
As shown in Fig. 1, the novel edge-classification-based image magnification method of the present invention proceeds as follows:
Step 1: perform the initial super-resolution amplification by bilinear interpolation to obtain the initial high-resolution image, exactly as described for Step 1 in the Summary above. For an image with 8-bit luminance, I_min = 0 and I_max = 255.
Step 2: perform edge detection and extraction on the low-resolution image using the Canny operator, exactly as described for Step 2 in the Summary above (5 × 5 Gaussian smoothing template, improved non-maximum suppression over the eight quantized directions, and dual thresholds T_h = 200, T_l = 100).
Step 3: binarize the edge-extracted image so that edge pixels take the value 1 and non-edge pixels take the value 0;
Step 4: set x = 0, y = 0;
Step 5: taking (x, y) as the upper-left corner, extract a 3 × 3 block from the high-resolution image and, taking (x/2, y/2) as the upper-left corner, the corresponding 2 × 2 block from the binarized edge image; classify the high-resolution block into one of the 11 classes and re-interpolate certain of its pixels according to its class, using the classification procedure and interpolation formulas (a)-(k) given for Step 5 in the Summary above.
Here, based on extensive experiments, the invention sets α = 0.7 and β = 0.3.
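With these weights, the re-interpolation of, for example, the top-edge class (a) can be sketched as follows (function names are illustrative, not from the patent):

```python
def clip(x, i_min=0, i_max=255):
    """clip(x) = max(I_min, min(x, I_max)) for 8-bit luminance."""
    return max(i_min, min(int(round(x)), i_max))

def reinterpolate_top_edge(block, alpha=0.7, beta=0.3):
    """Re-interpolate the middle row of a 3x3 block for the top-edge class
    (a11 and a13 are edge points): each middle pixel is a weighted mix of
    the pixel directly above (weight alpha) and below (weight beta)."""
    a = [row[:] for row in block]  # work on a copy
    for j in range(3):
        a[1][j] = clip(alpha * a[0][j] + beta * a[2][j])
    return a
```

With α = 0.7 the re-interpolated row is pulled toward the edge row above it, which is exactly how the method keeps a horizontal edge sharp.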
Step 6: set x = x + 2; if x ≤ 2W − 5 (W is the width of the low-resolution image), jump to Step 5 to process the next block;
Step 7: set y = y + 2; if y ≤ 2H − 5 (H is the height of the low-resolution image), jump to Step 5 to process the next block;
Step 8: the super-resolution amplification of the current low-resolution image is complete, yielding a high-resolution image of size (2W) × (2H).
The experiments mainly use face images from the FERET database; four images with differing skin tones were chosen for reconstruction. The original high-resolution images are 120 × 120. Taking every second pixel in each direction, the down-sampled degraded images are 60 × 60, one quarter of the original size, so the required interpolation magnification is 2× in each of the horizontal and vertical directions.
The face-database results are evaluated both subjectively and objectively: subjective evaluation judges by eye the visual effect of the whole image and the impression given by its details, while objective evaluation describes image quality numerically using standard formulas.
Fig. 2 shows the results of the proposed method. Column (a) shows the low-resolution images obtained by down-sampling the actual high-resolution images; column (b) shows the result of bilinear interpolation of the images in column (a); column (c) shows the edge maps extracted by the proposed method from the images in column (a); column (d) shows the high-resolution images produced by the proposed method from the images in column (a); and column (e) shows the actual high-resolution images.
As the figure shows, the proposed method reconstructs the edges and near-edge regions well. The reconstructed images are smooth overall, their edges are clearer, and the pixels near the edges are closer to the real image. Because differences in skin tone change the number of detected edges, the reconstruction results differ slightly from image to image.
Table 1: PSNR of the output images of the bilinear interpolation method and of the method of the present invention on actual FERET face database images
Table 2: SSIM of the output images of the bilinear interpolation method and of the method of the present invention on actual FERET face database images
The present invention mainly uses two objective evaluation methods, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), to evaluate the performance of the proposed method. The formula for computing the PSNR is as follows:
PSNR = 10 × log10((2^n − 1)² / MSE)
In the above formula, n is the number of bits used for an image luminance value; for example, n = 8 for 8-bit images. MSE is the mean squared error, defined as:
MSE = (1 / (2W × 2H)) × Σ_{i=0..2H−1} Σ_{j=0..2W−1} (f′(i, j) − f(i, j))²
where the image size is (2W) × (2H), f′(i, j) denotes the pixel value of the high-resolution image reconstructed by the method of the present invention, and f(i, j) denotes the pixel value of the original high-resolution image.
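As a sketch, the PSNR and MSE formulas above can be computed as follows, assuming 8-bit grayscale images stored as numpy arrays (the function name `psnr` is illustrative, not from the patent):

```python
import numpy as np

def psnr(f_true, f_rec, n_bits=8):
    """PSNR between an original and a reconstructed image, following the
    formulas above (n_bits = number of bits per luminance value)."""
    diff = f_rec.astype(np.float64) - f_true.astype(np.float64)
    mse = np.mean(diff ** 2)     # mean squared error over all pixels
    peak = 2 ** n_bits - 1       # 255 for 8-bit images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy check: a constant offset of one gray level gives MSE = 1,
# so PSNR = 10 * log10(255^2) ≈ 48.13 dB
a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)
print(round(psnr(a, b), 2))
```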
The formula for computing the structural similarity (SSIM) is as follows:
SSIM = ((2 μx μy + C1)(2 σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2))
Here μx and μy denote the means of the original image and of the reconstructed image, respectively, C1 and C2 are small constants that stabilize the luminance and contrast terms, σx and σy denote the standard deviations of the original and reconstructed images, and σxy denotes their covariance. The first factor measures the luminance and contrast similarity of the two images before and after reconstruction, and the second factor measures their structural similarity.
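A single-window sketch of the SSIM formula above. The constant values are an assumption here, taken from the common convention C1 = (k1·L)², C2 = (k2·L)² with L = 2^n − 1; the patent does not state its C1, C2 values:

```python
import numpy as np

def ssim_global(x, y, n_bits=8, k1=0.01, k2=0.03):
    """Single-window SSIM following the formula above; C1, C2 follow the
    common (assumed) convention C = (k * L)^2 with L = 2^n_bits - 1."""
    L = 2 ** n_bits - 1
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()          # means mu_x, mu_y
    vx, vy = x.var(), y.var()            # variances sigma_x^2, sigma_y^2
    cov = ((x - mx) * (y - my)).mean()   # covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(ssim_global(img, img))   # identical images give SSIM = 1.0
```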
For each of the four low-resolution images in Figure 2(a), the PSNR and SSIM between the super-resolution image reconstructed by the traditional bilinear interpolation method and the actual high-resolution image, and between the super-resolution image reconstructed by the proposed method and the actual high-resolution image, are computed; this yields Table 1 and Table 2. From these tables it can be seen that the proposed method attains higher PSNR and SSIM than the traditional bilinear interpolation method; its reconstruction quality and performance are superior.
From the data in Tables 1 and 2 together with Figure 2, it can also be seen that the more edges are detected, the more the super-resolution reconstruction method based on edge classification of the present invention benefits. This also shows that edge pixels have an important influence on interpolation-based magnification, and that the proposed reconstruction algorithm based on edge classification can reconstruct the edge points and the pixels near them more accurately, which is beneficial for practical applications.
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that a person of ordinary skill in the art can, without creative work, make many modifications and variations according to the concept of the present invention. Therefore, any technical solution that a person skilled in the art can obtain from the prior art through logical analysis, reasoning, or limited experimentation based on the concept of the present invention shall fall within the scope of protection determined by the claims.

Claims (4)

1. An image super-resolution reconstruction method with edge detection, in which, after edge detection, the edges are classified into 11 different classes and interpolation is then performed again. For a magnification factor of 2 × 2, i.e., 2× horizontally and 2× vertically (the invention is of course not limited to 2 × 2 magnification), the method comprises:
Step 1: perform an initial interpolation-based super-resolution magnification to obtain an initial high-resolution image;
Step 2: perform edge detection and extraction on the low-resolution image;
Step 3: binarize the edge-extracted image so that the binary value at non-edge points is 0 and the binary value at edge points is 1; that is, in the binarized image, pixels on image edges have value 1 and all other pixels have value 0;
Step 4: set x = 0, y = 0;
Step 5: with (x, y) as the upper-left corner, extract an image block of size 3 × 3 from the high-resolution image. From the binarized low-resolution image, extract a block of size 2 × 2 with (x/2, y/2) as the upper-left corner. According to this 2 × 2 binarized edge block, classify the extracted high-resolution block into one of the following 11 classes: upper edge class, lower edge class, left edge class, right edge class, lower-left diagonal edge class, lower-right diagonal edge class, upper-left corner edge class, lower-left corner edge class, upper-right corner edge class, lower-right corner edge class, and other class. Then, according to the edge class, re-interpolate the edge pixels and their neighboring pixels in the 3 × 3 block extracted from the initially magnified image;
Step 6: set x = x + 2; if x ≤ 2W − 5 (where W is the width of the low-resolution image), jump to step 5 to process the next image block;
Step 7: set x = 0 and y = y + 2; if y ≤ 2H − 5 (where H is the height of the low-resolution image), jump to step 5 to process the next image block;
Step 8: the super-resolution magnification of the current low-resolution image is finished; a high-resolution image of size (2W) × (2H) is obtained.
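The block-scanning loop of steps 4 through 8 can be sketched as follows. The function only enumerates the upper-left corners (x, y) visited by the loop, using the same bounds (x ≤ 2W − 5, y ≤ 2H − 5) and step of 2; the classification and re-interpolation of step 5 are detailed separately in claims 2 and 3 and are not repeated here:

```python
def scan_blocks(W, H):
    """Yield the upper-left corners (x, y) of the 3x3 high-resolution
    blocks visited by steps 4-8, for a W x H low-resolution image
    magnified 2x in each direction."""
    coords = []
    y = 0
    while y <= 2 * H - 5:        # step 7 bound
        x = 0
        while x <= 2 * W - 5:    # step 6 bound
            coords.append((x, y))  # step 5 operates on the block at (x, y)
            x += 2
        y += 2
    return coords

# for a 4x4 low-resolution image (8x8 after 2x magnification) the scan
# visits the corners (0,0), (2,0), (0,2), (2,2)
print(scan_blocks(4, 4))
```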
2. The image super-resolution reconstruction method with edge detection of claim 1, wherein in step 2 the Canny operator is computed concretely as follows:
First, the image is smoothed with a Gaussian filter; that is, under normal conditions a Gaussian smoothing function is chosen whose transfer function in the frequency domain is:
H(u, v) = e^(−D²(u, v) / (2σ²))
where D(u, v) is the distance from the point (u, v) to the origin of the frequency plane, σ controls the spread of the Gaussian curve, and H(u, v) gives the corresponding attenuation. To avoid making the edges too blurry, the present invention uses a convolution template of small extent.
Specifically, the present invention performs the convolution on the low-resolution image with the following 5 × 5 template:
H(m, n) = (1/256) ×
  |  1   4   6   4   1 |
  |  4  16  24  16   4 |
  |  6  24  36  24   6 |
  |  4  16  24  16   4 |
  |  1   4   6   4   1 |
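The smoothing step can be sketched by applying the 5 × 5 template above with direct convolution. Border handling is an assumption here (edge replication), since the patent does not specify it:

```python
import numpy as np

# the 5x5 Gaussian template above, normalized by 1/256
KERNEL = np.array([[1,  4,  6,  4, 1],
                   [4, 16, 24, 16, 4],
                   [6, 24, 36, 24, 6],
                   [4, 16, 24, 16, 4],
                   [1,  4,  6,  4, 1]], dtype=np.float64) / 256.0

def gaussian_smooth(img):
    """Smooth an image with the 5x5 kernel, replicating border pixels."""
    img = np.pad(img.astype(np.float64), 2, mode='edge')
    out = np.zeros((img.shape[0] - 4, img.shape[1] - 4))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+5, j:j+5] * KERNEL)
    return out

flat = np.full((8, 8), 100.0)
# the kernel weights sum to 1, so flat regions are left unchanged
print(gaussian_smooth(flat)[0, 0])
```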
Second, the amplitude and direction of the gradient are computed; that is, a pair of first-order difference convolution templates is first chosen:
H1 = | -1  -1 |        H2 = | 1  -1 |
     |  1   1 |             | 1  -1 |
Then the gradients Ψ1(m, n) and Ψ2(m, n) of the low-resolution image f(m, n) (m being the vertical coordinate and n the horizontal coordinate) along the two orthogonal directions of H1 and H2 are defined as:
Ψ1(m, n) = f(m, n) * H1(m, n)
Ψ2(m, n) = f(m, n) * H2(m, n)
From these, the strength and direction of the edge are obtained by further computation, as shown in the following formulas:
Ψ(m, n) = sqrt(Ψ1²(m, n) + Ψ2²(m, n))
θ_Ψ = arctan(Ψ2(m, n) / Ψ1(m, n))
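A sketch of the gradient formulas, evaluated at a single point of a toy image. For simplicity it uses correlation with the 2 × 2 templates (which for these templates differs from full convolution only by a sign) and `atan2` for a full-range angle; both choices are assumptions of this sketch:

```python
import math
import numpy as np

H1 = np.array([[-1, -1],
               [ 1,  1]], dtype=np.float64)
H2 = np.array([[ 1, -1],
               [ 1, -1]], dtype=np.float64)

def gradient_at(f, m, n):
    """Psi_1, Psi_2 at (m, n), combined into magnitude Psi and angle theta."""
    patch = f[m:m+2, n:n+2].astype(np.float64)
    p1 = np.sum(patch * H1)                 # vertical difference Psi_1
    p2 = np.sum(patch * H2)                 # horizontal difference Psi_2
    mag = math.hypot(p1, p2)                # Psi(m, n)
    ang = math.degrees(math.atan2(p2, p1))  # theta_Psi in degrees
    return mag, ang

# a horizontal step edge: rows 0-1 are 0, rows 2-3 are 10
f = np.array([[ 0,  0,  0,  0],
              [ 0,  0,  0,  0],
              [10, 10, 10, 10],
              [10, 10, 10, 10]])
mag, ang = gradient_at(f, 1, 1)
print(mag, ang)   # magnitude 20.0, angle 0.0 degrees
```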
In the method for original Canny operator, the gradient magnitude with having calculated is carried out to non-maximum value (NMS) convergence; The present invention improves this step.First quantize edge angle θ Ψfor θ 1, θ 1∈ 0 °, and 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 ° }.Then, if θ 1=0 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m, n+1); If θ 1=45 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m-1, n+1); If θ 1=90 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m-1, n); If θ 1=135 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m-1, n-1); If θ 1=180 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m, n-1); If θ 1=225 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m+1, n-1); If θ 1=270 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m+1, n), if θ 1=315 °, must just judge that current point is initial marginal point by Ψ (m, n) > Ψ (m+1, n+1).Like this, can reduce calculated amount, and obtain good effect.
Finally, a dual-threshold algorithm is used to detect and connect the image edges. That is, when the gradient magnitude Ψ(m, n) of an initial edge point detected in the previous step satisfies Ψ(m, n) > T_h, this point is determined to be an image edge point. Then, with these edge points as seeds, their neighboring points are scanned; when a neighbor's gradient magnitude satisfies Ψ(m, n) > T_l, that point is added to the set of image edge points. In the present invention, based on a large number of experiments and their results, the two parameters are set to T_h = 200 and T_l = 100.
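The dual-threshold step with T_h = 200 and T_l = 100 can be sketched as a flood fill: points above T_h seed the edge set, and 8-connected neighbours above T_l are added transitively (8-connectivity is an assumption here, as the patent only says "adjacent points"):

```python
import numpy as np
from collections import deque

def hysteresis(mag, t_h=200, t_l=100):
    """Edge map from gradient magnitudes using dual thresholds T_h, T_l."""
    edges = mag > t_h                      # strong points seed the edge set
    queue = deque(zip(*np.nonzero(edges)))
    while queue:
        m, n = queue.popleft()
        for dm in (-1, 0, 1):              # scan the 8-connected neighbours
            for dn in (-1, 0, 1):
                i, j = m + dm, n + dn
                if (0 <= i < mag.shape[0] and 0 <= j < mag.shape[1]
                        and not edges[i, j] and mag[i, j] > t_l):
                    edges[i, j] = True
                    queue.append((i, j))
    return edges

mag = np.array([[250, 150,  50],
                [ 50,  90,  90],
                [ 10,  10, 110]], dtype=float)
# 250 seeds and pulls in the adjacent 150; the 110 in the corner is above
# T_l but not connected to any strong point, so it is rejected
print(hysteresis(mag).astype(int))
```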
3. The edge classification of claim 1, and the method of re-interpolating according to that classification. The process of classifying an image block according to the edges in its 3 × 3 block can be described as follows. For the image block extracted from the high-resolution image
| a11  a12  a13 |
| a21  a22  a23 |
| a31  a32  a33 |
In this image block, the pixels a11, a13, a31, a33 belong to the low-resolution image, and the values of the remaining pixels in the block are obtained by interpolation. The block is classified according to the positions at which edge points can occur in it, using the binarized edge map of the corresponding low-resolution block. The classes are as follows:
(a) Upper edge class. Here a11 and a13 are image edge points, and the pixels to be re-interpolated are a21, a22, a23. The interpolation formulas are
a21 = clip(α × a11 + β × a31)
a22 = clip(α × a12 + β × a32)
a23 = clip(α × a13 + β × a33)
Here the function clip(x) restricts the value of x to the value range of a pixel, i.e., clip(x) = max(I_min, min(x, I_max)), where I_min and I_max are the minimum and maximum values a pixel may take. α and β are two weights satisfying the condition α + β = 1.
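The clip function and the class (a) re-interpolation can be sketched as follows, for 8-bit pixels and the weights α = 0.7, β = 0.3 given later in claim 4 (rounding to integer pixel values is an assumption of this sketch):

```python
I_MIN, I_MAX = 0, 255   # value range of an 8-bit pixel

def clip(x):
    """Restrict x to the pixel value range [I_MIN, I_MAX]."""
    return max(I_MIN, min(int(round(x)), I_MAX))

def reinterpolate_upper_edge(block, alpha=0.7, beta=0.3):
    """Class (a): recompute the middle row a21, a22, a23 from the rows
    above and below it, weighting the edge row (the top) by alpha."""
    for j in range(3):
        block[1][j] = clip(alpha * block[0][j] + beta * block[2][j])
    return block

b = [[200, 210, 220],
     [  0,   0,   0],
     [100, 110, 120]]
print(reinterpolate_upper_edge(b)[1])   # [170, 180, 190]
```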
(b) Lower edge class. Here a31 and a33 are image edge points, and the pixels to be re-interpolated are a21, a22, a23. The interpolation formulas are
a21 = clip(β × a11 + α × a31)
a22 = clip(β × a12 + α × a32)
a23 = clip(β × a13 + α × a33)
(c) Left edge class. Here a11 and a31 are image edge points, and the pixels to be re-interpolated are a12, a22, a32. The interpolation formulas are
a12 = clip(α × a11 + β × a13)
a22 = clip(α × a21 + β × a23)
a32 = clip(α × a31 + β × a33)
(d) Right edge class. Here a13 and a33 are image edge points, and the pixels to be re-interpolated are a12, a22, a32. The interpolation formulas are
a12 = clip(β × a11 + α × a13)
a22 = clip(β × a21 + α × a23)
a32 = clip(β × a31 + α × a33)
(e) Lower-left diagonal edge class. Here a11 and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, a32. The interpolation formulas are
a22 = clip((a11 + a33) / 2)
a12 = clip(α × a11 + β × a13)
a23 = clip(α × a33 + β × a13)
a21 = clip(α × a11 + β × a31)
a32 = clip(α × a33 + β × a31)
(f) Lower-right diagonal edge class. Here a13 and a31 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, a32. The interpolation formulas are
a22 = clip((a13 + a31) / 2)
a12 = clip(β × a11 + α × a13)
a23 = clip(β × a33 + α × a13)
a21 = clip(β × a11 + α × a31)
a32 = clip(β × a33 + α × a31)
(g) Upper-left corner edge class. Here a11, a13 and a31 are image edge points, and the pixels to be re-interpolated are a22, a23, a32. The interpolation formulas are
a22 = clip((a12 + a21) × α + (a11 + a31 + a13) × β)
a23 = clip((a22 + a13) × α + a33 × β)
a32 = clip((a22 + a31) × α + a33 × β)
(h) Lower-left corner edge class. Here a11, a31 and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, a23. The interpolation formulas are
a22 = clip((a21 + a32) × α + (a11 + a31 + a33) × β)
a12 = clip((a11 + a22) × α + a13 × β)
a23 = clip((a22 + a33) × α + a13 × β)
(i) Upper-right corner edge class. Here a11, a13 and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, a32. The interpolation formulas are
a22 = clip((a12 + a23) × α + (a11 + a13 + a33) × β)
a21 = clip((a11 + a22) × α + a31 × β)
a32 = clip((a22 + a33) × α + a31 × β)
(j) Lower-right corner edge class. Here a31, a13 and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, a12. The interpolation formulas are
a22 = clip((a32 + a23) × α + (a13 + a31 + a33) × β)
a21 = clip((a31 + a22) × α + a11 × β)
a12 = clip((a13 + a22) × α + a11 × β)
(k) Other class. All other cases belong to this class. For blocks of this class, the present invention does not re-interpolate the extracted high-resolution image block.
4. In the present invention, the values of α and β are α = 0.7 and β = 0.3.
CN201410193840.4A 2014-04-30 2014-04-30 Method for image super-resolution magnification based on edge classification Active CN104021523B (en)


Publications (2)

Publication Number Publication Date
CN104021523A (en) 2014-09-03
CN104021523B CN104021523B (en) 2017-10-10



Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881842A (en) * 2015-05-18 2015-09-02 浙江师范大学 Image super resolution method based on image decomposition
CN105787912A (en) * 2014-12-18 2016-07-20 南京大目信息科技有限公司 Classification-based step type edge sub pixel localization method
CN106169173A (en) * 2016-06-30 2016-11-30 北京大学 A kind of image interpolation method
CN109345465A (en) * 2018-08-08 2019-02-15 西安电子科技大学 High-definition picture real time enhancing method based on GPU
CN109410177A (en) * 2018-09-28 2019-03-01 深圳大学 A kind of image quality analysis method and system of super-resolution image
CN109557101A (en) * 2018-12-29 2019-04-02 桂林电子科技大学 A kind of defect detecting device and method of nonstandard high reflection curve surface work pieces
CN112348103A (en) * 2020-11-16 2021-02-09 南开大学 Image block classification method and device and super-resolution reconstruction method and device thereof

Citations (5)

Publication number Priority date Publication date Assignee Title
US20080024658A1 (en) * 2003-05-30 2008-01-31 Samsung Electronics Co., Ltd. Edge direction based image interpolation method
CN101499164A (en) * 2009-02-27 2009-08-05 西安交通大学 Image interpolation reconstruction method based on single low-resolution image
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution
US20130315506A1 (en) * 2011-02-21 2013-11-28 Mitsubishi Electric Corporation Image magnification device and method


Non-Patent Citations (3)

Title
LINGFENG ET AL.: "Edge-Directed Single-Image Super-Resolution Via Adaptive Gradient Magnitude Self-Interpolation", IEEE Transactions on Circuits and Systems for Video Technology *
Wang Donghe: "An image magnification method based on edge refinement", Microelectronics & Computer *
Huang Biao et al.: "A real-time image magnification technique based on edge prediction", Infrared and Laser Engineering *




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant