CN101562758A - Method for objectively evaluating image quality based on region weight and visual characteristics of human eyes - Google Patents
- Publication number: CN101562758A (application CN200910097677A)
- Authority: CN (China)
- Prior art keywords: image, evaluation, gradient, weight, map
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a method for objectively evaluating image quality based on region weights and the visual characteristics of the human eye, comprising the following steps: (1) obtaining an image whose gray levels are distributed according to region weights; (2) processing the image with a modified human-eye contrast sensitivity function; and (3) evaluating the image using gradient-information similarity. Because the method incorporates the visual characteristics of the human eye, its processing closely follows the human visual system: given only a reference image and an evaluation image, it produces a quality score consistent with human perception. The method can be applied to image compression, storage, transmission, reconstruction, and similar tasks to measure image quality quickly. It can also be used to terminate the iteration in image restoration: the restored image selected by using this evaluation algorithm to stop the iterative algorithm is very close to the restored image a human observer would judge best.
Description
Technical field
The present invention relates to computer image processing, and in particular to a method for objectively evaluating image quality based on region weights and the visual characteristics of the human eye.
Background technology
With the development of human society toward full digitization, digital images, digital video, and digital television have developed rapidly and become ubiquitous. A digital image may suffer various kinds of degradation during acquisition, compression, storage, transmission, and reconstruction, so the problem of image deterioration is unavoidable, and the problem of assessing image quality effectively arises naturally. The final output quality of an image transmission, processing, or compression system determines the quality of its technology and of the system itself; image quality evaluation has therefore become, in practice, an evaluation of the system and of the quality of its technology.
Since images are ultimately viewed by people, the best image quality evaluation method is subjective assessment by human observers. However, subjective evaluation has large variability; it is affected by the observer's own disposition, viewing purpose, viewing environment, and psychological state, and its operation is cumbersome, time-consuming, and comparatively expensive. In fact, human visual psychology is difficult to capture in an accurate mathematical model, which makes subjective evaluation results imprecise, inconvenient for imaging-system design, and impractical in engineering applications. Objective evaluation methods arose in response, with the goal of obtaining a quantitative image quality index quickly and automatically. Objective methods are precisely defined, rigorous, simple, and good at measuring differences between images, but they generally do not consider the visual psychology of the observer, even though the subject of image assessment, the human viewer, plays a decisive role; as a result, objective scores often fail to match the results of human subjective assessment.
Widely used objective methods include the mean square error (Mean Square Error: MSE), peak signal-to-noise ratio (Peak Signal to Noise Ratio: PSNR), and signal-to-noise ratio (Signal-to-Noise Ratio: SNR). These are the most common full-reference measures of image quality. For an image of size M×N, with original image f(i, j) and recovered image f'(i, j), where M and N are the height and width of the image, MSE and PSNR are defined as

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(f(i,j) - f'(i,j)\bigr)^{2},\qquad \mathrm{PSNR} = 10\log_{10}\frac{255^{2}}{\mathrm{MSE}}.$$

A smaller MSE (larger PSNR) indicates better quality. However, these measures capture only the gray-level difference between images, or the relation of signal to noise, and they lack stability across different image degradation conditions.
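These baseline measures are straightforward to compute; a minimal NumPy sketch (the function names are my own):

```python
import numpy as np

def mse(f, g):
    """Mean square error between reference image f and test image g."""
    f = np.asarray(f, dtype=np.float64)
    g = np.asarray(g, dtype=np.float64)
    return np.mean((f - g) ** 2)

def psnr(f, g, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger means better quality."""
    e = mse(f, g)
    if e == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / e)
```

As the text notes, these scores depend only on pixel-wise gray differences, so two degradations with very different visual impact can produce the same PSNR.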
A newer algorithm is the SSIM (Structural Similarity) index proposed by Wang Zhou et al., but it evaluates poorly on images that are slightly more blurred, slightly noisier, or exhibit slightly more ringing.
The main weakness of these earlier methods is that they do not consider the visual characteristics of the human eye, or fail to notice that people observe image content selectively (different regions receive different degrees of attention).
Summary of the invention
The present invention proposes HVSWGM (Weighted Gradient Metric based on Human Visual System), a method that assigns weights to image regions in accordance with human visual characteristics and evaluates quality through changes in gradient information. After this processing, the quality score obtained for an image is consistent with the visual experience of the human eye.
The main ideas of the objective image quality evaluation method based on region weights and human visual characteristics are:
1. Convert the original image, according to its content, into a weight-distributed image whose weights match the degree of attention the human eye pays to different image content.
The human eye observes an image selectively and with emphasis: attention is generally higher for edge portions (regions of large intensity transition), because edge regions have striking contrast, to which the eye is very sensitive. Regions of large gradient magnitude are precisely the edge portions, while regions of small gradient are the flat areas of least interest to the eye; gradient information is therefore used to measure the visual interest level of the human eye. Preprocessing the image with a weight-factor information map yields an image whose gray-level weights differ from region to region.
2. Incorporate human visual characteristics by processing the image with a modified contrast sensitivity function, which transforms the image into a space matching normal human viewing; this mainly exploits the physiological law that the human eye responds differently to different spatial frequencies.
In daily life the human eye must resolve both sharply bounded objects and objects with blurred boundaries. The latter resolving ability is characterized by contrast sensitivity (CS), defined as the reciprocal of the lowest contrast threshold the visual system can perceive: CS = 1/(contrast threshold). A low contrast threshold means high contrast sensitivity, and hence good visual performance. At a given spatial frequency the visual system has a definite contrast sensitivity; conversely, at a given contrast it has a definite spatial-frequency resolving power (form perception).
Plotting contrast sensitivity (ordinate) against spatial frequency (abscissa) yields the measured curve known as the contrast sensitivity function (CSF).
Extensive experimental testing shows that the spatial frequency response of the human eye is approximately

$$S(w) = 1.5\,e^{-\sigma^{2}w^{2}/2} - e^{-2\sigma^{2}w^{2}},$$

where σ = 2, w = 2πf/60, and the spatial frequency is

$$f = \sqrt{u^{2} + v^{2}},$$

with u and v the horizontal and vertical spatial-frequency components of f, in cycles/degree. The curve is shown in Fig. 1: the horizontal axis is spatial frequency in cycles/degree, the vertical axis is contrast sensitivity.
On this experimental basis, an improved spatial frequency response is (u, v, f, and w have the same meanings as above):

$$S_a(u,v) = S(w)\,P(w,\theta)$$

(see Y. Horita and M. Miyahara, "Image coding and quality estimation in uniform perceptual space," IECE, Tokyo, Japan, IECE Tech. Rep. IE87-115, Jan. 1987), where

$$P(w,\theta) = \frac{1 + e^{\beta(w - w_0)}\cos^{4}2\theta}{1 + e^{\beta(w - w_0)}},$$

- θ = tan⁻¹(v/u) is the angle between the horizontal direction and the line joining the frequency point (u, v) to the origin;
- β = 8;
- f₀ = 11 cycles/degree and w₀ = 2πf₀/60.

This expresses that for f > f₀ (i.e., w > w₀) the response curve drops sharply.
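A sketch of this response, assuming the reconstructed S(w) and P(w, θ) forms above (the exponent clipping is an implementation convenience against overflow, not part of the patent):

```python
import numpy as np

def csf_response(u, v, sigma=2.0, beta=8.0, f0=11.0):
    """Modified human-eye contrast sensitivity S_a(u, v).

    u, v: spatial-frequency components in cycles/degree.
    """
    f = np.sqrt(u**2 + v**2)                 # radial spatial frequency
    w = 2.0 * np.pi * f / 60.0
    w0 = 2.0 * np.pi * f0 / 60.0
    # Base response S(w): difference of Gaussians in w.
    s = 1.5 * np.exp(-sigma**2 * w**2 / 2.0) - np.exp(-2.0 * sigma**2 * w**2)
    # Orientation correction P(w, theta): sharp drop for w > w0 off-axis.
    theta = np.arctan2(v, u)                 # angle with the horizontal direction
    e = np.exp(np.clip(beta * (w - w0), -50.0, 50.0))
    p = (1.0 + e * np.cos(2.0 * theta)**4) / (1.0 + e)
    return s * p
```

At θ = 45° the cos⁴2θ term vanishes, so diagonal frequencies above f₀ are suppressed much more strongly than horizontal or vertical ones, which matches the stated sharp descent for f > f₀.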
3. Use edge information to reflect the features of image quality, and gradient information to measure image quality.
Gradients capture changes in image detail and texture, so they can measure the quality of an image.
The inventive method computes, point by point on the gradient image, the correlated information of the neighborhood image block centered at each point using a centrally symmetric Gaussian weighting window; mapping the per-block information of every point then produces an image describing the degradation of the degraded image. We call this image the gradient-information index map. It can be used not only to evaluate the sharpness of an image, but also to reflect ringing introduced during restoration.
A method for objectively evaluating image quality based on region weights and human visual characteristics comprises the following steps:

(1) Obtain the image distributed by region weight.
Before extracting gradients, the reference image f and the evaluation image g (the image whose quality is to be evaluated) are Gaussian filtered to improve noise robustness:

$$f_1(x,y) = f(x,y) * h(x,y),\qquad g_1(x,y) = g(x,y) * h(x,y),$$

where * denotes the filtering operation and the Gaussian function is

$$h(x,y) = \frac{1}{2\pi\sigma^{2}}\,e^{-(x^{2}+y^{2})/2\sigma^{2}}.$$

The Gaussian-filtered reference image f₁(x, y) and evaluation image g₁(x, y) are then passed to the next step, gradient extraction.
Gradients are extracted with the isotropic Sobel operator, whose weights are inversely proportional to the distance between the neighbor point and the center, giving squared-distance isotropy so that the gradient amplitude is consistent when detecting along different directions. The isotropic Sobel operators are:

Horizontal:

$$H = \begin{bmatrix} -1 & 0 & 1\\ -\sqrt{2} & 0 & \sqrt{2}\\ -1 & 0 & 1 \end{bmatrix}$$

Vertical:

$$V = \begin{bmatrix} 1 & \sqrt{2} & 1\\ 0 & 0 & 0\\ -1 & -\sqrt{2} & -1 \end{bmatrix}$$
The gradient of an arbitrary image (for example image p) is measured by

$$G_p(x,y) = \sqrt{(p * H)^{2} + (p * V)^{2}}.\qquad(4)$$

Applying formula (4) to f₁(x, y) and g₁(x, y) yields the gradient images G_f1(x, y) and G_g1(x, y).
The edges referred to here are regions, and they are also the regions in which ringing and blur are most easily perceived by people. The gradient image G_f1(x, y) obtained from the original image locates the edges accurately, with clearly demarcated contours; G_f1(x, y) is therefore dilated by r pixels so that the expanded edge portion covers, as far as possible, the zones where blur is hard to resolve and ringing is likely to occur. The expanded gradient map G_Ef is

G_Ef = expand(G_f1, r)

where expand denotes the dilation operation, r = N/20, and N is the image height.
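The gradient extraction of formula (4) and the edge expansion might be sketched as follows; realizing `expand` as grey-scale dilation over a (2r+1)×(2r+1) window is an assumption, since the text does not specify the structuring element:

```python
import numpy as np
from scipy.ndimage import convolve, grey_dilation

# Isotropic Sobel kernels: sqrt(2) replaces the usual 2 in the center row/column.
H = np.array([[-1.0, 0.0, 1.0],
              [-np.sqrt(2), 0.0, np.sqrt(2)],
              [-1.0, 0.0, 1.0]])
V = np.array([[1.0, np.sqrt(2), 1.0],
              [0.0, 0.0, 0.0],
              [-1.0, -np.sqrt(2), -1.0]])

def gradient_magnitude(img):
    """Formula (4): square root of the sum of squared directional responses."""
    img = np.asarray(img, dtype=np.float64)
    gx = convolve(img, H, mode="nearest")
    gy = convolve(img, V, mode="nearest")
    return np.sqrt(gx**2 + gy**2)

def expand(grad, r):
    """Dilate the gradient map by r pixels so edge zones widen."""
    r = max(int(r), 1)
    return grey_dilation(grad, size=(2 * r + 1, 2 * r + 1))
```

The squared-sum magnitude makes the sign of the directional responses irrelevant, so the operator gives the same amplitude for edges of either polarity.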
The weight-factor information map is defined as

Wmap = G_Ef · G_g1

(G_g1 refers to G_g1(x, y); the product is taken pixel-wise). The weight-factor information map has the same size as the original image, and expresses a different weight for each region and for each pixel.

This weight-factor map embodies the accepted view that edge information is the most important, texture regions come second, and flat regions last: the weight coefficients are computed from the image's edge, texture, and flat regions, and the value at each pixel of the map is the weight of the corresponding position in the reference image and the evaluation image.

The weight-processed reference image F(x, y) and evaluation image G(x, y) are then obtained as:

F(x,y) = f(x,y) · Wmap
G(x,y) = g(x,y) · Wmap
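The whole of step (1) can be chained together as below; the parameters `grad` and `expand_fn` are hypothetical hooks standing in for formula (4) and the r-pixel dilation, and `sigma` for the unspecified pre-filter width:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_weighted_images(f, g, grad, expand_fn, sigma=1.0):
    """Step (1): produce the weighted reference F and evaluation G.

    grad:      gradient-magnitude function (formula (4))
    expand_fn: (gradient_map, r) -> dilated map (G_Ef = expand(G_f1, r))
    """
    f = np.asarray(f, dtype=np.float64)
    g = np.asarray(g, dtype=np.float64)
    f1 = gaussian_filter(f, sigma)            # noise suppression before gradients
    g1 = gaussian_filter(g, sigma)
    r = f.shape[0] // 20                      # r = N/20, N = image height
    wmap = expand_fn(grad(f1), r) * grad(g1)  # Wmap = G_Ef . G_g1, pixel-wise
    return f * wmap, g * wmap                 # F = f.Wmap, G = g.Wmap
```

Because Wmap multiplies both images, later gradient comparisons are dominated by the edge regions the eye attends to most.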
(2) Process with the modified human-eye contrast sensitivity function (CSF processing) to obtain an image with clearly demarcated gray levels that matches human visual characteristics.

The weighted images F(x, y) and G(x, y) are processed with the modified human-eye response S_a(u, v), where S_a(u, v) is the frequency-domain response of s_a(x, y):

F_w(x,y) = F(x,y) * s_a(x,y)
G_w(x,y) = G(x,y) * s_a(x,y)

where * denotes convolution. The resulting F_w and G_w are images that match the interest with which the human eye observes them.

Here we define: FG_w = F_w(x, y) · G_w(x, y).
(3) Evaluate the image using gradient-information similarity.

Gradients are again computed with the isotropic Sobel operator and the measurement mode of step (1). In computing image gradients, the gradient-information index map described in idea 3 of the technical concept above is used: a centrally symmetric Gaussian weighting window computes, point by point on the gradient image, the correlated information of the neighborhood block centered at each point, and mapping the per-block information of every point yields a gradient-information index map describing the degradation of the degraded image.
From F_w(x, y), G_w(x, y), and FG_w obtained in step (2), blocks of the same size as the Gaussian window are extracted; each extracted block is first correlated with the Gaussian weighting window to obtain the correlated block.

The Gaussian weighting window W is of size N×N, with N generally odd (for example N = 9); all elements of W sum to 1 and follow a Gaussian distribution.
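Such a normalized window might be generated as follows; the explicit construction is not given in the text, so a standard centered 2-D Gaussian is assumed:

```python
import numpy as np

def gaussian_window(n=9, std=1.3):
    """Centrally symmetric Gaussian weighting window, normalized to sum to 1."""
    half = n // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]  # centered coordinates
    w = np.exp(-(x**2 + y**2) / (2.0 * std**2))
    return w / w.sum()                               # elements sum to 1
```

The normalization makes the later per-block correlation a weighted average, so flat blocks are unchanged while the center of each block dominates.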
Each correlated block is then processed with the isotropic Sobel operator and the gradient measure of step (1) to obtain the block's gradient, and the mean gradient of the block serves as one pixel value of the gradient image. Processing F_w(x, y), G_w(x, y), and FG_w block by block in this way yields the three corresponding gradient images G_Fw(x, y), G_Gw(x, y), and G_FwGw.
Of course, other prior-art means may also be used to compute the corresponding gradient images G_Fw(x, y), G_Gw(x, y), and G_FwGw from F_w(x, y), G_w(x, y), and FG_w.
Substituting each mutually corresponding point of the gradient images G_Fw(x, y), G_Gw(x, y), and G_FwGw into formula (5) gives the evaluation index Gs:

$$Gs = \frac{2\,G_{FwGw} + C}{G_{Fw}^{2} + G_{Gw}^{2} + C}\qquad(5)$$

where C = KL, K is a constant (K ≪ 1), and L is the gray-value dynamic range of the image (for example 255 for an 8-bit-depth grayscale image). C is added to eliminate the ill-conditioned case of a zero denominator.
Mapping the evaluation index Gs of every point produces a new image, the gradient-information index map. The mean of this gradient-information index map is used to measure the quality of the evaluation image g.
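Formula (5) applied pixel-wise, with the mean as the final score; the function name and argument order are my own:

```python
import numpy as np

def gradient_similarity_index(g_fw, g_gw, g_fwgw, k=1e-5, dynamic_range=255.0):
    """Per-pixel evaluation index Gs of formula (5); the mean is the score.

    g_fw, g_gw: gradient images of the processed reference and evaluation.
    g_fwgw:     gradient image of their product FG_w.
    """
    c = k * dynamic_range                              # C = K * L, avoids a zero denominator
    gs_map = (2.0 * g_fwgw + c) / (g_fw**2 + g_gw**2 + c)
    return gs_map, float(gs_map.mean())
```

When the evaluation image equals the reference, G_Gw = G_Fw and G_FwGw tends toward G_Fw·G_Gw, so the ratio approaches 1, the score of an undistorted image.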
The inventive method processes the original evaluation image in accordance with the visual characteristics of the human eye, so its result is very close to the human visual system: given only a reference image and an evaluation image, it produces a quality score consistent with human vision. It can be applied to image compression, storage, transmission, reconstruction, and so on, to measure image quality rapidly; it can also be applied to iteration termination in image restoration, where the best restored image obtained by using this evaluation algorithm to stop the iterative algorithm is very close to the best restored image chosen by human judgment.
Description of drawings
Fig. 1 is the contrast sensitivity function curve;
Fig. 2 is the operating process block diagram of the inventive method;
Fig. 3 is the group of grayscale images used in the embodiment experiments (the original image and its degraded versions),
Wherein:
Fig. 3a is the original image;
Fig. 3b is the salt-and-pepper noise image;
Fig. 3c is the JPEG-compressed image;
Fig. 3d is the mean-shifted image;
Fig. 3e is the multiplicative speckle-noise image;
Fig. 3f is the additive-Gaussian-noise image;
Fig. 3g is the blurred image;
Fig. 3h is the contrast-stretched image;
Fig. 4 is a scatter plot, with fitted curve, of the difference mean opinion scores (DMOS: Difference Mean Opinion Scores) against the HVSWGM scores of the inventive method, generated by evaluating a large number of images in the image database together with their subjective scores;
Fig. 5 is the workflow block diagram of iteration termination in restoration by the inventive method;
Fig. 6 shows the images in the iterative restoration process:
Fig. 6a is the original image;
Fig. 6b is the Gaussian-blurred image;
Fig. 6c is the best iteration image selected by the HVSWGM score;
Fig. 7 is the curve of iteration count versus HVSWGM score;
Embodiment
The validity of the HVSWGM algorithm is tested directly on several commonly used grayscale images and on the large database recommended by VQEG (The Video Quality Experts Group) as concrete experimental subjects. An iterative restoration experiment on the "mark domain" test image tests the algorithm's ability to terminate the iteration.
The image is processed with the inventive method as shown in Fig. 2: the reference image and the evaluation image are input, and the quality score is obtained. Taking Fig. 3a (reference image) and Fig. 3b (evaluation image) as an example:

(1) Obtain the image distributed by region weight. Input reference image 3a (hereafter image f) and evaluation image 3b (hereafter image g).
(a) First filter with the Gaussian function to improve noise robustness, obtaining f₁ = f * h and g₁ = g * h, where * denotes the filtering operation and the Gaussian function h(x, y) is as given in step (1) above.
(b) For f₁ and g₁, extract gradient information with the isotropic Sobel operator, taking the arithmetic square root of the sum of squares of the horizontal-operator and vertical-operator responses (the horizontal and vertical isotropic Sobel operators are as given above); this yields G_f1 and G_g1.
(c) Dilate G_f1 by r pixels:

G_Ef = expand(G_f1, r)

where expand denotes the dilation operation, r = N/20, and N is the image height. The weight-factor information map is defined as: Wmap = G_Ef · G_g1.
(d) This yields two images whose weight distributions differ by region content:

F = f·Wmap
G = g·Wmap
(2) For the region-weighted images obtained in step (1), apply the modified human-eye response S_a(u, v) to F and G, where S_a(u, v) is the frequency-domain response of s_a(x, y):

F_w = F * s_a(x,y)
G_w = G * s_a(x,y)

where * denotes convolution. This yields two images further filtered by the contrast sensitivity function.

Here we define: FG_w = F_w · G_w.
(3) With the two images from step (2) that match human visual observation, compute the gradients of F_w, G_w, and FG_w by the block-wise Gaussian-weighting-window procedure, obtaining the gradient images G_Fw, G_Gw, and G_FwGw.
The Gaussian weighting window is taken of size N = 15×15, with standard deviation 1.3.
The gradient similarity evaluation index of the final processed images is

$$Gs = \frac{2\,G_{FwGw} + C}{G_{Fw}^{2} + G_{Gw}^{2} + C}\qquad(5)$$

where C = KL, K is the constant 0.00001, and L is the gray-value dynamic range (255 for an 8-bit-depth grayscale image). C is added to eliminate the ill-conditioned case of a zero denominator.
Finally, the evaluation score of evaluation image 3b (image g) is obtained.
The concrete experimental results are as follows:
1. Experimental test of the invention in general image quality evaluation
(1) Figs. 3a–3h show the reference image (original) and the evaluation images used in the experiment; the evaluation results are listed in Table 1. A larger value indicates better image quality. The reference image is Fig. 3a.
Table 1

| Test picture | Original | Salt-and-pepper noise | JPEG compression | Mean shift | Speckle noise | Additive Gaussian noise | Blur | Contrast stretch |
|---|---|---|---|---|---|---|---|---|
| HVSWGM score | 1 | 0.8893 | 0.7541 | 0.9970 | 0.8693 | 0.8617 | 0.7744 | 0.9728 |
(2) Fig. 4 shows the experimental results on the large image library recommended by VQEG, as a scatter plot in which each point represents the evaluation of one image: the horizontal axis is the score of the inventive method, and the vertical axis is the human subjective score DMOS. The objective scores (of the present invention) agree with the subjective scores (DMOS); the nearly uniform distribution of points on both sides of the curve demonstrates the stability of the method, and the closeness of all data to the fitted curve shows good monotonicity, confirming the validity of the invention.
The image database used in the Fig. 4 experiment provides 982 pictures, obtained from 29 original images by degradations such as JPEG2000, JPEG, Gaussian noise, Gaussian blur, and bit-transfer errors. The database also provides subjective scores for each image, characterized by the difference mean opinion score (DMOS: Difference Mean Opinion Scores); a smaller DMOS means a better image, and a DMOS of zero indicates an original image. The database address is
http://live.ece.utexas.edu/research/quality/subjective.htm
2. Iteration-termination capability test of the inventive method in iterative image restoration
Used as an iteration-stopping criterion, the curve of iteration count versus evaluation score should be smooth, with good unimodality or a low-slope tail, so that termination is stable.
The inventive method is tested below; the whole test procedure is shown in Fig. 5.
In general, image quality is relatively best when the index Gs of the processed image is largest, but Gs lies in [0, 1], and when the change in its value is smaller than a certain threshold T the images are so close in quality that the human eye cannot distinguish them. Therefore, after each iteration the index Gs (the HVSWGM value) of the image is computed and the condition HVSWGM(k) − HVSWGM(k−1) < T (k = 1, 2, …; T a threshold) is checked: while the improvement is at least T the iteration continues, and once it falls below T the iteration stops, the best restored image having been obtained, with k−1 as the best iteration count.
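The stopping rule can be sketched generically; `step` and `score` are hypothetical callables standing in for one restoration iteration and the HVSWGM evaluation, and the rule stops once the score improves by less than the threshold t, returning k−1 as the best iteration count:

```python
def restore_with_termination(step, score, x0, t=0.001, max_iters=100):
    """Iterative restoration terminated by an HVSWGM-style criterion.

    step:  one restoration iteration, image -> image
    score: image -> quality value in [0, 1] (e.g. mean of the Gs map)
    Returns (best_image, best_iteration_count).
    """
    x, prev = x0, score(x0)
    for k in range(1, max_iters + 1):
        x_new = step(x)
        cur = score(x_new)
        if cur - prev < t:       # improvement below threshold: keep previous image
            return x, k - 1
        x, prev = x_new, cur
    return x, max_iters
```

Because the score curve is smooth and unimodal for natural images, the first sub-threshold improvement is a reliable sign that further iterations no longer change perceived quality.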
The experiment uses the "mark domain" test image, shown in Figs. 6a–6c: the image is Gaussian blurred and then restored with an iterative restoration algorithm, HVSWGM is used for iteration termination with T = 0.001, and the best iteration image is obtained; for general natural images the curve likewise has good unimodality and a low-slope tail.
Furthermore, the Gaussian-blurred "mark domain" image is iterated 100 times, with one evaluation per iteration; plotting iteration count on the horizontal axis against evaluation score on the vertical axis yields the curve shown in Fig. 7.
The experimental results show that the curve of the inventive method is smooth, with a low-slope tail and unimodality, so the method can be used for iteration termination.
Claims (3)
1. A method for objectively evaluating image quality based on region weights and human visual characteristics, characterized in that it comprises the steps of:
(1) Gaussian filtering the reference image f and the evaluation image g, then extracting gradients with the isotropic Sobel operator to obtain the gradient images G_f1(x, y) and G_g1(x, y); dilating the gradient image G_f1(x, y) to obtain the expanded gradient map G_Ef; defining the weight-factor information map as Wmap = G_Ef · G_g1; and weighting the reference image f and the evaluation image g with the defined weight-factor information map to obtain the weight-processed reference image F(x, y) and the weight-processed evaluation image G(x, y);
(2) processing the weight-processed reference image F(x, y) and evaluation image G(x, y) obtained in step (1) with the modified human-eye contrast sensitivity function S_a(u, v) to obtain the sensitivity-processed reference image F_w and the sensitivity-processed evaluation image G_w, where

S_a(u, v) = S(w) P(w, θ)

with σ = 2, w = 2πf/60, f = √(u² + v²), u and v the spatial frequencies in the horizontal and vertical directions, θ = tan⁻¹(v/u) the angle with the horizontal direction, β = 8, and f₀ = 11 cycles/degree;

and defining FG_w = F_w(x, y) · G_w(x, y);
(3) processing F_w, G_w, and FG_w obtained in step (2) with the isotropic Sobel operator and the measurement mode of step (1) to obtain the gradient images G_Fw(x, y), G_Gw(x, y), and G_FwGw respectively; substituting each mutually corresponding point of the gradient images G_Fw(x, y), G_Gw(x, y), and G_FwGw into the following formula to obtain the evaluation index Gs:

Gs = (2 G_FwGw + C) / (G_Fw² + G_Gw² + C)

where C = KL, K is a constant, and L is the gray-value dynamic range of the image;

and mapping the evaluation index Gs of every point into the gradient-information index map, the mean of which is used to evaluate image quality.
2. The method for objectively evaluating image quality according to claim 1, characterized in that the Gaussian function used in the Gaussian filtering of step (1) is

h(x, y) = (1/2πσ²) e^{−(x² + y²)/2σ²}

where σ² is the variance and (x, y) denotes a pixel in the image.
3. The method for objectively evaluating image quality according to claim 1, characterized in that the isotropic Sobel operators of step (1) are:

Horizontal:

$$H = \begin{bmatrix} -1 & 0 & 1\\ -\sqrt{2} & 0 & \sqrt{2}\\ -1 & 0 & 1 \end{bmatrix}$$

Vertical:

$$V = \begin{bmatrix} 1 & \sqrt{2} & 1\\ 0 & 0 & 0\\ -1 & -\sqrt{2} & -1 \end{bmatrix}$$
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2009100976770A (CN101562758B) | 2009-04-16 | 2009-04-16 | Method for objectively evaluating image quality based on region weight and visual characteristics of human eyes |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN101562758A | 2009-10-21 |
| CN101562758B | 2010-10-27 |
CN104159104A (en) * | 2014-08-29 | 2014-11-19 | 电子科技大学 | Full-reference video quality evaluation method based on multi-stage gradient similarity |
CN104933693A (en) * | 2015-07-02 | 2015-09-23 | 浙江大学 | A recovery method used for a plurality of images with saturated pixel scenes |
CN105118057A (en) * | 2015-08-18 | 2015-12-02 | 江南大学 | Image sharpness evaluation method based on quaternion wavelet transform amplitudes and phase positions |
CN105491371A (en) * | 2015-11-19 | 2016-04-13 | 国家新闻出版广电总局广播科学研究院 | Tone mapping image quality evaluation method based on gradient magnitude similarity |
CN105741274A (en) * | 2016-01-25 | 2016-07-06 | 中国计量学院 | Advanced visual characteristic based non-reference image definition evaluation method |
CN105741274B (en) * | 2016-01-25 | 2018-12-18 | 中国计量大学 | Non-reference picture clarity evaluation method based on advanced visual properties |
CN107071423A (en) * | 2017-04-24 | 2017-08-18 | 天津大学 | Application process of the vision multi-channel model in stereoscopic video quality objective evaluation |
CN110634135A (en) * | 2019-09-12 | 2019-12-31 | 金瓜子科技发展(北京)有限公司 | Image processing method and device |
CN110634135B (en) * | 2019-09-12 | 2022-04-15 | 金瓜子科技发展(北京)有限公司 | Image processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN101562758B (en) | 2010-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101562758B (en) | Method for objectively evaluating image quality based on region weight and visual characteristics of human eyes | |
CN101976444B (en) | Pixel type based objective assessment method of image quality by utilizing structural similarity | |
CN104902267B (en) | No-reference image quality evaluation method based on gradient information | |
Ward et al. | Image quality assessment for determining efficacy and limitations of Super-Resolution Convolutional Neural Network (SRCNN) | |
George et al. | A survey on different approaches used in image quality assessment | |
CN102663747B (en) | Stereo image objectivity quality evaluation method based on visual perception | |
Cadik et al. | Evaluation of two principal approaches to objective image quality assessment | |
CN107784651A (en) | A kind of blurred picture quality evaluating method based on fuzzy detection weighting | |
Dumic et al. | New image-quality measure based on wavelets | |
CN108109147A (en) | A kind of reference-free quality evaluation method of blurred picture | |
CN102036098B (en) | Full-reference type image quality evaluation method based on visual information amount difference | |
CN102572499A (en) | Non-reference image quality evaluation method based on wavelet-transformation multi-resolution prediction | |
CN103810712A (en) | Energy spectrum CT (computerized tomography) image quality evaluation method | |
CN107220974A (en) | A kind of full reference image quality appraisement method and device | |
CN105139394A (en) | Noise image quality evaluation method combining reconstruction with noise scatter histograms | |
CN104159104A (en) | Full-reference video quality evaluation method based on multi-stage gradient similarity | |
CN102737380A (en) | Stereo image quality objective evaluation method based on gradient structure tensor | |
Lu et al. | Statistical modeling in the shearlet domain for blind image quality assessment | |
Cui et al. | An Image Quality Metric based on Corner, Edge and Symmetry Maps. | |
Joy et al. | RECENT DEVELOPMENTS IN IMAGE QUALITY ASSESSMENT ALGORITHMS: A REVIEW. | |
CN109685757B (en) | Non-reference image quality evaluation method and system based on gray difference statistics | |
Li et al. | Gradient-weighted structural similarity for image quality assessments | |
Preece et al. | Modeling boost performance using a two dimensional implementation of the targeting task performance metric | |
JP4966945B2 (en) | Spatio-temporal image feature quantity detection device, spatio-temporal image feature quantity detection method, and spatio-temporal image feature quantity detection program | |
Tripathi et al. | A novel spatial domain image quality metric |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20101027; Termination date: 20130416 |