CN103426173B - Objective evaluation method for stereo image quality - Google Patents
Objective evaluation method for stereo image quality
- Publication number: CN103426173B (application CN201310348550.8A)
- Authority: CN (China)
- Prior art keywords: image, redundancy, model, region, weights
- Legal status: Active (assumed status; not a legal conclusion)
Abstract
The invention provides an objective evaluation method for stereo image quality. A stereo image reaches the user only after acquisition, compression, storage, transmission, reconstruction and other links, and the original image may be distorted at any of these links. The quality of the image finally presented to the user therefore needs to be evaluated, but manual subjective evaluation consumes a great deal of manpower and time, is costly, and cannot be performed in real time. To address these problems, an objective evaluation method for stereo image quality is provided.
Description
Technical field
The present invention relates to the field of computer applications, and specifically to an objective evaluation method for stereo image quality.
Background art
In recent years, driven by the entertainment industry and by scientific applications, stereo images have become a broad research field. With the continuing development of stereo technology, various 3D applications such as 3DTV have emerged. The stereo imaging chain consists of image acquisition, coding and compression, network transmission, post-processing at the receiving end, and display; any stage of this chain may degrade the perceived stereo visual quality or introduce errors into the delivery process. Stereo image quality evaluation is therefore a key factor in the design and parameter optimization of stereo systems.
Objective quality evaluation can monitor image quality in quality control systems. For example, in an acquisition system, a quality evaluation method can adaptively monitor capture to obtain the best image quality. Objective quality evaluation can also serve as a benchmark for image processing systems and algorithms: when two algorithms (an image denoising algorithm and an image restoration algorithm) can both enhance the quality of an image, an objective quality evaluation method can be used to judge which algorithm gives the better result. Objective quality evaluation can further be embedded in image processing systems and image delivery systems to optimize the setting of system parameters.
Stereo image quality evaluation is still at an exploratory stage, without a mature public database or evaluation criterion. Many researchers evaluate stereo image quality with methods designed for two-dimensional images. Compared with a planar image, a stereo image requires two viewpoints and therefore several times the data volume; because the network bandwidth and system resources are limited, it must be processed before being shown to the user. At the same time, a stereo image carries depth information that a two-dimensional image lacks, so guaranteeing the quality of stereo images is crucial. Stereo image quality evaluation is divided into two kinds, subjective evaluation and objective evaluation. The most accurate image quality evaluation is the subjective evaluation of human observers, but subjective evaluation consumes a large amount of energy, manpower, time and money; an objective stereo image quality evaluation that simulates human subjective evaluation is therefore indispensable.
The content of the invention
The object of the invention is to provide an objective evaluation method for stereo image quality.
The object of the invention is achieved as follows. The perceptual redundancy and the visual attention regions of the human eye are detected from the characteristics of visual physiological factors, and stereo image quality is evaluated by combining a perceptual redundancy model with visual attention characteristics. Stereo image quality evaluation comprises two parts, quality evaluation and depth-sense evaluation, which rely on a perceptual redundancy JND model and a visual attention characteristic, wherein:
the perceptual redundancy JND model is a just-noticeable distortion model; it measures the visibility threshold of the human visual system HVS;
the visual attention characteristic builds a visual importance model from early vision and attention shift: at the early-vision stage, people attend to the salient regions with important content, and their attention is subsequently drawn to regions of poor image quality;
in quality evaluation, the left and right images of the stereo image are evaluated separately and their average is taken as the final image quality score; in depth-sense evaluation, analysis of how the sense of depth arises shows that the absolute difference map based on visual disparity is the most direct embodiment of the sense of depth; therefore the reference absolute difference map and the distorted absolute difference map are obtained from the stereo pairs, and the sense of depth is evaluated from these two absolute difference maps with the visual-importance-based method.
The perceptual redundancy model removes the perceptual redundancy of human vision according to the visibility threshold: if a change in the image lies within the visibility threshold, the human eye cannot perceive the distortion; if the change exceeds the visibility threshold, the human eye may perceive it. Based on this property, regions of the distorted image under test that lie within the visibility threshold are replaced by the corresponding regions of the reference image, which conforms to the visual effect, while regions beyond the visibility threshold are modified according to the visibility threshold so as to further emphasize the distortion. Removing the visual redundancy of the image with this threshold makes the evaluation result more accurate.
The visual attention characteristic divides the image into four kinds of regions according to salient regions and poor-quality regions: regions that are both content-important and of poor quality, regions that are only content-important, regions that are only of poor quality, and regions that are neither content-important nor of poor quality; each kind of region is then given a different weight obtained by weight training.
The perceptual redundancy JND model reflects one attribute of the human visual system HVS: it models the luminance contrast and spatial masking effects of the HVS and is effective for assessing perceptual redundancy. On the other hand, the visual attention characteristic is as important as perceptual redundancy in image quality evaluation: in an image, only part of the regions attract human visual attention, namely the content-important regions attended to at the early-vision stage and the poor-quality regions attended to when attention shifts. Visual importance regions are therefore detected from these two aspects. Image quality is then evaluated from the perceptual redundancy JND model and the visual attention characteristic with the following specific steps:
First, the perceptual redundancy JND model measures the visibility threshold of the HVS, and the visual redundancy on the image is removed according to this threshold so that the evaluation result is more accurate. Second, to obtain the visual attention regions, the image is divided into four sub-regions: regions that are both content-important and of poor quality, regions that are only content-important, regions that are only of poor quality, and regions that are neither. Different weights are then trained for these four sub-regions on different scales to form the visual importance VS model. Finally, the VS model is fused into SSIM to obtain the single-view image quality evaluation.
The perceptual redundancy JND model measures the visibility limit of the HVS from the characteristics of the signal: from luminance contrast and masking effects, together with visual sensitivity, a perceptual redundancy JND model can be established. For an image, perceptual redundancy arises mainly from luminance contrast and spatial masking. A change that stays within the visibility threshold given by the perceptual redundancy JND model is imperceptible to the human eye, while a change beyond this threshold is perceived as distortion. Therefore, to make the image quality evaluation better conform to the characteristics of the human eye, the image under test is pre-processed before evaluation. The guiding principle is: if a pixel value lies within the visibility threshold of the perceptual redundancy JND model, the human eye cannot perceive the distortion, and the pixel value of the image under test is replaced by the pixel value of the reference image; if the pixel value lies outside the visibility threshold, then, to emphasize the distortion, the pixel value of the image under test is expanded or shrunk by the value of the perceptual redundancy JND model. The specific redundancy-removal procedure for the image under test is as follows:
(1) The perceptual redundancy JND value osjnd of the reference image is calculated with the model.
(2) The distorted image is modified using the perceptual redundancy JND characteristic:
If the pixel difference between the original image and the distorted image lies within the visibility threshold,
distortion(i, j) = original(i, j)
Otherwise, if the original pixel value is greater than the distorted pixel value, the distorted pixel value is reduced by the threshold to emphasize the distortion:
distortion(i, j) = distortion(i, j) − osjnd(i, j)
If the original pixel value is less than the distorted pixel value, the distorted pixel value is increased by the threshold to emphasize the distortion:
distortion(i, j) = distortion(i, j) + osjnd(i, j)
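A minimal sketch of this correction step with NumPy, assuming the reference image, the distorted image and the per-pixel threshold map osjnd are already available as arrays of the same shape (computing osjnd itself is sketched after the threshold formulas below):

```python
import numpy as np

def remove_perceptual_redundancy(reference, distorted, osjnd):
    """Modify the distorted image according to the JND visibility threshold:
    imperceptible deviations are replaced by the reference pixel, while
    visible deviations are pushed further away to emphasize the distortion."""
    reference = reference.astype(np.float64)
    distorted = distorted.astype(np.float64)
    diff = distorted - reference

    corrected = distorted.copy()
    within = np.abs(diff) <= osjnd                  # change within the threshold
    corrected[within] = reference[within]

    darker = (~within) & (reference > distorted)    # distorted pixel too dark
    corrected[darker] = distorted[darker] - osjnd[darker]

    brighter = (~within) & (reference < distorted)  # distorted pixel too bright
    corrected[brighter] = distorted[brighter] + osjnd[brighter]
    return corrected
```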
The visibility threshold of the perceptual redundancy is given by
SJND = max{ f1(bg(x, y), mg(x, y)), f2(bg(x, y)) }
where f1(bg(x, y), mg(x, y)) and f2(bg(x, y)) estimate the spatial masking effect and the luminance contrast respectively, and f1 is defined as
f1(bg(x, y), mg(x, y)) = mg(x, y) · α(bg(x, y)) + β(bg(x, y))
where mg(x, y) is the maximum of the weighted averages of the luminance changes around pixel (x, y) in four directions, and the function f2(bg(x, y)) computes the visibility threshold as a function of the background luminance bg(x, y).
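The text fixes only ε = 3 (in the claim); the weighting functions α(bg) and β(bg), the base threshold T0 and the slope γ are not reproduced here, so the sketch below fills them with values in the style of classical Chou-and-Li JND models and uses a simplified four-direction gradient. Treat these parameter choices as assumptions rather than the patented settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_jnd(image, T0=17.0, gamma=3.0 / 128.0, eps=3.0):
    """Illustrative per-pixel visibility threshold SJND = max(f1, f2)."""
    img = image.astype(np.float64)

    # Background luminance bg(x, y): local mean over a 5x5 neighbourhood.
    bg = uniform_filter(img, size=5)

    # Luminance-contrast threshold f2(bg): large in dark backgrounds,
    # growing linearly for bright backgrounds (assumed piecewise form).
    f2 = np.where(bg <= 127.0,
                  T0 * (1.0 - np.sqrt(bg / 127.0)) + eps,
                  gamma * (bg - 127.0) + eps)

    # mg(x, y): maximal averaged luminance change over four directions
    # (horizontal, vertical and the two diagonals) -- a simplified stand-in
    # for the four 5x5 gradient masks used in classical JND models.
    smoothed = uniform_filter(img, size=3)
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    grads = [np.abs(smoothed - np.roll(smoothed, s, axis=(0, 1))) for s in shifts]
    mg = np.max(grads, axis=0)

    # Spatial-masking threshold f1(bg, mg) = mg * alpha(bg) + beta(bg)
    # with Chou-and-Li-style alpha and beta (assumed values).
    alpha = 0.0001 * bg + 0.115
    beta = 0.5 - 0.01 * bg
    f1 = mg * alpha + beta

    return np.maximum(f1, f2)
```

The map returned by this function can serve as the osjnd array used in the correction step above.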
A salient region is a region in the image from which the important content of the image can be extracted; it is extracted from the characteristics of the image in the frequency domain. From an information-theoretic viewpoint, image information can be decomposed into two parts, a novel part and a prior part. Different images share a common curve tendency in the log-amplitude spectrum, and the spectral residual on the spectrum corresponds to the novel part of the image; the salient region of the image is constructed from it.
For an input image, A(u, v) and P(u, v) are respectively the amplitude spectrum and the phase spectrum after the Fourier transform. The log spectrum L(u, v) is given by
L(u, v) = log(A(u, v))
The log spectrum reflects the general pattern of natural images, corresponding to the given prior part; its averaged shape A1(u, v) is approximated with a local filter h_n(u, v):
A1(u, v) = h_n(u, v) * L(u, v)
In summary, the spectral residual R(u, v) is defined as
R(u, v) = L(u, v) − A1(u, v)
The spectral residual corresponds to the novel part of the image, i.e. the salient part. The saliency map S(x, y) is then obtained from the spectral residual and the phase spectrum by the inverse Fourier transform, where g(x, y) is a Gaussian filter used to produce a better visual effect by smoothing.
The saliency map highlights the objects that attract the attention of the human eye. To detect the foreground objects in the saliency map, a simple threshold is applied; the foreground object map O(x, y) takes the value 1 where S(x, y) exceeds the threshold and 0 otherwise, where threshold = E(S(x, y)) × 3.
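A compact sketch of this spectral-residual saliency computation with NumPy and SciPy; the 3×3 local averaging filter standing in for h_n and the Gaussian width are assumptions, since the text does not fix them:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(image, sigma=2.5):
    """Saliency map S(x, y) from the spectral residual of the log spectrum."""
    img = image.astype(np.float64)
    F = np.fft.fft2(img)
    A = np.abs(F)                        # amplitude spectrum A(u, v)
    P = np.angle(F)                      # phase spectrum P(u, v)

    L = np.log(A + 1e-12)                # log spectrum L(u, v)
    A1 = uniform_filter(L, size=3)       # local average h_n * L (prior part)
    R = L - A1                           # spectral residual R(u, v)

    # Back to the spatial domain, keeping the original phase, then smooth
    # with a Gaussian g(x, y) for a better visual effect.
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    return gaussian_filter(S, sigma)

def foreground_object_map(S):
    """O(x, y): 1 where the saliency exceeds threshold = E(S) * 3, else 0."""
    return (S > 3.0 * S.mean()).astype(np.uint8)
```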
When viewing an image, poor-quality regions attract attention just as content-important regions do. Poor-quality regions are determined with a percentage-based model: the quality map obtained from the reference image and the distorted image with SSIM is sorted in ascending order; if SSIM(x, y) falls within the lowest p% of values, the pixel is marked as poor quality with value 1, otherwise 0. Denoting the set of the lowest p% of values by A, the poor-quality map VI(x, y) is defined as VI(x, y) = 1 if SSIM(x, y) ∈ A and 0 otherwise.
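A sketch of the poor-quality map; a per-pixel SSIM map can be obtained, for instance, from skimage.metrics.structural_similarity with full=True, and since the text does not fix the percentage, p = 10 below is only an example:

```python
import numpy as np

def poor_quality_map(ssim_map, p=10.0):
    """VI(x, y): mark the lowest p% of per-pixel SSIM values (set A) as
    poor-quality regions (1), everything else as 0."""
    cutoff = np.percentile(ssim_map, p)   # upper boundary of the lowest p%
    return (ssim_map <= cutoff).astype(np.uint8)
```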
Four sub-regions are distinguished when extracting visual importance: regions that are both salient and of poor quality, regions that are only salient, regions that are only of poor quality, and regions that are neither salient nor of poor quality. From the salient-region map and the poor-quality map obtained above, the regions that are both salient and of poor quality are extracted and defined as the bilateral importance map BI (both importance).
Θ(x, y) is defined as the regions that are only salient and not of poor quality, and Ψ(x, y) as the regions that are only of poor quality and not salient. These three kinds of regions are given different weights: the weight of Θ(x, y) is r_s, the weight of Ψ(x, y) is r_vi, and the weight of BI(x, y) is r_both.
Different weights are assigned to the different regions; the weights are trained with step size 1 over the range 1 to 4000 to obtain the best gain, and this training is carried out on every scale. The training set is drawn from the LIVE database, which contains 982 images with five distortion types: JPEG, white noise, JPEG2000, Gaussian blur and fast fading; 150 images covering all distortion types and distortion levels are extracted from the LIVE database. The three weights of a scale are obtained when the correlation coefficient between the objective scores and the subjective scores is highest. In general, the weight of regions that are both salient and of poor quality is higher than the weight of regions that are only salient or only of poor quality. The visual importance model VS is then constructed from the maps obtained above.
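A sketch of the region decomposition and the resulting weight map; the exact formula by which the trained weights form the VS model is not reproduced in this text, so the per-region assignment below (weight 1 for the "neither" region, placeholder values for r_s, r_vi and r_both) is an assumption:

```python
import numpy as np

def visual_importance(O, VI, r_s=2.0, r_vi=2.0, r_both=4.0):
    """Visual importance (VS) weight map built from the salient-region map O
    and the poor-quality map VI; r_s, r_vi and r_both stand for the trained
    weights (placeholders here, not the values learned on LIVE)."""
    O = O.astype(bool)
    VI = VI.astype(bool)

    BI = O & VI            # both salient and of poor quality
    theta = O & ~VI        # only salient        (Theta)
    psi = ~O & VI          # only poor quality   (Psi)
    # remaining pixels are neither salient nor of poor quality -> weight 1

    VS = np.ones(O.shape, dtype=np.float64)
    VS[theta] = r_s
    VS[psi] = r_vi
    VS[BI] = r_both
    return VS
```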
The objective method for evaluating single-view quality proceeds as follows:
(1) the JND values are calculated from the reference image, in order to remove the perceptual redundancy of vision and to further emphasize the distorted regions;
(2) the quality map is calculated from the reference image and the distorted image;
(3) the visual importance VS model is computed from the two attention-attracting features, salient regions and poor-quality regions;
(4) finally, the visual importance model is fused into multi-scale SSIM to obtain the final quality evaluation result.
Let M be the highest scale, with M = 5, and let w_{j,i} be the weight of the VS map at spatial position i on the j-th scale. For j = 1, ..., M−1 the VSSSIM of the j-th scale is defined by pooling the SSIM components over the spatial positions i with the weights w_{j,i}; for j = M the VSSSIM of that scale is defined analogously, where l(X_{j,i}, Y_{j,i}), c(X_{j,i}, Y_{j,i}) and s(X_{j,i}, Y_{j,i}) are the three components of SSIM. The final VSSSIM combines the per-scale values with scale weights consistent with multi-scale SSIM, with M = 5.
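The per-scale fusion formulas are not reproduced in this text, so the sketch below simply follows the usual multi-scale SSIM structure and weights the per-pixel SSIM map of each scale by the corresponding VS map before combining the scales with the standard exponents; read it as one plausible interpretation, not the patented definition. It assumes scikit-image for the per-pixel SSIM map:

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import rescale

BETA = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)   # multi-scale SSIM exponents

def vsssim(reference, distorted, vs_maps, M=5):
    """VS-weighted multi-scale SSIM (illustrative).

    vs_maps is a list of M visual importance maps, one per scale, each
    matching the image size at that scale. For simplicity the full SSIM map
    (luminance, contrast and structure together) is pooled at every scale."""
    ref = reference.astype(np.float64) / 255.0     # inputs assumed 8-bit gray
    dist = distorted.astype(np.float64) / 255.0
    score = 1.0
    for j in range(M):
        _, ssim_map = structural_similarity(ref, dist, data_range=1.0, full=True)
        w = vs_maps[j]
        pooled = np.sum(w * ssim_map) / np.sum(w)  # VS-weighted pooling
        pooled = max(pooled, 1e-6)                 # guard against negative values
        score *= pooled ** BETA[j]
        if j < M - 1:                              # move to a coarser scale
            ref = rescale(ref, 0.5, anti_aliasing=True)
            dist = rescale(dist, 0.5, anti_aliasing=True)
    return score
```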
The beneficial effects of the invention are as follows: the accuracy of the stereo objective evaluation method is verified with four standard indices used for objective evaluation methods. The four performance measures are PLCC, SRCC, KRCC and RMSE; a good objective evaluation method should have high PLCC, SRCC and KRCC and low RMSE. The index values of the quality evaluation in the invention are 0.939, 0.924, 0.768 and 3.8 respectively; the index values of the depth-sense evaluation are 0.942, 0.920, 0.758 and 0.162. From the objective evaluation indices and the scatter plots (see Fig. 4 and Fig. 5) it can be seen that both the depth-sense evaluation and the quality evaluation of stereo images agree closely with the subjective scores, achieving the expected effect.
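For reference, the four indices can be computed with SciPy as follows (a straightforward sketch; `subjective` and `objective` are assumed to be arrays of per-image scores, and the nonlinear regression that is often applied before computing PLCC and RMSE is omitted):

```python
import numpy as np
from scipy import stats

def performance_indices(subjective, objective):
    """PLCC, SRCC, KRCC and RMSE between subjective and objective scores."""
    subjective = np.asarray(subjective, dtype=np.float64)
    objective = np.asarray(objective, dtype=np.float64)
    plcc, _ = stats.pearsonr(subjective, objective)
    srcc, _ = stats.spearmanr(subjective, objective)
    krcc, _ = stats.kendalltau(subjective, objective)
    rmse = float(np.sqrt(np.mean((subjective - objective) ** 2)))
    return {"PLCC": plcc, "SRCC": srcc, "KRCC": krcc, "RMSE": rmse}
```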
Description of the drawings
Fig. 1 is the single-view evaluation framework based on visual importance;
Fig. 2 is the framework of the quality evaluation of a stereo image;
Fig. 3 is the framework of the depth-sense evaluation of a stereo image;
Fig. 4 is the scatter plot of the stereo image quality evaluation against the subjective scores;
Fig. 5 is the scatter plot of the stereo image depth-sense evaluation against the subjective scores.
Specific embodiment
With reference to the drawings, the content of the invention is illustrated with a specific example that carries out the objective evaluation of stereo image quality.
For image quality evaluation, the single-view image quality evaluation method is used: the objective quality evaluation method based on the visual importance model (VSSSIM), as shown in Fig. 1. The left image and the right image are evaluated separately; the evaluation of the left image is explained first, and the right image is evaluated in the same way. First, the left image to be evaluated is corrected according to the JND values, i.e. the perceptual redundancy, of its reference image. The correction step is: if a pixel value lies within the visibility threshold of the JND, the human eye cannot perceive the distortion, and the pixel value of the image under test is replaced by the pixel value of the reference image; if the pixel value lies outside the visibility threshold of the JND, then, to emphasize the distortion, the pixel value of the image under test is expanded or shrunk by the JND value. This correction makes the evaluation more accurate.
Then the salient part and the poor-quality part are extracted from the corrected image under test and the reference image. The extraction proceeds as follows. The salient part corresponds to the spectral residual on the spectrum, i.e. the novel part of the image. For an input image, the amplitude spectrum and the phase spectrum are obtained by the Fourier transform. Because different images share a common curve tendency in the log-amplitude spectrum, the spectral residual corresponds to the novel part of the image, i.e. the salient part. The saliency map is then obtained by the inverse Fourier transform as described above and smoothed with a Gaussian filter g(x, y) to produce a better visual effect.
The saliency map highlights the objects that attract the attention of the human eye. To detect the foreground objects in the saliency map, a simple threshold is applied, and the foreground object map O(x, y) is obtained as defined above.
For poor-quality regions, a percentage-based model is used: the quality map obtained from the reference image and the distorted image with SSIM is sorted in ascending order, and a pixel is marked as poor quality with value 1 if SSIM(x, y) falls within the lowest p% of values, otherwise 0. The set of the lowest p% of values is denoted A, and the poor-quality map VI(x, y) is defined as above.
Four sub-regions are distinguished when extracting visual importance: regions that are both salient and of poor quality, regions that are only salient, regions that are only of poor quality, and regions that are neither. The regions that are both salient and of poor quality are extracted and defined as the bilateral importance map BI (both importance). Θ(x, y) denotes the regions that are only salient and not of poor quality, and Ψ(x, y) the regions that are only of poor quality and not salient. The three kinds of regions are given different weights: the weight of Θ(x, y) is r_s, that of Ψ(x, y) is r_vi, and that of BI(x, y) is r_both. The weights are trained with step size 1 over the range 1 to 4000 to obtain the best gain, and this training is carried out on every scale. The LIVE database contains 982 images with five distortion types: JPEG, white noise, JPEG2000, Gaussian blur and fast fading; the training set of the invention consists of 150 images extracted from the LIVE database covering all distortion types and distortion levels. The main idea is that the three weights of a scale are obtained when the correlation coefficient between the objective scores and the subjective scores is highest. In general, the weight of regions that are both salient and of poor quality is higher than the weight of regions that are only salient or only of poor quality. The visual importance model (VS) is then constructed from the maps obtained above.
Finally, the average of the evaluation results of the two views is taken as the quality evaluation result; the result of IQA (image quality assessment) is thus the average of the left-image and right-image scores, as shown in Fig. 2.
Research on visual disparity shows that the absolute difference map is the basic origin of the sense of depth of a stereo image. The depth-sense (SS) evaluation is therefore based on the absolute difference map of the left and right images, again following the visual-importance-based method. First, the difference maps are computed from the left and right images of the stereo pairs: the reference absolute difference map is the per-pixel absolute difference between the left and right images of the reference stereo pair, and the distorted absolute difference map is the per-pixel absolute difference between the left and right images of the distorted stereo pair. The reference absolute difference map and the distorted absolute difference map are then used as the input images of the depth-sense evaluation, which again incorporates the perceptual redundancy and the visual attention characteristic into SSIM. The specific framework is shown in Fig. 3.
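A minimal sketch of the depth-sense input computation; `evaluate` stands for the single-view quality function (for example the vsssim sketch above), and all symbols are illustrative since the original notation is not preserved in this text:

```python
import numpy as np

def absolute_difference_map(left, right):
    """Per-pixel absolute difference |left - right|, used as the input that
    carries the sense of depth of a stereo pair."""
    return np.abs(left.astype(np.float64) - right.astype(np.float64))

def depth_sense_score(ref_left, ref_right, dis_left, dis_right, evaluate):
    """Depth-sense (SS) evaluation: compare the reference and the distorted
    absolute difference maps with the same VS-weighted SSIM pipeline."""
    ad_ref = absolute_difference_map(ref_left, ref_right)
    ad_dis = absolute_difference_map(dis_left, dis_right)
    return evaluate(ad_ref, ad_dis)
```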
The scatter plot is an intuitive way to judge the quality of an objective evaluation method. From the scatter plot of subjective versus objective scores, the quality of the method can be seen: if the subjective and objective scores have a consistent distribution trend and can be fitted to a smooth curve, the objective evaluation method is good; conversely, if the distribution is scattered, the method is poor. The scatter plot of the quality evaluation of the invention against the subjective quality scores is shown in Fig. 4, and the scatter plot of the depth-sense evaluation against the subjective depth-sense scores is shown in Fig. 5. Each point represents one image; the abscissa of the point is the subjective score of that image, and the ordinate is the objective score. Both scatter plots show that the objective scores and the subjective scores are concentrated close to a smooth fitted curve, which fully demonstrates that the proposed objective evaluation method is consistent with the subjective scores and achieves the expected effect.
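Such a scatter plot can be drawn with a few lines of Matplotlib; the score arrays below are placeholders, not the data of the invention:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder scores; in practice these come from the subjective test
# and from the objective evaluation method.
rng = np.random.default_rng(0)
subjective = rng.uniform(0.0, 100.0, size=150)
objective = 0.9 * subjective + rng.normal(0.0, 5.0, size=150)

plt.scatter(subjective, objective, s=12)
plt.xlabel("Subjective score")
plt.ylabel("Objective score")
plt.title("Objective versus subjective scores")
plt.show()
```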
Apart from the technical features described in this specification, the remaining technology is known to those skilled in the art.
Claims (1)
1. An objective evaluation method for stereo image quality, characterized in that the perceptual redundancy and the visual attention regions of the human eye are detected from the characteristics of visual physiological factors, and stereo image quality is evaluated by combining a perceptual redundancy model with visual attention characteristics; stereo image quality evaluation comprises two parts, quality evaluation and depth-sense evaluation, which rely on a perceptual redundancy JND model and a visual attention characteristic, wherein:
the perceptual redundancy JND model is a just-noticeable distortion model; it measures the visibility threshold of the human visual system HVS;
the visual attention characteristic builds a visual importance model from early vision and attention shift: at the early-vision stage, people attend to the salient regions with important content, and their attention is subsequently drawn to regions of poor image quality;
in quality evaluation, the left and right images of the stereo image are evaluated separately and their average is taken as the final image quality score; in depth-sense evaluation, analysis of how the sense of depth arises shows that the absolute difference map based on visual disparity is the most direct embodiment of the sense of depth; therefore the reference absolute difference map and the distorted absolute difference map are obtained from the stereo pairs, and the sense of depth is evaluated from these two absolute difference maps with the visual-importance-based method;
the perceptual redundancy JND model reflects one attribute of the human visual system HVS: it models the luminance contrast and spatial masking effects of the HVS and is effective for assessing perceptual redundancy; on the other hand, the visual attention characteristic is as important as perceptual redundancy in image quality evaluation: in an image, only part of the regions attract human visual attention, namely the content-important regions attended to at the early-vision stage and the poor-quality regions attended to when attention shifts; visual importance regions are therefore detected from these two aspects, and image quality is then evaluated from the perceptual redundancy JND model and the visual attention characteristic with the following specific steps:
first, the perceptual redundancy JND model measures the visibility threshold of the HVS, and the visual redundancy on the image is removed according to this threshold so that the evaluation result is more accurate; second, to obtain the visual attention regions, the image is divided into four sub-regions: regions that are both content-important and of poor quality, regions that are only content-important, regions that are only of poor quality, and regions that are neither content-important nor of poor quality; different weights are then trained for these four sub-regions on different scales to form the visual importance VS model; finally, the VS model is fused into SSIM to obtain the single-view image quality evaluation;
the perceptual redundancy JND model measures the visibility limit of the HVS from the characteristics of the signal; from luminance contrast and masking effects, together with visual sensitivity, a perceptual redundancy JND model is established; for an image, perceptual redundancy arises mainly from luminance contrast and spatial masking: a change that stays within the visibility threshold given by the perceptual redundancy JND model is imperceptible to the human eye, while a change beyond this threshold is perceived as distortion; therefore, to make the image quality evaluation better conform to the characteristics of the human eye, the image under test is pre-processed before evaluation, the guiding principle being: if a pixel value lies within the visibility threshold of the perceptual redundancy JND model, the human eye cannot perceive the distortion, and the pixel value of the image under test is replaced by the pixel value of the reference image; if the pixel value lies outside the visibility threshold of the perceptual redundancy JND model, then, to emphasize the distortion, the pixel value of the image under test is expanded or shrunk by the value of the perceptual redundancy JND model; the specific redundancy-removal procedure for the image under test is as follows:
(1) the perceptual redundancy JND value osjnd of the reference image is calculated with the model;
(2) the distorted image is modified using the perceptual redundancy JND characteristic:
if the pixel difference between the original image and the distorted image lies within the visibility threshold,
distortion(i, j) = original(i, j)
otherwise, if the original pixel value is greater than the distorted pixel value, the distorted pixel value is reduced by the visibility threshold to emphasize the distortion:
distortion(i, j) = distortion(i, j) − osjnd(i, j)
if the original pixel value is less than the distorted pixel value, the distorted pixel value is increased by the visibility threshold to emphasize the distortion:
distortion(i, j) = distortion(i, j) + osjnd(i, j)
the visibility threshold of the perceptual redundancy is given by
SJND = max{ f1(bg(x, y), mg(x, y)), f2(bg(x, y)) }
where f1(bg(x, y), mg(x, y)) and f2(bg(x, y)) estimate the spatial masking effect and the luminance contrast respectively, and f1(bg(x, y), mg(x, y)) is defined as
f1(bg(x, y), mg(x, y)) = mg(x, y) · α(bg(x, y)) + β(bg(x, y))
where mg(x, y) is the maximum of the weighted averages of the luminance changes around pixel (x, y) in four directions;
the function f2(bg(x, y)) computes the visibility threshold as a function of the background luminance, where T0 is the visibility threshold at background gray level 0, γ is the slope of the tangent at higher background luminance, and ε is a constant describing its influence on the background-luminance visibility threshold; empirically, ε = 3 fully reflects the background-luminance visibility threshold;
a salient region is a region in the image from which the important content of the image can be extracted; it is extracted from the characteristics of the image in the frequency domain; from an information-theoretic viewpoint, image information can be decomposed into two parts, a novel part and a prior part; different images share a common curve tendency in the log-amplitude spectrum, and the spectral residual on the spectrum corresponds to the novel part of the image, from which the salient region of the image is constructed;
I(x, y) is an input image, and A(u, v) and P(u, v) are respectively the amplitude spectrum and the phase spectrum after the Fourier transform; the log spectrum L(u, v) is given by
L(u, v) = log(A(u, v))
the log spectrum reflects the general pattern of natural images, corresponding to the given prior part; its averaged shape A1(u, v) is approximated with a local filter h_n(u, v):
A1(u, v) = h_n(u, v) * L(u, v)
in summary, the spectral residual R(u, v) is defined as
R(u, v) = L(u, v) − A1(u, v)
the spectral residual corresponds to the novel part of the image, i.e. the salient part; the saliency map S(x, y) is then obtained from the spectral residual and the phase spectrum by the inverse Fourier transform, where g(x, y) is a Gaussian filter used to produce a better visual effect by smoothing;
the saliency map highlights the objects that attract the attention of the human eye; to detect the foreground objects in the saliency map, a simple threshold is applied, and the foreground object map O(x, y) takes the value 1 where S(x, y) exceeds the threshold and 0 otherwise, where threshold = E(S(x, y)) × 3;
when viewing an image, poor-quality regions attract attention just as content-important regions do; poor-quality regions are determined with a percentage-based model: the quality map obtained from the reference image and the distorted image with SSIM is sorted in ascending order, and a pixel is marked as poor quality with value 1 if SSIM(x, y) falls within the lowest s% of values, otherwise 0; the set of the lowest s% of values is denoted A, and the poor-quality map VI(x, y) is defined as 1 where SSIM(x, y) ∈ A and 0 otherwise;
four sub-regions are distinguished when extracting visual importance: regions that are both salient and of poor quality, regions that are only salient, regions that are only of poor quality, and regions that are neither salient nor of poor quality; from the salient-region map and the poor-quality map obtained above, the regions that are both salient and of poor quality are extracted and defined as the bilateral importance map BI (both importance);
Θ(x, y) is defined as the regions that are only salient, Ψ(x, y) as the regions that are only of poor quality, and BI(x, y) as the regions that are both salient and of poor quality; the three kinds of regions are given different weights: the weight of Θ(x, y) is r_s ≥ 1, the weight of Ψ(x, y) is r_vi ≥ 1, and the weight of BI(x, y) is r_both ≥ 1;
different weights are assigned to the different regions; the weights are trained with step size 1 over the range 1 to 4000 to obtain the best gain, and this training is carried out on every scale; the training set is drawn from the LIVE database, which contains 982 images with five distortion types: JPEG, white noise, JPEG2000, Gaussian blur and fast fading; 150 images covering all distortion types and distortion levels are extracted from the LIVE database; the three weights are obtained when the correlation coefficient between the objective scores and the subjective scores is highest; in general, the weight of regions that are both salient and of poor quality is higher than the weight of regions that are only salient or only of poor quality; the visual importance model VS is then constructed from the maps obtained above;
the objective method for evaluating single-view quality proceeds as follows:
(1) the JND values are calculated from the reference image, in order to remove the perceptual redundancy of vision and to further emphasize the distorted regions;
(2) the quality map is calculated from the reference image and the distorted image;
(3) the visual importance VS model is computed from the two attention-attracting features, salient regions and poor-quality regions;
(4) finally, the visual importance model is fused into multi-scale SSIM to obtain the final quality evaluation result;
let M be the highest scale, with M = 5, and let w_{j,i} be the weight of the VS map at spatial position i on the j-th scale; for j = 1, ..., M−1 the VSSSIM of the j-th scale is defined by pooling the SSIM components over the spatial positions i with the weights w_{j,i}, and for j = M the VSSSIM of that scale is defined analogously, where l(X_{j,i}, Y_{j,i}), c(X_{j,i}, Y_{j,i}) and s(X_{j,i}, Y_{j,i}) are the three components of SSIM; the final VSSSIM combines the per-scale values with the scale weights β_j of multi-scale SSIM, with M = 5 and {β1, β2, β3, β4, β5} = {0.0448, 0.2856, 0.3001, 0.2363, 0.1333}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310348550.8A CN103426173B (en) | 2013-08-12 | 2013-08-12 | Objective evaluation method for stereo image quality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103426173A CN103426173A (en) | 2013-12-04 |
CN103426173B true CN103426173B (en) | 2017-05-10 |
Family
ID=49650863
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310348550.8A Active CN103426173B (en) | 2013-08-12 | 2013-08-12 | Objective evaluation method for stereo image quality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103426173B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123723A (en) * | 2014-07-08 | 2014-10-29 | 上海交通大学 | Structure compensation based image quality evaluation method |
CN105719264B (en) * | 2014-11-30 | 2018-08-21 | 中国科学院沈阳自动化研究所 | A kind of image enhancement evaluation method based on human-eye visual characteristic |
CN104850893A (en) * | 2014-12-01 | 2015-08-19 | 厦门易联创质检技术服务有限公司 | Quality perception information management method and system based on three dimensional evaluation and time domain tracing |
CN105496459B (en) * | 2016-01-15 | 2018-09-21 | 飞依诺科技(苏州)有限公司 | Automatic adjustment method and system for ultrasonic imaging equipment |
CN107871305B (en) * | 2016-09-27 | 2020-04-21 | 深圳正品创想科技有限公司 | Picture quality rating method and device and terminal equipment |
CN106803952B (en) * | 2017-01-20 | 2018-09-14 | 宁波大学 | In conjunction with the cross validation depth map quality evaluating method of JND model |
JP6560707B2 (en) * | 2017-04-20 | 2019-08-14 | ファナック株式会社 | Machined surface quality evaluation device |
CN110163901A (en) * | 2019-04-15 | 2019-08-23 | 福州瑞芯微电子股份有限公司 | A kind of post-processing evaluation method and system |
CN112330585B (en) * | 2019-07-31 | 2024-07-02 | 北京金山云网络技术有限公司 | Image quality detection method and device and electronic equipment |
CN112233065B (en) * | 2020-09-15 | 2023-02-24 | 西北大学 | Total-blind image quality evaluation method based on multi-dimensional visual feature cooperation under saliency modulation |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102905130A (en) * | 2012-09-29 | 2013-01-30 | 浙江大学 | Multi-resolution JND (Just Noticeable Difference) model building method based on visual perception |
Also Published As
Publication number | Publication date |
---|---|
CN103426173A (en) | 2013-12-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |