CN107578404A - Full-reference objective quality evaluation method for stereo images based on visual saliency feature extraction - Google Patents
- Publication number: CN107578404A
- Application number: CN201710721546.XA
- Authority: CN (China)
- Legal status: Granted (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Landscapes
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a full-reference objective quality evaluation method for stereo images based on visual saliency feature extraction. The left and right views of each stereo image pair are processed to obtain the corresponding disparity map. The left and right views of the reference and distorted stereo image pairs are each fused to obtain intermediate reference and distorted images; reference and distortion saliency maps are obtained with the spectral residual visual saliency model and integrated into a single visual saliency map. Visual information features are extracted from the intermediate reference and distorted images, and depth information features are extracted from the disparity maps of the stereo image pairs. Similarity measurement then yields a saliency-enhanced metric for each visual information feature; these metrics are fed to support vector machine training and prediction to obtain an objective quality score, realizing the mapping to stereo image quality and completing its measurement and evaluation. The proposed objective image quality evaluation agrees well with subjective assessment, and its performance surpasses existing stereo image quality evaluation methods.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a full-reference objective quality evaluation method for stereo images based on visual saliency feature extraction.
Background technology
Images suffer various distortions during sampling, transmission, compression, and reconstruction, while human requirements on the quality of image information keep rising. With the development of 3D video and related technology, stereo image quality assessment is therefore becoming ever more important in everyday life. Subjective stereo image quality evaluation methods require human observers to assign a subjective quality score to every image; they are time-consuming and poorly reproducible, so it is necessary to develop objective stereo image quality evaluation methods that assess quality automatically, efficiently, and objectively. An ideal objective IQA index should predict image quality well and be fully consistent with subjective measurement.
Objective stereo image quality evaluation falls into three categories: full-reference, reduced-reference, and no-reference methods, distinguished mainly by how much they depend on the original reference image. Current research at home and abroad concentrates on full-reference and no-reference evaluation; full-reference image quality evaluation has been studied more, its techniques are comparatively mature, and the models built on it agree better with subjective measurement. However, because visual models for stereo images are still imperfect, objective stereo image quality evaluation remains a hot and difficult research topic.
Content of the invention
The invention discloses a full-reference objective quality evaluation method for stereo images based on visual saliency feature extraction. Its purpose is to use a visual saliency model to assist the extraction of stereo image visual features, and to integrate the visual feature information so as to realize the mapping to stereo image quality, completing the measurement and evaluation of stereo image quality.
The technical scheme adopted by the invention is as follows:
First, the left and right views of the reference and distorted stereo image pairs are processed with a structural similarity algorithm to obtain the corresponding disparity maps, and are fused with a binocular view fusion algorithm to obtain the intermediate reference and distorted images. Secondly, reference and distortion saliency maps are computed from the intermediate images with the spectral residual visual saliency model and integrated into one visual saliency map by a maximization formula. Then, visual information features are extracted from the intermediate reference and distorted images and depth information features from the disparity maps, and similarity measurement is performed under the guidance of the visual saliency map, yielding a metric for each visual information feature. Finally, the feature metrics are fed to support vector machine training and prediction, realizing the mapping to stereo image quality and completing its measurement and evaluation.
The technical solution adopted by the invention to solve the technical problem comprises the following steps:
Step (1): input a reference stereo image pair and a distorted stereo image pair, each consisting of a left-view image and a right-view image;
Step (2): build a log-Gabor filter model and convolve it with the stereo image pairs from step (1) to obtain the energy response maps of the left and right views of the reference and distorted stereo image pairs;
The log-Gabor filter is expressed as
h_LG = exp{ −[log(f/f0)]² / (2[log(σf/f0)]²) − (θ − θ0)² / (2σθ²) }   (2-1)
where f0 and θ0 are the centre frequency and orientation of the log-Gabor filter, σθ and σf are its angular and radial bandwidths, and f and θ are the radial coordinate and orientation of the filter.
Convolving the log-Gabor filter with each left and right view of the reference and distorted stereo image pairs gives the corresponding energy response map:
F(x, y) = I(x, y) ⊗ h_LG   (2-2)
where I(x, y) is a left or right view of the reference or distorted stereo image pair and ⊗ denotes convolution;
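As a sketch, the log-Gabor filter of Eq. (2-1) can be built directly on the FFT frequency grid and applied by frequency-domain multiplication. The function names are illustrative, and the default parameter values are taken from the embodiment (σθ = π/18, σf = 0.75, f0 = 1/6, θ0 = 0); they are not mandated by the claims.

```python
import numpy as np

def log_gabor(shape, f0=1/6.0, theta0=0.0, sigma_f=0.75, sigma_theta=np.pi/18):
    """Log-Gabor transfer function of Eq. (2-1) on an FFT frequency grid."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-6                       # avoid log(0) at the DC term
    theta = np.arctan2(fy, fx)
    # wrap the angular difference into [-pi, pi]
    dtheta = np.angle(np.exp(1j * (theta - theta0)))
    radial = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_f / f0) ** 2))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    h = radial * angular
    h[0, 0] = 0.0                        # suppress the DC component
    return h

def energy_response(img, **kw):
    """Eq. (2-2): convolution realized as multiplication in the frequency domain."""
    H = log_gabor(img.shape, **kw)
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * H))

img = np.random.rand(64, 64)
F = energy_response(img)
print(F.shape)
```

In practice one such response map is computed per (f, θ) combination, twenty in the embodiment.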
Step (3): extract disparity maps from the reference stereo image pair and the distorted stereo image pair input in step (1);
Step (4): shift the right view of each stereo image pair horizontally according to the disparity values of the disparity map obtained in step (3), constructing a calibrated right view I_R((x+d), y) whose pixel coordinates correspond to the left view; then obtain the energy response maps of the left view and the calibrated right view from step (2) and compute the normalized left-view weight map W_L(x, y) and calibrated right-view weight map W_R((x+d), y):
W_L(x, y) = F_L(x, y) / [F_L(x, y) + F_R((x+d), y)],  W_R((x+d), y) = F_R((x+d), y) / [F_L(x, y) + F_R((x+d), y)]
where F_L(x, y) and F_R((x+d), y) are the energy response maps of the left view and the calibrated right view from step (2), and d is the disparity value at the corresponding coordinate of the disparity map D computed in step (3);
Step (5): using the left views of the reference and distorted stereo image pairs from step (1), the calibrated right views from step (4), and the normalized left-view and calibrated right-view weight maps, fuse each stereo image pair with the binocular view fusion model to obtain the intermediate reference and distorted images; the binocular view fusion formula is
CI(x, y) = W_L(x, y) × I_L(x, y) + W_R((x+d), y) × I_R((x+d), y)   (5-1)
where CI(x, y) is the fused intermediate image, and I_L(x, y) and I_R((x+d), y) are the left view and calibrated right view of the stereo image pair;
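The fusion of Eq. (5-1) with the normalized energy weights of step (4) can be sketched as follows; `fuse_views` and the small `eps` guard against a zero denominator are illustrative choices, not from the patent.

```python
import numpy as np

def fuse_views(IL, IR_cal, FL, FR_cal, eps=1e-8):
    """Energy-weighted binocular fusion, Eq. (5-1).

    IL: left view; IR_cal: disparity-compensated (calibrated) right view;
    FL, FR_cal: their log-Gabor energy response maps.
    """
    WL = FL / (FL + FR_cal + eps)        # normalized left-view weight
    WR = 1.0 - WL                        # normalized right-view weight
    return WL * IL + WR * IR_cal

IL = np.full((4, 4), 0.2)
IR = np.full((4, 4), 0.8)
FL = np.ones((4, 4))
FR = np.ones((4, 4))
CI = fuse_views(IL, IR, FL, FR)
print(CI[0, 0])
```

With equal energy responses the weights are both 0.5, so the fused value is the plain average of the two views.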
Step (6): extract reference and distortion visual saliency maps from the intermediate reference and distorted images obtained in step (5) and integrate them, establishing the integrated visual saliency map S_f.
Step (7): extract depth feature information from the disparity maps of the reference and distorted stereo image pairs obtained in step (3), and measure the distortion level of the disparity map of the distorted stereo image pair; the similarity of the depth features of the reference and distorted stereo image pairs is measured with a pixel-domain error method and serves as an index of the disparity-map quality degradation of the distorted pair, where D_ref denotes the disparity map of the reference image, D_dis the disparity map of the distorted image, E(·) is the mean function, ε is a positive constant preventing a zero denominator, and Index1 and Index2 are the two similarity metrics of the depth feature information;
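The exact expressions for Index1 and Index2 are not legible in this copy of the patent. Purely as an illustration of a pixel-domain similarity between disparity maps, one mean-based and one variance-based SSIM-style term might look like this; the pairing and both formulas are assumptions, not the patent's equations.

```python
import numpy as np

def depth_similarity(D_ref, D_dis, eps=0.001):
    """Illustrative depth-feature similarity indices (assumed forms).

    eps = 0.001 follows the embodiment; it guards the denominators.
    """
    mu_r, mu_d = np.mean(D_ref), np.mean(D_dis)
    var_r, var_d = np.var(D_ref), np.var(D_dis)
    index1 = (2 * mu_r * mu_d + eps) / (mu_r**2 + mu_d**2 + eps)     # mean term
    index2 = (2 * np.sqrt(var_r * var_d) + eps) / (var_r + var_d + eps)  # spread term
    return index1, index2

D = np.random.rand(32, 32) * 20          # a synthetic disparity map
i1, i2 = depth_similarity(D, D)
print(round(i1, 6), round(i2, 6))
```

For identical reference and distorted disparity maps both indices equal 1, the behaviour any similarity metric of this kind should have.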
Step (8): extract edge and texture features from the intermediate reference and distorted images obtained in step (5);
The image under test is convolved with the Prewitt operator to obtain a gradient map containing edge contour information; in the edge feature expression, f(x, y) is a left/right view of the stereo image pair, ⊗ denotes convolution, and h_x and h_y are the 3 × 3 Prewitt vertical and horizontal templates, used respectively to detect the horizontal and vertical edges of the image.
The texture feature is extracted with the local binary pattern (LBP):
LBP(x, y) = Σ_{p=0}^{7} sgn(g_p − g_c) · 2^p
where g_c is the gray value of the central pixel of the image, g_p is the gray value of a neighbouring pixel, x and y are the coordinates of the central pixel, and sgn(·) is the step function;
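The 3 × 3 Prewitt templates and the basic 8-neighbour LBP described above can be sketched as follows; the edge padding and the neighbour ordering are implementation choices not specified by the patent.

```python
import numpy as np

def prewitt_gradient(img):
    """Gradient magnitude from the 3x3 Prewitt templates (edge feature GM)."""
    hx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)  # vertical template
    hy = hx.T                                                    # horizontal template
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img, float)
    gy = np.zeros_like(img, float)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += hx[i, j] * patch
            gy += hy[i, j] * patch
    return np.sqrt(gx**2 + gy**2)

def lbp(img):
    """Basic 8-neighbour local binary pattern (texture feature TI)."""
    pad = np.pad(img, 1, mode='edge')
    code = np.zeros(img.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (di, dj) in enumerate(offsets):
        nb = pad[1 + di:1 + di + img.shape[0], 1 + dj:1 + dj + img.shape[1]]
        code |= ((nb >= img).astype(np.uint8) << bit)   # sgn(g_p - g_c) as a bit
    return code

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # a vertical step edge
gm = prewitt_gradient(img)
codes = lbp(img)
print(int(codes[0, 0]))  # 255: in a flat region every neighbour >= centre
```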
Step (9): multiply, pixel by pixel, the visual information features extracted in step (8) from the intermediate reference and distorted images with the visual saliency map established in step (6), obtaining the saliency-enhanced visual information features:
GMS_R(x, y) = GM_R(x, y) · S_f(x, y),  GMS_D(x, y) = GM_D(x, y) · S_f(x, y)   (9-1)
TIS_R(x, y) = TI_R(x, y) · S_f(x, y),  TIS_D(x, y) = TI_D(x, y) · S_f(x, y)   (9-2)
IS_R(x, y) = I_R(x, y) · S_f(x, y),  IS_D(x, y) = I_D(x, y) · S_f(x, y)   (9-3)
where GM_R, TI_R and I_R are the edge, texture and luminance information of the intermediate reference image, GM_D, TI_D and I_D are those of the intermediate distorted image, and S_f is the integrated visual saliency map obtained in step (6);
Step (10): perform similarity measurement on the saliency-enhanced visual information features extracted in step (9), where GMS_R, TIS_R and IS_R denote the saliency-enhanced edge, texture and luminance features of the intermediate reference image, GMS_D, TIS_D and IS_D those of the intermediate distorted image, Index3, Index4 and Index5 the similarity metrics of the edge, texture and luminance features, and C4 a positive constant preventing a zero denominator;
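The patent's expressions for Index3-Index5 are likewise not legible here. As a hedged illustration, a common SSIM-style local similarity between a reference and a distorted feature map, pooled over the saliency map S with the C4 guard, could look like this; the exact form is an assumption.

```python
import numpy as np

def saliency_weighted_similarity(A, B, S, c4=0.5):
    """Illustrative similarity for one saliency-enhanced feature pair.

    A, B: reference/distorted feature maps (edge, texture, or luminance);
    S: saliency map used as pooling weight; c4 = 0.5 follows the embodiment.
    """
    local = (2 * A * B + c4) / (A**2 + B**2 + c4)   # per-pixel similarity in (0, 1]
    return np.sum(local * S) / np.sum(S)            # saliency-weighted pooling

A = np.random.rand(16, 16)
S = np.random.rand(16, 16) + 0.1
val = saliency_weighted_similarity(A, A, S)
print(round(val, 6))
```

Identical feature maps give a similarity of exactly 1; any distortion pulls the pooled value below 1, most strongly in salient regions.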
Step (11): down-sample the intermediate reference and distorted images obtained in step (5) to obtain intermediate reference and distorted images at p scales; apply the methods of steps (6), (9) and (10) to the images at the p scales to build visual saliency maps, extract visual features and measure similarity, obtaining n similarity metrics in total, with n = 2p + 2;
The down-sampling proceeds as follows: an input image is passed through a low-pass filter, and the filtered image is decimated by a factor m to obtain the down-sampled image.
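A minimal sketch of the low-pass-then-decimate pyramid; the 3 × 3 box filter stands in for the unspecified low-pass filter, while m = 2 and p = 3 follow the embodiment.

```python
import numpy as np

def downsample(img, m=2):
    """Low-pass filter, then decimate by factor m (m = 2 in the embodiment)."""
    pad = np.pad(img, 1, mode='edge')
    smooth = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0   # 3x3 box low-pass
    return smooth[::m, ::m]                                    # keep every m-th pixel

pyramid = [np.random.rand(64, 64)]
for _ in range(2):                     # p = 3 scales in total
    pyramid.append(downsample(pyramid[-1]))
print([im.shape for im in pyramid])   # [(64, 64), (32, 32), (16, 16)]
```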
Step (12): integrate the metrics obtained in steps (8) and (11), perform support vector regression (SVR) training and prediction to obtain the optimal prediction model, and map its output to the objective quality score of the image.
According to the patented method, step (3) comprises the following steps:
Step (3.1): shift all pixels of the right views of the reference and distorted stereo image pairs horizontally to the right n times with a step of s pixels, obtaining k corrected right views I_R((x + i·s), y), i = 1, 2, …, k, with k = n/s; each corrected right view is labelled with its index i.
Step (3.2): compute the structural similarity between the left view of each stereo image pair and the k corrected right views with the structural similarity algorithm SSIM, obtaining k structural similarity maps; the SSIM expression is
SSIM(x, y) = [(2 μx μy + C1)(2 σxy + C2)] / [(μx² + μy² + C1)(σx² + σy² + C2)]
where μx and μy are the means within corresponding image blocks of the left view and a corrected right view, σx² and σy² are the variances within those blocks, σxy is the covariance between the two blocks, and C1 and C2 are positive constants preventing a zero denominator;
Step (3.3): for each pixel (x, y) of the left view, find which of the k structural similarity maps has the largest local structural similarity at that pixel; its label i is the disparity value of pixel (x, y), recorded as d, thereby building the disparity map D.
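Steps (3.1)-(3.3) can be sketched as follows, assuming a simple box-window SSIM with the embodiment's constants (C1 = 6.5025, C2 = 58.5225, s = 1). `np.roll` stands in for the horizontal shift and wraps at the border, which the patent does not specify.

```python
import numpy as np

def ssim_map(x, y, c1=6.5025, c2=58.5225, win=7):
    """Local SSIM between two views, using a sliding box window."""
    def box(a):
        pad = np.pad(a, win // 2, mode='edge')
        return sum(pad[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(win) for j in range(win)) / win**2
    mx, my = box(x), box(y)
    vx = box(x * x) - mx * mx
    vy = box(y * y) - my * my
    cxy = box(x * y) - mx * my
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def disparity_map(left, right, max_d=25, step=1):
    """Keep, per pixel, the shift i whose shifted right view best matches the
    left view in SSIM (steps 3.1-3.3); s = 1, n = 25 in the embodiment."""
    h, w = left.shape
    best_sim = np.full((h, w), -np.inf)
    disp = np.zeros((h, w), int)
    for i in range(0, max_d + 1, step):
        shifted = np.roll(right, i, axis=1)      # horizontal right shift by i
        sim = ssim_map(left, shifted)
        better = sim > best_sim
        disp[better] = i
        best_sim[better] = sim[better]
    return disp

rng = np.random.default_rng(0)
right = rng.random((24, 48))
left = np.roll(right, 3, axis=1)                 # ground-truth disparity of 3
disp = disparity_map(left, right, max_d=6)
print(np.mean(disp == 3))
```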
According to the patented method, step (6) is specifically as follows:
The visual saliency maps are extracted with the spectral residual (SR) visual saliency model. Given an image I(x, y):
A(f) = Re{F[I(x, y)]}
P(f) = Angle{F[I(x, y)]}
L(f) = log[A(f)]   (6-1)
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse, Re(·) takes the real part, Angle(·) takes the argument, S(x, y) is the saliency map obtained by the spectral residual method, g(x, y) is a Gaussian low-pass filter, h_n(f) is a local mean filter, and σ is the standard deviation of the probability distribution;
The reference and distortion visual saliency maps of the intermediate reference and distorted images are obtained by the spectral residual method and integrated into one visual saliency map as
S_f(x, y) = Max[S_ref(x, y), S_dis(x, y)]   (6-4)
where S_ref and S_dis are the visual saliency maps of the intermediate reference and distorted images, and S_f is the integrated visual saliency map.
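A sketch of the spectral residual saliency of Eqs. (6-1)-(6-4): the log spectrum minus its local average is recombined with the phase, inverse-transformed, squared, and smoothed. The 3 × 3 local-mean window and the FFT-based Gaussian smoothing are implementation choices; σ = 1.5 follows the embodiment.

```python
import numpy as np

def spectral_residual_saliency(img, avg_win=3, sigma=1.5):
    """Spectral residual (SR) saliency map, normalized to [0, 1]."""
    F = np.fft.fft2(img)
    A = np.abs(F)                    # amplitude spectrum
    P = np.angle(F)                  # phase spectrum
    L = np.log(A + 1e-8)             # log spectrum, Eq. (6-1)
    # h_n(f): local mean filter on the log spectrum
    pad = np.pad(L, avg_win // 2, mode='edge')
    Lavg = sum(pad[i:i + L.shape[0], j:j + L.shape[1]]
               for i in range(avg_win) for j in range(avg_win)) / avg_win**2
    R = L - Lavg                     # the spectral residual
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    # g(x, y): Gaussian low-pass smoothing, applied in the frequency domain
    fy = np.fft.fftfreq(S.shape[0])[:, None]
    fx = np.fft.fftfreq(S.shape[1])[None, :]
    G = np.exp(-2 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    S = np.real(np.fft.ifft2(np.fft.fft2(S) * G))
    return S / (S.max() + 1e-8)

S_ref = spectral_residual_saliency(np.random.rand(64, 64))
S_dis = spectral_residual_saliency(np.random.rand(64, 64))
S_f = np.maximum(S_ref, S_dis)       # Eq. (6-4): pixel-wise maximum
print(S_f.shape)
```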
According to the patented method, the SVR training and prediction of step (12), which yields the optimal prediction model, is specifically as follows:
The SVR training and prediction trains and tests the model with 5-fold cross-validation:
Step (12.1): randomly divide the samples into five mutually disjoint parts; select four parts for SVR training to obtain the best model, then apply that model to the remaining part, obtaining the corresponding objective quality values that predict the subjective quality;
Step (12.2): repeat the operation of step (12.1) many times and take the average of all results to characterize the performance of the proposed model;
The expression is
Q = SVR(Index1, Index2, …, Indexn)   (12-1)
where Q is the objective quality score.
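The 5-fold scheme of steps (12.1)-(12.2) can be sketched as follows. To keep the sketch dependency-free, an ordinary least-squares regressor stands in for the SVR (in practice an SVR, e.g. sklearn.svm.SVR, would be trained on the n similarity indices); the harness itself is the part the patent describes.

```python
import numpy as np

def five_fold_cv(X, y, fit, predict, seed=0):
    """Step (12): train on four folds, predict the held-out fold, for all folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 5)             # five mutually disjoint parts
    preds = np.empty(len(y), dtype=float)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        model = fit(X[train], y[train])
        preds[test] = predict(model, X[test])  # objective quality for held-out part
    return preds

# dependency-free stand-in for the SVR: least squares with a bias term
def fit_ls(X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict_ls(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

rng = np.random.default_rng(1)
X = rng.random((100, 8))              # n = 8 similarity indices per image pair
y = X @ rng.random(8) + 0.1           # synthetic "subjective" scores
Q = five_fold_cv(X, y, fit_ls, predict_ls)
print(np.corrcoef(Q, y)[0, 1])
```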
Beneficial effects of the present invention:
The invention assists the extraction of visual information features with a visual saliency map and integrates the feature information to realize the mapping to stereo image quality, achieving objective evaluation of the quality of distorted stereo image pairs. Experimental results show that the evaluation performance of the proposed method agrees well with subjective assessment and surpasses conventional stereo image quality evaluation methods.
Brief description of the drawings
Fig. 1 is a schematic diagram of the full-reference objective quality evaluation method for stereo images based on visual saliency feature extraction of the invention.
Embodiment
The method of the invention is further described below with reference to the accompanying drawings.
Step (1): read in, with Matlab, the reference stereo image pairs and corresponding distorted stereo image pairs of Phase I and Phase II of the LIVE 3D image database of the University of Texas at Austin, each stereo image pair consisting of a left-view image and a right-view image.
Step (2): build a log-Gabor filter model and convolve it with the stereo image pairs from step (1) to obtain the energy response maps of the left and right views of the reference and distorted stereo image pairs;
The log-Gabor filter follows Eq. (2-1), where f0 and θ0 are the centre frequency and orientation, σθ and σf the angular and radial bandwidths, and f and θ the radial coordinate and orientation of the filter. Here σθ = π/18, σf = 0.75, f0 = 1/6, θ0 = 0, f = 0, π/4, π/3, 3π/4 and θ = 0, π/5, 2π/5, 3π/5, 4π/5, giving 4 × 5 = 20 log-Gabor filter energy response maps. The local energy of the log-Gabor filter response is defined as the maximum of the energies across scales, and the local energy within each scale as the sum of the local energies over the orientations.
Convolving the log-Gabor filter with each left and right view gives the corresponding energy response map F(x, y) = I(x, y) ⊗ h_LG, where I(x, y) is a left or right view of the reference or distorted stereo image pair and ⊗ denotes convolution;
Step (3): extract disparity maps from the reference stereo image pair and the distorted stereo image pair input in step (1);
Step (4): shift the right view of each stereo image pair horizontally according to the disparity values of the disparity map obtained in step (3), constructing a calibrated right view I_R((x+d), y) whose pixel coordinates correspond to the left view; then obtain the energy response maps of the left view and the calibrated right view from step (2) and compute the normalized left-view weight map W_L(x, y) and calibrated right-view weight map W_R((x+d), y), where F_L(x, y) and F_R((x+d), y) are the energy response maps of the left view and calibrated right view from step (2), and d is the disparity value at the corresponding coordinate of the disparity map D computed in step (3);
Step (5): using the left views of the reference and distorted stereo image pairs from step (1), the calibrated right views from step (4), and the normalized left-view and calibrated right-view weight maps, fuse each stereo image pair with the binocular view fusion model to obtain the intermediate reference and distorted images; the binocular view fusion formula is
CI(x, y) = W_L(x, y) × I_L(x, y) + W_R((x+d), y) × I_R((x+d), y)   (5-1)
where CI(x, y) is the fused intermediate image, and I_L(x, y) and I_R((x+d), y) are the left view and calibrated right view of the stereo image pair;
Step (6): extract reference and distortion visual saliency maps from the intermediate reference and distorted images obtained in step (5) and integrate them, establishing the integrated visual saliency map S_f.
Step (7): extract depth feature information from the disparity maps of the reference and distorted stereo image pairs obtained in step (3), and measure the distortion level of the disparity map of the distorted stereo image pair; the similarity of the depth features of the reference and distorted stereo image pairs is measured with a pixel-domain error method and serves as an index of the disparity-map quality degradation of the distorted pair, where D_ref denotes the disparity map of the reference image, D_dis the disparity map of the distorted image, E(·) is the mean function, and ε is a positive constant preventing a zero denominator, ε = 0.001; Index1 and Index2 are the two similarity metrics of the depth feature information;
Step (8): extract edge and texture features from the intermediate reference and distorted images obtained in step (5);
The image under test is convolved with the Prewitt operator to obtain a gradient map containing edge contour information; in the edge feature expression, f(x, y) is a left/right view of the stereo image pair, ⊗ denotes convolution, and h_x and h_y are the 3 × 3 Prewitt vertical and horizontal templates, used respectively to detect the horizontal and vertical edges of the image.
The texture feature is extracted with the local binary pattern (LBP):
LBP(x, y) = Σ_{p=0}^{7} sgn(g_p − g_c) · 2^p
where g_c is the gray value of the central pixel of the image, g_p is the gray value of a neighbouring pixel, x and y are the coordinates of the central pixel, and sgn(·) is the step function.
Step (9): multiply, pixel by pixel, the visual information features extracted in step (8) from the intermediate reference and distorted images with the visual saliency map established in step (6), obtaining the saliency-enhanced visual information features:
GMS_R(x, y) = GM_R(x, y) · S_f(x, y),  GMS_D(x, y) = GM_D(x, y) · S_f(x, y)   (9-1)
TIS_R(x, y) = TI_R(x, y) · S_f(x, y),  TIS_D(x, y) = TI_D(x, y) · S_f(x, y)   (9-2)
IS_R(x, y) = I_R(x, y) · S_f(x, y),  IS_D(x, y) = I_D(x, y) · S_f(x, y)   (9-3)
where GM_R, TI_R and I_R are the edge, texture and luminance information of the intermediate reference image, GM_D, TI_D and I_D are those of the intermediate distorted image, and S_f is the integrated visual saliency map obtained in step (6);
Step (10): perform similarity measurement on the saliency-enhanced visual information features extracted in step (9), where GMS_R, TIS_R and IS_R denote the saliency-enhanced edge, texture and luminance features of the intermediate reference image, GMS_D, TIS_D and IS_D those of the intermediate distorted image, Index3, Index4 and Index5 the similarity metrics of the edge, texture and luminance features, and C4 a positive constant preventing a zero denominator, C4 = 0.5.
Step (11): down-sample the intermediate reference and distorted images obtained in step (5) to obtain intermediate reference and distorted images at p scales, p = 3; apply the methods of steps (6), (9) and (10) to the images at the p scales to build visual saliency maps, extract visual features and measure similarity, obtaining n similarity metrics in total, with n = 2p + 2, hence n = 8.
The down-sampling proceeds as follows: an input image is passed through a low-pass filter, and the filtered image is decimated by a factor m, m = 2, to obtain the down-sampled image.
Step (12): integrate the metrics obtained in steps (8) and (11), perform support vector regression (SVR) training and prediction to obtain the optimal prediction model, and map its output to the objective quality score of the image.
According to the patented method, step (3) comprises the following steps:
Step (3.1): shift all pixels of the right views of the reference and distorted stereo image pairs horizontally to the right n times with a step of s pixels, obtaining k corrected right views I_R((x + i·s), y), i = 1, 2, …, k, with k = n/s; here s = 1 and n = 25, so k = 25. Each corrected right view is labelled with its index i.
Step (3.2): compute the structural similarity between the left view of each stereo image pair and the k corrected right views with the structural similarity (SSIM) algorithm, obtaining k structural similarity maps; the SSIM expression is
SSIM(x, y) = [(2 μx μy + C1)(2 σxy + C2)] / [(μx² + μy² + C1)(σx² + σy² + C2)]
where μx and μy are the means within corresponding image blocks of the left view and a corrected right view, σx² and σy² are the variances within those blocks, σxy is the covariance between the two blocks, and C1 and C2 are positive constants preventing a zero denominator, here C1 = 6.5025 and C2 = 58.5225.
Step (3.3): for each pixel (x, y) of the left view, find which of the k structural similarity maps has the largest local structural similarity at that pixel; its label i is the disparity value of pixel (x, y), recorded as d, thereby building the disparity map D.
According to the patented method, step (6) is specifically as follows:
The visual saliency maps are extracted with the spectral residual (SR) visual saliency model. Given an image I(x, y):
A(f) = Re{F[I(x, y)]}
P(f) = Angle{F[I(x, y)]}
L(f) = log[A(f)]   (6-1)
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse, Re(·) takes the real part, Angle(·) takes the argument, S(x, y) is the saliency map obtained by the spectral residual method, g(x, y) is a Gaussian low-pass filter, h_n(f) is a local mean filter, and σ is the standard deviation of the probability distribution, σ = 1.5;
The reference and distortion visual saliency maps of the intermediate reference and distorted images are obtained by the spectral residual method and integrated into one visual saliency map as
S_f(x, y) = Max[S_ref(x, y), S_dis(x, y)]   (6-4)
where S_ref and S_dis are the visual saliency maps of the intermediate reference and distorted images, and S_f is the integrated visual saliency map.
According to the patented method, the SVR training and prediction of step (12), which yields the optimal prediction model, is specifically as follows:
The SVR training and prediction trains and tests the model with 5-fold cross-validation:
Step (12.1): randomly divide the samples into five mutually disjoint parts; select four parts for SVR training to obtain the best model, then apply that model to the remaining part, obtaining the corresponding objective quality values that predict the subjective quality;
Step (12.2): repeat the operation of step (12.1) many times and take the average of all results to characterize the performance of the proposed model;
The expression is
Q = SVR(Index1, Index2, …, Indexn)   (12-1)
where Q is the objective quality score.
Claims (4)
1. A full-reference objective quality evaluation method for stereo images based on visual saliency feature extraction, characterised by comprising the following steps:
Step (1): input a reference stereo image pair and a distorted stereo image pair, each comprising a left-view image and a right-view image;
Step (2): build a log-Gabor filter model and convolve it with the stereo image pairs from step (1) to obtain the energy response maps of the left and right views of the reference and distorted stereo image pairs;
The log-Gabor filter is expressed as:
h_LG = exp{ −[log(f/f0)]² / (2[log(σf/f0)]²) − (θ − θ0)² / (2σθ²) }   (2-1)
where f0 and θ0 are the centre frequency and orientation of the log-Gabor filter, σθ and σf are its angular and radial bandwidths, and f and θ are the radial coordinate and orientation of the filter;
Convolving the log-Gabor filter with each left and right view of the reference and distorted stereo image pairs gives the corresponding energy response map:
F(x, y) = I(x, y) ⊗ h_LG   (2-2)
where I(x, y) is the left or right view of the reference or distorted stereo image pair, and ⊗ denotes convolution;
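The filtering of step (2) can be sketched in the frequency domain, where the convolution of Eq. (2-2) becomes a pointwise product. The parameter values below (f0, θ0, σf, σθ) are illustrative choices, not values prescribed by the claim:

```python
import numpy as np

def log_gabor_energy(image, f0=0.1, theta0=0.0, sigma_f=0.055, sigma_theta=0.4):
    """Energy response map of an image under a Log-Gabor filter (Eq. 2-1, 2-2).

    Parameter values are illustrative; the claim fixes none of them.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    theta = np.arctan2(fy, fx)
    f[0, 0] = 1.0  # avoid log(0) at DC; the DC gain is zeroed below
    radial = np.exp(-(np.log(f / f0))**2 / (2 * (np.log(sigma_f / f0))**2))
    angular = np.exp(-((theta - theta0)**2) / (2 * sigma_theta**2))
    h_lg = radial * angular
    h_lg[0, 0] = 0.0  # suppress the DC component
    # spatial-domain convolution = frequency-domain multiplication
    response = np.fft.ifft2(np.fft.fft2(image) * h_lg)
    return np.abs(response)
```

The magnitude of the inverse transform plays the role of the energy response F(x, y).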
Step (3): extract disparity maps from the reference stereo image pair and the distorted stereo image pair input in step (1) respectively;
Step (4): shift the pixels of the right view of the stereo image pair input in step (1) horizontally to the right according to the disparity values in the disparity map obtained in step (3), constructing a calibrated right view IR((x+d), y) that corresponds to the left-view pixel coordinates; then, based on the energy response maps of the left view and the calibrated right view obtained in step (2), compute the normalized left-view weight map WL(x, y) and calibrated right-view weight map WR((x+d), y); the expressions are as follows:
WL(x, y) = FL(x, y) / [FL(x, y) + FR((x+d), y)]   (4-1)
WR((x+d), y) = FR((x+d), y) / [FL(x, y) + FR((x+d), y)]   (4-2)
where FL(x, y) and FR((x+d), y) are the energy response maps of the left view and the calibrated right view obtained in step (2), and d is the disparity value at the corresponding coordinate in the disparity map D computed in step (3);
Step (5): based on the left views of the reference and distorted stereo image pairs from step (1), the calibrated right views obtained in step (4), and the normalized left-view and calibrated right-view weight maps, apply the binocular-view fusion model to fuse each stereo image, obtaining the intermediate reference and distorted images respectively; the binocular-view fusion formula is as follows:
CI(x, y) = WL(x, y)×IL(x, y) + WR((x+d), y)×IR((x+d), y)   (5-1)
where CI(x, y) is the intermediate image after binocular-view fusion, and IL(x, y) and IR((x+d), y) are the left view and calibrated right view of the stereo image pair respectively;
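Steps (4) and (5) combine into one short routine: the energy responses are normalized into weight maps (Eq. 4-1, 4-2) that blend the two views (Eq. 5-1). A minimal NumPy sketch follows; the small eps guarding the denominator is an implementation detail, not part of the claim:

```python
import numpy as np

def binocular_fusion(IL, IR_cal, FL, FR_cal):
    """Fuse a left view and a disparity-calibrated right view into one
    intermediate image (Eq. 4-1, 4-2, 5-1). FL / FR_cal are the Log-Gabor
    energy response maps; the weights are their normalized shares."""
    eps = 1e-12  # guard against a zero denominator (not in the claim text)
    WL = FL / (FL + FR_cal + eps)
    WR = FR_cal / (FL + FR_cal + eps)
    return WL * IL + WR * IR_cal
```

With equal energy responses the two views are simply averaged, which matches the intuition behind the weighting.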
Step (6): extract the reference and distortion visual saliency maps from the intermediate reference and distorted images obtained in step (5) respectively, and integrate them to establish the integrated visual saliency map Sf;
Step (7): extract depth features using the disparity maps of the reference and distorted stereo image pairs obtained in step (3), and measure the distortion level of the disparity map of the distorted stereo image pair; the similarity of the depth features between the reference and distorted stereo image pairs is extracted on the disparity maps using a pixel-domain error method, serving as indices reflecting the distortion level of the distorted stereo image pair; the expressions are as follows:
Index1 = mean(1 - sqrt(|Dref^2 - Ddis^2|)/255)   (7-1)
Index2 = |sqrt(E[Dref - mean(Dref)]) - sqrt(E[Ddis - mean(Ddis)])| / (sqrt(E[Dref - mean(Dref)]) + ε)   (7-2)
where Dref denotes the disparity map of the reference image, Ddis denotes the disparity map of the distorted image, E(·) is the mean function, ε is a constant greater than zero that prevents the denominator from being zero, and Index1 and Index2 are the two similarity measurement indices of the depth features;
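A sketch of the two depth indices. Eq. (7-1) is implemented as written; for Eq. (7-2), E[D − mean(D)] is zero by construction, so the sketch follows the usual reading E[(D − mean(D))²], i.e. the standard deviation — an interpretation on our part, not stated in the claim:

```python
import numpy as np

def depth_indices(D_ref, D_dis, eps=1e-6):
    """Index1 (Eq. 7-1) and Index2 (Eq. 7-2) from two disparity maps."""
    D_ref = D_ref.astype(float)
    D_dis = D_dis.astype(float)
    index1 = np.mean(1.0 - np.sqrt(np.abs(D_ref**2 - D_dis**2)) / 255.0)
    # assumption: read E[D - mean(D)] as the variance E[(D - mean(D))^2]
    s_ref = np.sqrt(np.mean((D_ref - D_ref.mean())**2))
    s_dis = np.sqrt(np.mean((D_dis - D_dis.mean())**2))
    index2 = np.abs(s_ref - s_dis) / (s_ref + eps)
    return index1, index2
```

Identical disparity maps give Index1 = 1 and Index2 = 0, the "no distortion" extreme of both measures.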
Step (8): extract edge and texture features from the intermediate reference and distorted images obtained in step (5) respectively;
The image under test is convolved with the Prewitt operator to obtain a gradient map containing edge contour information; the expression for extracting the edge features of the intermediate reference and distorted images with the Prewitt operator is as follows:
GM(x, y) = sqrt([f(x, y) ⊗ hx]^2 + [f(x, y) ⊗ hy]^2)   (8-1)
where f(x, y) is the left/right view of the stereo image pair, ⊗ denotes convolution, and hx and hy are the 3×3 vertical and horizontal Prewitt templates, used to detect the horizontal and vertical edges of the image respectively; the template expressions are as follows:
hx = [-1 0 1; -1 0 1; -1 0 1],  hy = [1 1 1; 0 0 0; -1 -1 -1]   (8-2)
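A pure-NumPy sketch of the gradient map of Eq. (8-1) with the templates of Eq. (8-2), using zero padding at the borders (a choice the claim leaves open):

```python
import numpy as np

HX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)  # Eq. 8-2
HY = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)

def prewitt_gradient(img):
    """Gradient magnitude map GM(x, y) of Eq. (8-1) via 'same'-size
    convolution with the two 3x3 Prewitt templates (zero padding)."""
    img = img.astype(float)
    p = np.pad(img, 1)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            # true convolution flips the kernel in both directions
            gx += HX[2 - dy, 2 - dx] * win
            gy += HY[2 - dy, 2 - dx] * win
    return np.sqrt(gx**2 + gy**2)
```

On a constant image the interior response is zero; on a horizontal intensity ramp only the hx term contributes.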
Texture features are extracted with the local binary pattern (LBP); the LBP expression is as follows:
TI(x, y) = Σ(p=0..P) 2^p × sgn(gc - gp)   (8-3)
sgn(x) = { 1, x ≥ 0;  0, x < 0 }   (8-4)
where gc is the gray value of the central pixel of the image, gp is the gray value of a neighboring pixel, x and y denote the coordinates of the central pixel, and sgn(x) is a step function;
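A direct sketch of Eq. (8-3)/(8-4) over the conventional 8-neighbourhood; note the sum in Eq. (8-3) runs p = 0…P, which for the 3×3 neighbourhood used here means p = 0…7:

```python
import numpy as np

def lbp_map(img):
    """8-neighbour LBP texture map TI(x, y) of Eq. (8-3)/(8-4) for the
    interior pixels of a grayscale image (borders are left at 0)."""
    img = img.astype(float)
    out = np.zeros(img.shape, dtype=int)
    # clockwise neighbour offsets starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            gc = img[y, x]
            code = 0
            for p, (dy, dx) in enumerate(offsets):
                gp = img[y + dy, x + dx]
                code += (2 ** p) * (1 if gc - gp >= 0 else 0)  # sgn() of Eq. 8-4
            out[y, x] = code
    return out
```

On a flat region every comparison yields 1, so each interior code is 2^0 + … + 2^7 = 255.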
Step (9): multiply, pixel by pixel, the visual features of the intermediate reference and distorted images extracted in step (8) by the visual saliency map established in step (6), obtaining saliency-enhanced visual features; the expressions are as follows:
GMSR(x, y) = GMR(x, y)*Sf(x, y),  GMSD(x, y) = GMD(x, y)*Sf(x, y)   (9-1)
TISR(x, y) = TIR(x, y)*Sf(x, y),  TISD(x, y) = TID(x, y)*Sf(x, y)   (9-2)
ISR(x, y) = IR(x, y)*Sf(x, y),  ISD(x, y) = ID(x, y)*Sf(x, y)   (9-3)
where GMR, TIR and IR are the edge, texture and luminance features of the intermediate reference image respectively, GMD, TID and ID are the edge, texture and luminance features of the intermediate distorted image respectively, and Sf is the integrated visual saliency map obtained in step (6);
Step (10): perform similarity measurement on the saliency-enhanced visual features extracted in step (9); the expressions are as follows:
Index3 = [2*GMSR(x, y)*GMSD(x, y) + C4] / [GMSR^2(x, y) + GMSD^2(x, y) + C4]   (10-1)
Index4 = Σx^N [TISR(x, y) - mean(TISR)][TISD(x, y) - mean(TISD)] / sqrt(Σx^N [TISR(x, y) - mean(TISR)] Σi^N [TISD(x, y) - mean(TISD)])   (10-2)
Index5 = 1 - (20/255)*lg{255 / [(1/P)Σ(x,y)^P [ISR(x, y) - ISD(x, y)]^2]}   (10-3)
where GMSR, TISR and ISR denote the saliency-enhanced edge, texture and luminance features of the intermediate reference image, GMSD, TISD and ISD denote the saliency-enhanced edge, texture and luminance features of the intermediate distorted image, Index3, Index4 and Index5 denote the similarity measurement indices of the edge, texture and luminance features respectively, and C4 is a constant greater than zero that prevents the denominator from being zero;
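Two of the three feature indices of step (10) as minimal sketches: Eq. (10-1) averaged over all pixels into a single score, and Eq. (10-3) as written. The constants C4 and the MSE floor are illustrative choices:

```python
import numpy as np

def index3_gradient_similarity(GM_sr, GM_sd, C4=1e-4):
    """Edge-similarity index of Eq. (10-1), averaged into one score
    (the value of C4 here is an illustrative choice)."""
    sim = (2 * GM_sr * GM_sd + C4) / (GM_sr**2 + GM_sd**2 + C4)
    return sim.mean()

def index5_luminance(I_sr, I_sd):
    """Luminance index of Eq. (10-3); P is the number of pixels."""
    mse = np.mean((I_sr - I_sd)**2)
    return 1.0 - (20.0 / 255.0) * np.log10(255.0 / max(mse, 1e-12))
```

For identical inputs Eq. (10-1) evaluates to exactly 1 at every pixel, its maximum.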
Step (11): down-sample the intermediate reference and distorted images obtained in step (5), obtaining intermediate reference and distorted images at p scales; apply the methods of steps (6), (9) and (10) to the intermediate reference and distorted images at the p scales to establish visual saliency maps, extract visual features and perform similarity measurement, obtaining n similarity measurement indices in total, where n = 2p + 2;
The down-sampling method is as follows: input an image, pass it through a low-pass filter to obtain a filtered image, then down-sample the filtered image with a decimation factor m to obtain the down-sampled image;
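The multi-scale step can be sketched as a small pyramid builder; the 3×3 box filter here is one possible low-pass filter — the claim does not prescribe a specific one:

```python
import numpy as np

def downsample(img, m=2):
    """One down-sampling step of step (11): 3x3 box low-pass filter
    followed by decimation with factor m (the box filter is an
    illustrative choice of low-pass filter)."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    low = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    return low[::m, ::m]

def pyramid(img, p_scales=3, m=2):
    """Images at p scales for the multi-scale measurement of step (11)."""
    out = [img]
    for _ in range(p_scales - 1):
        out.append(downsample(out[-1], m))
    return out
```

Each level halves both dimensions for m = 2, and a constant image stays constant through the low-pass stage.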
Step (12): integrate the measurement indices obtained in steps (8) and (11), perform support vector regression (SVR) training and prediction to obtain the optimal prediction model, and map it to the objective quality score of the image.
2. The full-reference stereo image quality objective evaluation method based on visual salient feature extraction according to claim 1, characterized in that said step (3) comprises the following steps:
Step (3.1): shift all pixels of the right views of the reference and distorted stereo image pairs horizontally to the right n times, with a step of s pixels per shift, obtaining k corrected right views IR((x+i*s), y), (i=1, 2, …, k) after the horizontal shifts, where k = n/s and each corrected right view carries the corresponding label i, (i=1, 2, …, k);
Step (3.2): compute the structural similarity between the left view of the stereo image pair and each of the k corrected right views using the structural similarity algorithm SSIM, obtaining k structural similarity maps; the SSIM expression is as follows:
SSIM(x, y) = (2μxμy + C1)(2σxy + C2) / [(μx^2 + μy^2 + C1)(σx^2 + σy^2 + C2)]   (3-1)
where μx and μy denote the means of corresponding image blocks in the left view and the corrected right view of the stereo image pair respectively; σx and σy denote the variances of corresponding image blocks in the left view and the corrected right view respectively; σxy is the covariance between an image block of the left view and the corresponding block of the corrected right view; and C1 and C2 are constants greater than zero that prevent the denominator from being zero;
Step (3.3): for each pixel (x, y) of the left view, find among the k structural similarity maps the one with the largest local structural similarity value; its label i, (i=1, 2, …, k), is the disparity value of pixel (x, y), recorded as d, from which the disparity map D is built.
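The winner-take-all search of claim 2 can be sketched as below. For brevity, a windowed mean absolute difference stands in for the local SSIM score of Eq. (3-1) — computing full SSIM per shift follows the same loop structure:

```python
import numpy as np

def disparity_map(left, right, n=8, s=1, win=3):
    """Winner-take-all disparity search of steps (3.1)-(3.3). A windowed
    mean absolute difference replaces the local SSIM score here; the
    shift with the best local score at each pixel wins."""
    k = n // s
    h, w = left.shape
    best = np.full((h, w), np.inf)
    D = np.zeros((h, w), dtype=int)
    r = win // 2
    for i in range(1, k + 1):
        d = i * s
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, :-d]  # shift the right view d pixels right
        err = np.abs(left - shifted)
        # box-average the error over the local window
        p = np.pad(err, r, mode='edge')
        local = sum(p[dy:dy + h, dx:dx + w]
                    for dy in range(win) for dx in range(win)) / win**2
        take = local < best
        best[take] = local[take]
        D[take] = d
    return D
```

If the left view is an exact 2-pixel shift of the right view, the search recovers d = 2 at interior pixels.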
3. The full-reference stereo image quality objective evaluation method based on visual salient feature extraction according to claim 1, characterized in that said step (6) is specifically:
The visual saliency maps are extracted with the spectral residual visual saliency model (SR), as follows:
Given an image I(x, y), we have:
A(f) = Re{F[I(x, y)]}
P(f) = Angle{F[I(x, y)]}
L(f) = log[A(f)]
R(f) = L(f) - hn(f) ⊗ L(f)
S(x, y) = g(x, y) ⊗ F^(-1){exp[R(f) + iP(f)]}^2   (6-1)
where F(·) and F^(-1)(·) are the two-dimensional Fourier transform and its inverse, Re(·) denotes taking the real part, Angle(·) denotes taking the argument, S(x, y) is the saliency map obtained by the spectral residual method, g(x, y) is a Gaussian low-pass filter, and hn(f) is a local mean filter; the expression of g(x, y) is as follows:
g(x, y) = (1/sqrt(2πσ^2)) exp(-(x^2 + y^2)/(2σ^2))   (6-2)
where σ is the standard deviation of the probability distribution;
The reference and distortion visual saliency maps of the intermediate reference and distorted images are obtained by the spectral residual method, and the integrated visual saliency map is established as shown below:
Sf(x, y) = Max[Sref(x, y), Sdis(x, y)]   (6-4)
where Sref and Sdis are the visual saliency maps of the intermediate reference and distorted images respectively, and Sf is the integrated visual saliency map.
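Claim 3 translates almost line by line into NumPy; σ and the 3×3 local-mean size below are illustrative (the claim fixes neither):

```python
import numpy as np

def spectral_residual_saliency(img, sigma=2.5, avg_size=3):
    """Spectral residual saliency map of Eq. (6-1); sigma and the
    local-average size are illustrative choices."""
    F = np.fft.fft2(img)
    A = np.abs(F)                 # amplitude spectrum
    P = np.angle(F)               # phase spectrum
    L = np.log(A + 1e-12)
    # hn(f) (x) L(f): local mean of the log spectrum via a box filter
    r = avg_size // 2
    p = np.pad(L, r, mode='edge')
    h, w = L.shape
    Lm = sum(p[dy:dy + h, dx:dx + w]
             for dy in range(avg_size) for dx in range(avg_size)) / avg_size**2
    R = L - Lm                    # the spectral residual
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * P)))**2
    # g(x, y) (x) S: Gaussian smoothing of the squared reconstruction
    y, x = np.mgrid[-3:4, -3:4]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    g /= g.sum()
    ps = np.pad(S, 3, mode='edge')
    out = np.zeros_like(S)
    for dy in range(7):
        for dx in range(7):
            out += g[dy, dx] * ps[dy:dy + h, dx:dx + w]
    return out

def integrate_saliency(S_ref, S_dis):
    """Eq. (6-4): pixel-wise maximum of the two saliency maps."""
    return np.maximum(S_ref, S_dis)
```

The Gaussian kernel is symmetric, so correlation and convolution coincide in the final smoothing loop.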
4. The full-reference stereo image quality objective evaluation method based on visual salient feature extraction according to claim 1, characterized in that in said step (12) support vector regression (SVR) training and prediction is performed to obtain the optimal prediction model, specifically:
The SVR training and prediction method trains and tests the model with 5-fold cross validation, as follows:
Step (12.1): randomly divide the samples into five mutually disjoint parts; select four of them for SVR training to obtain the optimal model, then apply the remaining part to that model to obtain the corresponding objective quality values for predicting the subjective quality;
Step (12.2): repeat the operation of step (12.1) several times, and take the average of all results to characterize the performance of the proposed model;
The expression is as follows:
Q = SVR(Index1, Index2, …, Indexn)   (12-1)
where Q is the objective quality score.
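Claim 4's 5-fold cross validation can be sketched with the SVR trainer and predictor left as injectable callables (in practice e.g. sklearn.svm.SVR — named here only as an example, not required by the claim):

```python
import numpy as np

def five_fold_cv(features, mos, train_fn, predict_fn, seed=0):
    """5-fold cross validation of steps (12.1)/(12.2): split the samples
    into five mutually disjoint parts, train on four, predict the fifth.
    train_fn / predict_fn stand in for the SVR trainer and predictor;
    returns the predicted objective quality scores, fold by fold."""
    n = len(mos)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)  # five mutually disjoint parts
    pred = np.empty(n)
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        model = train_fn(features[train], mos[train])
        pred[test] = predict_fn(model, features[test])
    return pred
```

Repeating the routine with different seeds and averaging the resulting correlation scores reproduces step (12.2).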
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710721546.XA CN107578404B (en) | 2017-08-22 | 2017-08-22 | Full-reference stereo image quality objective evaluation method based on visual salient feature extraction
Publications (2)
Publication Number | Publication Date |
---|---|
CN107578404A true CN107578404A (en) | 2018-01-12 |
CN107578404B CN107578404B (en) | 2019-11-15 |
Family
ID=61034182
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710721546.XA Active CN107578404B (en) | 2017-08-22 | 2017-08-22 | Full-reference stereo image quality objective evaluation method based on visual salient feature extraction
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107578404B (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108335289A (en) * | 2018-01-18 | 2018-07-27 | 天津大学 | A kind of full image method for evaluating objective quality with reference to fusion |
CN108449596A (en) * | 2018-04-17 | 2018-08-24 | 福州大学 | A kind of 3D stereo image quality appraisal procedures of fusion aesthetics and comfort level |
CN108520510A (en) * | 2018-03-19 | 2018-09-11 | 天津大学 | It is a kind of based on entirety and partial analysis without referring to stereo image quality evaluation method |
CN108629763A (en) * | 2018-04-16 | 2018-10-09 | 海信集团有限公司 | A kind of evaluation method of disparity map, device and terminal |
CN108648180A (en) * | 2018-04-20 | 2018-10-12 | 浙江科技学院 | A kind of full reference picture assessment method for encoding quality of view-based access control model multiple characteristics depth integration processing |
CN109242834A (en) * | 2018-08-24 | 2019-01-18 | 浙江大学 | It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method |
CN109255358A (en) * | 2018-08-06 | 2019-01-22 | 浙江大学 | A kind of 3D rendering quality evaluating method of view-based access control model conspicuousness and depth map |
CN109257593A (en) * | 2018-10-12 | 2019-01-22 | 天津大学 | Immersive VR quality evaluating method based on human eye visual perception process |
CN109345552A (en) * | 2018-09-20 | 2019-02-15 | 天津大学 | Stereo image quality evaluation method based on region weight |
CN109345502A (en) * | 2018-08-06 | 2019-02-15 | 浙江大学 | A kind of stereo image quality evaluation method based on disparity map stereochemical structure information extraction |
CN109525838A (en) * | 2018-11-28 | 2019-03-26 | 上饶师范学院 | Stereo image quality evaluation method based on binocular competition |
CN109523506A (en) * | 2018-09-21 | 2019-03-26 | 浙江大学 | The complete of view-based access control model specific image feature enhancing refers to objective evaluation method for quality of stereo images |
CN109714593A (en) * | 2019-01-31 | 2019-05-03 | 天津大学 | Three-dimensional video quality evaluation method based on binocular fusion network and conspicuousness |
CN109872305A (en) * | 2019-01-22 | 2019-06-11 | 浙江科技学院 | It is a kind of based on Quality Map generate network without reference stereo image quality evaluation method |
CN110084782A (en) * | 2019-03-27 | 2019-08-02 | 西安电子科技大学 | Full reference image quality appraisement method based on saliency detection |
CN110399881A (en) * | 2019-07-11 | 2019-11-01 | 深圳大学 | A kind of quality enhancement method and device based on binocular stereo image end to end |
CN111598826A (en) * | 2019-02-19 | 2020-08-28 | 上海交通大学 | Image objective quality evaluation method and system based on joint multi-scale image characteristics |
CN111738270A (en) * | 2020-08-26 | 2020-10-02 | 北京易真学思教育科技有限公司 | Model generation method, device, equipment and readable storage medium |
CN112233089A (en) * | 2020-10-14 | 2021-01-15 | 西安交通大学 | No-reference stereo mixed distortion image quality evaluation method |
CN112508847A (en) * | 2020-11-05 | 2021-03-16 | 西安理工大学 | Image quality evaluation method based on depth feature and structure weighted LBP feature |
US11244157B2 (en) * | 2019-07-05 | 2022-02-08 | Baidu Online Network Technology (Beijing) Co., Ltd. | Image detection method, apparatus, device and storage medium |
WO2023142753A1 (en) * | 2022-01-27 | 2023-08-03 | 华为技术有限公司 | Image similarity measurement method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150078654A1 (en) * | 2013-09-13 | 2015-03-19 | Interra Systems, Inc. | Visual Descriptors Based Video Quality Assessment Using Outlier Model |
CN105744256A (en) * | 2016-03-31 | 2016-07-06 | 天津大学 | Three-dimensional image quality objective evaluation method based on graph-based visual saliency |
CN105825503A (en) * | 2016-03-10 | 2016-08-03 | 天津大学 | Visual-saliency-based image quality evaluation method |
CN105959684A (en) * | 2016-05-26 | 2016-09-21 | 天津大学 | Stereo image quality evaluation method based on binocular fusion |
CN106780476A (en) * | 2016-12-29 | 2017-05-31 | 杭州电子科技大学 | A kind of stereo-picture conspicuousness detection method based on human-eye stereoscopic vision characteristic |
CN106920232A (en) * | 2017-02-22 | 2017-07-04 | 武汉大学 | Gradient similarity graph image quality evaluation method and system based on conspicuousness detection |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108335289A (en) * | 2018-01-18 | 2018-07-27 | 天津大学 | A kind of full image method for evaluating objective quality with reference to fusion |
CN108520510A (en) * | 2018-03-19 | 2018-09-11 | 天津大学 | It is a kind of based on entirety and partial analysis without referring to stereo image quality evaluation method |
CN108520510B (en) * | 2018-03-19 | 2021-10-19 | 天津大学 | No-reference stereo image quality evaluation method based on overall and local analysis |
CN108629763A (en) * | 2018-04-16 | 2018-10-09 | 海信集团有限公司 | A kind of evaluation method of disparity map, device and terminal |
CN108629763B (en) * | 2018-04-16 | 2022-02-01 | 海信集团有限公司 | Disparity map judging method and device and terminal |
CN108449596A (en) * | 2018-04-17 | 2018-08-24 | 福州大学 | 3D stereo image quality assessment method fusing aesthetics and comfort |
CN108648180A (en) * | 2018-04-20 | 2018-10-12 | 浙江科技学院 | Full-reference image quality objective evaluation method based on visual multi-feature depth fusion processing |
CN108648180B (en) * | 2018-04-20 | 2020-11-17 | 浙江科技学院 | Full-reference image quality objective evaluation method based on visual multi-feature depth fusion processing |
CN109255358A (en) * | 2018-08-06 | 2019-01-22 | 浙江大学 | 3D image quality evaluation method based on visual saliency and depth map |
CN109345502A (en) * | 2018-08-06 | 2019-02-15 | 浙江大学 | Stereo image quality evaluation method based on disparity map stereo structure information extraction |
CN109255358B (en) * | 2018-08-06 | 2021-03-26 | 浙江大学 | 3D image quality evaluation method based on visual saliency and depth map |
CN109345502B (en) * | 2018-08-06 | 2021-03-26 | 浙江大学 | Stereo image quality evaluation method based on disparity map stereo structure information extraction |
CN109242834A (en) * | 2018-08-24 | 2019-01-18 | 浙江大学 | No-reference stereo image quality evaluation method based on convolutional neural networks |
CN109345552A (en) * | 2018-09-20 | 2019-02-15 | 天津大学 | Stereo image quality evaluation method based on region weight |
CN109523506B (en) * | 2018-09-21 | 2021-03-26 | 浙江大学 | Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement |
CN109523506A (en) * | 2018-09-21 | 2019-03-26 | 浙江大学 | Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement |
CN109257593A (en) * | 2018-10-12 | 2019-01-22 | 天津大学 | Immersive virtual reality quality evaluation method based on human eye visual perception process |
CN109257593B (en) * | 2018-10-12 | 2020-08-18 | 天津大学 | Immersive virtual reality quality evaluation method based on human eye visual perception process |
CN109525838A (en) * | 2018-11-28 | 2019-03-26 | 上饶师范学院 | Stereo image quality evaluation method based on binocular competition |
CN109872305B (en) * | 2019-01-22 | 2020-08-18 | 浙江科技学院 | No-reference stereo image quality evaluation method based on quality map generation network |
CN109872305A (en) * | 2019-01-22 | 2019-06-11 | 浙江科技学院 | No-reference stereo image quality evaluation method based on quality map generation network |
CN109714593A (en) * | 2019-01-31 | 2019-05-03 | 天津大学 | Stereo video quality evaluation method based on binocular fusion network and saliency |
CN111598826A (en) * | 2019-02-19 | 2020-08-28 | 上海交通大学 | Image objective quality evaluation method and system based on joint multi-scale image characteristics |
CN111598826B (en) * | 2019-02-19 | 2023-05-02 | 上海交通大学 | Picture objective quality evaluation method and system based on combined multi-scale picture characteristics |
CN110084782A (en) * | 2019-03-27 | 2019-08-02 | 西安电子科技大学 | Full-reference image quality evaluation method based on image saliency detection |
CN110084782B (en) * | 2019-03-27 | 2022-02-01 | 西安电子科技大学 | Full-reference image quality evaluation method based on image significance detection |
US11244157B2 (en) * | 2019-07-05 | 2022-02-08 | Baidu Online Network Technology (Beijing) Co., Ltd. | Image detection method, apparatus, device and storage medium |
CN110399881A (en) * | 2019-07-11 | 2019-11-01 | 深圳大学 | End-to-end quality enhancement method and device based on binocular stereo images |
CN111738270A (en) * | 2020-08-26 | 2020-10-02 | 北京易真学思教育科技有限公司 | Model generation method, device, equipment and readable storage medium |
CN112233089A (en) * | 2020-10-14 | 2021-01-15 | 西安交通大学 | No-reference stereo mixed distortion image quality evaluation method |
CN112508847A (en) * | 2020-11-05 | 2021-03-16 | 西安理工大学 | Image quality evaluation method based on depth feature and structure weighted LBP feature |
WO2023142753A1 (en) * | 2022-01-27 | 2023-08-03 | 华为技术有限公司 | Image similarity measurement method and device |
Also Published As
Publication number | Publication date |
---|---|
CN107578404B (en) | 2019-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578404A (en) | Full-reference stereo image quality objective evaluation method based on visual salient feature extraction | |
CN107578403B (en) | Stereo image quality evaluation method based on gradient-information-guided binocular view fusion | |
CN109255358B (en) | 3D image quality evaluation method based on visual saliency and depth map | |
CN109523506B (en) | Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement | |
CN106228528B (en) | Multi-focus image fusion method based on decision map and sparse representation | |
CN111563418A (en) | Asymmetric multi-mode fusion significance detection method based on attention mechanism | |
CN106462771A (en) | 3D image saliency detection method | |
CN103426200B (en) | Tree three-dimensional reconstruction method based on unmanned aerial vehicle aerial photo sequence image | |
CN106709950A (en) | Binocular-vision-based cross-obstacle lead positioning method of line patrol robot | |
CN105744256A (en) | Three-dimensional image quality objective evaluation method based on graph-based visual saliency | |
CN107635136B (en) | No-reference stereo image quality evaluation method based on visual perception and binocular rivalry | |
CN109831664B (en) | Rapid compressed stereo video quality evaluation method based on deep learning | |
CN105488541A (en) | Natural feature point identification method based on machine learning in augmented reality system | |
CN110246111A (en) | No-reference stereo image quality evaluation method based on fused image and enhanced image | |
CN107993228A (en) | Automatic vulnerable plaque detection method and device based on cardiovascular OCT images | |
CN107360416A (en) | Stereo image quality evaluation method based on local multivariate Gaussian description | |
CN108830856A (en) | Automatic GA segmentation method based on time-series SD-OCT retinal images | |
CN103871066A (en) | Method for constructing similarity matrix in ultrasound image Ncut segmentation process | |
CN105261006A (en) | Medical image segmentation algorithm based on Fourier transform | |
CN109510981A (en) | Stereo image comfort prediction method based on multi-scale DCT transform | |
CN103618891B (en) | Objective evaluation method for stereo camera macro convergence shooting quality | |
CN103914835A (en) | No-reference quality evaluation method for blur-distorted stereo images | |
CN111105387B (en) | Visual angle synthesis quality prediction method based on statistical characteristics and information data processing terminal | |
CN107578406A (en) | No-reference stereo image quality evaluation method based on grid and Weber statistical properties | |
CN102982532B (en) | Stereo image objective quality evaluation method based on matrix decomposition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||