CN105611272A - Eye exactly perceptible stereo image distortion analyzing method based on texture complexity - Google Patents


Info

Publication number
CN105611272A
Authority
CN
China
Prior art keywords
stereo
picture
coding
width
image
Prior art date
Legal status
Granted
Application number
CN201511003001.2A
Other languages
Chinese (zh)
Other versions
CN105611272B (en)
Inventor
蒋刚毅
杜宝祯
郁梅
徐升阳
方树清
Current Assignee
Yongchun County Product Quality Inspection Institute Fujian fragrance product quality inspection center, national incense burning product quality supervision and Inspection Center (Fujian)
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201511003001.2A priority Critical patent/CN105611272B/en
Publication of CN105611272A publication Critical patent/CN105611272A/en
Application granted granted Critical
Publication of CN105611272B publication Critical patent/CN105611272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/282Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a method for analyzing the just-noticeable distortion of stereoscopic images perceived by the human eye, based on texture complexity. From the perspective of texture complexity, the method analyzes, for asymmetric stereoscopic image coding, the maximum quantization-parameter range over which the quality of the right-viewpoint image can be reduced relative to that of the left-viewpoint image. Through extensive subjective tests with the quality of the left-viewpoint image held fixed, the critical point at which the human eye can perceive a change in stereoscopic image quality caused by degrading the right viewpoint is measured, and a quantitative mathematical model relating the maximum tolerable quantization-parameter threshold of the right-viewpoint image to its texture complexity and to the coding quantization parameter is obtained by linear fitting. Coding compression efficiency can therefore be improved by lowering the quality of the right-viewpoint image, while the stereoscopic visual masking effect keeps this reduction imperceptible to the observer, so the overall quality of the stereoscopic image is preserved.

Description

Method for analyzing the just-noticeable distortion of stereoscopic images perceived by the human eye, based on texture complexity
Technical field
The present invention relates to a technique for measuring and analyzing the visual perception characteristics of stereoscopic images, and in particular to a method for analyzing the just-noticeable distortion of stereoscopic images perceived by the human eye based on texture complexity; it is especially suitable for the just-noticeable distortion analysis of asymmetric stereoscopic image and video coding.
Background technology
Psychological research has found that a masking effect exists in human binocular stereoscopic perception: of the left-viewpoint image and right-viewpoint image that form a stereoscopic image, the viewpoint image with the better quality contributes more to the perceived quality of the overall stereoscopic image. Building on this psychological basis, researchers have further found through subjective experiments that, although this stereoscopic masking effect is present when the human eye views a stereoscopic image composed of a left-viewpoint image and a right-viewpoint image, the effect is confined to a threshold range. For a given stereoscopic image, if the image quality of one viewpoint is kept good and unchanged while the image quality of the other viewpoint is gradually lowered, the overall stereoscopic visual quality is at first unaffected; but once the quality of the other viewpoint has degraded to a certain extent, the human eye perceives the decline in stereoscopic quality. That point is the threshold of the stereoscopic masking effect. Existing research has thus revealed that a stereoscopic image has a tolerable distortion threshold. By exploiting this stereoscopic masking effect, the image quality of one viewpoint can be lowered within the threshold range while the other viewpoint is kept at high quality, without reducing the perceived visual quality; the perceptual redundancy in the stereoscopic image is thereby removed and the overall coding efficiency improved.
Studies have shown that the human stereoscopic visual masking effect is jointly influenced by factors such as texture complexity, luminance and color in the stereoscopic image, and that different factors lead to different tolerable distortions of the stereoscopic masking effect. The existing just-noticeable distortion analysis method for stereoscopic images is a stereoscopic subjective perception experiment: it mainly measures, from the standpoint of keeping one viewpoint's quality constant while lowering the overall quality of the other viewpoint, the point at which the human eye can perceive a change in stereoscopic vision, and thus obtains only a global just-perceptible distortion threshold of the right-viewpoint image relative to the left-viewpoint image. It ignores how differences in local texture complexity, luminance, color and other factors across regions of the stereoscopic image affect the local stereoscopic masking threshold. Clearly, for natural stereoscopic images with varying texture and luminance distributions, a single global just-perceptible distortion threshold cannot properly reflect the local perceptual redundancy of the stereoscopic image, nor compress that redundancy to improve compression coding efficiency to the greatest extent. Since texture complexity is a key factor affecting the stereoscopic masking effect, studying its influence on that effect is of real significance, yet quantitative results on this influence have rarely been reported. It is therefore necessary to design an analysis method to determine this critical value quantitatively.
Summary of the invention
The technical problem to be solved by the present invention is to provide a just-noticeable distortion analysis method for stereoscopic images, based on texture complexity, that can adjust perceptual redundancy according to differences in local texture complexity within a stereoscopic image, compress that perceptual redundancy so as to exploit the stereoscopic masking effect to the greatest extent and improve compression coding efficiency, and thereby better meet the needs of asymmetric stereoscopic video coding.
The technical solution adopted by the present invention to solve the above technical problem is a just-noticeable distortion analysis method for stereoscopic images, perceived by the human eye, based on texture complexity, characterized by comprising the following steps:
① Using 3D modeling and authoring software, obtain N stereoscopic images, each containing a single object, whose texture densities differ, the texture density of the single object being assumed to increase from sparse to dense over the N stereoscopic images, where N > 1. Then, in the 3D modeling and authoring software, edit the alpha channel of the single object in the left-viewpoint image and in the right-viewpoint image of each stereoscopic image: set the single object to be retained as a non-transparent region filled pure white, set the remainder as a transparent region filled pure black, and render the binary mask images of the left-viewpoint image and of the right-viewpoint image of each stereoscopic image. Next, according to the binary mask images of the left-viewpoint and right-viewpoint images of each stereoscopic image, evaluate the average texture complexity of each stereoscopic image by computing the average local variance; the average texture complexity of the n-th stereoscopic image obtained by the average-local-variance calculation is denoted ALV_n, where 1 ≤ n ≤ N;
② Obtain K coded stereoscopic image comparison test sets for each stereoscopic image, so that the N stereoscopic images yield N × K comparison test sets in total. The K comparison test sets corresponding to the n-th stereoscopic image are obtained as follows:
②_1 Set a left-viewpoint coding quantization parameter interval [QP_min, QP_max); then take K different left-viewpoint coding quantization parameters at equal steps from [QP_min, QP_max), denoted QP_L,1, QP_L,2, …, QP_L,K, with QP_L,1 = QP_min; here QP_min denotes the set minimum coding quantization parameter, QP_max the set maximum coding quantization parameter, K > 1, QP_L,1, QP_L,2, …, QP_L,K denote the 1st, 2nd, …, K-th left-viewpoint coding quantization parameters, and QP_L,1 < QP_L,2 < … < QP_L,K;
②_2 Define the k-th left-viewpoint coding quantization parameter QP_L,k as the current left-viewpoint coding quantization parameter, where 1 ≤ k ≤ K and the initial value of k is 1;
②_3 Using video coding software, encode the left-viewpoint image of the n-th stereoscopic image by intra-frame coding with the current left-viewpoint coding quantization parameter, and denote the resulting coded left image L_n,k;
②_4 Set a right-viewpoint coding quantization parameter interval [QP_L,k, QP_max]; then take M right-viewpoint coding quantization parameters at equal steps from [QP_L,k, QP_max], denoted QP_R,1, QP_R,2, …, QP_R,M, with QP_R,1 = QP_L,k; here M > 1, QP_R,1, QP_R,2, …, QP_R,M denote the 1st, 2nd, …, M-th right-viewpoint coding quantization parameters, and QP_R,1 < QP_R,2 < … < QP_R,M;
②_5 Using the video coding software, encode the right-viewpoint image of the n-th stereoscopic image by intra-frame coding with each of the M right-viewpoint coding quantization parameters, obtaining M coded right images of different quality in total; the coded right image obtained by encoding the right-viewpoint image of the n-th stereoscopic image by intra-frame coding with the m-th right-viewpoint coding quantization parameter QP_R,m is denoted R_n,k,m, where 1 ≤ m ≤ M;
②_6 Form M coded stereoscopic images of different quality by pairing L_n,k with each of the M coded right images; the coded stereoscopic image formed by L_n,k and R_n,k,m is denoted S_n,k,m. Then take S_n,k,1 as the reference coded stereoscopic image and S_n,k,2, S_n,k,3, …, S_n,k,M as test coded stereoscopic images, where S_n,k,1 denotes the coded stereoscopic image formed by L_n,k and the 1st coded right image R_n,k,1, S_n,k,2 that formed by L_n,k and the 2nd coded right image R_n,k,2, S_n,k,3 that formed by L_n,k and the 3rd coded right image R_n,k,3, and S_n,k,M that formed by L_n,k and the M-th coded right image R_n,k,M. Next, combine S_n,k,1 with each of S_n,k,2, S_n,k,3, …, S_n,k,M to form coded stereoscopic image pairs; the set of the M−1 resulting pairs is defined as the k-th coded stereoscopic image comparison test set, denoted {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M}, where S_n,k,1S_n,k,2 denotes the pair formed by S_n,k,1 and S_n,k,2, S_n,k,1S_n,k,3 the pair formed by S_n,k,1 and S_n,k,3, and S_n,k,1S_n,k,M the pair formed by S_n,k,1 and S_n,k,M;
②_7 Let k = k + 1, take the k-th left-viewpoint coding quantization parameter QP_L,k as the current left-viewpoint coding quantization parameter, and return to step ②_2; continue until all K left-viewpoint coding quantization parameters have been used, finally obtaining the K coded stereoscopic image comparison test sets corresponding to the n-th stereoscopic image, where "=" in k = k + 1 is an assignment;
③ Recruit several participants for the subjective quality evaluation of stereoscopic images. Each participant then, using pairwise comparison, subjectively observes and compares on a stereoscopic display the two coded stereoscopic images of every coded stereoscopic image pair in every comparison test set corresponding to every stereoscopic image, and gives a judgment score; the scores that all participants give to the two coded stereoscopic images of every pair in every comparison test set of every stereoscopic image are then tallied;
Before the two coded stereoscopic images of each pair in the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image are observed and compared on the stereoscopic display, the presentation order is first determined as follows: randomize the left/right positions in which the two coded stereoscopic images of each pair are shown on the stereoscopic display, and let Flag mark the position in which the reference coded stereoscopic image S_n,k,1 is shown; if S_n,k,1 is shown on the left of the stereoscopic display, set Flag = 1; if S_n,k,1 is shown on the right, set Flag = 0; at the same time, randomize the order in which the M−1 coded stereoscopic image pairs are presented on the stereoscopic display;
When a participant scores the two coded stereoscopic images of each pair in the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image, the two coded stereoscopic images of each pair are shown on the stereoscopic display for T1_k seconds, with an interval of T2_k seconds between successive pairs, where 1 s < T2_k < T1_k; during the interval between pairs a gray image is displayed so that the participant can rest. Three evaluation options are set: "left image quality is better", "about the same", and "right image quality is better". If within the T1_k seconds a participant judges that the coded stereoscopic image shown on the left of the stereoscopic display has the better quality, the option "left image quality is better" scores 1 point and the options "about the same" and "right image quality is better" score 0 points; if the participant judges that the coded stereoscopic image shown on the right has the better quality, "right image quality is better" scores 1 point and the other two options score 0 points; if the participant judges that the two coded stereoscopic images are of nearly the same quality, "about the same" scores 1 point and the other two options score 0 points; if within the T1_k seconds the participant cannot decide which of the two coded stereoscopic images has the better quality, "about the same" scores 1 point and the other two options score 0 points;
④ For each coded stereoscopic image pair in every comparison test set corresponding to every stereoscopic image, compute the distortion detection count and the distortion detection probability. The distortion detection count and distortion detection probability of the g-th pair S_n,k,1S_n,k,g+1 in the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image are denoted num_n,k,g and P_n,k,g respectively. If Flag = 1, num_n,k,g is the total score of the option "left image quality is better"; if Flag = 0, num_n,k,g is the total score of the option "right image quality is better"; the distortion detection probability is P_n,k,g = num_n,k,g / NUM, where 1 ≤ g ≤ M−1 and NUM denotes the total number of participants;
Then, for the coded stereoscopic image pairs in every comparison test set corresponding to every stereoscopic image, compute the tolerable increase range by which the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image may exceed the left-viewpoint coding quantization parameter used for the coded left image. For the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image, this tolerable increase range is denoted ΔQP_n,k,th, with ΔQP_n,k,th = QP_n,k,th − QP_L,k, where
QP_n,k,th = [QP_R,a × (P_n,k,b − 0.5) + QP_R,b × (0.5 − P_n,k,a)] / (P_n,k,b − P_n,k,a),
P_n,k,a denotes, among the distortion detection probabilities of all coded stereoscopic image pairs in the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image, the probability that is less than 0.5 and closest to 0.5; P_n,k,b denotes, among those probabilities, the probability that is greater than 0.5 and closest to 0.5; QP_R,a denotes the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image of the pair corresponding to P_n,k,a; and QP_R,b denotes the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image of the pair corresponding to P_n,k,b;
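A minimal sketch of step ④ under stated assumptions: the distortion detection probability is taken as the detection count divided by the number of participants, and the threshold QP is the linear interpolation written above; the function names and the per-pair data layout (a dictionary keyed by right-viewpoint QP) are illustrative, not the authors' implementation.

```python
def detection_probability(detection_count, num_participants):
    """P_n,k,g = num_n,k,g / NUM: fraction of participants who preferred
    the reference side, i.e. who noticed the degradation of the test image."""
    return detection_count / num_participants

def interpolate_qp_threshold(qp_r_a, p_a, qp_r_b, p_b):
    """Right-viewpoint QP at which the detection probability crosses 0.5,
    obtained by linear interpolation between the two bracketing measurements."""
    return (qp_r_a * (p_b - 0.5) + qp_r_b * (0.5 - p_a)) / (p_b - p_a)

def tolerable_qp_increase(probs_by_right_qp, qp_left):
    """probs_by_right_qp: {QP_R,m: P_n,k,g} for one comparison test set.
    Returns delta_QP_n,k,th = QP_n,k,th - QP_L,k."""
    below = {qp: p for qp, p in probs_by_right_qp.items() if p < 0.5}
    above = {qp: p for qp, p in probs_by_right_qp.items() if p > 0.5}
    qp_a = max(below, key=lambda qp: below[qp])   # P < 0.5, closest to 0.5
    qp_b = min(above, key=lambda qp: above[qp])   # P > 0.5, closest to 0.5
    qp_th = interpolate_qp_threshold(qp_a, below[qp_a], qp_b, above[qp_b])
    return qp_th - qp_left
```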
⑤ For each left-viewpoint coding quantization parameter, perform a linear fit over the N corresponding tolerable increase ranges to obtain the linear fitting equation for that left-viewpoint coding quantization parameter. The linear fitting equation corresponding to QP_L,k is expressed as ΔQP_k,th = A_k × ALV + B_k, where ΔQP_k,th denotes the just-noticeable distortion threshold of the linear fitting equation corresponding to QP_L,k, parameter A_k denotes the slope and parameter B_k the intercept of that equation, A_k and B_k being obtained directly during the linear fitting; ALV denotes the average texture complexity, computed by the average local variance, of any stereoscopic image to be analyzed for human-eye just-noticeable distortion;
⑥ Express the K linear fitting equations obtained in step ⑤ jointly as a family of linear equations: ΔQP_th = A(QP_L) × ALV + B(QP_L). This formula is the quantitative relation, based on texture complexity, for the minimum quantization-parameter change in stereoscopic vision that the human eye can just perceive, where ΔQP_th denotes the just-noticeable-change quantization parameter threshold, A(QP_L) denotes the slope of the equation family, B(QP_L) denotes the intercept of the equation family, QP_L denotes the left-viewpoint coding quantization parameter, and A(QP_L) and B(QP_L) are functions of QP_L: the linear equation A(QP_L) is obtained by linearly fitting the K left-viewpoint coding quantization parameters against the slopes of their corresponding linear fitting equations, and the linear equation B(QP_L) is obtained by linearly fitting the K left-viewpoint coding quantization parameters against the intercepts of their corresponding linear fitting equations.
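A minimal sketch of steps ⑤ and ⑥, assuming ordinary least-squares fits (numpy.polyfit) for both the per-QP_L lines and the slope/intercept functions A(QP_L) and B(QP_L); the function and variable names are illustrative only.

```python
import numpy as np

def fit_per_qp_line(alv_values, delta_qp_values):
    """Step 5: fit delta_QP_k,th = A_k * ALV + B_k for one left-viewpoint QP."""
    a_k, b_k = np.polyfit(alv_values, delta_qp_values, 1)
    return a_k, b_k

def fit_equation_family(qp_left_values, slopes, intercepts):
    """Step 6: fit A(QP_L) and B(QP_L) as linear functions of QP_L."""
    p1, q1 = np.polyfit(qp_left_values, slopes, 1)
    p2, q2 = np.polyfit(qp_left_values, intercepts, 1)
    return (p1, q1), (p2, q2)

def jnd_qp_threshold(alv, qp_left, a_coeffs, b_coeffs):
    """delta_QP_th = A(QP_L) * ALV + B(QP_L) for a new stereoscopic image."""
    a = a_coeffs[0] * qp_left + a_coeffs[1]
    b = b_coeffs[0] * qp_left + b_coeffs[1]
    return a * alv + b
```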
In step ①, the N stereoscopic images of a single object with different texture densities are obtained as follows:
①_a1 Using the 3D modeling and authoring software, build a 3D model with any one object as the single object; the model comprises a background and the single-object model;
①_a2 Apply a surface texture map to the single-object model using the uniform texture image supplied with the 3D modeling and authoring software; by adjusting N map density levels in the texture editor of the software, render the N stereoscopic images of the single object with different texture densities, the texture on the single object in each stereoscopic image being uniform.
ALV_n in step ① is obtained as follows:
①_b1 Apply the binary mask image of the left-viewpoint image of the n-th stereoscopic image to that left-viewpoint image to obtain the n-th masked left image; likewise apply the binary mask image of the right-viewpoint image of the n-th stereoscopic image to that right-viewpoint image to obtain the n-th masked right image;
①_b2 Using the formula for the average local variance, compute the average local variance of the n-th masked left image, denoted ALV_n,L, and the average local variance of the n-th masked right image, denoted ALV_n,R; here P_num denotes the total number of pixels in the n-th masked left image, which equals the total number of pixels in the n-th masked right image; MB_num denotes the total number of non-overlapping 8 × 8 image blocks in the n-th masked left image, which equals the total number of non-overlapping 8 × 8 image blocks in the n-th masked right image; 1 ≤ i ≤ MB_num; v_n,L,i,8×8 denotes the variance of the i-th image block of the n-th masked left image, and v_n,R,i,8×8 denotes the variance of the i-th image block of the n-th masked right image;
①_b3 Compute the average local variance of the n-th stereoscopic image, denoted ALV'_n; then let ALV_n = ALV'_n, with ALV'_n representing the average texture complexity of the n-th stereoscopic image, where "=" in ALV_n = ALV'_n is an assignment.
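The explicit average-local-variance formulas appear in the source only as embedded images, so the sketch below is a hedged reconstruction: it assumes the ALV of one view is the mean variance of the non-overlapping 8 × 8 blocks of the masked (background-zeroed) image, and that ALV'_n combines the left- and right-view values by averaging; the function names are illustrative only.

```python
import numpy as np

def average_local_variance(img, mask, block=8):
    """Mean variance of the non-overlapping block x block tiles of the masked image.
    img, mask: 2-D arrays of equal size; mask is nonzero inside the object.
    Assumption: ALV is the plain mean of the per-block variances."""
    masked = img.astype(np.float64) * (mask > 0)
    h, w = masked.shape
    variances = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = masked[y:y + block, x:x + block]
            variances.append(tile.var())
    return float(np.mean(variances))

def stereo_alv(left, right, mask_left, mask_right):
    """Assumed combination: ALV'_n as the average of the two per-view ALV values."""
    alv_l = average_local_variance(left, mask_left)
    alv_r = average_local_variance(right, mask_right)
    return 0.5 * (alv_l + alv_r)
```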
In step ②_1, QP_min = 20 and QP_max = 51 are taken.
In step ②_4, QP_R,2 = QP_R,1 + δ_k, …, QP_R,M = QP_R,1 + (M−1)δ_k, where δ_k denotes the absolute difference between two adjacent right-viewpoint coding quantization parameters under the current left-viewpoint coding quantization parameter: when QP_max − QP_L,k = M, take δ_k = 1; when QP_max − QP_L,k > M, take δ_k = round((QP_max − QP_L,k)/M), round() being the rounding function.
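A minimal sketch of the right-viewpoint QP sampling of steps ②_1 and ②_4; the exact expression for δ_k is an image in the source, so the step size used here, round((QP_max − QP_L,k)/M) when the gap exceeds M, is an assumption consistent with the worked values given later in the embodiment. Names are illustrative.

```python
def right_viewpoint_qps(qp_left, qp_max=51, m=13):
    """List the M right-viewpoint QPs QP_R,1..QP_R,M for one left-viewpoint QP.
    Assumption: delta_k = 1 when qp_max - qp_left == m,
    otherwise round((qp_max - qp_left) / m)."""
    gap = qp_max - qp_left
    delta = 1 if gap == m else round(gap / m)
    return [qp_left + i * delta for i in range(m)]

# Example: the four left-viewpoint QPs used in the embodiment.
for qp_l in (20, 26, 32, 38):
    print(qp_l, right_viewpoint_qps(qp_l))
```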
Compared with prior art, the invention has the advantages that:
The method of the invention analyzes, from the angle of texture complexity, the maximum quantization-parameter coding range over which the quality of the right-viewpoint image can be lowered relative to the quality of the left-viewpoint image when an asymmetric stereoscopic image is coded. Through extensive subjective experiments with the left-viewpoint image quality held fixed, it measures the critical value at which the human eye can perceive a change in the stereoscopic image caused by degrading the right-viewpoint image quality, and obtains by linear fitting a quantitative mathematical model relating the maximum tolerable quantization-parameter threshold of the right-viewpoint image to its texture complexity and to the coding quantization parameter. Compression coding efficiency can therefore be improved by lowering the right-viewpoint image quality, while the stereoscopic visual masking effect prevents the observer from perceiving that reduction, so the overall quality of the stereoscopic image is guaranteed.
The prior art mainly measures, from the angle of keeping one viewpoint's quality constant while lowering the overall quality of the other viewpoint, the point at which the human eye can perceive a change in stereoscopic vision, and thus obtains only a single global perceptible-distortion threshold. The method of the invention, by contrast, can adjust perceptual redundancy according to differences in local texture complexity within the image content and provides a quantitative analytical formula, exploiting the stereoscopic masking effect to the greatest extent and thus better meeting the needs of asymmetric stereoscopic video coding.
Brief description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the invention;
Fig. 2a is the left-viewpoint image of the 1st stereoscopic image (resolution 928 × 928);
Fig. 2b is the right-viewpoint image of the 1st stereoscopic image (resolution 928 × 928);
Fig. 3a is the left-viewpoint image of the 2nd stereoscopic image (resolution 928 × 928);
Fig. 3b is the right-viewpoint image of the 2nd stereoscopic image (resolution 928 × 928);
Fig. 4a is the left-viewpoint image of the 3rd stereoscopic image (resolution 928 × 928);
Fig. 4b is the right-viewpoint image of the 3rd stereoscopic image (resolution 928 × 928);
Fig. 5a is the left-viewpoint image of the 4th stereoscopic image (resolution 928 × 928);
Fig. 5b is the right-viewpoint image of the 4th stereoscopic image (resolution 928 × 928);
Fig. 6a is the left-viewpoint image of the 5th stereoscopic image (resolution 928 × 928);
Fig. 6b is the right-viewpoint image of the 5th stereoscopic image (resolution 928 × 928);
Fig. 7a is the left-viewpoint image of the 6th stereoscopic image (resolution 928 × 928);
Fig. 7b is the right-viewpoint image of the 6th stereoscopic image (resolution 928 × 928);
Fig. 8a is the binary mask image of Fig. 2a;
Fig. 8b is the binary mask image of Fig. 2b;
Fig. 9 shows the four fitted curves of ΔQP_k,th against texture complexity ALV for QP_L,1 = 20, QP_L,2 = 26, QP_L,3 = 32 and QP_L,4 = 38;
Fig. 10a is the fitted curve of A(QP_L) obtained by linear fitting of the four data pairs (QP_L,k, A_k);
Fig. 10b is the fitted curve of B(QP_L) obtained by linear fitting of the four data pairs (QP_L,k, B_k);
Fig. 11 is the surface plot of the family of linear equations ΔQP_th = A(QP_L) × ALV + B(QP_L).
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
The just-noticeable distortion analysis method for stereoscopic images based on texture complexity proposed by the present invention has the overall implementation block diagram shown in Fig. 1 and comprises the following steps:
① Since stereoscopic images of natural scenes are too complex, it is difficult to obtain a set of stereoscopic images in which only the texture varies while factors such as color, background luminance, disparity and contrast remain unchanged, and such images are unsuitable for quantitatively studying the influence of texture information on the stereoscopic perception threshold. Therefore, existing 3D modeling and authoring software is used to obtain N stereoscopic images of a single object with different texture densities, the texture density of the single object being assumed to increase from sparse to dense over the N stereoscopic images, where N > 1. Then, in the 3D modeling and authoring software, the alpha channel of the single object in the left-viewpoint and right-viewpoint images of each stereoscopic image is edited: the single object to be retained is set as a non-transparent region filled pure white, the remainder is set as a transparent region filled pure black, and the binary mask images of the left-viewpoint and right-viewpoint images of each stereoscopic image are rendered. Because the texture percentage set in the 3D modeling and authoring software is not a general-purpose measure, the invention then re-evaluates the average texture complexity of each stereoscopic image by computing the average local variance according to the binary mask images of its left-viewpoint and right-viewpoint images; when evaluating texture complexity, only the texture of the single object is considered and the background texture is ignored. The average texture complexity of the n-th stereoscopic image obtained by the average-local-variance calculation is denoted ALV_n, where 1 ≤ n ≤ N.
In this embodiment, Maya 2015 is used as the 3D modeling and authoring software; taking account of the comprehensiveness and workload of the subjective experiment, N = 6 is chosen.
In this embodiment, the N stereoscopic images of a single object with different texture densities in step ① are obtained as follows:
①_a1 Using the 3D modeling and authoring software, a 3D model is built with any one object, here a small ball, as the single object; the model comprises a background and the single-object model.
①_a2 A surface texture map is applied to the single-object model using the uniform texture image supplied with the 3D modeling and authoring software; only the texture density is changed, while factors such as color, background luminance, disparity and contrast are kept unchanged. By adjusting N map density levels (i.e., density levels perceivable by the human eye) in the texture editor of the software, the N stereoscopic images of the single object with different texture densities are rendered, the texture on the single object in each stereoscopic image being uniform.
Figs. 2a and 2b show the left-viewpoint and right-viewpoint images of the 1st stereoscopic image, Figs. 3a and 3b those of the 2nd, Figs. 4a and 4b those of the 3rd, Figs. 5a and 5b those of the 4th, Figs. 6a and 6b those of the 5th, and Figs. 7a and 7b those of the 6th stereoscopic image. As can be seen from Fig. 2a to Fig. 7b, the texture density of the single object (the small ball) increases from sparse to dense from the 1st to the 6th stereoscopic image.
Fig. 8a shows the binary mask image of Fig. 2a, and Fig. 8b shows the binary mask image of Fig. 2b.
In this embodiment, ALV_n in step ① is obtained as follows:
①_b1 The binary mask image of the left-viewpoint image of the n-th stereoscopic image is applied to that left-viewpoint image to remove the background and obtain the n-th masked left image; for example, for the left-viewpoint image of the 1st stereoscopic image, Fig. 8a is applied to Fig. 2a. Likewise, the binary mask image of the right-viewpoint image of the n-th stereoscopic image is applied to that right-viewpoint image to remove the background and obtain the n-th masked right image; for example, for the right-viewpoint image of the 1st stereoscopic image, Fig. 8b is applied to Fig. 2b.
①_b2 Using the formula for the average local variance (Average of Local Variance, ALV), the average local variance of the n-th masked left image is computed and denoted ALV_n,L, and the average local variance of the n-th masked right image is computed and denoted ALV_n,R; here P_num denotes the total number of pixels in the n-th masked left image, which equals the total number of pixels in the n-th masked right image, and MB_num denotes the total number of non-overlapping 8 × 8 image blocks in the n-th masked left image, which equals the total number of non-overlapping 8 × 8 image blocks in the n-th masked right image. The n-th masked left image and the n-th masked right image are each first divided into non-overlapping 8 × 8 image blocks, 1 ≤ i ≤ MB_num; v_n,L,i,8×8 denotes the variance of the i-th image block of the n-th masked left image, and v_n,R,i,8×8 denotes the variance of the i-th image block of the n-th masked right image.
①_b3 The average local variance of the n-th stereoscopic image is computed and denoted ALV'_n; then let ALV_n = ALV'_n, with ALV'_n representing the average texture complexity of the n-th stereoscopic image, where "=" in ALV_n = ALV'_n is an assignment.
In this embodiment, Table 1 lists the average texture complexity of each of the 1st to 6th stereoscopic images.
Table 1: Average texture complexity of the 1st to 6th stereoscopic images
Stereoscopic image    Average texture complexity
1st                   0.5503
2nd                   3.3571
3rd                   4.5538
4th                   6.2705
5th                   8.1500
6th                   8.2547
② K coded stereoscopic image comparison test sets are obtained for each stereoscopic image, so that the N stereoscopic images yield N × K comparison test sets in total; the K comparison test sets corresponding to the n-th stereoscopic image are obtained as follows:
②_1 A left-viewpoint coding quantization parameter interval [QP_min, QP_max) is set; then K different left-viewpoint coding quantization parameters are taken at equal steps from [QP_min, QP_max), denoted QP_L,1, QP_L,2, …, QP_L,K, with QP_L,1 = QP_min; here QP_min denotes the set minimum coding quantization parameter, QP_max the set maximum coding quantization parameter, K > 1, QP_L,1, QP_L,2, …, QP_L,K denote the 1st, 2nd, …, K-th left-viewpoint coding quantization parameters, and QP_L,1 < QP_L,2 < … < QP_L,K.
In this embodiment, QP_min = 20 and QP_max = 51 are taken; considering the comprehensiveness and workload of the left-viewpoint coding quantization parameter values in the subjective experiment, K = 4 is taken, so QP_L,1, QP_L,2, QP_L,3, QP_L,4 take the values 20, 26, 32, 38. The same K left-viewpoint coding quantization parameters are chosen for the left-viewpoint images of all N stereoscopic images; that is, the 1st left-viewpoint coding quantization parameter chosen is identical for the left-viewpoint images of the N stereoscopic images, and so on.
②_2 The k-th left-viewpoint coding quantization parameter QP_L,k is defined as the current left-viewpoint coding quantization parameter, where 1 ≤ k ≤ K and the initial value of k is 1.
②_3 Using existing video coding software, the left-viewpoint image of the n-th stereoscopic image is encoded by intra-frame coding with the current left-viewpoint coding quantization parameter, and the resulting coded left image is denoted L_n,k.
In this embodiment, the HEVC reference software HM14.0 may be used as the video coding software.
②_4 A right-viewpoint coding quantization parameter interval [QP_L,k, QP_max] is set; then M right-viewpoint coding quantization parameters are taken at equal steps from [QP_L,k, QP_max], denoted QP_R,1, QP_R,2, …, QP_R,M, with QP_R,1 = QP_L,k; here M > 1, QP_R,1, QP_R,2, …, QP_R,M denote the 1st, 2nd, …, M-th right-viewpoint coding quantization parameters, and QP_R,1 < QP_R,2 < … < QP_R,M.
In this embodiment, considering the comprehensiveness and workload of the right-viewpoint coding quantization parameter values in the subjective experiment, M = 13 is taken.
In this embodiment, in step ②_4, QP_R,2 = QP_R,1 + δ_k, …, QP_R,M = QP_R,1 + (M−1)δ_k, where δ_k denotes the absolute difference between two adjacent right-viewpoint coding quantization parameters under the current left-viewpoint coding quantization parameter: when QP_max − QP_L,k = M, δ_k = 1; when QP_max − QP_L,k > M, δ_k = round((QP_max − QP_L,k)/M), round() being the rounding function.
Table 2 lists the M right-viewpoint coding quantization parameters corresponding to each of QP_L,1, QP_L,2, QP_L,3, QP_L,4.
Table 2: M right-viewpoint coding quantization parameters corresponding to each of QP_L,1, QP_L,2, QP_L,3, QP_L,4
②_5 Using the existing video coding software, the right-viewpoint image of the n-th stereoscopic image is encoded by intra-frame coding with each of the M right-viewpoint coding quantization parameters, obtaining M coded right images of different quality in total; the coded right image obtained by encoding the right-viewpoint image of the n-th stereoscopic image by intra-frame coding with the m-th right-viewpoint coding quantization parameter QP_R,m is denoted R_n,k,m, where 1 ≤ m ≤ M.
②_6 M coded stereoscopic images of different quality are formed by pairing L_n,k with each of the M coded right images; the coded stereoscopic image formed by L_n,k and R_n,k,m is denoted S_n,k,m. Since the coded stereoscopic image of best quality is obtained when the left-viewpoint coding quantization parameter used for the left-viewpoint image equals the right-viewpoint coding quantization parameter used for the right-viewpoint image, S_n,k,1 is then taken as the reference coded stereoscopic image and S_n,k,2, S_n,k,3, …, S_n,k,M as the test coded stereoscopic images, where S_n,k,1 denotes the coded stereoscopic image formed by L_n,k and the 1st coded right image R_n,k,1, S_n,k,2 that formed by L_n,k and R_n,k,2, S_n,k,3 that formed by L_n,k and R_n,k,3, and S_n,k,M that formed by L_n,k and R_n,k,M. S_n,k,1 is then combined with each of S_n,k,2, S_n,k,3, …, S_n,k,M to form coded stereoscopic image pairs; the set of the M−1 resulting pairs is defined as the k-th coded stereoscopic image comparison test set, denoted {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M}, where S_n,k,1S_n,k,2 denotes the pair formed by S_n,k,1 and S_n,k,2, S_n,k,1S_n,k,3 the pair formed by S_n,k,1 and S_n,k,3, and S_n,k,1S_n,k,M the pair formed by S_n,k,1 and S_n,k,M.
②_7 Let k = k + 1, take the k-th left-viewpoint coding quantization parameter QP_L,k as the current left-viewpoint coding quantization parameter, and return to step ②_2; continue until all K left-viewpoint coding quantization parameters have been used, finally obtaining the K coded stereoscopic image comparison test sets corresponding to the n-th stereoscopic image, where "=" in k = k + 1 is an assignment.
③ Several participants are recruited for the subjective quality evaluation of stereoscopic images. Each participant then, using pairwise comparison, subjectively observes and compares on a stereoscopic display the two coded stereoscopic images of every coded stereoscopic image pair in every comparison test set corresponding to every stereoscopic image, and gives a judgment score; the scores that all participants give to the two coded stereoscopic images of every pair in every comparison test set of every stereoscopic image are then tallied.
In this embodiment, there were 20 participants in the experiment, 12 male and 8 female, with a mean age of 25; all had normal or corrected-to-normal vision.
In this embodiment, the subjective observation is carried out on a stereoscopic display; the stereoscopic display device used is a Samsung UA65F9000AJ ultra-high-definition UHD shutter-type stereoscopic television (65 inches, 16:9) with a stereoscopic display spatial resolution of 1920 × 1080. Participants wear shutter stereoscopic glasses while viewing, and the viewing distance is about four times the image height, roughly 2.8 meters.
If, when the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} corresponding to the n-th stereoscopic image were shown on the stereoscopic display, S_n,k,1 were always displayed on the left and S_n,k,2, S_n,k,3, …, S_n,k,M varied in quality from good to poor in order, visual inertia and a degree of psychological suggestion would inevitably arise. Therefore, to avoid visual inertia in the subjective comparison experiment and to reflect the experimental results more objectively, before the subjective comparison experiment the display positions on the stereoscopic display of the two coded stereoscopic images of each pair in {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} and the presentation order of the pairs are rearranged. That is, before the two coded stereoscopic images of each pair in the k-th comparison test set of the n-th stereoscopic image are observed and compared on the stereoscopic display, the presentation order is first determined as follows: the left/right positions in which the two coded stereoscopic images of each pair are shown on the stereoscopic display are randomized, so that the reference coded stereoscopic image S_n,k,1 appears at random on the left or the right of the display; for convenience of later statistics, Flag marks the position in which the reference coded stereoscopic image S_n,k,1 is shown: if S_n,k,1 is shown on the left, Flag = 1; if S_n,k,1 is shown on the right, Flag = 0. At the same time, the order in which the M−1 coded stereoscopic image pairs are presented on the stereoscopic display is randomized.
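A minimal sketch of the randomization described above, assuming a uniformly random left/right placement per pair and a shuffled presentation order; the pair and flag representations are illustrative, not the authors' implementation.

```python
import random

def build_presentation(pairs, seed=None):
    """pairs: list of (reference, test) coded stereoscopic images for one test set.
    Returns a shuffled list of (left_item, right_item, flag) with flag = 1 when
    the reference is shown on the left and 0 when it is shown on the right."""
    rng = random.Random(seed)
    order = []
    for ref, test in pairs:
        if rng.random() < 0.5:
            order.append((ref, test, 1))   # reference on the left
        else:
            order.append((test, ref, 0))   # reference on the right
    rng.shuffle(order)                      # randomize the pair order as well
    return order
```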
Table 3 lists the presentation order on the stereoscopic display of the M−1 coded stereoscopic image pairs in the 1st comparison test set (i.e., QP_L,1 = 20) corresponding to the 1st stereoscopic image.
Table 3: Presentation order on the stereoscopic display of the M−1 coded stereoscopic image pairs in the 1st comparison test set corresponding to the 1st stereoscopic image
When a participant scores the two coded stereoscopic images of each pair in the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image, the two coded stereoscopic images of each pair are shown on the stereoscopic display for T1_k seconds, with an interval of T2_k seconds between successive pairs, where 1 s < T2_k < T1_k; in this embodiment T1_k = 10 s and T2_k = 3 s. During the interval between pairs a gray image is displayed so that the participant can rest. Three evaluation options are set: "left image quality is better", "about the same", and "right image quality is better". If within the T1_k seconds a participant judges that the coded stereoscopic image shown on the left of the stereoscopic display has the better quality, the option "left image quality is better" scores 1 point and the options "about the same" and "right image quality is better" score 0 points; if the participant judges that the coded stereoscopic image shown on the right has the better quality, "right image quality is better" scores 1 point and the other two options score 0 points; if the participant judges that the two coded stereoscopic images are of nearly the same quality, "about the same" scores 1 point and the other two options score 0 points; and if within the T1_k seconds the participant cannot decide which image has the better quality, then to ensure accuracy "about the same" scores 1 point and the other two options score 0 points.
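A minimal sketch of the three-option scoring and its Flag-dependent tally, under the assumption that each participant's single choice per pair is recorded as one of the three options; the data representation is illustrative.

```python
from collections import Counter

OPTIONS = ("left_better", "about_same", "right_better")

def tally_pair(choices, flag):
    """choices: one option string per participant for a single pair.
    flag = 1 if the reference image was shown on the left, 0 if on the right.
    Returns (per-option totals, distortion detection count num_n,k,g)."""
    totals = Counter({opt: 0 for opt in OPTIONS})
    totals.update(choices)
    # Participants detect distortion when they prefer the reference side.
    num = totals["left_better"] if flag == 1 else totals["right_better"]
    return dict(totals), num
```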
Table 4 gives the scoring sheet for the two coded stereoscopic images of each pair in the 1st comparison test set corresponding to the 1st stereoscopic image.
Table 4: Scoring sheet for the two coded stereoscopic images of each pair in the 1st comparison test set corresponding to the 1st stereoscopic image
Table 5 gives the statistics of the scores that the 20 participants gave to the two coded stereoscopic images of each pair in the 1st comparison test set corresponding to the 1st stereoscopic image.
Table 5: Statistics of the scores given by the 20 participants to the two coded stereoscopic images of each pair in the 1st comparison test set corresponding to the 1st stereoscopic image
④ For each coded stereoscopic image pair in every comparison test set corresponding to every stereoscopic image, the distortion detection count and the distortion detection probability are computed. The distortion detection count and distortion detection probability of the g-th pair S_n,k,1S_n,k,g+1 in the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image are denoted num_n,k,g and P_n,k,g respectively. If Flag = 1, num_n,k,g is the total score of the option "left image quality is better"; if Flag = 0, num_n,k,g is the total score of the option "right image quality is better". Since the option "about the same" indicates that the left and right images look indistinguishable and no distortion has been found, it is not counted. The distortion detection probability is P_n,k,g = num_n,k,g / NUM, where 1 ≤ g ≤ M−1 and NUM denotes the total number of participants.
Table 5 is rearranged in increasing order of QP_R,m, and the distortion detection count and distortion detection probability are then computed; the results are listed in Table 6.
Table 6: Distortion detection count and distortion detection probability of each coded stereoscopic image pair in Table 5
Then, for the coded stereoscopic image pairs in every comparison test set corresponding to every stereoscopic image, the tolerable increase range by which the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image may exceed the left-viewpoint coding quantization parameter used for the coded left image is computed. For the k-th comparison test set {S_n,k,1S_n,k,2, S_n,k,1S_n,k,3, …, S_n,k,1S_n,k,M} of the n-th stereoscopic image, this tolerable increase range is denoted ΔQP_n,k,th, with ΔQP_n,k,th = QP_n,k,th − QP_L,k, where
QP_n,k,th = [QP_R,a × (P_n,k,b − 0.5) + QP_R,b × (0.5 − P_n,k,a)] / (P_n,k,b − P_n,k,a),
P_n,k,a denotes, among the distortion detection probabilities of all coded stereoscopic image pairs in the k-th comparison test set of the n-th stereoscopic image, the probability that is less than 0.5 and closest to 0.5; P_n,k,b denotes, among those probabilities, the probability that is greater than 0.5 and closest to 0.5; QP_R,a denotes the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image of the pair corresponding to P_n,k,a; and QP_R,b denotes the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image of the pair corresponding to P_n,k,b.
For example, in Table 6, P_n,k,a = 0.2, P_n,k,b = 0.65, QP_R,a = 30 and QP_R,b = 32, so QP_n,k,th = 31.3 and ΔQP_n,k,th = 11.3.
The invention considers the critical observation point to be the situation in which, in the subjective scoring experiment, half of the participants judge that they have found distortion while the other half judge that they have not; at the critical observation point the distortion detection probability is 50%. Because the right-viewpoint coding quantization parameter corresponding to a 50% distortion detection probability may not be one of the coding quantization parameters used in the experiment, the invention computes ΔQP_n,k,th by linear interpolation.
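As a quick numeric check of the interpolation, the Table 6 values quoted above give the 31.3 reported in the text; the snippet below only restates that arithmetic.

```python
qp_a, qp_b = 30, 32       # right-viewpoint QPs bracketing the 50% point
p_a, p_b = 0.2, 0.65      # their distortion detection probabilities
qp_th = (qp_a * (p_b - 0.5) + qp_b * (0.5 - p_a)) / (p_b - p_a)
print(round(qp_th, 1))    # 31.3, so delta_QP_th = 31.3 - 20 = 11.3 for QP_L = 20
```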
Table 7 lists, for the four comparison test sets corresponding to each of the six stereoscopic images of Table 1, the tolerable increase range of the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereoscopic image of each pair relative to the left-viewpoint coding quantization parameter used for the coded left image.
Table 7: Tolerable increase range of the right-viewpoint coding quantization parameter used for the coded right image of a test coded stereoscopic image relative to the left-viewpoint coding quantization parameter used for the coded left image
⑤ For each left-viewpoint coding quantization parameter, a linear fit is performed over the N corresponding tolerable increase ranges to obtain the linear fitting equation for that parameter; the linear fitting equation corresponding to QP_L,k is expressed as ΔQP_k,th = A_k × ALV + B_k, where ΔQP_k,th denotes the just-noticeable distortion threshold of the linear fitting equation corresponding to QP_L,k, A_k its slope and B_k its intercept, A_k and B_k being obtained directly during the fitting; ALV denotes the average texture complexity, computed by the average local variance, of any stereoscopic image to be analyzed for human-eye just-noticeable distortion.
In this embodiment, A_k and B_k are obtained as follows. For Table 7, when QP_L,1 = 20, the texture complexities of the N stereoscopic images and the N tolerable increase ranges in the column for QP_L,1 = 20 form N data pairs: (0.5503, 11.3), (3.3571, 14.8), (4.5538, 16.7), (6.2705, 18.6), (8.1500, 20.1), (8.2547, 20.1); a linear fit of these N data pairs by the classical linear-fitting method yields the linear fitting equation for QP_L,1 = 20 and the values of A_1 and B_1. When QP_L,2 = 26, the texture complexities of the N stereoscopic images and the N tolerable increase ranges in the column for QP_L,2 = 26 form N data pairs: (0.5503, 7.6), (3.3571, 10.4), (4.5538, 11.2), (6.2705, 12.8), (8.1500, 14.1), (8.2547, 14.3); a linear fit of these N data pairs yields the linear equation for QP_L,2 = 26 and the values of A_2 and B_2. Proceeding in the same way, four linear fitting equations are obtained in total; the slope A_k and intercept B_k of each are listed in Table 8, and the four fitted curves corresponding to the four linear fitting equations are shown in Fig. 9.
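As a usage example of the per-QP_L fit sketched after step ⑥, feeding the six data pairs quoted above for QP_L,1 = 20 into an ordinary least-squares fit should reproduce values close to the A_1 and B_1 of Table 8; the exact fitting routine used by the authors is not specified, so numpy.polyfit is an assumption.

```python
import numpy as np

alv = [0.5503, 3.3571, 4.5538, 6.2705, 8.1500, 8.2547]
dqp = [11.3, 14.8, 16.7, 18.6, 20.1, 20.1]   # tolerable increases for QP_L = 20
a1, b1 = np.polyfit(alv, dqp, 1)
print(a1, b1)   # approximately 1.14 and 11.0, close to the Table 8 values
```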
Table 8 Slope and intercept of each linear fit equation
6. The K linear fit equations obtained in step 5. are expressed uniformly as a family of linear equations: ΔQP_th = A(QP_L) × ALV + B(QP_L). This formula is the quantitative relationship, based on texture complexity, for the minimum change in quantization parameter that the human eye can just perceive in the stereoscopic vision of a stereo image, where ΔQP_th is the minimum noticeable change quantization parameter threshold, A(QP_L) is the slope of the family of linear equations, B(QP_L) is its intercept, QP_L is the left-viewpoint coding quantization parameter, and A(QP_L) and B(QP_L) are functions of QP_L. The linear equation A(QP_L) is obtained by a linear fit of the K left-viewpoint coding quantization parameters against the slopes of the K corresponding linear fit equations, and the linear equation B(QP_L) is obtained by a linear fit of the K left-viewpoint coding quantization parameters against the intercepts of the K corresponding linear fit equations.
Using the 4 left-viewpoint coding quantization parameters and the 4 corresponding slopes in Table 8, 4 data pairs are formed: (20, 1.1495), (26, 0.8441), (32, 0.5905), (38, 0.0614). Applying the classical linear fitting method to these 4 data pairs gives the linear equation A(QP_L) = p1·QP_L + q1, where the parameter p1 is the slope of the equation and the parameter q1 is its intercept; the linear fit gives p1 = −0.0586 and q1 = 2.3601.
Using the 4 left-viewpoint coding quantization parameters and the 4 corresponding intercepts in Table 8, 4 data pairs are formed: (20, 10.9790), (26, 7.3383), (32, 3.3962), (38, 2.7381). Applying the classical linear fitting method to these 4 data pairs gives the linear equation B(QP_L) = p2·QP_L + q2, where the parameter p2 is the slope of the equation and the parameter q2 is its intercept; the linear fit gives p2 = −0.4782 and q2 = 19.9808.
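A minimal sketch of fitting A(QP_L) and B(QP_L) from the Table 8 values, again assuming ordinary least squares; the names are illustrative:

```python
import numpy as np

# Left-viewpoint QPs and the corresponding slopes A_k and intercepts B_k (Table 8)
qp_l = np.array([20, 26, 32, 38])
slopes = np.array([1.1495, 0.8441, 0.5905, 0.0614])       # A_k values
intercepts = np.array([10.9790, 7.3383, 3.3962, 2.7381])  # B_k values

p1, q1 = np.polyfit(qp_l, slopes, 1)       # A(QP_L) = p1*QP_L + q1
p2, q2 = np.polyfit(qp_l, intercepts, 1)   # B(QP_L) = p2*QP_L + q2
print(p1, q1)  # ≈ -0.0586, 2.3601
print(p2, q2)  # ≈ -0.4782, 19.9808
```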
For any stereo image whose just noticeable distortion is to be analysed, the left-viewpoint coding quantization parameter used for its left viewpoint is known, A(QP_L) and B(QP_L) are known, and its average texture complexity is known; therefore the minimum noticeable change quantization parameter threshold of any stereo image to be analysed for just noticeable distortion by the human eye can be calculated.
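A minimal sketch of this prediction, using the p1, q1, p2 and q2 fitted in the present embodiment; the helper name and the example inputs are illustrative:

```python
def jnd_qp_threshold(qp_left, alv,
                     p1=-0.0586, q1=2.3601, p2=-0.4782, q2=19.9808):
    """ΔQP_th = A(QP_L)*ALV + B(QP_L), with A and B linear in QP_L."""
    a = p1 * qp_left + q1
    b = p2 * qp_left + q2
    return a * alv + b

# e.g. a left viewpoint coded at QP_L = 26 and an average texture complexity of 4.55
print(jnd_qp_threshold(26, 4.55))  # ≈ 11.3
```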
Figure 10a shows the fitted linear relationship of A(QP_L), Figure 10b shows the fitted linear relationship of B(QP_L), and Figure 11 shows the surface plot of the family of linear equations ΔQP_th = A(QP_L) × ALV + B(QP_L).

Claims (5)

1. A method for analysing the just noticeable distortion of a stereo image by the human eye based on texture complexity, characterised by comprising the following steps:
1. Using three-dimensional modelling and production software, obtain N stereo images of a single object with different texture densities, it being arranged that the texture densities of the single object in the N stereo images range from sparse to dense, where N > 1. Then, in the three-dimensional modelling and production software, edit the alpha channel of the single object in the left viewpoint image and in the right viewpoint image of each stereo image: the single object to be retained is set as a non-transparent region and filled pure white, and the remainder is set as a transparent region and filled pure black; then render to produce the mask binary image of the left viewpoint image and of the right viewpoint image of each stereo image. Then, according to the mask binary images of the left viewpoint image and the right viewpoint image of each stereo image, evaluate the average texture complexity of each stereo image by computing the average local variance; the average texture complexity of the n-th stereo image obtained with the average local variance computation is denoted ALV_n, where 1 ≤ n ≤ N;
2. Obtain the K coded stereo image contrast test sets corresponding to each stereo image, so that the N stereo images have N × K coded stereo image contrast test sets in total, where the K coded stereo image contrast test sets corresponding to the n-th stereo image are obtained as follows:
2. _ 1, setting a left viewpoint coded quantization parameter value interval is [QPmin,QPmax); Then from [QPmin,QPmax) inGet K different left viewpoint coded quantization parameter etc. step-length, be respectively QPL,1,QPL,2,…,QPL,K, and QPL,1=QPmin; ItsIn, QPminRepresent the minimum code quantization parameter of setting, QPmaxRepresent the maximum coded quantization parameter of setting, K > 1, QPL,1,QPL,2,…,QPL,KRepresent get the 1st, the 2nd ..., K left viewpoint coded quantization parameter, QPL,1<QPL,2<…<QPL,K
2. _ 2, by k left viewpoint coded quantization parameter QPL,kBe defined as when front left viewpoint coded quantization parameter, wherein, 1≤k≤ K, the initial value of k is 1;
2. _ 3, utilize Video coding software, and adopt when the left viewpoint of front left viewpoint coded quantization parameter to n width stereo-pictureImage is encoded with intraframe coding method, and left the coding obtaining image is designated as to Ln,k
2. _ 4, setting a right viewpoint coded quantization parameter value interval is [QPL,k,QPmax]; Then from [QPL,k,QPmax] inGet M right viewpoint coded quantization parameter etc. step-length, be respectively QPR,1,QPR,2,…,QPR,M, and QPR,1=QPL,k; Wherein, M > 1,QPR,1,QPR,2,…,QPR,MRepresent get the 1st, the 2nd ..., M right viewpoint coded quantization parameter, QPR,1<QPR,2<…<QPR,M
2. _ 5, utilize Video coding software, and adopt M the right viewpoint coded quantization parameter right side to n width stereo-picture respectivelyVisual point image is encoded with intraframe coding method, obtains altogether the right image of coding of M width different quality, and m right side of employing lookedPoint coded quantization parameter QPR,mTo the right visual point image of n width stereo-picture with the intraframe coding method n obtaining that encodesThe width right image of encoding is designated as Rn,k,m, wherein, 1≤m≤M;
2. _ 6, by Ln,kRespectively with the encode coding stereo-picture of right image construction M width different quality of M width, by Ln,kWith Rn,k,mStructureThe coding stereo-picture becoming is designated as Sn,k,m; Then by Sn,k,1As reference coding stereo-picture, by Sn,k,2,Sn,k,3,…,Sn,k,MRespectively as Test code stereo-picture, wherein, Sn,k,1Represent Ln,kWith the 1st width right image R that encodesn,k,1The coding solid formingImage, Sn,k,2Represent Ln,kWith the 2nd width right image R that encodesn,k,2The coding stereo-picture forming, Sn,k,3Represent Ln,kCompile with the 3rd widthThe right image R of coden,k,3The coding stereo-picture forming, Sn,k,MRepresent Ln,kWith the M width right image R that encodesn,k,MThe coding forming is verticalVolume image; Then by Sn,k,1Respectively with Sn,k,2,Sn,k,3,…,Sn,k,MBe combined into one by one coding stereo-picture pair, combination is obtainedM-1 coding stereo-picture be k the stereo-picture contrast test collection of encoding to the sets definition forming, be designated as { Sn,k, 1Sn,k,2,Sn,k,1Sn,k,3,…,Sn,k,1Sn,k,M, wherein, Sn,k,1Sn,k,2Represent Sn,k,1With Sn,k,2The coding stereogram being combined intoPicture is right, Sn,k,1Sn,k,3Represent Sn,k,1With Sn,k,3The coding stereo-picture pair being combined into, Sn,k,1Sn,k,MRepresent Sn,k,1With Sn,k,MGroupSynthetic coding stereo-picture pair;
2. _ 7, make k=k+1, then by k left viewpoint coded quantization parameter QPL,kAs joining when front left viewpoint coded quantizationNumber, then return to step 2. _ 2 and continue to carry out, until K left viewpoint coded quantization parameter selection is complete, obtain altogether n width stereogramK the coding stereo-picture contrast test collection that picture is corresponding, wherein, "=" in k=k+1 is assignment;
3. Organise several participants for subjective stereo image quality assessment. Then, with each participant using pairwise comparison, subjectively observe and compare on a stereoscopic display the two coded stereo images of each coded stereo image pair in each coded stereo image contrast test set corresponding to each stereo image, and judge and score their quality. Then, for each coded stereo image pair in each coded stereo image contrast test set corresponding to each stereo image, tally the scores that all participants gave to the two coded stereo images of that pair;
Wherein, before the two coded stereo images of each coded stereo image pair in the k-th coded stereo image contrast test set {S_{n,k,1}S_{n,k,2}, S_{n,k,1}S_{n,k,3}, …, S_{n,k,1}S_{n,k,M}} corresponding to the n-th stereo image are subjectively observed and compared on the stereoscopic display, the playing order is first determined, as follows: the left/right positions in which the two coded stereo images of each coded stereo image pair are shown on the stereoscopic display are randomised, and Flag denotes the position mark of the reference coded stereo image S_{n,k,1} on the stereoscopic display; if the reference coded stereo image S_{n,k,1} is shown on the left of the stereoscopic display, let Flag = 1; if the reference coded stereo image S_{n,k,1} is shown on the right of the stereoscopic display, let Flag = 0; meanwhile, the order in which the M−1 coded stereo image pairs are shown on the stereoscopic display is randomised;
Wherein, when each participant judges and scores the quality of the two coded stereo images of each coded stereo image pair in the k-th coded stereo image contrast test set {S_{n,k,1}S_{n,k,2}, S_{n,k,1}S_{n,k,3}, …, S_{n,k,1}S_{n,k,M}} corresponding to the n-th stereo image, the two coded stereo images of each coded stereo image pair are shown on the stereoscopic display for T1_k seconds, and the playing interval between two coded stereo image pairs played one after another is T2_k seconds, with 1 s < T2_k < T1_k; during the playing interval between two consecutive coded stereo image pairs a grey image is displayed so that each participant can rest. Three evaluation options are set: "the left image quality is better", "about the same", and "the right image quality is better". If within the T1_k seconds a participant considers the quality of the coded stereo image shown on the left of the stereoscopic display to be better, the option "the left image quality is better" receives 1 point and the other options, "about the same" and "the right image quality is better", both receive 0 points; if within the T1_k seconds a participant considers the quality of the coded stereo image shown on the right of the stereoscopic display to be better, the option "the right image quality is better" receives 1 point and the options "about the same" and "the left image quality is better" both receive 0 points; if within the T1_k seconds a participant considers that the coded stereo image shown on the left of the stereoscopic display and the coded stereo image shown on the right differ little in quality, the option "about the same" receives 1 point and the options "the left image quality is better" and "the right image quality is better" both receive 0 points; if within the T1_k seconds a participant cannot judge the quality of the coded stereo image shown on the left of the stereoscopic display against that of the coded stereo image shown on the right, the option "about the same" receives 1 point and the options "the left image quality is better" and "the right image quality is better" both receive 0 points;
4. For each coded stereo image pair in each coded stereo image contrast test set corresponding to each stereo image, calculate the distortion detection count and the distortion detection probability. For the g-th coded stereo image pair S_{n,k,1}S_{n,k,g+1} in the k-th coded stereo image contrast test set {S_{n,k,1}S_{n,k,2}, S_{n,k,1}S_{n,k,3}, …, S_{n,k,1}S_{n,k,M}} corresponding to the n-th stereo image, the distortion detection count and the distortion detection probability are denoted num_{n,k,g} and P_{n,k,g} respectively. If Flag = 1, num_{n,k,g} is the total score of the option "the left image quality is better" among the evaluation options; if Flag = 0, num_{n,k,g} is the total score of the option "the right image quality is better" among the evaluation options; P_{n,k,g} = num_{n,k,g} / NUM, where 1 ≤ g ≤ M−1 and NUM is the total number of participants;
Then, for each coded stereo image contrast test set corresponding to each stereo image, calculate the tolerable increase of the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereo image relative to the left-viewpoint coding quantization parameter used for the coded left image. For the k-th coded stereo image contrast test set {S_{n,k,1}S_{n,k,2}, S_{n,k,1}S_{n,k,3}, …, S_{n,k,1}S_{n,k,M}} corresponding to the n-th stereo image, this tolerable increase is denoted ΔQP_{n,k,th}, with ΔQP_{n,k,th} = QP_{n,k,th} − QP_{L,k}, where QP_{n,k,th} = [QP_{R,a}·(P_{n,k,b} − 0.5) + QP_{R,b}·(0.5 − P_{n,k,a})] / (P_{n,k,b} − P_{n,k,a}); P_{n,k,a} is, among the distortion detection probabilities of all coded stereo image pairs in {S_{n,k,1}S_{n,k,2}, S_{n,k,1}S_{n,k,3}, …, S_{n,k,1}S_{n,k,M}}, the probability that is less than 0.5 and closest to 0.5; P_{n,k,b} is, among those probabilities, the one that is greater than 0.5 and closest to 0.5; QP_{R,a} is the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereo image of the coded stereo image pair corresponding to P_{n,k,a}; and QP_{R,b} is the right-viewpoint coding quantization parameter used for the coded right image of the test coded stereo image of the coded stereo image pair corresponding to P_{n,k,b};
5. For each left-viewpoint coding quantization parameter, perform a linear fit on the N corresponding tolerable increase values to obtain the linear fit equation corresponding to that left-viewpoint coding quantization parameter; the linear fit equation corresponding to QP_{L,k} is expressed as:
ΔQP_{k,th} = A_k × ALV + B_k, where ΔQP_{k,th} is the just noticeable distortion threshold of the linear fit equation corresponding to QP_{L,k}, the parameter A_k is the slope of the linear fit equation corresponding to QP_{L,k}, the parameter B_k is the intercept of the linear fit equation corresponding to QP_{L,k}, A_k and B_k are obtained directly from the linear fit, and ALV is the average texture complexity, computed with the average local variance, of any stereo image to be analysed for just noticeable distortion by the human eye;
6. The K linear fit equations obtained in step 5. are expressed uniformly as a family of linear equations:
ΔQP_th = A(QP_L) × ALV + B(QP_L); this formula is the quantitative relationship, based on texture complexity, for the minimum change in quantization parameter that the human eye can just perceive in the stereoscopic vision of a stereo image, where ΔQP_th is the minimum noticeable change quantization parameter threshold, A(QP_L) is the slope of the family of linear equations, B(QP_L) is its intercept, QP_L is the left-viewpoint coding quantization parameter, A(QP_L) and B(QP_L) are functions of QP_L, the linear equation A(QP_L) is obtained by a linear fit of the K left-viewpoint coding quantization parameters against the slopes of the K corresponding linear fit equations, and the linear equation B(QP_L) is obtained by a linear fit of the K left-viewpoint coding quantization parameters against the intercepts of the K corresponding linear fit equations.
2. The method for analysing the just noticeable distortion of a stereo image by the human eye based on texture complexity according to claim 1, characterised in that the N stereo images of a single object with different texture densities in step 1. are obtained as follows:
1. _ a1, utilize three-dimensional modeling and make software, setting up three-dimensional model using any one object as single object, this is verticalBody Model comprises background and single object model;
1._a2. Using a uniform texture map provided with the three-dimensional modelling and production software, apply a surface map to the single-object model, adjust N map density grades in the texture editor of the three-dimensional modelling and production software, and render to produce the N stereo images of the single object with different texture densities, the texture on the single object in each stereo image being uniform and consistent.
3. The method for analysing the just noticeable distortion of a stereo image by the human eye based on texture complexity according to claim 1 or 2, characterised in that ALV_n in step 1. is obtained as follows:
1. _ b1, the left viewpoint to n width stereo-picture of the mask bianry image of left visual point image of utilizing n width stereo-pictureImage carries out mask process, obtains n width mask rear left image; And utilize the mask of the right visual point image of n width stereo-pictureBianry image carries out mask process to the right visual point image of n width stereo-picture, obtains n width mask rear right image;
1. _ b2, utilize the computing formula of average local variance, calculate the average local variance value of n width mask rear left image, noteFor ALVn,LAnd utilize the computing formula of average local variance, calculate n width mask rear rightThe average local variance value of image, is designated as ALVn,RWherein, PnumRepresent after n width maskTotal number of the pixel in left image, that is to say total number of the pixel in n width mask rear right image, MBnumRepresent theTotal number of the image block of the non-overlapping copies that the size in n width mask rear left image is 8 × 8, that is to say after n width maskSize in right image is total number of the image block of 8 × 8 non-overlapping copies, 1≤i≤MBnum,vn,L,i,8×8Represent nThe variance of i image block in width mask rear left image, vn,R,i,8×8Show i image block in n width mask rear right imageVariance;
1._b3. Calculate the average local variance value of the n-th stereo image, denoted ALV'_n; then let ALV_n = ALV'_n, using ALV'_n to represent the average texture complexity of the n-th stereo image, where the "=" in ALV_n = ALV'_n is an assignment.
4. The method for analysing the just noticeable distortion of a stereo image by the human eye based on texture complexity according to claim 1, characterised in that in step 2._1, QP_min = 20 and QP_max = 51 are taken.
5. The method for analysing the just noticeable distortion of a stereo image by the human eye based on texture complexity according to claim 1, characterised in that in step 2._4, QP_{R,2} = QP_{R,1} + δ_k and QP_{R,M} = QP_{R,1} + (M−1)·δ_k, where δ_k is the absolute value of the difference between two adjacent right-viewpoint coding quantization parameters under the current left-viewpoint coding quantization parameter; when QP_max − QP_{L,k} = M, δ_k = 1 is taken; when QP_max − QP_{L,k} > M, δ_k is taken using the Round() rounding function.
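As an illustration of the average local variance computation described in steps 1._b1 to 1._b3 of claim 3, the following is a minimal sketch. The claim's exact formulas are not reproduced above, so the sketch assumes that the average local variance of a masked view is the mean of the variances of its non-overlapping 8 × 8 blocks lying inside the object mask, and that ALV'_n averages the left- and right-view values; both are assumptions made for illustration, and all names are hypothetical:

```python
import numpy as np

def average_local_variance(image, mask, block=8):
    """Mean variance of non-overlapping block x block regions whose pixels
    all lie inside the object mask (non-zero in the mask binary image).
    image and mask are 2-D numpy arrays of the same size."""
    h, w = image.shape
    variances = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if np.all(mask[y:y + block, x:x + block] > 0):
                patch = image[y:y + block, x:x + block].astype(np.float64)
                variances.append(np.var(patch))
    return float(np.mean(variances)) if variances else 0.0

def average_texture_complexity(left, left_mask, right, right_mask):
    """ALV'_n: here assumed to be the mean of the left- and right-view values."""
    alv_l = average_local_variance(left, left_mask)
    alv_r = average_local_variance(right, right_mask)
    return 0.5 * (alv_l + alv_r)
```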
CN201511003001.2A 2015-12-28 2015-12-28 Eye exactly perceptible stereo image distortion analyzing method based on texture complexity Active CN105611272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511003001.2A CN105611272B (en) 2015-12-28 2015-12-28 Eye exactly perceptible stereo image distortion analyzing method based on texture complexity

Publications (2)

Publication Number Publication Date
CN105611272A true CN105611272A (en) 2016-05-25
CN105611272B CN105611272B (en) 2017-05-03

Family

ID=55990774

Country Status (1)

Country Link
CN (1) CN105611272B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140148080A (en) * 2013-06-21 2014-12-31 Korea Advanced Institute of Science and Technology (KAIST) Stereoscopic imaging method and system for visually comfortable 3D images
CN103442226A (en) * 2013-07-30 2013-12-11 宁波大学 Method for quickly coding multi-view color videos based on binocular just perceptible distortion
CN103607589A (en) * 2013-11-14 2014-02-26 同济大学 Level selection visual attention mechanism-based image JND threshold calculating method in pixel domain
CN104954778A (en) * 2015-06-04 2015-09-30 宁波大学 Objective stereo image quality assessment method based on perception feature set
CN105120252A (en) * 2015-09-02 2015-12-02 天津大学 Depth perception enhancing method for virtual multi-view drawing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FENG SHAO ET AL: "JND-based Asymmetric Coding of Stereoscopic Video for Mobile 3DTV Applications", 《2011 4TH INTERNATIONAL CONGRESS ON IMAGE AND SIGNAL PROCESSING》 *
WU AIHONG ET AL: "Perception-based right-viewpoint quality scalable coding algorithm", 《Journal of Ningbo University (Natural Science & Engineering Edition)》 *
ZHANG GUANJUN ET AL: "Asymmetric stereoscopic video coding algorithm based on the stereoscopic masking effect", 《Journal of Ningbo University (Natural Science & Engineering Edition)》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146216A (en) * 2017-04-07 2017-09-08 Zhejiang University of Science and Technology No-reference image objective quality evaluation method based on gradient self-similarity

Also Published As

Publication number Publication date
CN105611272B (en) 2017-05-03

Similar Documents

Publication Publication Date Title
CN102750695B (en) Machine learning-based stereoscopic image quality objective assessment method
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN106097327B (en) In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
De Silva et al. 3D video assessment with just noticeable difference in depth evaluation
CN107578403A (en) The stereo image quality evaluation method of binocular view fusion is instructed based on gradient information
CN103780895B (en) A kind of three-dimensional video quality evaluation method
CN102595185A (en) Stereo image quality objective evaluation method
CN102708567A (en) Visual perception-based three-dimensional image quality objective evaluation method
CN103136748B (en) The objective evaluation method for quality of stereo images of a kind of feature based figure
CN104767993B (en) A kind of stereoscopic video objective quality evaluation based on matter fall time domain weighting
CN104811691A (en) Stereoscopic video quality objective evaluation method based on wavelet transformation
CN103986925A (en) Method for evaluating vision comfort of three-dimensional video based on brightness compensation
CN103873854A (en) Method for determining number of stereoscopic image subjective assessment testees and experiment data
CN113425243B (en) Stereoscopic vision testing method based on random point stereogram
CN102497565A (en) Method for measuring brightness range influencing comfort degree of stereoscopic image
CN102722888A (en) Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision
CN101795411B (en) Analytical method for minimum discernable change of stereopicture of human eyes
Yun et al. The objective quality assessment of stereo image
CN103745457B (en) A kind of three-dimensional image objective quality evaluation method
CN105611272A (en) Eye exactly perceptible stereo image distortion analyzing method based on texture complexity
CN102999911A (en) Three-dimensional image quality objective evaluation method based on energy diagrams
CN102271279B (en) Objective analysis method for just noticeable change step length of stereo images
CN104378625B (en) Based on image details in a play not acted out on stage, but told through dialogues brightness JND values determination method, the Forecasting Methodology of area-of-interest
Solh et al. MIQM: A multicamera image quality measure

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20190812
Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000
Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.
Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818
Patentee before: Ningbo University
TR01 Transfer of patent right
Effective date of registration: 20201211
Address after: 362000 604, building C1, Guangyi garden, west side, Tianan Road, Fengze Street, Fengze District, Quanzhou City, Fujian Province
Patentee after: Quanzhou Yancheng Safety Consulting Co., Ltd
Address before: 313000 room 1020, science and Technology Pioneer Park, 666 Chaoyang Road, Nanxun Town, Nanxun District, Huzhou, Zhejiang
Patentee before: Huzhou You Yan Intellectual Property Service Co., Ltd.
TR01 Transfer of patent right
Effective date of registration: 20201222
Address after: Liu'an Development Zone, Yongchun County, Quanzhou City, Fujian Province (east side of Taoxi bridge)
Patentee after: Yongchun County Product Quality Inspection Institute Fujian fragrance product quality inspection center, national incense burning product quality supervision and Inspection Center (Fujian)
Address before: 362000 604, building C1, Guangyi garden, west side, Tianan Road, Fengze Street, Fengze District, Quanzhou City, Fujian Province
Patentee before: Quanzhou Yancheng Safety Consulting Co., Ltd