CN106803952B - Cross-validation depth map quality evaluation method combined with a JND model - Google Patents

Cross-validation depth map quality evaluation method combined with a JND model

Info

Publication number
CN106803952B
CN106803952B (application number CN201710041375.6A)
Authority
CN
China
Prior art keywords
pixel
tar
ref
coordinate position
middle coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710041375.6A
Other languages
Chinese (zh)
Other versions
CN106803952A (en)
Inventor
陈芬
陈嘉丽
彭宗举
蒋刚毅
郁梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lizhuan Technology Transfer Center Co ltd
Shenzhen Yiqi Culture Co ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201710041375.6A priority Critical patent/CN106803952B/en
Publication of CN106803952A publication Critical patent/CN106803952A/en
Application granted granted Critical
Publication of CN106803952B publication Critical patent/CN106803952B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cross-validation depth map quality evaluation method combined with a JND model. A difference map is obtained from the color image corresponding to the depth map under test and the color image of an auxiliary viewpoint; an occlusion mask is obtained from the number of pixels that the depth map and its corresponding color image map, through 3D-Warping, onto each coordinate of the auxiliary-viewpoint color image; the occluded pixels marked by the mask are then removed from the difference map to obtain a de-occluded difference map; the auxiliary-viewpoint color image is next divided into flat, edge and texture regions to obtain a region label map; a JND model is then introduced and, combined with the region label map, yields the error visibility threshold of each pixel in the auxiliary-viewpoint color image; finally, a depth error map is obtained from the de-occluded difference map and the error visibility thresholds, and the proportion of erroneous pixels in the depth map is taken as its quality evaluation value. The advantage of the method is that it effectively improves the consistency between the evaluation result and the quality of the virtual views rendered from the depth map.

Description

Cross-validation depth map quality evaluation method combined with a JND model
Technical field
The present invention relates to image quality evaluation methods, and more particularly to a cross-validation depth map quality evaluation method combined with a JND (Just-Noticeable-Distortion) model.
Background technology
In recent years video technology has developed rapidly and many new applications have appeared, such as 3D video and free viewpoint video (FVV, Free Viewpoint Video). Compared with traditional two-dimensional video, 3D video provides depth information and delivers a more lifelike visual experience. The depth map plays a fundamental role in many 3D video applications; for example, it can be used to generate images at arbitrary new viewpoints by interpolation or extrapolation between existing viewpoints, and high-quality depth maps help to solve challenging problems in computer vision. The performance of many 3D video applications therefore depends on the estimation or acquisition of accurate, high-quality depth maps, which can be obtained either by stereo matching of rectified color images or with a depth camera. Stereo matching, however, is affected by occlusion and by large homogeneous regions and usually produces inaccurate depth maps; a depth camera avoids the intrinsic difficulties of stereo matching algorithms, but sensor noise is unavoidable and degrades both the depth precision and the shape of objects.
One major development direction of 3D video technology is the free viewpoint video system based on color-plus-depth, whose basic framework comprises acquisition, preprocessing, coding, transmission, decoding, virtual view rendering and display. A color-plus-depth free viewpoint video system allows the user to freely choose the viewpoint from which to watch, enhancing human-machine interactivity. A key technology for realizing such a system is virtual view synthesis, which overcomes the limited ability of cameras to capture real viewpoints and generates virtual views at arbitrary positions. Two main factors influence virtual view quality: the quality of the depth map and the corresponding color image, and the virtual view rendering algorithm. At present, Depth Image Based Rendering (DIBR) is the most widely used virtual view synthesis technique in industry. In DIBR, depth information is the key to generating high-quality virtual views; depth errors translate into disparity errors, which shift pixel positions and deform objects in the virtual view and degrade the perceived quality. Depth information represents the distance from the scene to the camera imaging plane, with the actual distance quantized to [0, 255]. Because depth cameras are expensive, most depth maps currently used for testing are obtained with depth estimation software. To promote applications and reduce cost, the depth information used for virtual view rendering is not generated by depth estimation at the receiver; it has to be acquired or estimated at the sender, encoded and transmitted to the receiver. The limitations of depth acquisition algorithms and of depth map coding therefore introduce both depth estimation errors and depth compression distortion.
The core idea of DIBR is to project the pixels of a reference image to the target virtual viewpoint using depth information and camera parameters. This is generally done in two steps: first, the pixels of the original reference view are re-projected to their corresponding three-dimensional space positions using their depth information; then, according to the pose of the virtual view (camera translation, rotation, etc.), these 3D points are re-projected onto the virtual camera plane to obtain the pixels of the virtual view. During virtual view rendering the depth has to be converted to disparity; the disparity determines how far a reference pixel is shifted and therefore its position in the virtual view. If the depth values of adjacent pixels change abruptly, a hole appears between the two pixels, and the sharper the depth change, the larger the hole. Since depth changes most strongly at foreground-background boundaries, holes typically appear there. When a background area occluded by a foreground object in the reference image becomes visible in the virtual image, a hole appears in the virtual image; when a background area that was not occluded in the reference image becomes invisible in the virtual image, an occlusion occurs.
Virtual view distortion mostly consists of pixel position offsets and object deformation in the virtual view, and the detected distortion regions are not necessarily perceived by the human eye. An image is composed of edge, texture and flat regions, and distortions of different amplitudes in different regions affect human vision differently: regions with high texture complexity or similar texture characteristics can often tolerate more distortion, while changes near edges are the most noticeable. Research in visual physiology and psychology has found that the characteristics of the human visual system and its masking effects play a very important role in image processing: when the image distortion is smaller than a certain range, the human eye cannot perceive it. On this basis the Just-Noticeable-Distortion (JND) model has been proposed. Common masking effects include: 1) luminance masking, where the human eye judges absolute luminance poorly but relative luminance differences well, and is relatively sensitive to noise superimposed on high-luminance regions; 2) texture masking, where the human visual system is significantly more sensitive in smooth image regions than in textured regions, and regions with higher texture complexity can often tolerate more distortion.
Because depth maps are widely used, depth map quality evaluation becomes very important and can benefit many practical applications. For example, in a free viewpoint video system, detecting depth distortion can support depth enhancement; through depth enhancement the quality of the virtual view can be further improved, giving viewers a better viewing experience. A simple way to evaluate depth map quality is to compare the depth map under test with an undistorted reference depth map; this corresponds to full-reference depth quality measurement and can accurately measure the precision of the depth map. In most practical applications, however, depth errors are unavoidable and an undistorted reference depth map is usually not available, so it is more reasonable to assess the depth map with a no-reference evaluation method. The no-reference depth map quality evaluation scheme proposed by Xiang et al. detects errors by matching the edges of the color image and the depth map and computes a bad-pixel rate to evaluate depth map quality; it shows good consistency with the quality of the rendered virtual images, but it only considers errors near edges, ignores the other smooth regions, detects only a fraction of the erroneous pixels, and its performance is strongly affected by scene attributes and error distribution. A depth map is not viewed directly; it serves as auxiliary information for rendering virtual views, so its quality should be evaluated from the point of view of this application.
Invention content
The technical problem to be solved by the invention is to provide a cross-validation depth map quality evaluation method combined with a JND model that does not require an undistorted reference depth map and effectively improves the consistency between the evaluation result and the quality of the rendered virtual views.
The technical solution adopted by the present invention to solve the above technical problem is a cross-validation depth map quality evaluation method combined with a JND model, characterized by comprising the following steps:
1. Denote the depth map to be evaluated as D_tar and its corresponding color image as T_tar; define another known viewpoint, other than the viewpoint of D_tar and T_tar, as the auxiliary viewpoint, and denote the color image of the auxiliary viewpoint as T_ref. Then convert the pixel value of every pixel in D_tar into a disparity value, and map every pixel of T_tar into T_ref by 3D-Warping. Here the total number of pixels in the vertical direction of D_tar, T_tar and T_ref is M, and the total number of pixels in the horizontal direction of D_tar, T_tar and T_ref is N;
2. Let E_tar denote a difference map of the same size as D_tar, and denote the pixel value of the pixel at coordinate (x, y) in E_tar as E_tar(x, y). When the auxiliary viewpoint lies to the left of the viewpoint of D_tar and T_tar, judge whether y + d_tar,p(x, y) is greater than N; if so, set E_tar(x, y) = 0; otherwise let u = x, v = y + d_tar,p(x, y), and E_tar(x, y) = |I_tar(x, y) − I_ref(u, v)|. When the auxiliary viewpoint lies to the right of the viewpoint of D_tar and T_tar, judge whether y − d_tar,p(x, y) is less than 1; if so, set E_tar(x, y) = 0; otherwise let u = x, v = y − d_tar,p(x, y), and E_tar(x, y) = |I_tar(x, y) − I_ref(u, v)|. Here 1 ≤ x ≤ M, 1 ≤ y ≤ N, 1 ≤ u ≤ M, 1 ≤ v ≤ N; d_tar,p(x, y) is the disparity value converted from the pixel value of the pixel at coordinate (x, y) in D_tar; the symbol "| |" denotes absolute value; I_tar(x, y) is the luminance component of the pixel at coordinate (x, y) in T_tar, and I_ref(u, v) is the luminance component of the pixel at coordinate (u, v) in T_ref;
3. Let C denote an occlusion mask image of the same size as D_tar, denote the pixel value of the pixel at coordinate (x, y) in C as C(x, y), and initialize the pixel value of every pixel in C to 0. Denote the total number of pixels of T_tar that are mapped by 3D-Warping to coordinate (u, v) in T_ref as N_(u,v). When N_(u,v) = 1, set C(x, y) = 0; when N_(u,v) > 1, the pixel whose depth value is the largest among the N_(u,v) pixels mapped to (u, v) keeps C(x, y) = 0 and the remaining pixels get C(x, y) = 1. Here the value of N_(u,v) is 0, 1 or greater than 1; D_tar(x, y) is the pixel value of the pixel at coordinate (x, y) in D_tar; max() is the maximum function; 1 ≤ x_(u,v),i ≤ M, 1 ≤ y_(u,v),i ≤ N; (x_(u,v),i, y_(u,v),i) is the coordinate in T_tar of the i-th of the N_(u,v) pixels of T_tar mapped by 3D-Warping to coordinate (u, v) in T_ref, and D_tar(x_(u,v),i, y_(u,v),i) is the pixel value of the pixel at coordinate (x_(u,v),i, y_(u,v),i) in D_tar;
4. Use C to remove the occluded pixels from E_tar, obtaining the de-occluded difference map, denoted E'_tar. Denote the pixel value of the pixel at coordinate (x, y) in E'_tar as E'_tar(x, y), with E'_tar(x, y) = E_tar(x, y) × (1 − C(x, y));
5. Compute the texture estimation factor of every pixel in T_ref, and denote the texture estimation factor of the pixel at coordinate (u, v) in T_ref as z(u, v). Here 1 ≤ u ≤ M and 1 ≤ v ≤ N; z_h(u, v) is the horizontal texture estimation factor of the pixel at coordinate (u, v) in T_ref and takes the value 1 or 0: z_h(u, v) = 1 means the pixel at coordinate (u, v) in T_ref is a texture pixel in the horizontal direction, and z_h(u, v) = 0 means it is a non-texture pixel in the horizontal direction; z_v(u, v) is the vertical texture estimation factor of the pixel at coordinate (u, v) in T_ref and takes the value 1 or 0: z_v(u, v) = 1 means the pixel at coordinate (u, v) in T_ref is a texture pixel in the vertical direction, and z_v(u, v) = 0 means it is a non-texture pixel in the vertical direction;
6. Let T denote a region label map of the same size as T_ref, denote the pixel value of the pixel at coordinate (u, v) in T as T(u, v), and initialize the pixel value of every pixel in T to 0. Detect the edge regions in T_ref with the Canny operator; if the pixel at coordinate (u, v) in T_ref belongs to an edge region, set T(u, v) = 1. If the texture estimation factor of the pixel at coordinate (u, v) in T_ref satisfies z(u, v) = 1 and T(u, v) = 0, the pixel at coordinate (u, v) in T_ref is judged to belong to a texture region and T(u, v) is set to 2. Here the value of T(u, v) is 0, 1 or 2: T(u, v) = 0 means the pixel at coordinate (u, v) in T_ref belongs to a flat region, T(u, v) = 1 means it belongs to an edge region, and T(u, v) = 2 means it belongs to a texture region;
7. Introduce a JND model based on luminance masking and texture masking effects; using the JND model, and according to the region to which each pixel of T_ref belongs, compute the error visibility threshold of every pixel in T_ref, and denote the error visibility threshold of the pixel at coordinate (u, v) in T_ref as Th(u, v). Here max() is the maximum function and min() is the minimum function; bg(u, v) is the average background luminance of the pixel at coordinate (u, v) in T_ref; mg(u, v) is the maximum weighted average of the luminance around the pixel at coordinate (u, v) in T_ref; LA(u, v) is the luminance masking effect of the pixel at coordinate (u, v) in T_ref; f(bg(u, v), mg(u, v)) = mg(u, v) × α(bg(u, v)) + β(bg(u, v)), with α(bg(u, v)) = bg(u, v) × 0.0001 + 0.115 and β(bg(u, v)) = 0.5 − bg(u, v) × 0.01;
8. Let E denote a depth error map of the same size as D_tar, and denote the pixel value of the pixel at coordinate (x, y) in E as E(x, y). When E'_tar(x, y) = 0, E(x, y) = 0; when E'_tar(x, y) ≠ 0, E(x, y) = 1 if E'_tar(x, y) exceeds the error visibility threshold Th(u, v) at the mapped position, and E(x, y) = 0 otherwise. Here V(x, y) = (u, v) denotes the mapping process, (x, y) being the coordinate of a pixel in T_tar and (u, v) the coordinate of a pixel in T_ref: when the viewpoint of T_ref lies to the left of the viewpoint of T_tar, u = x and v = y + d_tar,p(x, y); when the viewpoint of T_ref lies to the right of the viewpoint of T_tar, u = x and v = y − d_tar,p(x, y);
9. Count the total number of pixels in E whose pixel value is 1 and denote it as numE; then compute the ratio of erroneous pixels in D_tar as the quality evaluation value of D_tar, denoted EPR.
In step 1, the specific process of converting the pixel values of all pixels in D_tar into disparity values is: for the pixel at coordinate (x, y) in D_tar, denote the disparity value converted from its pixel value as d_tar,p(x, y). Here 1 ≤ x ≤ M and 1 ≤ y ≤ N, b is the baseline distance between the cameras, f is the focal length of the cameras, Z_near is the nearest actual depth of field, Z_far is the farthest actual depth of field, and D_tar(x, y) is the pixel value of the pixel at coordinate (x, y) in D_tar.
In step 5, the acquisition process of z_h(u, v) and z_v(u, v) is:
5_1. Compute the horizontal differential signal of every pixel in T_ref, and denote the horizontal differential signal of the pixel at coordinate (u, v) in T_ref as d_h(u, v), where I_ref(u, v+1) is the luminance component of the pixel at coordinate (u, v+1) in T_ref;
5_2. Compute the sign of the horizontal differential signal of every pixel in T_ref, and denote the sign of d_h(u, v) as symd_h(u, v);
5_3. Compute z_h(u, v), where dhsym(u, v) is an intermediate variable and symd_h(u, v+1) is the sign of the horizontal differential signal of the pixel at coordinate (u, v+1) in T_ref;
5_4. Compute the vertical differential signal of every pixel in T_ref, and denote the vertical differential signal of the pixel at coordinate (u, v) in T_ref as d_v(u, v), where I_ref(u+1, v) is the luminance component of the pixel at coordinate (u+1, v) in T_ref;
5_5. Compute the sign of the vertical differential signal of every pixel in T_ref, and denote the sign of d_v(u, v) as symd_v(u, v);
5_6. Compute z_v(u, v), where dvsym(u, v) is an intermediate variable and symd_v(u+1, v) is the sign of the vertical differential signal of the pixel at coordinate (u+1, v) in T_ref.
Compared with the prior art, the advantages of the present invention are as follows:
1) the method for the present invention has fully considered that effect of the depth map in virtual viewpoint rendering, depth map are not used to directly Viewing, and is to provide location of pixels offset information, therefore virtual view distortion caused by being distorted with depth is distorted come mark depths Region is more reasonable.
2) the method for the present invention has furtherd investigate influence of the depth map distortion to virtual view quality, and depth distortion can cause profit The offset of location of pixels and object distort in the virtual view drawn with the depth information, and the brightness value of respective pixel is wrong Accidentally, the distortion of general depth is more serious, and the brightness value error of virtual view pixel is bigger, so as to by the bright of virtual view pixel Error flag(s) of the error amount as corresponding depth pixel is spent, differential chart is obtained.
3) The method of the present invention fully considers the occlusion of boundary pixels during virtual view rendering. After the pixels of the color image are mapped to the auxiliary viewpoint by 3D-Warping, pixels near object boundaries that are closer to the imaging plane may occlude pixels farther from the imaging plane. Since the distortion of occluded pixels has no influence on the quality of the final virtual view, these occluded pixels are marked to obtain the occlusion mask and their error marks are removed from the difference map, which makes the depth map quality evaluation result more consistent with the objective virtual view quality.
4) the method for the present invention has fully considered human-eye visual characteristic, by the coloured image on auxiliary view be divided into edge, Three parts of texture and flat site, it is each to obtain different piece using the JND model based on brightness masking and texture masking effect The error visual threshold value of pixel, in the differential chart after going to block by mapping after be less than the error mark of corresponding error visual threshold value Note removal, obtains final depth error figure, depth map quality evaluation result is made to be more in line with human eye characteristic.
Description of the drawings
Fig. 1 is the overall block diagram of the method of the present invention;
Fig. 2 is a schematic diagram of the cross-validation process;
Fig. 3 is a schematic diagram of occlusion;
Fig. 4a is the depth map of the 2nd viewpoint of the Cones sequence estimated by the AdaptBP method;
Fig. 4b is the color image corresponding to the depth map shown in Fig. 4a;
Fig. 4c is the color image of the 3rd viewpoint of the Cones sequence;
Fig. 4d is the difference map of the depth map shown in Fig. 4a obtained after cross-validation;
Fig. 5a is the occlusion mask image obtained by mapping the pixel values of the pixels of the depth map shown in Fig. 4a to the 3rd viewpoint of the Cones sequence;
Fig. 5b is the de-occluded difference map corresponding to the depth map shown in Fig. 4a;
Fig. 5c is the depth error map corresponding to the depth map shown in Fig. 4a.
Specific implementation mode
The present invention is described in further detail below in conjunction with the drawings and an embodiment.
The cross-validation depth map quality evaluation method combined with a JND model proposed by the present invention has the overall block diagram shown in Fig. 1 and comprises the following steps:
1. Denote the depth map to be evaluated as D_tar and its corresponding color image as T_tar; define another known viewpoint, other than the viewpoint of D_tar and T_tar, as the auxiliary viewpoint, and denote the color image of the auxiliary viewpoint as T_ref. Then convert the pixel value of every pixel in D_tar into a disparity value, and map every pixel of T_tar into T_ref by 3D-Warping. Here the total number of pixels in the vertical direction of D_tar, T_tar and T_ref is M, and the total number of pixels in the horizontal direction of D_tar, T_tar and T_ref is N.
In this particular embodiment, the specific process of converting the pixel values of all pixels in D_tar into disparity values in step 1 is: for the pixel at coordinate (x, y) in D_tar, denote the disparity value converted from its pixel value as d_tar,p(x, y). Here 1 ≤ x ≤ M and 1 ≤ y ≤ N, b is the baseline distance between the cameras, f is the focal length of the cameras, Z_near is the nearest actual depth of field, Z_far is the farthest actual depth of field, and D_tar(x, y) is the pixel value of the pixel at coordinate (x, y) in D_tar.
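A minimal Python sketch of this depth-to-disparity conversion, assuming the standard quantization of inverse depth to [0, 255] with the quantities b, f, Z_near and Z_far named above (the exact expression of the filing is given only as a figure, so this form is an assumption):

    import numpy as np

    def depth_to_disparity(D_tar, f, b, z_near, z_far):
        """Convert an 8-bit depth map to per-pixel disparity values d_tar,p(x, y).

        Assumes the common quantization of inverse depth to [0, 255]:
        1/Z = D/255 * (1/Z_near - 1/Z_far) + 1/Z_far, then d = f * b / Z.
        """
        D = D_tar.astype(np.float64)
        inv_z = D / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
        return f * b * inv_z  # disparity in pixels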
2. Let E_tar denote a difference map of the same size as D_tar, and denote the pixel value of the pixel at coordinate (x, y) in E_tar as E_tar(x, y). When the auxiliary viewpoint lies to the left of the viewpoint of D_tar and T_tar, judge whether y + d_tar,p(x, y) is greater than N; if so, set E_tar(x, y) = 0; otherwise let u = x, v = y + d_tar,p(x, y), and E_tar(x, y) = |I_tar(x, y) − I_ref(u, v)|. When the auxiliary viewpoint lies to the right of the viewpoint of D_tar and T_tar, judge whether y − d_tar,p(x, y) is less than 1; if so, set E_tar(x, y) = 0; otherwise let u = x, v = y − d_tar,p(x, y), and E_tar(x, y) = |I_tar(x, y) − I_ref(u, v)|. Here 1 ≤ x ≤ M, 1 ≤ y ≤ N, 1 ≤ u ≤ M, 1 ≤ v ≤ N; d_tar,p(x, y) is the disparity value converted from the pixel value of the pixel at coordinate (x, y) in D_tar; the symbol "| |" denotes absolute value; I_tar(x, y) is the luminance component of the pixel at coordinate (x, y) in T_tar, and I_ref(u, v) is the luminance component of the pixel at coordinate (u, v) in T_ref.
The process of mapping all pixels of T_tar into T_ref by 3D-Warping in step 1, together with step 2, constitutes the cross-validation process. Fig. 2 gives a schematic diagram of this process, where T_l, T_r, D_l and D_r denote the left-view color image, the right-view color image, the left-view depth map and the right-view depth map, respectively. To obtain the difference map corresponding to the left-view depth map, cross-validation is carried out with the right-view color image as auxiliary information. The pixel at coordinate (x_l, y_l) in T_l has luminance value I_l1; using the depth information in D_l it is mapped by 3D-Warping onto the right-view color image T_r. If it falls outside the image range, the pixel at coordinate (x_l, y_l) in L_d is set to 0; if it is mapped to the pixel at coordinate (x_lr, y_l) in T_r, whose luminance value is I_r1, the difference of the two luminance values |I_l1 − I_r1| is assigned to the pixel at coordinate (x_l, y_l) in L_d, and L_d is the difference map corresponding to the left-view depth map. Similarly, to obtain the difference map corresponding to the right-view depth map, cross-validation is carried out with the left-view color image T_l as auxiliary information, giving the difference map R_d corresponding to the right-view depth map.
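A minimal Python sketch of the cross-validation of step 2, assuming a purely horizontal warp and rounding of d_tar,p(x, y) to an integer column offset:

    import numpy as np

    def cross_validation_diff(I_tar, I_ref, disp, aux_is_left):
        """Difference map E_tar of step 2.

        I_tar, I_ref : luminance images (M x N) of T_tar and T_ref.
        disp         : disparity map d_tar,p converted from D_tar.
        aux_is_left  : True if the auxiliary viewpoint is to the left of T_tar.
        """
        M, N = I_tar.shape
        E_tar = np.zeros((M, N), dtype=np.float64)
        sign = 1 if aux_is_left else -1
        for x in range(M):
            for y in range(N):
                v = y + sign * int(round(disp[x, y]))
                if 0 <= v < N:  # mapped inside T_ref: keep the luminance difference
                    E_tar[x, y] = abs(float(I_tar[x, y]) - float(I_ref[x, v]))
                # otherwise the pixel maps outside the image and E_tar stays 0
        return E_tar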
The depth map of the 2nd viewpoint of the Cones sequence estimated by the AdaptBP method is used as the depth map to be evaluated, as shown in Fig. 4a; Fig. 4b is the color image corresponding to the depth map shown in Fig. 4a; the color image of the 3rd viewpoint of the Cones sequence is used as the color image of the auxiliary viewpoint, as shown in Fig. 4c; the difference map obtained after cross-validation is shown in Fig. 4d.
3. Let C denote an occlusion mask image of the same size as D_tar, denote the pixel value of the pixel at coordinate (x, y) in C as C(x, y), and initialize the pixel value of every pixel in C to 0. Denote the total number of pixels of T_tar that are mapped by 3D-Warping to coordinate (u, v) in T_ref as N_(u,v). When N_(u,v) = 1, set C(x, y) = 0; when N_(u,v) > 1, the pixel whose depth value is the largest among the N_(u,v) pixels mapped to (u, v) keeps C(x, y) = 0 and the remaining pixels get C(x, y) = 1. Here the value of N_(u,v) is 0, 1 or greater than 1; D_tar(x, y) is the pixel value of the pixel at coordinate (x, y) in D_tar; max() is the maximum function; 1 ≤ x_(u,v),i ≤ M, 1 ≤ y_(u,v),i ≤ N; (x_(u,v),i, y_(u,v),i) is the coordinate in T_tar of the i-th of the N_(u,v) pixels of T_tar mapped by 3D-Warping to coordinate (u, v) in T_ref, and D_tar(x_(u,v),i, y_(u,v),i) is the pixel value of the pixel at coordinate (x_(u,v),i, y_(u,v),i) in D_tar.
Fig. 3 gives a schematic diagram of occlusion, with the foreground and background boundary points of the left and right reference views labeled from left to right. During 3D-Warping, the boundary points of the left reference view are mapped into the virtual view rendered from it, and likewise the boundary points of the right reference view are mapped into the virtual view rendered from it. In the left reference virtual view there is a segment that receives both foreground pixels and background pixels mapped from the left reference image; likewise, in the right reference virtual view there is a segment that receives both foreground pixels and background pixels mapped from the right reference image. In these segments foreground pixels occlude background pixels, and in the difference map obtained after cross-validation the occluded background pixels are also marked; since this is not caused by depth value errors, this part of the background pixels has to be removed. For pixels mapped to the same position after 3D-Warping, their depth values are compared, the pixel with the largest depth value is retained, and the remaining pixels are marked, giving the occlusion mask image.
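A Python sketch of how the occlusion mask C of step 3 can be built from the same warp, assuming that pixels colliding at the same position of T_ref are resolved by keeping the one with the largest depth value, as described above, and that ties keep the first pixel encountered:

    import numpy as np

    def occlusion_mask(D_tar, disp, aux_is_left):
        """Occlusion mask C of step 3: C[x, y] = 1 marks a pixel of T_tar that is
        occluded by another pixel mapping to the same position of T_ref."""
        M, N = D_tar.shape
        C = np.zeros((M, N), dtype=np.uint8)
        sign = 1 if aux_is_left else -1
        # best[x, v] stores, per target column v of row x, the source column of the
        # pixel with the largest depth value seen so far (larger value = nearer).
        best = -np.ones((M, N), dtype=np.int64)
        for x in range(M):
            for y in range(N):
                v = y + sign * int(round(disp[x, y]))
                if not (0 <= v < N):
                    continue
                y_prev = best[x, v]
                if y_prev < 0:
                    best[x, v] = y
                elif D_tar[x, y] > D_tar[x, y_prev]:
                    C[x, y_prev] = 1      # previously kept pixel is now occluded
                    best[x, v] = y
                else:
                    C[x, y] = 1           # current pixel is occluded
        return C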
With the depth map shown in Fig. 4a as the depth map to be evaluated and the color image shown in Fig. 4c as the color image of the auxiliary viewpoint, the occlusion mask image obtained is shown in Fig. 5a.
4. Use C to remove the occluded pixels from E_tar, obtaining the de-occluded difference map, denoted E'_tar. Denote the pixel value of the pixel at coordinate (x, y) in E'_tar as E'_tar(x, y), with E'_tar(x, y) = E_tar(x, y) × (1 − C(x, y)).
Fig. 5b is the difference map obtained after removing from the difference map shown in Fig. 4d the pixels marked as occluded in the occlusion mask image shown in Fig. 5a.
5. Compute the texture estimation factor of every pixel in T_ref, and denote the texture estimation factor of the pixel at coordinate (u, v) in T_ref as z(u, v). Here 1 ≤ u ≤ M and 1 ≤ v ≤ N; z_h(u, v) is the horizontal texture estimation factor of the pixel at coordinate (u, v) in T_ref and takes the value 1 or 0: z_h(u, v) = 1 means the pixel at coordinate (u, v) in T_ref is a texture pixel in the horizontal direction, and z_h(u, v) = 0 means it is a non-texture pixel in the horizontal direction; z_v(u, v) is the vertical texture estimation factor of the pixel at coordinate (u, v) in T_ref and takes the value 1 or 0: z_v(u, v) = 1 means the pixel at coordinate (u, v) in T_ref is a texture pixel in the vertical direction, and z_v(u, v) = 0 means it is a non-texture pixel in the vertical direction.
In this particular embodiment, the acquisition process of z_h(u, v) and z_v(u, v) in step 5 is:
5_1. Compute the horizontal differential signal of every pixel in T_ref, and denote the horizontal differential signal of the pixel at coordinate (u, v) in T_ref as d_h(u, v), where I_ref(u, v+1) is the luminance component of the pixel at coordinate (u, v+1) in T_ref.
5_2. Compute the sign of the horizontal differential signal of every pixel in T_ref, and denote the sign of d_h(u, v) as symd_h(u, v).
5_3. Compute z_h(u, v), where dhsym(u, v) is an intermediate variable and symd_h(u, v+1) is the sign of the horizontal differential signal of the pixel at coordinate (u, v+1) in T_ref.
5_4. Compute the vertical differential signal of every pixel in T_ref, and denote the vertical differential signal of the pixel at coordinate (u, v) in T_ref as d_v(u, v), where I_ref(u+1, v) is the luminance component of the pixel at coordinate (u+1, v) in T_ref.
5_5. Compute the sign of the vertical differential signal of every pixel in T_ref, and denote the sign of d_v(u, v) as symd_v(u, v).
5_6. Compute z_v(u, v), where dvsym(u, v) is an intermediate variable and symd_v(u+1, v) is the sign of the vertical differential signal of the pixel at coordinate (u+1, v) in T_ref.
6. Let T denote a region label map of the same size as T_ref, denote the pixel value of the pixel at coordinate (u, v) in T as T(u, v), and initialize the pixel value of every pixel in T to 0. Detect the edge regions in T_ref with the Canny operator; if the pixel at coordinate (u, v) in T_ref belongs to an edge region, set T(u, v) = 1. If the texture estimation factor of the pixel at coordinate (u, v) in T_ref satisfies z(u, v) = 1 and T(u, v) = 0, the pixel at coordinate (u, v) in T_ref is judged to belong to a texture region and T(u, v) is set to 2. Here the value of T(u, v) is 0, 1 or 2: T(u, v) = 0 means the pixel at coordinate (u, v) in T_ref belongs to a flat region, T(u, v) = 1 means it belongs to an edge region, and T(u, v) = 2 means it belongs to a texture region.
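A Python sketch of the region labelling of step 6 using OpenCV's Canny detector; the Canny thresholds and the combination of z_h and z_v into z(u, v) by a logical OR are assumptions, since the corresponding formula is given only as a figure:

    import cv2
    import numpy as np

    def region_label_map(I_ref_uint8, z_h, z_v, canny_lo=50, canny_hi=150):
        """Region label map T of step 6: 0 = flat, 1 = edge, 2 = texture.
        canny_lo/canny_hi and the OR-combination of z_h and z_v are assumptions."""
        T = np.zeros(I_ref_uint8.shape, dtype=np.uint8)
        edges = cv2.Canny(I_ref_uint8, canny_lo, canny_hi)   # edge regions of T_ref
        T[edges > 0] = 1
        z = (z_h == 1) | (z_v == 1)                           # assumed combination into z(u, v)
        T[z & (T == 0)] = 2                                   # texture only where not already edge
        return T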
7. Introduce a JND model based on luminance masking and texture masking effects; using the JND model, and according to the region to which each pixel of T_ref belongs, compute the error visibility threshold of every pixel in T_ref, and denote the error visibility threshold of the pixel at coordinate (u, v) in T_ref as Th(u, v). Here max() is the maximum function and min() is the minimum function; bg(u, v) is the average background luminance of the pixel at coordinate (u, v) in T_ref, calculated with a weighted low-pass operator; mg(u, v) is the maximum weighted average of the luminance around the pixel at coordinate (u, v) in T_ref; LA(u, v) is the luminance masking effect of the pixel at coordinate (u, v) in T_ref; f(bg(u, v), mg(u, v)) = mg(u, v) × α(bg(u, v)) + β(bg(u, v)), with α(bg(u, v)) = bg(u, v) × 0.0001 + 0.115 and β(bg(u, v)) = 0.5 − bg(u, v) × 0.01.
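The expressions of Th(u, v) and LA(u, v) are given only as figures in the filing; only f, α and β are reproduced above. The Python sketch below therefore completes the missing pieces with the classical Chou-and-Li spatial JND model, from which these α/β expressions originate: bg as a 5x5 weighted local mean, mg as the maximum response of four directional gradient operators, LA as the standard luminance-adaptation curve, and the final threshold as the larger of the luminance and texture masking terms with an illustrative region-dependent weight on the texture term. All of these completions are assumptions:

    import cv2
    import numpy as np

    def jnd_threshold(I_ref, T, w_region=(1.0, 0.1, 1.0)):
        """Error visibility threshold Th(u, v) of step 7, completed with the
        Chou-and-Li spatial JND model. The region weights w_region for
        (flat, edge, texture) are purely illustrative assumptions."""
        I = I_ref.astype(np.float64)

        # Average background luminance bg(u, v): 5x5 weighted low-pass operator.
        B = np.array([[1, 1, 1, 1, 1],
                      [1, 2, 2, 2, 1],
                      [1, 2, 0, 2, 1],
                      [1, 2, 2, 2, 1],
                      [1, 1, 1, 1, 1]], dtype=np.float64) / 32.0
        bg = cv2.filter2D(I, -1, B)

        # mg(u, v): maximum response of four directional weighted gradient operators
        # (simplified here from the original model's set).
        G = np.array([[0, 0, 0, 0, 0],
                      [1, 3, 8, 3, 1],
                      [0, 0, 0, 0, 0],
                      [-1, -3, -8, -3, -1],
                      [0, 0, 0, 0, 0]], dtype=np.float64) / 16.0
        kernels = [G, G.T, np.rot90(G), np.rot90(G).T]
        mg = np.max([np.abs(cv2.filter2D(I, -1, k)) for k in kernels], axis=0)

        # Luminance adaptation LA(u, v) (assumed Chou-and-Li form of the missing figure).
        LA = np.where(bg <= 127,
                      17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                      3.0 / 128.0 * (bg - 127.0) + 3.0)

        # Texture masking term from the alpha/beta expressions given in the text.
        f = mg * (bg * 0.0001 + 0.115) + (0.5 - bg * 0.01)

        # Assumed combination: per-region weight on the texture term, then the larger term.
        w = np.choose(T, w_region)
        return np.maximum(LA, w * f)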
8. Let E denote a depth error map of the same size as D_tar, and denote the pixel value of the pixel at coordinate (x, y) in E as E(x, y). When E'_tar(x, y) = 0, E(x, y) = 0; when E'_tar(x, y) ≠ 0, E(x, y) = 1 if E'_tar(x, y) exceeds the error visibility threshold Th(u, v) at the mapped position, and E(x, y) = 0 otherwise. Here V(x, y) = (u, v) denotes the mapping process, (x, y) being the coordinate of a pixel in T_tar and (u, v) the coordinate of a pixel in T_ref: when the viewpoint of T_ref lies to the left of the viewpoint of T_tar, u = x and v = y + d_tar,p(x, y); when the viewpoint of T_ref lies to the right of the viewpoint of T_tar, u = x and v = y − d_tar,p(x, y).
Fig. 5c is the depth error map of Fig. 4a obtained after removing from Fig. 5b the pixels whose values are below the corresponding error visibility thresholds.
9. Count the total number of pixels in E whose pixel value is 1 and denote it as numE; then compute the ratio of erroneous pixels in D_tar, i.e. numE relative to the total number of pixels, as the quality evaluation value of D_tar, denoted EPR.
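A Python sketch of steps 8 and 9, assuming that an error mark is kept when it strictly exceeds the error visibility threshold at its mapped position and that EPR is normalized by the total number of pixels M × N:

    import numpy as np

    def depth_error_map_and_epr(E_tar_deocc, Th, disp, aux_is_left):
        """Depth error map E (step 8) and quality evaluation value EPR (step 9).

        E_tar_deocc : de-occluded difference map E'_tar.
        Th          : error visibility threshold map of T_ref.
        disp        : disparity map d_tar,p of the depth map under test.
        aux_is_left : True if T_ref lies to the left of T_tar.
        """
        M, N = E_tar_deocc.shape
        E = np.zeros((M, N), dtype=np.uint8)
        sign = 1 if aux_is_left else -1
        for x in range(M):
            for y in range(N):
                if E_tar_deocc[x, y] == 0:
                    continue
                v = y + sign * int(round(disp[x, y]))
                if 0 <= v < N and E_tar_deocc[x, y] > Th[x, v]:
                    E[x, y] = 1                 # error visible to the human eye
        num_e = int(E.sum())
        epr = num_e / float(M * N)              # proportion of erroneous pixels
        return E, epr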
To test the performance of the method of the present invention, depth maps estimated by various algorithms provided with the Middlebury database were used. Four scenes were selected: "Tsukuba", "Venus", "Teddy" and "Cones". For each scene, the depth maps of the 2nd viewpoint estimated by nine different types of stereo matching algorithm were used, so the evaluation database consists of 36 depth maps in total. The nine stereo matching algorithms are: AdaptBP, WarpMat, P-LinearS, VSW, BPcompressed, Layered, SNCC, ReliabilityDP and Infection.
Table 1 gives, for "Tsukuba", "Venus", "Teddy" and "Cones" in the evaluation database, the values of the full-reference objective quality index PBMP (Percentage of Bad Matching Pixels). PBMP computes the error by comparing the estimated depth map with the undistorted reference depth map: a pixel is considered erroneous if its disparity error is larger than one pixel. Because it uses the undistorted depth map as reference, PBMP is an accurate and reliable full-reference index.
Table 1. PBMP (%) of the different depth maps in the evaluation database
Method Tsukuba Venus Teddy Cones
AdaptBP 1.37 0.21 7.06 7.92
WarpMat 1.35 0.24 9.30 8.47
P-LinearS 1.67 0.89 12.00 8.44
VSW 1.88 0.81 13.3 8.85
BPcompressed 3.63 1.89 13.9 9.85
Layered 1.87 1.85 14.3 14.70
SNCC 6.08 1.73 11.10 9.02
ReliabilityDP 3.39 3.48 16.90 19.90
Infection 9.54 5.53 25.10 21.30
Table 2 gives the quality evaluation values obtained by the method of the present invention for "Tsukuba", "Venus", "Teddy" and "Cones" in the evaluation database. Table 3 gives the correlation coefficients between the evaluation result of the method and the full-reference index PBMP; the correlation coefficient measures the degree of consistency between the two, and both the Pearson coefficient and the linear regression coefficient are better the closer they are to 1. Table 3 shows that the results obtained by the method have good consistency with PBMP, which indicates that the method can accurately detect depth errors and evaluate depth map quality.
Table 2. Quality evaluation values EPR (%) of the different depth maps in the evaluation database
Table 3. Correlation between the quality evaluation value EPR and PBMP
Tsukuba Venus Teddy Cones
Pearson's coefficient 0.94 0.90 0.84 0.97
Linear regression coeffficient 0.89 0.80 0.71 0.93
Table 4 gives the correlation coefficients between the evaluation result of the method of the present invention and the virtual view quality, where the virtual view quality is measured with the objective index mean squared error (MSE). Because virtual view synthesis is based on the depth map, poorer depth quality leads to more errors in the virtual view; MSE should therefore increase as the quality evaluation value EPR increases, and the linear regression coefficient between MSE and EPR is used to indicate the accuracy of the measure. For "Tsukuba", "Venus", "Teddy" and "Cones", the linear regression coefficients between EPR and MSE all exceed 0.75; in particular, for "Tsukuba" the linear regression coefficient exceeds 0.92. This shows that the quality evaluation value EPR has good consistency with the virtual view quality.
Table 4. Correlation between the quality evaluation value EPR and the virtual view quality
Tsukuba Venus Teddy Cones
Linear regression coeffficient 0.93 0.76 0.84 0.91

Claims (3)

1. A cross-validation depth map quality evaluation method combined with a JND model, characterized by comprising the following steps:
1. Denote the depth map to be evaluated as D_tar and its corresponding color image as T_tar; define another known viewpoint, other than the viewpoint of D_tar and T_tar, as the auxiliary viewpoint, and denote the color image of the auxiliary viewpoint as T_ref. Then convert the pixel value of every pixel in D_tar into a disparity value, and map every pixel of T_tar into T_ref by 3D-Warping. Here the total number of pixels in the vertical direction of D_tar, T_tar and T_ref is M, and the total number of pixels in the horizontal direction of D_tar, T_tar and T_ref is N;
2. Let E_tar denote a difference map of the same size as D_tar, and denote the pixel value of the pixel at coordinate (x, y) in E_tar as E_tar(x, y). When the auxiliary viewpoint lies to the left of the viewpoint of D_tar and T_tar, judge whether y + d_tar,p(x, y) is greater than N; if so, set E_tar(x, y) = 0; otherwise let u = x, v = y + d_tar,p(x, y), and E_tar(x, y) = |I_tar(x, y) − I_ref(u, v)|. When the auxiliary viewpoint lies to the right of the viewpoint of D_tar and T_tar, judge whether y − d_tar,p(x, y) is less than 1; if so, set E_tar(x, y) = 0; otherwise let u = x, v = y − d_tar,p(x, y), and E_tar(x, y) = |I_tar(x, y) − I_ref(u, v)|. Here 1 ≤ x ≤ M, 1 ≤ y ≤ N, 1 ≤ u ≤ M, 1 ≤ v ≤ N; d_tar,p(x, y) is the disparity value converted from the pixel value of the pixel at coordinate (x, y) in D_tar; the symbol "| |" denotes absolute value; I_tar(x, y) is the luminance component of the pixel at coordinate (x, y) in T_tar, and I_ref(u, v) is the luminance component of the pixel at coordinate (u, v) in T_ref;
3. Let C denote an occlusion mask image of the same size as D_tar, denote the pixel value of the pixel at coordinate (x, y) in C as C(x, y), and initialize the pixel value of every pixel in C to 0. Denote the total number of pixels of T_tar that are mapped by 3D-Warping to coordinate (u, v) in T_ref as N_(u,v). When N_(u,v) = 1, set C(x, y) = 0; when N_(u,v) > 1, the pixel whose depth value is the largest among the N_(u,v) pixels mapped to (u, v) keeps C(x, y) = 0 and the remaining pixels get C(x, y) = 1. Here the value of N_(u,v) is 0, 1 or greater than 1; D_tar(x, y) is the pixel value of the pixel at coordinate (x, y) in D_tar; max() is the maximum function; 1 ≤ x_(u,v),i ≤ M, 1 ≤ y_(u,v),i ≤ N; (x_(u,v),i, y_(u,v),i) is the coordinate in T_tar of the i-th of the N_(u,v) pixels of T_tar mapped by 3D-Warping to coordinate (u, v) in T_ref, and D_tar(x_(u,v),i, y_(u,v),i) is the pixel value of the pixel at coordinate (x_(u,v),i, y_(u,v),i) in D_tar;
4. Use C to remove the occluded pixels from E_tar, obtaining the de-occluded difference map, denoted E'_tar. Denote the pixel value of the pixel at coordinate (x, y) in E'_tar as E'_tar(x, y), with E'_tar(x, y) = E_tar(x, y) × (1 − C(x, y));
5. Compute the texture estimation factor of every pixel in T_ref, and denote the texture estimation factor of the pixel at coordinate (u, v) in T_ref as z(u, v). Here 1 ≤ u ≤ M and 1 ≤ v ≤ N; z_h(u, v) is the horizontal texture estimation factor of the pixel at coordinate (u, v) in T_ref and takes the value 1 or 0: z_h(u, v) = 1 means the pixel at coordinate (u, v) in T_ref is a texture pixel in the horizontal direction, and z_h(u, v) = 0 means it is a non-texture pixel in the horizontal direction; z_v(u, v) is the vertical texture estimation factor of the pixel at coordinate (u, v) in T_ref and takes the value 1 or 0: z_v(u, v) = 1 means the pixel at coordinate (u, v) in T_ref is a texture pixel in the vertical direction, and z_v(u, v) = 0 means it is a non-texture pixel in the vertical direction;
6. Let T denote a region label map of the same size as T_ref, denote the pixel value of the pixel at coordinate (u, v) in T as T(u, v), and initialize the pixel value of every pixel in T to 0. Detect the edge regions in T_ref with the Canny operator; if the pixel at coordinate (u, v) in T_ref belongs to an edge region, set T(u, v) = 1. If the texture estimation factor of the pixel at coordinate (u, v) in T_ref satisfies z(u, v) = 1 and T(u, v) = 0, the pixel at coordinate (u, v) in T_ref is judged to belong to a texture region and T(u, v) is set to 2. Here the value of T(u, v) is 0, 1 or 2: T(u, v) = 0 means the pixel at coordinate (u, v) in T_ref belongs to a flat region, T(u, v) = 1 means it belongs to an edge region, and T(u, v) = 2 means it belongs to a texture region;
7. Introduce a JND model based on luminance masking and texture masking effects; using the JND model, and according to the region to which each pixel of T_ref belongs, compute the error visibility threshold of every pixel in T_ref, and denote the error visibility threshold of the pixel at coordinate (u, v) in T_ref as Th(u, v). Here max() is the maximum function and min() is the minimum function; bg(u, v) is the average background luminance of the pixel at coordinate (u, v) in T_ref; mg(u, v) is the maximum weighted average of the luminance around the pixel at coordinate (u, v) in T_ref; LA(u, v) is the luminance masking effect of the pixel at coordinate (u, v) in T_ref;
f(bg(u, v), mg(u, v)) = mg(u, v) × α(bg(u, v)) + β(bg(u, v)), with α(bg(u, v)) = bg(u, v) × 0.0001 + 0.115 and β(bg(u, v)) = 0.5 − bg(u, v) × 0.01;
8. Let E denote a depth error map of the same size as D_tar, and denote the pixel value of the pixel at coordinate (x, y) in E as E(x, y). When E'_tar(x, y) = 0, E(x, y) = 0; when E'_tar(x, y) ≠ 0, E(x, y) = 1 if E'_tar(x, y) exceeds the error visibility threshold Th(u, v) at the mapped position, and E(x, y) = 0 otherwise. Here V(x, y) = (u, v) denotes the mapping process, (x, y) being the coordinate of a pixel in T_tar and (u, v) the coordinate of a pixel in T_ref: when the viewpoint of T_ref lies to the left of the viewpoint of T_tar, u = x and v = y + d_tar,p(x, y); when the viewpoint of T_ref lies to the right of the viewpoint of T_tar, u = x and v = y − d_tar,p(x, y);
9. Count the total number of pixels in E whose pixel value is 1 and denote it as numE; then compute the ratio of erroneous pixels in D_tar as the quality evaluation value of D_tar, denoted EPR.
2. The cross-validation depth map quality evaluation method combined with a JND model according to claim 1, characterized in that the specific process of converting the pixel values of all pixels in D_tar into disparity values in step 1 is: for the pixel at coordinate (x, y) in D_tar, denote the disparity value converted from its pixel value as d_tar,p(x, y); here 1 ≤ x ≤ M and 1 ≤ y ≤ N, b is the baseline distance between the cameras, f is the focal length of the cameras, Z_near is the nearest actual depth of field, Z_far is the farthest actual depth of field, and D_tar(x, y) is the pixel value of the pixel at coordinate (x, y) in D_tar.
3. The cross-validation depth map quality evaluation method combined with a JND model according to claim 1 or 2, characterized in that the acquisition process of z_h(u, v) and z_v(u, v) in step 5 is:
5_1. Compute the horizontal differential signal of every pixel in T_ref, and denote the horizontal differential signal of the pixel at coordinate (u, v) in T_ref as d_h(u, v), where I_ref(u, v+1) is the luminance component of the pixel at coordinate (u, v+1) in T_ref;
5_2. Compute the sign of the horizontal differential signal of every pixel in T_ref, and denote the sign of d_h(u, v) as symd_h(u, v);
5_3. Compute z_h(u, v), where dhsym(u, v) is an intermediate variable and symd_h(u, v+1) is the sign of the horizontal differential signal of the pixel at coordinate (u, v+1) in T_ref;
5_4. Compute the vertical differential signal of every pixel in T_ref, and denote the vertical differential signal of the pixel at coordinate (u, v) in T_ref as d_v(u, v), where I_ref(u+1, v) is the luminance component of the pixel at coordinate (u+1, v) in T_ref;
5_5. Compute the sign of the vertical differential signal of every pixel in T_ref, and denote the sign of d_v(u, v) as symd_v(u, v);
5_6. Compute z_v(u, v), where dvsym(u, v) is an intermediate variable and symd_v(u+1, v) is the sign of the vertical differential signal of the pixel at coordinate (u+1, v) in T_ref.
CN201710041375.6A 2017-01-20 2017-01-20 Cross-validation depth map quality evaluation method combined with a JND model Active CN106803952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710041375.6A CN106803952B (en) 2017-01-20 2017-01-20 Cross-validation depth map quality evaluation method combined with a JND model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710041375.6A CN106803952B (en) 2017-01-20 2017-01-20 Cross-validation depth map quality evaluation method combined with a JND model

Publications (2)

Publication Number Publication Date
CN106803952A CN106803952A (en) 2017-06-06
CN106803952B (en) 2018-09-14

Family

ID=58987216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710041375.6A Active CN106803952B (en) Cross-validation depth map quality evaluation method combined with a JND model

Country Status (1)

Country Link
CN (1) CN106803952B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544233B (en) * 2019-07-30 2022-03-08 北京的卢深视科技有限公司 Depth image quality evaluation method based on face recognition application
CN110691228A (en) * 2019-10-17 2020-01-14 北京迈格威科技有限公司 Three-dimensional transformation-based depth image noise marking method and device and storage medium
CN111402152B (en) * 2020-03-10 2023-10-24 北京迈格威科技有限公司 Processing method and device of disparity map, computer equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7006568B1 (en) * 1999-05-27 2006-02-28 University Of Maryland, College Park 3D wavelet based video codec with human perceptual model
JP5496914B2 (en) * 2008-01-18 2014-05-21 トムソン ライセンシング How to assess perceptual quality
CN103002306B (en) * 2012-11-27 2015-03-18 宁波大学 Depth image coding method
CN103426173B (en) * 2013-08-12 2017-05-10 浪潮电子信息产业股份有限公司 Objective evaluation method for stereo image quality
CN103957401A (en) * 2014-05-12 2014-07-30 武汉大学 Three-dimensional mixed minimum perceivable distortion model based on depth image rendering
TW201601522A (en) * 2014-06-23 2016-01-01 國立臺灣大學 Perceptual video coding method based on just-noticeable- distortion model
CN104754320B (en) * 2015-03-27 2017-05-31 同济大学 A kind of 3D JND threshold values computational methods
CN104954778B (en) * 2015-06-04 2017-05-24 宁波大学 Objective stereo image quality assessment method based on perception feature set
CN105828061B (en) * 2016-05-11 2017-09-29 宁波大学 A kind of virtual view quality evaluating method of view-based access control model masking effect

Also Published As

Publication number Publication date
CN106803952A (en) 2017-06-06

Similar Documents

Publication Publication Date Title
CN105959684B (en) Stereo image quality evaluation method based on binocular fusion
CN106803952B (en) Cross-validation depth map quality evaluation method combined with a JND model
CN101996407B (en) Colour calibration method for multiple cameras
CN104754322B (en) A kind of three-dimensional video-frequency Comfort Evaluation method and device
CN102665086B (en) Method for obtaining parallax by using region-based local stereo matching
CN106408513B (en) Depth map super resolution ratio reconstruction method
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
WO2017156905A1 (en) Display method and system for converting two-dimensional image into multi-viewpoint image
CN108109147A (en) A kind of reference-free quality evaluation method of blurred picture
CN109345502B (en) Stereo image quality evaluation method based on disparity map stereo structure information extraction
CN103384343B (en) A kind of method and device thereof filling up image cavity
CN108596975A (en) A kind of Stereo Matching Algorithm for weak texture region
CN102523477A (en) Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model
Kytö et al. Improving relative depth judgments in augmented reality with auxiliary augmentations
CN106530336A (en) Stereo matching algorithm based on color information and graph-cut theory
TW201315209A (en) System and method of rendering stereoscopic images
CN111798402A (en) Power equipment temperature measurement data visualization method and system based on three-dimensional point cloud model
Schmeing et al. Time-consistency of disocclusion filling algorithms in depth image based rendering
Pan et al. Accurate depth extraction method for multiple light-coding-based depth cameras
CN107339938A (en) A kind of special-shaped calibrating block and scaling method for single eye stereo vision self-calibration
CN114187208B (en) Semi-global stereo matching method based on fusion cost and self-adaptive penalty term coefficient
CN109345552A (en) Stereo image quality evaluation method based on region weight
CN104661013B (en) A kind of virtual viewpoint rendering method based on spatial weighting
Jin et al. Validation of a new full reference metric for quality assessment of mobile 3DTV content
KR101841750B1 (en) Apparatus and Method for correcting 3D contents by using matching information among images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240704

Address after: 518000 1407, Phase II, Qianhai Shimao Financial Center, No. 3040 Xinghai Avenue, Nanshan Street, Qianhai Shenzhen Hong Kong Cooperation Zone, Shenzhen, Guangdong

Patentee after: Shenzhen Yiqi Culture Co.,Ltd.

Country or region after: China

Address before: 509 Kangrui Times Square, Keyuan Business Building, 39 Huarong Road, Gaofeng Community, Dalang Street, Longhua District, Shenzhen, Guangdong Province, 518000

Patentee before: Shenzhen lizhuan Technology Transfer Center Co.,Ltd.

Country or region before: China

Effective date of registration: 20240703

Address after: 509 Kangrui Times Square, Keyuan Business Building, 39 Huarong Road, Gaofeng Community, Dalang Street, Longhua District, Shenzhen, Guangdong Province, 518000

Patentee after: Shenzhen lizhuan Technology Transfer Center Co.,Ltd.

Country or region after: China

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University

Country or region before: China