CN107330930A - Three-dimensional image depth information extraction method - Google Patents

Three-dimensional image depth information extraction method

Info

Publication number
CN107330930A
CN107330930A (application CN201710502470.1A / CN201710502470A; granted as CN107330930B)
Authority
CN
China
Prior art keywords
pixel
formula
image
depth
max
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710502470.1A
Other languages
Chinese (zh)
Other versions
CN107330930B (en)
Inventor
邓浩
于荣
陈树强
余金清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinjiang Tide Photoelectric Technology Co Ltd
University of Electronic Science and Technology of China
Original Assignee
Jinjiang Tide Photoelectric Technology Co Ltd
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinjiang Tide Photoelectric Technology Co Ltd and University of Electronic Science and Technology of China
Priority to CN201710502470.1A
Publication of CN107330930A
Application granted
Publication of CN107330930B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a three-dimensional object depth extraction method that addresses the excessive algorithmic complexity of current 3D image depth information extraction. The method computes the similarity of adjacent pixels in each elemental image of a two-dimensional image and from it obtains the depth of the pixel. It is a fast and accurate three-dimensional depth extraction method that considers both integral imaging (II) and synthetic aperture integral imaging: by assuming that the 3D object is a surface composed of many facets, it develops a mathematical framework for depth extraction based on the PatchMatch algorithm. Also disclosed is a device that uses the above three-dimensional object depth extraction method.

Description

Three-dimensional image depth information extraction method
Technical field
The present invention relates to information extraction techniques in the field of optical information engineering, and in particular to techniques for extracting three-dimensional image depth information.
Background art
As a next-generation display technology, three-dimensional (3D) imaging has developed rapidly in recent years. Integral imaging (II) is attractive for its high resolution and full parallax, and it is compatible with traditional image processing techniques such as super-resolution and image matching. To realize 3D imaging and display, integral imaging needs views of the 3D object from different angles, called elemental images; in an II system these are usually picked up through a lenslet array. Because standard 2D images are used, a single inexpensive camera with a lenslet array, or an inexpensive imager array, suffices to build a multi-scale 3D imaging system. The prior art has achieved many research results, including 3D display and automatic target recognition.
Depth extraction is regarded as one of the most important problems of integral imaging (II), and many researchers have studied it. However, the methods proposed in the prior art suffer either from low-resolution elemental images or from complicated algorithms.
Summary of the invention
According to one aspect of the application, a method for extracting three-dimensional image depth information is proposed, which addresses the excessive algorithmic complexity of current three-dimensional image depth extraction. The method is a fast and accurate three-dimensional depth extraction method that considers both integral imaging (II) and synthetic aperture integral imaging: by assuming that the three-dimensional object is a surface composed of many facets, a mathematical framework for depth extraction is developed based on the PatchMatch algorithm.
The three-dimensional object depth extraction method computes the similarity of adjacent pixels in each elemental image of a two-dimensional image, and from it obtains the depth of the pixel.
Preferably, when computing the similarity of pixels in each elemental image of the two-dimensional image, adjacent pixels are assumed to lie in the same plane, and the surface is modeled with multiple facets.
Preferably, the similarity of adjacent pixels in each elemental image of the two-dimensional image is computed with a multi-loop algorithm of adjacent-pixel propagation and random optimization.
Preferably, the method includes the steps of initializing horizontal pixels to random planes and iteratively computing the similarity of adjacent pixels.
Further preferably, initializing horizontal pixels to random planes includes the steps of:
initializing a horizontal pixel to a random plane;
setting the initial depth of each pixel to a random value, and setting the surface normal vector of each pixel to a random unit vector.
Further preferably, initializing horizontal pixels to random planes includes the following process:
The plane containing the depth coordinate of the horizontal pixel is represented by formula (5),
z = f1·px + f2·py + f3    Formula (5)
where z is the depth coordinate of the horizontal pixel, px and py are the coordinates of the horizontal pixel p, and f1, f2 and f3 are given by formulas (6-1), (6-2) and (6-3),
f1 = -n1/n3    Formula (6-1)
f2 = -n2/n3    Formula (6-2)
f3 = (n1·x0 + n2·y0 + n3·z0)/n3    Formula (6-3)
In formulas (6-1), (6-2) and (6-3), n1, n2 and n3 are scalars, the components of the plane's normal vector; the vector f representing the plane on which the minimum aggregated cost is possible is given by formula (7); x0 and y0 are the initialized coordinate values of the horizontal pixel, and z0 is the initialized depth value of the horizontal pixel,
f = argmin_{f'∈F} m(x0, y0, f')    Formula (7)
where F denotes the set of all possible plane vectors; m in formula (7) is given by formula (8),
m(x0, y0, f) = Σ_{q∈Wp} w(p, q)·E(q, f)    Formula (8)
In formula (8), w realizes the adaptive weighting and is given by formula (9); E denotes the similarity evaluation factor and is given by formula (10); ∇ denotes the gradient, and Wp denotes a square window centered on p,
w(p, q) = exp(-||Ip - Iq||/γ)    Formula (9)
In formula (9), ||Ip - Iq|| denotes the color distance between the two adjacent pixels p and q, p is the horizontal pixel, q is an adjacent pixel in the same plane as p, and γ is a user-defined parameter,
E = α||Ii - Ij|| + (1 - α)||∇Ii - ∇Ij||    Formula (10)
In formula (10), I is the intensity of a pixel in an elemental image, and the subscripts i, j index the elemental images; Ii and Ij denote the intensities of the corresponding pixels in the i-th and j-th elemental images; Ii and Ij are projected to the same spatial point, and their coordinates are computed by formula (11); ||Ii - Ij|| is the Manhattan distance between the colors of Ii and Ij in RGB space; ∇Ii and ∇Ij are the gray-value gradients of the pixels, and ||∇Ii - ∇Ij|| denotes the absolute difference of the gray gradients computed at Ii and Ij; α is a unitless weight factor balancing the influence of color and gradient;
ui = g·(y - si)/z    Formula (11)
In formula (11), ui is the local coordinate, in the i-th elemental image, of the pixel corresponding to the point with coordinates y and z.
As a specific embodiment, the method is executed on a computer of an integral imaging (II) system.
Further preferably, iteratively computing the similarity of adjacent pixels includes the steps:
a. initialize a horizontal pixel in a random plane and compute its depth coordinate and vector value, compute its aggregated cost, and take this aggregated cost as the reference aggregated cost;
b. compute the aggregated cost of any adjacent pixel lying in the same plane as the horizontal pixel of step a;
c. compare the reference aggregated cost of step a with the aggregated cost of the adjacent pixel of step b;
d. take the pixel with the smaller aggregated cost in step c as the new reference value;
e. set the pixel corresponding to the new reference value of step d to be the upper-left neighbour of the pixel corresponding to the compared reference value;
f. set the condition: the depth value corresponding to the new reference value of step d lies within the maximum allowed range;
g. if the condition of step f holds, loop through steps a to f;
l. if the condition of step f does not hold, take the pixel of step e in the last loop as the leftmost pixel of the image;
m. on the basis of step l, perform the even iterations toward the bottom right of the image;
n. compute the number of calculations of each pixel from the number of iterations of step m.
Further preferably, iteratively computing the similarity of adjacent pixels includes a spatial propagation step and a plane refinement step;
in the spatial propagation step, neighbouring pixels are assumed to lie in the same plane, and the cost m of the two cases is first evaluated by formula (8):
in formula (8), p denotes the current pixel, fp is the vector of its corresponding plane, and q is an adjacent pixel of p; formula (8) is evaluated at p(x0, y0) with fp and fq respectively, to assess the cost of the two cases; the check condition is given by formula (12),
m(x0, y0, fq) < m(x0, y0, fp)    Formula (12)
where both sides of formula (12) are obtained from formula (8);
if the expression of formula (12) holds, fq is accepted as the new vector of p, i.e. fp = fq;
in odd iterations, q is the left and upper neighbour;
in even iterations, q is the right and lower neighbour;
in the plane refinement step, fp is converted to the normal vector np, and two parameters Δz and Δn are defined to limit the maximum allowed change of z0 and n respectively; z0' is computed as z0' = z0 + Δz, where Δz lies in [-Δzmax, Δzmax], and n' = u(n + Δn), where u(·) computes the unit vector and Δn lies in [-Δnmax, Δnmax];
finally, a new fp' is obtained from p and n'; if m(x0, y0, fp') < m(x0, y0, fp), then fp = fp';
the plane refinement step starts with Δzmax = maxdisp/2, where maxdisp is the maximum allowed disparity, and Δnmax = 1; after each refinement the parameters are updated to Δzmax = Δzmax/2 and Δnmax = Δnmax/2, until Δzmax < resolution/2, where resolution is the minimized depth resolution; odd iterations start from the top left of the image, and even iterations proceed toward the bottom right;
the similarity of adjacent pixels is obtained after the iterations, and the depth of the three-dimensional object is then obtained.
Further preferably, z0 is initialized to a fixed value for all pixels, and a refinement step is added before the iteration.
According to another aspect of the application, a device for obtaining three-dimensional image information from a two-dimensional picture is provided, which addresses the excessive algorithmic complexity of current 3D image depth information extraction. The device is a fast and accurate three-dimensional depth extraction device that considers both integral imaging (II) and synthetic aperture integral imaging: by assuming that the 3D object is a surface composed of many facets, a mathematical framework for depth extraction is developed based on the PatchMatch algorithm.
The device for obtaining three-dimensional image information from a two-dimensional picture includes a picture acquisition unit, an image storage unit and an image processing unit;
the picture acquisition unit is electrically connected to the image storage unit, and the image storage unit is electrically connected to the image processing unit;
the image processing unit uses at least one of the three-dimensional object depth extraction methods described above to obtain the depth information of objects in the two-dimensional picture and builds a three-dimensional image.
The beneficial effects of the technical solution of the present invention include, but are not limited to:
(1) the application proposes a new calculation method for three-dimensional object depth extraction, which computes the similarity of pixels in each elemental image while projecting them to possible depths;
(2) the extraction method of three-dimensional image depth information provided by the application takes the continuity of the surface into account to improve resolution;
(3) the extraction method of three-dimensional image depth information provided by the application uses a patch-matching approach in the integral imaging (II) system, which greatly reduces the amount of computation and accelerates the calculation, so that devices applying the algorithm of the invention can become more widespread.
Brief description of the drawings
Fig. 1 is a schematic diagram of the principle of the 3D image depth information extraction method of the present invention, where Fig. 1(a) is a schematic diagram of the pickup part of integral imaging (II) and Fig. 1(b) is a schematic diagram of the projection part of integral imaging (II).
Fig. 2 shows light propagation in an integral imaging (II) system for voxels on the object surface and in free space.
Fig. 3 shows the 3D objects whose image information is extracted, where Fig. 3(a) is the object imaged with integral imaging (II) and Fig. 3(b) is the object imaged with synthetic aperture integral imaging.
Fig. 4 compares, using mathematical software, the imaging results of the method of the application with those of prior-art methods, where Fig. 4(a) is the result obtained with the micro-pinhole depth information extraction method; Fig. 4(b) is the result obtained with the catadioptric omnidirectional extraction method; Fig. 4(c) is the integral imaging (II) result obtained with the method of the invention; and Fig. 4(d) is the synthetic aperture integral imaging result obtained with the method of the invention.
Fig. 5 compares results after reducing the influence of white noise, where Fig. 5(a) is the integral imaging (II) result before reducing the white-noise influence; Fig. 5(b) is the integral imaging (II) result after reducing the white-noise influence; Fig. 5(c) is the synthetic aperture integral imaging result before reducing the white-noise influence; and Fig. 5(d) is the synthetic aperture integral imaging result after reducing the white-noise influence.
Fig. 6 shows back-projection maps obtained with the 3D image depth information extraction method of the present invention, where Fig. 6(a) is the back-projection of Fig. 3(a) and Fig. 6(b) is the back-projection of Fig. 3(b).
Detailed description of the embodiments
The technical solution of the present invention is described in detail below with reference to the embodiments.
The present invention proposes a new calculation method for extracting 3D object depth in an integral imaging (II) system. It computes the similarity of pixels in each elemental image while projecting them to possible depths; it also takes the continuity of the surface into account to improve resolution, and uses a patch-matching approach to accelerate the calculation.
Fig. 1(a) and Fig. 1(b) illustrate the principle of integral imaging (II): Fig. 1(a) shows the pickup part and Fig. 1(b) the projection part.
Referring to Fig. 1(a), the letter A denotes the three-dimensional (3D) object, Z (upper case) denotes the distance between the object and the lenslet array, and g denotes the distance between the lens array and the image plane. The intensity and direction of the light from the 3D object are recorded by the lenslets at different positions. Fig. 1(a) also shows different elemental images, seen as the three images on the right from top to bottom.
In the projection part shown in Fig. 1(b), each elemental image is projected into object space through a corresponding pinhole. In Fig. 1(b), z (lower case) denotes the projection distance and g the distance between the image and the pinhole. The projected images are magnified by a factor of z/g in the reconstruction plane. Finally, the magnified images are overlapped and accumulated in the corresponding pixels of the output plane.
In the system shown in Fig. 1(a) and Fig. 1(b), the projection distance z is set by the experimenter; therefore, when the projection distance z does not match the spatial depth Z, the projected image pixels are blurred. Conversely, if the blurriness of each pixel at different projection distances is known, the corresponding projection distance can be computed.
Fig. 2 shows light propagation in the integral imaging (II) system for voxels on the object surface and in free space. It illustrates the 2D structure of the method of the invention, showing the y-z plane of the 3D space; the imaged object appears as a face on the left. The lenslet array lies on the y axis, and the z axis is the depth direction. The coordinate of each lenslet is labeled si. The imaged object plane is labeled u, and the distance from u to the lenslet array is g. As shown in Fig. 2, when the elemental images are projected to the z0 plane, the result at (y0, z0) is sharp because its corresponding pixels are highly similar. By contrast, when projected to the z1 plane, (y1, z1) is blurred, because the pixels at (y1, z1) come from different parts of the object u than those at (y0, z0), as indicated by the different colors in Fig. 2. The local coordinate of each projected pixel is obtained from formula (1):
ui = g·(y - si)/z    Formula (1)
where ui is the local coordinate, in the i-th elemental image, of the pixel corresponding to the point (y, z); g is the distance between the image plane and the lenslet array; and si is the coordinate of the i-th lenslet, i being the lenslet index. Using equation (1), the similarity of the pixels projected to the same point can be estimated.
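To make the projection relation concrete, here is a minimal Python sketch of formula (1); the function name and the exact sign convention are our own assumptions, since the formula body is reconstructed from the surrounding definitions:

```python
def local_coordinate(y, z, s_i, g):
    """Local coordinate u_i in elemental image i of the spatial point (y, z).

    y, z : lateral position and depth of the point
    s_i  : lateral coordinate of the i-th lenslet
    g    : distance between the image plane and the lenslet array
    """
    # A point (y, z) seen through the lenslet centered at s_i lands at a
    # local offset scaled by g/z in that elemental image (formula (1)).
    return g * (y - s_i) / z
```

The same spatial point thus maps to a different local coordinate in every elemental image, which is what makes the similarity test below possible.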
E = α||Ii - Ij|| + (1 - α)||∇Ii - ∇Ij||    Formula (2)
In this equation, E is the evaluation factor of similarity: the smaller E, the more similar the pixels. I is the intensity of a pixel in an elemental image; the subscripts i, j index the elemental images, and Ii, Ij denote the corresponding pixels in the i-th and j-th elemental images. Ii and Ij are projected to the same spatial point, and their coordinates are computed by equation (1). ||Ii - Ij|| computes the L1 (Manhattan) distance between the colors of Ii and Ij in RGB space. ∇I is the gray-value gradient of a pixel, and ||∇Ii - ∇Ij|| denotes the absolute difference of the gray gradients computed at Ii and Ij. α is a unitless, user-defined weight factor that balances the influence of color and gradient. From this equation the similarity between pixels projected to the same spatial position can be computed.
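The following sketch evaluates the factor E of formula (2) for one pair of corresponding pixels; it is an illustration under the definitions above, not the patent's reference implementation, and the function and argument names are ours:

```python
import numpy as np

def similarity_E(color_i, color_j, grad_i, grad_j, alpha=0.5):
    """Similarity factor E of formula (2): smaller E means more similar."""
    ci = np.asarray(color_i, dtype=float)            # RGB of pixel Ii
    cj = np.asarray(color_j, dtype=float)            # RGB of pixel Ij
    color_term = np.abs(ci - cj).sum()               # ||Ii - Ij||: L1 distance in RGB space
    grad_term = abs(float(grad_i) - float(grad_j))   # ||∇Ii - ∇Ij||: gray-gradient difference
    return alpha * color_term + (1.0 - alpha) * grad_term
```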
Therefore, in the 3D pattern, the depth of the imaged object surface at the lateral point (x, y) can be extracted by finding the Z that minimizes E(x, y, z) over the range Z = [Zmin, Zmax]. Mathematically, this assumption is expressed as formula (3):
z(x, y) = argmin_{Z∈[Zmin, Zmax]} E(x, y, Z)    Formula (3)
The answer can be found by checking every possible z. However, the possible z found this way are discrete, limited by the resolution. This approach ignores the continuity of the surface, is computationally intensive, and obtaining a sub-resolution effect requires more surface information. The present invention therefore considers the continuity of the object surface: adjacent pixels are assumed to lie in the same plane, so the surface is modeled with many facets.
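For comparison, a hypothetical brute-force implementation of formula (3) (all names are ours; eval_E stands for an evaluator of the factor E above) makes the cost of the exhaustive search explicit:

```python
def brute_force_depth(x, y, z_min, z_max, step, eval_E):
    """Exhaustive search of formula (3): the z minimizing E(x, y, z)."""
    best_z, best_e = z_min, float("inf")
    z = z_min
    while z <= z_max:
        e = eval_E(x, y, z)        # one evaluation of E per candidate depth
        if e < best_e:
            best_z, best_e = z, e
        z += step                  # answers are discretized by `step`
    return best_z
```

With the experimental values used later (maxdisp = 10 mm, resolution 0.005 mm), this loop evaluates E 2000 times per pixel; the facet model below avoids exactly this cost.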
The surface passing through (x0, y0, z0) can be expressed as formula (4):
n1·x + n2·y + n3·z = n1·x0 + n2·y0 + n3·z0    Formula (4)
where n(n1, n2, n3) is the normal vector. In the present invention the horizontal pixel is denoted p(px, py) and z is the required depth coordinate, so formula (4) can be transformed into formulas (5) and (6):
z = f1·px + f2·py + f3    Formula (5)
f1 = -n1/n3, f2 = -n2/n3, f3 = (n1·x0 + n2·y0 + n3·z0)/n3    Formula (6)
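A small sketch of formulas (5) and (6), assuming n3 ≠ 0 (function names are ours):

```python
def plane_vector(n, x0, y0, z0):
    # Formula (6): plane vector f = (f1, f2, f3) from a surface point
    # (x0, y0, z0) and its normal n = (n1, n2, n3); n3 must be non-zero.
    n1, n2, n3 = n
    return (-n1 / n3,
            -n2 / n3,
            (n1 * x0 + n2 * y0 + n3 * z0) / n3)

def depth_on_plane(f, px, py):
    # Formula (5): depth of pixel (px, py) on the plane described by f.
    f1, f2, f3 = f
    return f1 * px + f2 * py + f3
```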
Therefore, the problem of finding z becomes the problem of finding f: the vector f is the one with the minimum aggregated matching cost among all possible planes, which can be expressed as formula (7):
f = argmin_{f'∈F} m(px, py, f')    Formula (7)
where F denotes the infinitely large set of all vectors. The aggregated cost m of matching p(px, py) according to vector f is computed by formulas (8) and (9):
m(p, f) = Σ_{q∈Wp} w(p, q)·E(q, f)    Formula (8)
w(p, q) = exp(-||Ip - Iq||/γ)    Formula (9)
In formulas (8) and (9), Wp denotes a square window centered on p(px, py); w realizes adaptive-weight stereo matching and can overcome the edge-fattening problem; γ is a user-defined parameter; Ip denotes the pixel intensity of p and Iq the pixel intensity of q. The E of the neighbouring pixels is also computed with the same vector f, on the assumption that they lie in the same plane. The set F of all vectors is an infinite label space, so the common practice of simply checking all possible labels cannot be used.
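A sketch of the aggregated cost of formulas (8) and (9), reusing depth_on_plane from the previous sketch; eval_E is assumed to evaluate formula (10) for a pixel at a given depth, and all names are ours:

```python
import numpy as np

def aggregated_cost(p, f, image, window, gamma, eval_E):
    """Aggregated matching cost m(p, f) of formulas (8) and (9).

    p      : (px, py) coordinates of the current pixel
    f      : candidate plane vector (f1, f2, f3)
    image  : H x W x 3 RGB array used for the adaptive weights
    window : iterable of pixel coordinates q in the square window Wp
    gamma  : user-defined parameter of the adaptive weight
    eval_E : callable returning the similarity factor E for a pixel
             evaluated at a given depth
    """
    px, py = p
    base_color = image[py, px].astype(float)
    cost = 0.0
    for qx, qy in window:
        # Adaptive weight of formula (9): pixels with a color close to p
        # likely lie on the same surface and receive a larger weight.
        color_dist = np.abs(image[qy, qx].astype(float) - base_color).sum()
        w = np.exp(-color_dist / gamma)
        z_q = depth_on_plane(f, qx, qy)      # neighbour evaluated with the SAME plane f
        cost += w * eval_E((qx, qy), z_q)
    return cost
```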
The 3D image depth information extraction method proposed by the present invention is based on PatchMatch, whose basic idea is that most adjacent pixels should lie in the same plane. Based on this assumption, the present invention develops a multi-loop algorithm comprising adjacent-pixel propagation and random optimization.
Based on the above analysis, the 3D image depth information extraction method proposed by the present invention comprises the following steps:
Step 1: Initialization
The horizontal pixel p(x0, y0) is initialized to a random plane.
A plane can be determined by a point and a normal vector. The z0 of each pixel is initialized to a random value, and the surface normal vector of the pixel is set to a random unit vector n(n1, n2, n3). The vector f can be derived from the normal n and the point p(x0, y0, z0).
The depth coordinate of the horizontal pixel is z(x0, y0, z0); for the pixel p(px, py), the plane of z can be expressed as:
z = f1·px + f2·py + f3
where, referring to formula (6), f1 = -n1/n3, f2 = -n2/n3, f3 = (n1·x0 + n2·y0 + n3·z0)/n3; n1, n2, n3 are scalars, the components of the normal of the plane, and f represents the plane on which the minimum aggregated cost is possible.
The horizontal pixel initialized in this step is p(x0, y0), its corresponding depth value is z0, and its aggregated cost m is:
m(x0, y0, f) = Σ_{q∈Wp} w(p, q)·E(q, f)
where w(p, q) = exp(-||Ip - Iq||/γ); ||Ip - Iq|| denotes the distance between the two adjacent pixels p and q; p is the horizontal pixel and q an adjacent pixel in the same plane as p; w realizes the adaptive weighting; E denotes the similarity evaluation factor; ∇ denotes the gradient, and Wp denotes a square window centered on p(px, py).
The similarity evaluation factor E is expressed as:
E = α||Ii - Ij|| + (1 - α)||∇Ii - ∇Ij||
where I is the intensity of a pixel in an elemental image; the subscripts i, j index the elemental images; Ii, Ij denote the corresponding pixels in the i-th and j-th elemental images; Ii and Ij are projected to the same spatial point, with coordinates obtained from equation (1). The local coordinate of each projected pixel is:
ui = g·(y - si)/z
where ui is the local coordinate, in the i-th elemental image, of the pixel corresponding to the point (y, z); g is the distance between the image plane and the lenslet array (see Fig. 1); si is the coordinate of the i-th lenslet, i being the lenslet index; Ii and Ij are obtained with this formula.
||Ii - Ij|| computes the L1 (Manhattan) distance between the colors of Ii and Ij in RGB space; ∇I is the gray-value gradient of a pixel; ||∇Ii - ∇Ij|| denotes the absolute difference of the gray gradients computed at Ii and Ij; α is a unitless, user-defined weight factor balancing the influence of color and gradient. With this equation the similarity between pixels projected to the same spatial position can be computed.
Because of the many predicted values, after this random initialization at least one pixel of each planar region is close to the right value. The propagation step then spreads the plane to the other pixels; one good predicted value is enough to make the algorithm work.
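A minimal sketch of this initialization, assuming NumPy and our own function names; the clamp on n3 is our addition so that the plane vector f of formula (6) stays defined:

```python
import numpy as np

def random_init(height, width, z_min, z_max, rng=None):
    """Step 1: give every pixel a random initial depth and a random unit normal."""
    rng = rng or np.random.default_rng()
    z0 = rng.uniform(z_min, z_max, size=(height, width))   # random initial depth z0
    n = rng.normal(size=(height, width, 3))
    n /= np.linalg.norm(n, axis=2, keepdims=True)          # random unit normal n(n1, n2, n3)
    # Keep n3 away from zero so the plane vector f of formula (6) is defined.
    n[..., 2] = np.where(np.abs(n[..., 2]) < 1e-6, 1e-6, n[..., 2])
    return z0, n
```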
Step 2: Iteration
In the iteration, each pixel runs two stages: first spatial propagation, then plane refinement.
(2-1) Spatial propagation
Neighbouring points are assumed to normally lie in the same plane; this is the key point of the propagation. Let p denote the current pixel, fp the vector of its corresponding plane, and q an adjacent pixel of p. Formula (8) is evaluated at p(x0, y0) with fp and fq respectively, to assess the cost of the two cases, with the check condition m(x0, y0, fq) < m(x0, y0, fp).
Both sides of the condition, formula (12), are obtained from formula (8).
If the expression holds, fq is accepted as the new vector of p, i.e. fp = fq. In odd iterations, q is the left and upper neighbour; in even iterations, q is the right and lower neighbour.
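Spatial propagation reduces to one comparison per neighbour, as in this sketch (names are ours; cost_at stands for the aggregated cost of formula (8)):

```python
def propagate(p, q, planes, cost_at):
    """Adopt the neighbour's plane if it explains the current pixel better.

    p, q    : coordinates of the current pixel and of a neighbour
    planes  : mapping pixel -> current plane vector f
    cost_at : callable m(pixel, f) implementing formula (8)
    """
    f_p, f_q = planes[p], planes[q]
    # Check condition of formula (12).
    if cost_at(p, f_q) < cost_at(p, f_p):
        planes[p] = f_q              # accept f_q as the new vector of p
```

In odd iterations q ranges over the already-visited left and upper neighbours (scanning toward the bottom right); in even iterations over the right and lower ones, so a good plane can travel across the whole image in a few passes.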
(2-2) Plane refinement
The goal of plane refinement is to improve the plane parameters at pixel p, further reducing the cost of the extracted depth of the imaged object surface at the lateral point, cf. formula (6).
fp is converted to the normal vector np. Two parameters Δz and Δn are defined to limit the maximum allowed change of z0 and n respectively. z0' is computed as z0' = z0 + Δz, where Δz lies in [-Δzmax, Δzmax], and n' = u(n + Δn), where u(·) computes the unit vector and Δn lies in [-Δnmax, Δnmax]. Finally a new fp' is obtained from p and n'; if m(x0, y0, fp') < m(x0, y0, fp), then fp = fp'.
The method starts with Δzmax = maxdisp/2, where maxdisp is the maximum allowed disparity, and Δnmax = 1. After each refinement the parameters are updated to Δzmax = Δzmax/2 and Δnmax = Δnmax/2, so as to shrink the search range. The method then returns to propagation, until Δzmax < resolution/2, where the resolution is minimized as in [DaneshPanah M, Javidi B. "Profilometry and optical slicing by passive three-dimensional imaging", Optics Letters, 2009, 34(7):1105-1107]. Odd iterations start from the top left of the image, and even iterations proceed from the bottom right. The final result is obtained after the iterations.
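A sketch of the refinement under the update rules above (plane_vector is the earlier sketch of formula (6); the random sampling of Δz and Δn follows the stated intervals, and all names are ours). Note that in the full algorithm propagation runs between refinements; this sketch collapses the refinement schedule for a single pixel:

```python
import numpy as np

def refine_plane(p, z0, n, cost_at, maxdisp, resolution, rng=None):
    """Plane refinement: try shrinking random perturbations of depth and normal."""
    rng = rng or np.random.default_rng()
    dz_max, dn_max = maxdisp / 2.0, 1.0                    # starting search ranges
    best_f = plane_vector(n, p[0], p[1], z0)
    while dz_max >= resolution / 2.0:                      # stop below resolution/2
        z_new = z0 + rng.uniform(-dz_max, dz_max)          # z0' = z0 + Δz
        n_new = n + rng.uniform(-dn_max, dn_max, size=3)
        n_new = n_new / np.linalg.norm(n_new)              # n' = u(n + Δn)
        f_new = plane_vector(n_new, p[0], p[1], z_new)
        if cost_at(p, f_new) < cost_at(p, best_f):         # keep only improvements
            best_f, z0, n = f_new, z_new, n_new
        dz_max, dn_max = dz_max / 2.0, dn_max / 2.0        # halve the ranges
    return best_f
```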
To verify the practicality of the method, two kinds of II-type experiments were carried out, and the method was compared with the conventional method that checks all possible z. First, a model tractor is used as the 3D object in the integral imaging system, shown in Fig. 3(a). The result comprises 20 × 28 elemental images, each with 100 × 100 pixels. The lenslet focal length is 1.5 mm. α is set to 0.5 and γ to 5; by calculation, the minimum depth resolution is 0.005 mm.
In the synthetic aperture integral imaging experiment, the elemental images are shown in Fig. 3(b). The 3D objects include a building block, a doll and a toy elephant, located at 53-57 cm, 89-93 cm and 131-136 cm respectively. The system comprises 6 × 6 perspective views, and the images are captured on a regular grid with 5 mm spacing. The focal length of the camera is 16 mm. α is set to 0.5 and γ to 5; by calculation, the minimum depth resolution is 2 mm.
The algorithm is computed in Matlab. The comparison between the results obtained by the proposed method and those of the common method is shown in the figures; in Fig. 4(b) and (d), the horizontal stripes at the bottom are folds of the background.
From these results it can be seen that both methods perform well on the object part. But in the results obtained by the common method, shown in Fig. 4(a) and 4(b), edge fattening is quite obvious. In the proposed method, as shown in Fig. 4(c) and 4(d), the blank parts of the result are filled with white noise: these regions are insensitive to depth change because differences there are hard to distinguish, so their depth remains the initialized value. There are also some slight inaccuracies inside the object, which some more iterations are needed to eliminate.
To reduce the influence of white noise, z0 is initialized to a fixed value for all pixels, and a refinement step is added before the iteration. The better results shown in Fig. 5 are obtained, with the white noise well eliminated. In Fig. 5(a) the depth of the tractor is accurately extracted, particularly at the object edges. The object edges in Fig. 5(b), however, still show some white noise, possibly due to limitations of the experimental conditions.
Although the continuity of the surface is considered, the results show that the continuous variation of depth is not visually evident. This may be caused by the rather coarse resolution of this geometric model.
To further verify the proposed method, each pixel is projected to the depth computed by it, as shown in Fig. 6. The depth used in this step is Gaussian-smoothed and the background is filtered out. Because the algorithm has not yet been optimized, the computation time is a poor measure of its efficiency; the computational efficiency is therefore assessed by the number of evaluations of the core factor m. It is not necessary to compute the whole z space, and no computing resources are spent on sub-resolution calculation. In the simulated light field, maxdisp is set to 10 mm, the resolution to 0.005 mm, and 12 iterations are computed. In each iteration, m is computed 4 times at each pixel (fp, fq for the two adjacent pixels, and fp'). Compared with the common method, which computes 10/0.005 = 2000 evaluations per pixel, the 12 × 4 = 48 evaluations of m per pixel of this method are nearly 40 times fewer. From this point of view, the proposed algorithm effectively reduces the amount of computation.
The above are only several embodiments of the present invention and impose no limitation of any form on the present invention. Although the present invention is disclosed above with preferred embodiments, they are not intended to limit it; any person skilled in the art who, without departing from the scope of the technical solution of the present invention, makes slight variations or modifications using the technical content disclosed above creates an equivalent embodiment that likewise falls within the scope of the technical solution of the present invention.

Claims (10)

1. A three-dimensional object depth extraction method, characterized in that the similarity of adjacent pixels in each elemental image of a two-dimensional image is computed, and the depth of the pixel is obtained therefrom.
2. The three-dimensional object depth extraction method according to claim 1, characterized in that, when computing the similarity of pixels in each elemental image of the two-dimensional image, adjacent pixels are assumed to lie in the same plane, and the surface is modeled with multiple facets.
3. The three-dimensional object depth extraction method according to claim 1, characterized in that the similarity of adjacent pixels in each elemental image of the two-dimensional image is computed with a multi-loop algorithm of adjacent-pixel propagation and random optimization.
4. The three-dimensional object depth extraction method according to claim 1, characterized by comprising the steps of initializing horizontal pixels to random planes and iteratively computing the similarity of adjacent pixels.
5. The three-dimensional object depth extraction method according to claim 4, characterized in that initializing horizontal pixels to random planes comprises the steps of:
initializing a horizontal pixel to a random plane;
setting the initial depth of each pixel to a random value, and setting the surface normal vector of each pixel to a random unit vector.
6. The three-dimensional object depth extraction method according to claim 4, characterized in that initializing horizontal pixels to random planes comprises the following process:
the plane containing the depth coordinate of the horizontal pixel is represented by formula (5),
z = f1·px + f2·py + f3    Formula (5)
where z is the depth coordinate of the horizontal pixel, px and py are the coordinates of the horizontal pixel, and f1, f2 and f3 are given by formulas (6-1), (6-2) and (6-3),
f1 = -n1/n3    Formula (6-1)
f2 = -n2/n3    Formula (6-2)
f3 = (n1·x0 + n2·y0 + n3·z0)/n3    Formula (6-3)
in formulas (6-1), (6-2) and (6-3), n1, n2 and n3 are scalars, the components of the plane's normal vector; the vector f representing the plane on which the minimum aggregated cost is possible is given by formula (7); x0 and y0 are the initialized coordinate values of the horizontal pixel, and z0 is the initialized depth value of the horizontal pixel,
f = argmin_{f'∈F} m(x0, y0, f')    Formula (7)
where F denotes the set of all possible plane vectors; m in formula (7) is given by formula (8),
m(x0, y0, f) = Σ_{q∈Wp} w(p, q)·E(q, f)    Formula (8)
in formula (8), w realizes the adaptive weighting and is given by formula (9); E denotes the similarity evaluation factor and is given by formula (10); ∇ denotes the gradient, and Wp denotes a square window centered on p,
w(p, q) = exp(-||Ip - Iq||/γ)    Formula (9)
in formula (9), ||Ip - Iq|| denotes the color distance between the two adjacent pixels p and q, p is the horizontal pixel, q is an adjacent pixel in the same plane as p, and γ is a user-defined parameter,
E = α||Ii - Ij|| + (1 - α)||∇Ii - ∇Ij||    Formula (10)
in formula (10), I is the intensity of a pixel in an elemental image, and the subscripts i, j index the elemental images; Ii and Ij denote the intensities of the corresponding pixels in the i-th and j-th elemental images; Ii and Ij are projected to the same spatial point, and their coordinates are computed by formula (11); ||Ii - Ij|| is the Manhattan distance between the colors of Ii and Ij in RGB space; ∇Ii and ∇Ij are the gray-value gradients of the pixels, and ||∇Ii - ∇Ij|| denotes the absolute difference of the gray gradients computed at Ii and Ij; α is a unitless weight factor balancing the influence of color and gradient;
ui = g·(y - si)/z    Formula (11)
in formula (11), ui is the local coordinate, in the i-th elemental image, of the pixel corresponding to the point with coordinates y and z.
7. The three-dimensional object depth extraction method according to claim 4, characterized in that the three-dimensional object depth extraction method is executed on a computer of an integral imaging (II) system, and iteratively computing the similarity of adjacent pixels comprises the steps:
a. initialize a horizontal pixel in a random plane and compute its depth coordinate and vector value, compute its aggregated cost, and take this aggregated cost as the reference aggregated cost;
b. compute the aggregated cost of any adjacent pixel lying in the same plane as the horizontal pixel of step a;
c. compare the reference aggregated cost of step a with the aggregated cost of the adjacent pixel of step b;
d. take the pixel with the smaller aggregated cost in step c as the new reference value;
e. set the pixel corresponding to the new reference value of step d to be the upper-left neighbour of the pixel corresponding to the compared reference value;
f. set the condition: the depth value corresponding to the new reference value of step d lies within the maximum allowed range;
g. if the condition of step f holds, loop through steps a to f;
l. if the condition of step f does not hold, take the pixel of step e in the last loop as the leftmost pixel of the image;
m. on the basis of step l, perform the even iterations toward the bottom right of the image;
n. compute the number of calculations of each pixel from the number of iterations of step m.
8. The three-dimensional object depth extraction method according to claim 4, characterized in that iteratively computing the similarity of adjacent pixels comprises a spatial propagation step and a plane refinement step;
in the spatial propagation step, neighbouring pixels are assumed to lie in the same plane, and the cost m of the two cases is first evaluated by formula (8);
in formula (8), p denotes the current pixel, fp is the vector of its corresponding plane, and q is an adjacent pixel of p; formula (8) is evaluated at p with fp and fq respectively, to assess the cost of the two cases; the check condition is given by formula (12),
m(x0, y0, fq) < m(x0, y0, fp)    Formula (12)
where m(x0, y0, fq) and m(x0, y0, fp) in formula (12) are each obtained from formula (8);
if the expression of formula (12) holds, fq is accepted as the new vector of p, i.e. fp = fq;
in odd iterations, q is the left and upper neighbour;
in even iterations, q is the right and lower neighbour;
in the plane refinement step, fp is converted to the normal vector np, and two parameters Δz and Δn are defined to limit the maximum allowed change of z0 and n respectively; z0' is computed as z0' = z0 + Δz, where Δz lies in [-Δzmax, Δzmax], and n' = u(n + Δn), where u(·) computes the unit vector and Δn lies in [-Δnmax, Δnmax];
finally, a new fp' is obtained from p and n'; if m(x0, y0, fp') < m(x0, y0, fp), then fp = fp';
the plane refinement step starts with Δzmax = maxdisp/2, where maxdisp is the maximum allowed disparity, and Δnmax = 1; after each refinement the parameters are updated to Δzmax = Δzmax/2 and Δnmax = Δnmax/2, until Δzmax < resolution/2, where resolution is the minimized depth resolution; odd iterations start from the top left of the image, and even iterations proceed toward the bottom right;
the similarity of adjacent pixels is obtained after the iterations, and the depth of the three-dimensional object is then obtained.
9. The three-dimensional object depth extraction method according to claim 8, characterized in that z0 is initialized to a fixed value for all pixels, and a refinement step is added before the iteration.
10. A device for obtaining three-dimensional image information from a two-dimensional picture, characterized by comprising a picture acquisition unit, an image storage unit and an image processing unit;
the picture acquisition unit is electrically connected to the image storage unit, and the image storage unit is electrically connected to the image processing unit;
the image processing unit uses the three-dimensional object depth extraction method of any one of claims 1 to 9 to obtain the depth information of objects in the two-dimensional picture and builds a three-dimensional image.
CN201710502470.1A 2017-06-27 2017-06-27 Three-dimensional image depth information extraction method Expired - Fee Related CN107330930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710502470.1A CN107330930B (en) 2017-06-27 2017-06-27 Three-dimensional image depth information extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710502470.1A CN107330930B (en) 2017-06-27 2017-06-27 Three-dimensional image depth information extraction method

Publications (2)

Publication Number Publication Date
CN107330930A true CN107330930A (en) 2017-11-07
CN107330930B CN107330930B (en) 2020-11-03

Family

ID=60198148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710502470.1A Expired - Fee Related CN107330930B (en) 2017-06-27 2017-06-27 Three-dimensional image depth information extraction method

Country Status (1)

Country Link
CN (1) CN107330930B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279982A (en) * 2013-05-24 2013-09-04 中国科学院自动化研究所 Robust rapid high-depth-resolution speckle three-dimensional rebuilding method
CN104240217A (en) * 2013-06-09 2014-12-24 Zhou Yu Binocular camera image depth information acquisition method and device
US20160173849A1 (en) * 2013-10-25 2016-06-16 Ricoh Innovations Corporation Processing of Light Fields by Transforming to Scale and Depth Space
CN103793911A (en) * 2014-01-24 2014-05-14 北京科技大学 Scene depth obtaining method based on integration image technology
CN104065947A (en) * 2014-06-18 2014-09-24 长春理工大学 Depth image obtaining method for integrated imaging system
CN104036481A (en) * 2014-06-26 2014-09-10 武汉大学 Multi-focus image fusion method based on depth information extraction
CN105574926A (en) * 2014-10-17 2016-05-11 华为技术有限公司 Method and device for generating three-dimensional image
CN104715504A (en) * 2015-02-12 2015-06-17 四川大学 Robust large-scene dense three-dimensional reconstruction method
CN105551050A (en) * 2015-12-29 2016-05-04 深圳市未来媒体技术研究院 Optical field based image depth estimation method
CN106257537A (en) * 2016-07-18 2016-12-28 浙江大学 A kind of spatial depth extracting method based on field information
CN106296706A (en) * 2016-08-17 2017-01-04 大连理工大学 A kind of depth calculation method for reconstructing combining global modeling and non local filtering
CN106447718A (en) * 2016-08-31 2017-02-22 天津大学 2D-to-3D depth estimation method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Fengli Yu et al., "Depth generation method for 2D to 3D conversion", 2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) *
Shulu Wang et al., "An integral imaging method for depth extraction with lens array in an optical tweezer system", Optoelectronic Devices and Integration V, 9270 *
Du Junhui et al., "Research on 3D computational reconstruction based on integral-imaging light-field information", China Master's Theses Full-text Database (electronic journal) *
Wang Yu et al., "Depth extraction method for integral imaging based on multi-disparity-function fitting", Acta Optica Sinica *
Lu Haiming et al., "Depth computation method based on vector fields", Journal of Tsinghua University (Science and Technology) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111465818A (en) * 2017-12-12 2020-07-28 索尼公司 Image processing apparatus, image processing method, program, and information processing system
CN108154549A (en) * 2017-12-25 2018-06-12 太平洋未来有限公司 A kind of three dimensional image processing method
CN110807798A (en) * 2018-08-03 2020-02-18 华为技术有限公司 Image recognition method, system, related device and computer readable storage medium
CN110807798B (en) * 2018-08-03 2022-04-12 华为技术有限公司 Image recognition method, system, related device and computer readable storage medium
WO2020191731A1 (en) * 2019-03-28 2020-10-01 SZ DJI Technology Co., Ltd. Point cloud generation method and system, and computer storage medium

Also Published As

Publication number Publication date
CN107330930B (en) 2020-11-03

Similar Documents

Publication Publication Date Title
Hartmann et al. Learned multi-patch similarity
Shen Accurate multiple view 3d reconstruction using patch-based stereo for large-scale scenes
Furukawa et al. Accurate, dense, and robust multiview stereopsis
Tola et al. Daisy: An efficient dense descriptor applied to wide-baseline stereo
Hiep et al. Towards high-resolution large-scale multi-view stereo
Dall'Asta et al. A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Wu et al. A triangulation-based hierarchical image matching method for wide-baseline images
CN107330930A (en) Three-dimensional image depth information extraction method
CN107679537A (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings
CN104361627B (en) Binocular vision bituminous paving Micro texture 3-D view reconstructing method based on SIFT
Köser et al. Dense 3d reconstruction of symmetric scenes from a single image
CN104463899A (en) Target object detecting and monitoring method and device
CN105335952B (en) Matching power flow computational methods and device and parallax value calculating method and equipment
Pound et al. A patch-based approach to 3D plant shoot phenotyping
CN114241031A (en) Fish body ruler measurement and weight prediction method and device based on double-view fusion
CN108564620A (en) Scene depth estimation method for light field array camera
CN113705796B (en) Optical field depth acquisition convolutional neural network based on EPI feature reinforcement
US11394945B2 (en) System and method for performing 3D imaging of an object
CN106257537A (en) A kind of spatial depth extracting method based on field information
CN109034374A (en) The relative depth sequence estimation method of convolutional network is intensively connected to using multi-scale
Brandt et al. Efficient binocular stereo correspondence matching with 1-D max-trees
Shen Depth-map merging for multi-view stereo with high resolution images
Nicosevici et al. Efficient 3D scene modeling and mosaicing
Skuratovskyi et al. Outdoor mapping framework: from images to 3d model
Frisky et al. Investigation of single image depth prediction under different lighting conditions: A case study of ancient Roman coins

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201103

Termination date: 20210627