CN106815865A - Image depth estimation method, depth drawing generating method and device - Google Patents
- Publication number
- CN106815865A CN106815865A CN201510859849.9A CN201510859849A CN106815865A CN 106815865 A CN106815865 A CN 106815865A CN 201510859849 A CN201510859849 A CN 201510859849A CN 106815865 A CN106815865 A CN 106815865A
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- estimation
- variance
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
An image depth estimation method, a depth map generation method, and corresponding devices. The image depth estimation method includes: acquiring N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances; generating the local variance map corresponding to each of the images {I1, I2, ..., IN}; and searching the local variance maps for the image in which each pixel's variance is largest, taking the object distance of that image as the depth estimate of the pixel. The method reduces the complexity and hardware cost of the image depth estimation process.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image depth estimation method, a depth map generation method, and corresponding devices.
Background technology
At present, image processing algorithms such as refocusing, augmented reality, and object detection and recognition all require a depth map. Estimating the depth of an image to obtain the corresponding depth map therefore plays an important role in image processing.
In practical applications, image depth is estimated as follows: the original images are first matched (image alignment) to obtain a corresponding disparity estimation map; depth is then estimated from the disparity estimation map, yielding a raw depth map estimation; finally, the raw depth estimation map is post-processed to obtain the final depth map.
At present, depth estimation from original images is mainly based on stereo matching algorithms. On the one hand, the computational complexity of stereo matching is high, making it unsuitable for real-time applications; on the other hand, stereo matching requires two cameras at different viewing angles to capture the original images, which raises hardware cost. As a result, the above image depth estimation process has high computational complexity and high hardware cost.
The content of the invention
The problem to be solved by the present invention is how to reduce the complexity and hardware cost of the image depth estimation process.
To solve the above problem, an embodiment of the invention provides an image depth estimation method. The method includes:
acquiring N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances, where N >= 3 and N is a positive integer;
generating the local variance map corresponding to each of the images {I1, I2, ..., IN};
searching the local variance maps for the image in which each pixel's variance is largest, and taking the object distance of that image as the depth estimate of the pixel.
Optionally, generating the local variance maps corresponding to the images {I1, I2, ..., IN} includes:
performing image alignment on the N acquired images {I1, I2, ..., IN};
computing the local variance of each aligned image, obtaining the local variance map corresponding to each of the images {I1, I2, ..., IN}.
An embodiment of the invention also provides a depth map generation method. The method includes:
calculating the depth estimate of each pixel, including: acquiring N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances, where N >= 3 and N is a positive integer; generating the local variance map corresponding to each of the images {I1, I2, ..., IN}; and searching the local variance maps for the image in which each pixel's variance is largest, taking the object distance of that image as the depth estimate of the pixel;
obtaining the corresponding raw depth estimation map from the depth estimates;
removing smooth regions from the raw depth estimation map to obtain the corresponding sparse depth estimation map;
filling the sparse depth estimation map to obtain the complete depth map.
Optionally, generating the local variance maps corresponding to the images {I1, I2, ..., IN} includes:
performing image alignment on the N acquired images {I1, I2, ..., IN};
computing the local variance of each aligned image, obtaining the local variance map corresponding to each of the images {I1, I2, ..., IN}.
Optionally, removing smooth regions from the raw depth estimation map to obtain the corresponding sparse depth estimation map includes:
averaging the variances of each pixel to obtain the average local variance map corresponding to the local variance maps;
selecting, in the average local variance map, the regions of pixels whose value is below a preset value, and taking the selected regions as the smooth regions;
removing the smooth regions from the raw depth estimation map, and taking the raw depth estimation map after removal as the sparse depth estimation map.
Optionally, filling the sparse depth estimation map to obtain the complete depth map includes:
averaging the aligned images to obtain the mean image of the aligned images;
filling the sparse depth estimation map using the mean image, obtaining the complete depth map.
Optionally, filling the sparse depth estimation map includes:
filling the sparse depth estimation map using 2*2 overlapping windows.
An embodiment of the invention also provides an image depth estimation device. The device includes:
an image acquisition unit, adapted to acquire N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances, where N >= 3 and N is a positive integer;
a local variance map generation unit, adapted to generate the local variance map corresponding to each of the images {I1, I2, ..., IN};
a depth calculation unit, adapted to search the local variance maps for the image in which each pixel's variance is largest, and to take the object distance of that image as the depth estimate of the pixel.
Optionally, the local variance map generation unit includes:
an alignment subunit, adapted to perform image alignment on the N acquired images {I1, I2, ..., IN};
a calculation subunit, adapted to compute the local variance of each aligned image, obtaining the local variance map corresponding to each of the images {I1, I2, ..., IN}.
An embodiment of the invention also provides a depth map generation device. The device includes:
an image depth estimation unit, adapted to calculate the depth estimate of each pixel, including: an image acquisition subunit, adapted to acquire N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances, where N >= 3 and N is a positive integer; a local variance map generation subunit, adapted to generate the local variance map corresponding to each of the images {I1, I2, ..., IN}; and a depth calculation subunit, adapted to search the local variance maps for the image in which each pixel's variance is largest, and to take the object distance of that image as the depth estimate of the pixel;
a first image generation unit, adapted to obtain the corresponding raw depth estimation map from the depth estimates;
a second image generation unit, adapted to remove smooth regions from the raw depth estimation map, obtaining the corresponding sparse depth estimation map;
a third image generation unit, adapted to fill the sparse depth estimation map, obtaining the complete depth map.
Optionally, the local variance map generation subunit includes:
an alignment module, adapted to perform image alignment on the N acquired images {I1, I2, ..., IN};
a calculation module, adapted to compute the local variance of each aligned image, obtaining the local variance map corresponding to each of the images {I1, I2, ..., IN}.
Optionally, the second image generation unit includes:
a calculation subunit, adapted to average the variances of each pixel, obtaining the average local variance map corresponding to each local variance map;
a selection subunit, adapted to select, in the average local variance map, the regions of pixels whose value is below a preset value, and to take the selected regions as the smooth regions;
a processing subunit, adapted to remove the smooth regions from the raw depth estimation map, and to take the raw depth estimation map after removal as the sparse depth estimation map.
Optionally, the third image generation unit includes:
an averaging subunit, adapted to average the aligned images, obtaining the mean image of the aligned images;
a filling subunit, adapted to fill the sparse depth estimation map using the mean image, obtaining the complete depth map.
Optionally, the filling subunit is adapted to fill the sparse depth estimation map using 2*2 overlapping windows.
Compared with the prior art, the technical solution of the present invention has at least the following advantages:
N images shot from the same viewing angle at the same focal length but different image distances are acquired; the corresponding local variance maps are generated from the acquired images; the image in which each pixel's variance is largest is then found among the local variance maps, and the object distance of that image is taken as the depth estimate of the pixel. Since depth estimation does not rely on stereo matching, the complexity of the whole image depth estimation process can be reduced. Moreover, because the acquired images share the same viewing angle and focal length and differ only in image distance, a single camera suffices to capture them, which reduces the hardware cost of the whole image depth estimation process.
Brief description of the drawings
Fig. 1 is a flow chart of an image depth estimation method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of input images in an embodiment of the present invention;
Fig. 3 shows the input images of Fig. 2 after image alignment;
Fig. 4 is a flow chart of a depth map generation method in an embodiment of the present invention;
Fig. 5 illustrates the generation of a sparse depth map in an embodiment of the present invention;
Fig. 6 is the complete depth map corresponding to the input images in Fig. 2;
Fig. 7 is a schematic diagram of the generation of a complete depth map in an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an image depth estimation device in an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a depth map generation device in an embodiment of the present invention.
Specific embodiment
At present, image depth estimation first uses two cameras at different viewing angles to capture the corresponding original images; stereo matching is then performed on the two original images to obtain a corresponding disparity estimation map, and depth is finally estimated from the disparity estimation map.
In this image depth estimation process, on the one hand, stereo matching is computationally complex and unsuitable for real-time applications, so the complexity of the whole process is high. On the other hand, the original images must be captured by two cameras at different viewing angles, so the hardware cost of the whole process is also high.
To address the above problems, embodiments of the present invention provide an image depth estimation method. The method first acquires N images shot from the same viewing angle at the same focal length but different image distances, then generates the corresponding local variance map from each acquired image, searches the local variance maps for the image in which each pixel's variance is largest, and takes the object distance of that image as the depth estimate of the pixel. Since no stereo matching is needed for depth estimation, the complexity of the whole image depth estimation process can be reduced. Moreover, because the acquired images share the same viewing angle and focal length and differ only in image distance, a single camera suffices to capture them, which reduces the hardware cost of the whole process.
To make the above objects, features, and advantages of the present invention more comprehensible, specific embodiments of the invention are explained below with reference to the accompanying drawings.
As shown in Fig. 1, an embodiment of the invention provides an image depth estimation method, which may include the following steps:
Step 11: acquire N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances, where N >= 3 and N is a positive integer.
In a specific implementation, since the images {I1, I2, ..., IN} are shot from the same viewing angle, they can be captured with a single camera. Specifically, the focus of the camera is adjusted so that the camera focuses at different positions, so that objects at different distances come into focus in turn. Because the images {I1, I2, ..., IN} are shot from the same viewing angle, the number of pixels in each image is the same.
For example, Fig. 2 shows eight images {I1, I2, ..., I8} captured in this way. From image I1 to image I8 the focus position moves from near to far: image I1 is focused at the nearest position, and image I8 is focused at infinity.
Step 12: generate the local variance map corresponding to each of the images {I1, I2, ..., IN}.
In a specific implementation, when the images {I1, I2, ..., IN} are obtained by repeatedly adjusting the focus position, the change of focus position and camera shake usually break the correspondence between pixels across the captured images. For example, in Fig. 2, pixels at the same position in images I1 to I8 correspond to different scene locations.
Therefore, to reduce the influence of focus changes and camera shake on the pixel correspondence between images, when generating the local variance map corresponding to each of the images {I1, I2, ..., IN}, image alignment is first performed on the N acquired images {I1, I2, ..., IN}; the local variance of each aligned image is then computed, yielding the local variance map corresponding to each of the images {I1, I2, ..., IN}.
In a specific implementation, various methods can be used to align the N acquired images {I1, I2, ..., IN}. For example, Fig. 3 shows the result of aligning the eight images of Fig. 2 with an image alignment algorithm based on affine transformation, giving the aligned images {I1', I2', ..., I8'}, where image I1' is image I1 after alignment, image I2' is image I2 after alignment, ..., and image I8' is image I8 after alignment. After alignment, pixels at the same position in images I1' to I8' correspond to essentially the same scene location, i.e., the pixel correspondence between the aligned images is essentially consistent.
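Estimating the affine transform itself (e.g. with feature matching, or with OpenCV's findTransformECC) is outside the scope of a short sketch, but the resampling half of the alignment step can be illustrated as follows. This is a minimal pure-NumPy sketch under assumed conventions; the function name, the output-to-input mapping direction, and bilinear sampling with edge clamping are illustrative choices, not the patent's implementation:

```python
import numpy as np

def warp_affine(img, A, t):
    """Resample img under an affine map: output(y, x) = img(A @ (y, x) + t),
    with bilinear interpolation and clamping at the image border."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Inverse-map every output pixel to its source coordinate.
    src = np.tensordot(A, np.stack([ys, xs]), axes=1) + np.asarray(t)[:, None, None]
    y0 = np.clip(np.floor(src[0]).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(src[1]).astype(int), 0, w - 2)
    dy = np.clip(src[0] - y0, 0.0, 1.0)
    dx = np.clip(src[1] - x0, 0.0, 1.0)
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])
```

For a pure shift of two pixels, `warp_affine(img, np.eye(2), (0.0, 2.0))` moves content two pixels toward the origin, as an alignment correction would.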
In a specific implementation, to obtain the local variance map corresponding to each of the images {I1, I2, ..., IN}, after the images {I1, I2, ..., IN} are aligned, the local variance of the images is computed from each aligned image, i.e., the local variance of each pixel of the images {I1, I2, ..., IN} is calculated.
In one embodiment of the invention, the local variance g(x, y) of pixel (x, y) can be calculated by the following formula:

g(x, y) = (1/M) * Σ_{(x', y') ∈ Ω(x, y)} (I(x', y') - μ(x, y))^2    (1)

where M is the size of the local region Ω(x, y) centered on (x, y); (x', y') is a location variable ranging over the pixels of the region Ω(x, y); and μ(x, y) is the mean of the pixel values in the region Ω(x, y), satisfying:

μ(x, y) = (1/M) * Σ_{(x', y') ∈ Ω(x, y)} I(x', y')    (2)

From formulas (1) and (2), the local variance map corresponding to each of the images {I1, I2, ..., IN} can be obtained.
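Formulas (1) and (2) amount to a box-window variance at every pixel. A minimal NumPy sketch of computing one image's local variance map, using the identity g = E[I^2] - (E[I])^2 with integral images; the function name, reflect-padded borders, and window radius are illustrative assumptions that the patent does not fix:

```python
import numpy as np

def local_variance_map(img, radius=1):
    """Per-pixel variance over a (2*radius+1)^2 window centered on each pixel,
    i.e. formulas (1)-(2), via g = E[I^2] - (E[I])^2."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="reflect")

    def box_mean(a):
        # Integral image: each k x k box sum in O(1) per pixel.
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return (s[k:k+h, k:k+w] - s[:h, k:k+w]
                - s[k:k+h, :w] + s[:h, :w]) / (k * k)

    mu = box_mean(pad)                     # formula (2): local mean
    return box_mean(pad ** 2) - mu ** 2    # formula (1): local variance
```

Applied to each of the N aligned images in turn, this yields the N local variance maps.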
Step 13: search the local variance maps for the image in which each pixel's variance is largest, and take the object distance of that image as the depth estimate of the pixel.
In a specific implementation, a pixel (x, y) at the same position has a different variance value in each local variance map. The image distance of any target point is the distance from the image plane to the principal plane when that point is in focus, and the local variance around the pixel corresponding to the target point is largest exactly when the point is in focus. Therefore, after the local variance maps are obtained, the image in which each pixel's variance is largest is searched for among the local variance maps, and the object distance of that image is the depth estimate of the pixel.
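With the N local variance maps stacked into one array, Step 13 reduces to a per-pixel argmax followed by a lookup of the winning image's object distance. A minimal sketch; the array layout and function name are illustrative:

```python
import numpy as np

def depth_from_focus(variance_maps, object_distances):
    """variance_maps: array of shape (N, H, W), one local variance map per image.
    object_distances: the N focus (object) distances, ordered to match the stack.
    Returns the per-pixel depth estimate: the object distance of the image in
    which that pixel's local variance is largest."""
    best = np.argmax(variance_maps, axis=0)   # index of the sharpest image
    return np.asarray(object_distances, dtype=np.float64)[best]
```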
As described above, the image depth estimation method of the embodiment of the present invention obtains the depth estimate of each pixel by depth from focus: N images shot from the same viewing angle at the same focal length but different image distances are acquired; the corresponding local variance maps are generated from the acquired images; the image in which each pixel's variance is largest is found among the local variance maps; and the object distance of that image is taken as the depth estimate of the pixel. This not only reduces the complexity of the whole image depth estimation process but also reduces its hardware cost.
As shown in Fig. 4, an embodiment of the invention also provides a depth map generation method, which may include the following steps:
Step 41: calculate the depth estimate of each pixel.
In a specific implementation, the depth estimate of each pixel can be obtained using the image depth estimation method of the embodiment described above; see the description of the embodiment shown in Fig. 1 for details, which are not repeated here.
Step 42: obtain the corresponding raw depth estimation map from the depth estimates.
In a specific implementation, once the depth estimate of each pixel is obtained, the corresponding raw depth estimation map can be assembled.
Step 43: remove smooth regions from the raw depth estimation map to obtain the corresponding sparse depth estimation map.
In a specific implementation, the natural scene corresponding to the images {I1, I2, ..., IN} contains smooth regions, i.e., regions lacking texture, and the depth estimates in such smooth regions are unreliable. Therefore, to improve the accuracy of the depth map, the smooth regions are first removed from the raw depth estimation map, giving the corresponding sparse depth estimation map; the sparse depth estimation map is then further processed to obtain the complete depth map.
In a specific implementation, various methods can be used to remove smooth regions from the raw depth estimation map. For example, the average local variance map can first be obtained from the local variance maps, and a threshold operation can then be used to locate the smooth regions, yielding the sparse depth estimation map.
Specifically, according to formula (3), the local variance g_n(x, y) of pixel (x, y) is taken from each local variance map and averaged, giving the average local variance ḡ(x, y) of pixel (x, y), where n indexes the local variance map corresponding to image I_n, n ∈ [1, N]:

ḡ(x, y) = (1/N) * Σ_{n=1}^{N} g_n(x, y)    (3)

After the average local variance of each pixel is obtained, the average local variance map is available. Then the regions of pixels whose average local variance is below a preset value H are selected from the average local variance map, and the sparse depth estimation map is obtained according to the following formula:

d̂(x, y) = d(x, y), if ḡ(x, y) >= H; removed, otherwise    (4)

where d(x, y) is the raw depth estimate of pixel (x, y) and d̂(x, y) is the sparse depth estimate of pixel (x, y).
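Formulas (3) and (4) can be sketched directly. Marking the removed pixels as NaN is an illustrative convention here; the patent itself only specifies that smooth-region pixels are excluded:

```python
import numpy as np

def sparse_depth_map(raw_depth, variance_maps, H):
    """Formulas (3)-(4): average the N local variance maps and keep the raw
    depth only where the average local variance reaches the preset value H.
    Removed (smooth) pixels are marked NaN."""
    avg_var = np.mean(variance_maps, axis=0)                     # formula (3)
    return np.where(avg_var >= H, raw_depth, np.nan), avg_var    # formula (4)
```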
As shown in Fig. 5, Fig. 5a) and Fig. 5b) are, respectively, the raw depth estimation map and the average local variance map obtained when estimating depth for the input images {I1, I2, ..., I8} of Fig. 2. In Fig. 5b), the black regions are the selected smooth regions; removing the corresponding regions from Fig. 5a) yields the sparse depth estimation map shown in Fig. 5c).
Step 44: fill the sparse depth estimation map to obtain the complete depth map.
In a specific implementation, by assuming that adjacent pixels with similar luminance have similar depth estimates, filling the sparse depth estimation map diffuses the estimates of the reliable regions of the raw depth estimation map, i.e., the non-smooth regions, over the whole image, yielding the complete depth map.
In a specific implementation, many algorithms can be used to fill the sparse depth estimation map. In one embodiment of the invention, the aligned images are first averaged to obtain the mean image of the aligned images; the mean image is then used to fill the sparse depth estimation map, obtaining the complete depth map.
In a specific implementation, after the mean image is obtained, the entries of the matting Laplacian matrix L can be computed. L is an N*N matrix, where N here is the total number of pixels of the mean image; its (p, q)-th element describes the relation between the corresponding pixels p and q, where p and q are the one-dimensional coordinates corresponding to the two-dimensional image coordinates (x, y), and p and q range over the pixels of the local region Ω(x, y) centered on (x, y):

L(p, q) = Σ_{(x, y): p, q ∈ Ω(x, y)} ( δ_{p,q} - (1/M) * (1 + (I_p - μ_{x,y}) * (I_q - μ_{x,y}) / (g_{x,y} + ε)) )

where δ_{p,q} is the Kronecker delta; μ_{x,y} and g_{x,y} are, respectively, the mean and the variance over the region Ω(x, y); M is the size of Ω(x, y); ε is a regularisation parameter used to avoid division by zero, typically set to 1; and I_p and I_q are the pixel values of pixels p and q.
It should be noted that, in a specific implementation, each region corresponds to one window. By the definition above, if p and q are never in the same window, then L(p, q) is 0. L is therefore a large sparse matrix, and this property can be exploited to optimize the computation. In addition, the sparsity and the corresponding computational complexity are inversely related; to further reduce complexity, the entries of L can be computed using 2*2 overlapping windows.
After the entries of L are obtained, the sparse depth estimation map can be filled by minimizing the cost function E in formula (5) via quadratic programming:

E(d) = d^T * L * d + λ * (d - d̂)^T * U * (d - d̂)    (5)

where U is a diagonal matrix whose diagonal element is 1 when the corresponding pixel belongs to the reliable regions and 0 otherwise, d̂ is the (vectorized) sparse depth estimation map, and the scalar λ is a regulation parameter for the smoothness of the output image, which can be set by those skilled in the art according to the actual situation: the smaller λ is, the smoother the complete depth map.
According to formula (5), the optimal solution d is obtained when E takes its minimum value. By further derivation, the above quadratic programming problem can be reduced to a problem of the form Ax = b, which can be solved by an efficient LU decomposition, i.e.:

(L + λ * U) * d = λ * U * d̂    (6)

Substituting the sparse depth estimates of the pixels of the sparse depth estimation map into formula (6) and solving gives the depth estimate of each pixel in the complete depth map, and thereby the complete depth map itself.
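A small-scale sketch of this filling step: build the matting Laplacian as written above (dense, for clarity; a real implementation would exploit its sparsity and the 2*2 overlapping windows mentioned earlier) and solve formula (6) directly. The window radius, λ, and ε values below are illustrative assumptions:

```python
import numpy as np

def matting_laplacian(img, eps=1.0, win_rad=1):
    """Dense matting Laplacian of a grayscale image: for every local window,
    accumulate delta_pq - (1/M) * (1 + (Ip - mu)(Iq - mu) / (g + eps)) over
    all pixel pairs (p, q) in the window."""
    h, w = img.shape
    L = np.zeros((h * w, h * w))
    m = (2 * win_rad + 1) ** 2                    # M: window size
    for cy in range(win_rad, h - win_rad):
        for cx in range(win_rad, w - win_rad):
            ys, xs = np.mgrid[cy-win_rad:cy+win_rad+1, cx-win_rad:cx+win_rad+1]
            idx = (ys * w + xs).ravel()           # 1-D pixel coordinates
            vals = img[ys, xs].ravel().astype(np.float64)
            mu, g = vals.mean(), vals.var()
            A = 1.0 + np.outer(vals - mu, vals - mu) / (g + eps)
            L[np.ix_(idx, idx)] += np.eye(m) - A / m
    return L

def fill_sparse_depth(mean_img, sparse_d, reliable, lam=0.1, eps=1.0):
    """Solve formula (6), (L + lam*U) d = lam * U * d_hat, where U marks the
    reliable (non-smooth) pixels and d_hat is the sparse depth map."""
    L = matting_laplacian(mean_img, eps=eps)
    u = reliable.ravel().astype(np.float64)       # diagonal of U
    b = lam * u * np.nan_to_num(sparse_d).ravel()
    d = np.linalg.solve(L + lam * np.diag(u), b)
    return d.reshape(mean_img.shape)
```

On a constant mean image with two reliable pixels of equal depth, the solution diffuses that depth to every pixel, which is the intended behaviour of the fill.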
Fig. 6 shows the complete depth map generated for the input images {I1, I2, ..., I8} of Fig. 2 by the depth map generation method of the embodiment of the present invention. As can be seen from Fig. 6, the complete depth map both ensures that adjacent regions of similar luminance have similar depth values and preserves the depth discontinuities at the edge positions of the input images.
As shown in Fig. 7, to illustrate the depth map generation method of the embodiment of the present invention more clearly, an embodiment of the invention also provides a schematic diagram of the generation process of the depth map. Taking the image results of the different stages as the main thread and combining the different processing operations, the generation process of the depth map is described with reference to Fig. 7:
The input images 71, shot from the same viewing angle at the same focal length but different image distances, are aligned to obtain the corresponding aligned images 72. The local variance of each aligned image 72 is computed, giving the corresponding local variance maps 73. Depth from focus is then applied to the local variance maps 73, giving the raw depth estimation map 74.
To further obtain the complete depth map 75, after the raw depth estimation map 74 is obtained, the local variance maps 73 are first averaged to obtain the average local variance map 76. A threshold operation is then performed on the raw depth estimation map 74 and the average local variance map 76, giving the sparse depth estimation map 77. The aligned images are averaged to obtain the corresponding mean image 78. Finally, the mean image 78 and the sparse depth estimation map 77 are used to obtain the complete depth map 75.
To help those skilled in the art better understand and implement the present invention, the devices corresponding to the above methods are described in detail below.
As shown in Fig. 8, an embodiment of the invention provides an image depth estimation device 80. The device 80 may include an image acquisition unit 81, a local variance map generation unit 82, and a depth calculation unit 83. Wherein:
the image acquisition unit 81 is adapted to acquire N images {I1, I2, ..., IN} shot from the same viewing angle at the same focal length but different image distances, where N >= 3 and N is a positive integer; the local variance map generation unit 82 is adapted to generate the local variance map corresponding to each of the images {I1, I2, ..., IN}; and the depth calculation unit 83 is adapted to search the local variance maps for the image in which each pixel's variance is largest, and to take the object distance of that image as the depth estimate of the pixel.
In a specific implementation, the local variance map generation unit 82 may include an alignment subunit 821 and a calculation subunit 822. The alignment subunit 821 is adapted to perform image alignment on the N acquired images {I1, I2, ..., IN}. The calculation subunit 822 is adapted to compute the local variance of each aligned image, obtaining the local variance map corresponding to each of the images {I1, I2, ..., IN}.
As shown in Fig. 9, an embodiment of the invention also provides a depth map generation device 90. The device 90 may include an image depth estimation unit 91, a first image generation unit 92, a second image generation unit 93, and a third image generation unit 94. Wherein:
the image depth estimation unit 91 is adapted to calculate the depth estimate of each pixel; the first image generation unit 92 is adapted to obtain the corresponding raw depth estimation map from the depth estimates; the second image generation unit 93 is adapted to remove smooth regions from the raw depth estimation map, obtaining the corresponding sparse depth estimation map; and the third image generation unit 94 is adapted to fill the sparse depth estimation map, obtaining the complete depth map.
In a specific implementation, the image depth estimation unit 91 can include: an image acquisition subunit 911, a local variance map generation subunit 912, and a depth calculation subunit 913. The image acquisition subunit 911 is adapted to obtain N images {I1, I2, …, IN} shot at the same viewing angle and the same focal length but at different image distances, where N ≥ 3 and N is a positive integer. The local variance map generation subunit 912 is adapted to generate the corresponding local variance maps from the images {I1, I2, …, IN}, respectively. The depth calculation subunit 913 is adapted to search the local variance maps for the image in which each pixel's variance is maximal, and to take the object distance corresponding to that image as the depth estimate of the pixel.
The local variance map generation subunit 912 can include: an alignment module (not shown) and a computation module (not shown). The alignment module is adapted to perform image alignment processing on the acquired N images {I1, I2, …, IN}. The computation module is adapted to compute, from each aligned image, the local variance of the images {I1, I2, …, IN}, respectively, to obtain the local variance maps corresponding to the images {I1, I2, …, IN}.
In a specific implementation, the second image generation unit 93 can include: a computation subunit 931, a selection subunit 932, and a processing subunit 933. The computation subunit 931 is adapted to average the variances of each pixel, respectively, to obtain an average local variance map corresponding to the local variance maps. The selection subunit 932 is adapted to select, in the original depth estimation map, the region corresponding to pixels whose value is below a preset value, and to take the selected region as the smooth region. The processing subunit 933 is adapted to remove the smooth region from the original depth estimation map, and to take the original depth estimation map after removal as the sparse depth estimation map.
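The flow through subunits 931–933 can be sketched as follows (a minimal sketch assuming the preset threshold tau is applied to the average local variance map computed by subunit 931, and that removed pixels are marked as NaN holes; these encodings are assumptions, not from the patent):

```python
import numpy as np

def remove_smooth_regions(depth0, variance_maps, tau):
    """Average the N local variance maps; pixels whose average variance
    falls below the preset value tau are treated as smooth (focus is
    uninformative there) and removed from the original depth estimation
    map, leaving a sparse depth estimation map."""
    avg_var = np.mean(np.stack(variance_maps), axis=0)   # subunit 931
    sparse = np.asarray(depth0, dtype=np.float64).copy()
    sparse[avg_var < tau] = np.nan   # subunits 932/933: mark smooth pixels as holes
    return sparse
```

Removing low-variance pixels is the natural complement of the variance-argmax rule above: where no image in the stack ever shows contrast, the argmax is essentially arbitrary, so those depth estimates are discarded.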
In a specific implementation, the third image generation unit 94 can include: an averaging subunit 941 and a filling subunit 942. The averaging subunit 941 is adapted to average the aligned images to obtain a mean image of the aligned images. The filling subunit 942 is adapted to use the mean image to perform filling processing on the sparse depth estimation map to obtain the complete depth map.
In a specific implementation, when performing filling processing on the sparse depth estimation map, the filling subunit 942 can use 2×2 overlapping windows to fill the sparse depth estimation map.
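One way such mean-image-guided filling over overlapping 2×2 windows could look (an illustrative sketch only; the propagation rule — a hole pixel copies the depth of the valid neighbor whose mean-image intensity is closest to its own — is an assumption, not the patent's specified procedure; note that the four 2×2 windows containing a pixel together cover its 8-neighborhood):

```python
import numpy as np

def fill_sparse_depth(sparse, mean_image, max_iters=100):
    """Iteratively fill NaN holes in the sparse depth map, guided by the
    mean image: each hole takes the depth of the valid 8-neighbor (i.e., a
    neighbor within some overlapping 2x2 window) whose mean-image intensity
    is closest to the hole's own intensity."""
    depth = np.asarray(sparse, dtype=np.float64).copy()
    guide = np.asarray(mean_image, dtype=np.float64)
    h, w = depth.shape
    for _ in range(max_iters):
        holes = np.argwhere(np.isnan(depth))
        if holes.size == 0:
            break
        new_depth = depth.copy()
        changed = False
        for y, x in holes:
            best_d, best = np.inf, None
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if np.isnan(depth[ny, nx]):
                        continue   # only propagate from already-valid pixels
                    d = abs(guide[ny, nx] - guide[y, x])
                    if d < best_d:
                        best_d, best = d, depth[ny, nx]
            if best is not None:
                new_depth[y, x] = best
                changed = True
        depth = new_depth
        if not changed:
            break
    return depth
```

Guiding the fill by the mean image tends to keep depth discontinuities aligned with intensity edges, so holes inside one object are filled from that object rather than from across its boundary.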
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware, and that the program can be stored in a computer-readable storage medium. The storage medium can include: a ROM, a RAM, a magnetic disk, an optical disc, or the like.
Although the present disclosure is described above, the present invention is not limited thereto. Any person skilled in the art can make various changes or modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention should be defined by the scope of the claims.
Claims (14)
1. An image depth estimation method, characterized by comprising:
obtaining N images {I1, I2, …, IN} shot at the same viewing angle and the same focal length but at different image distances, where N ≥ 3 and N is a positive integer;
generating the corresponding local variance maps from the images {I1, I2, …, IN}, respectively;
searching the local variance maps for the image in which each pixel's variance is maximal, and taking the object distance corresponding to that image as the depth estimate of the pixel.
2. The image depth estimation method according to claim 1, characterized in that generating the corresponding local variance maps from the images {I1, I2, …, IN}, respectively, comprises:
performing image alignment processing on the acquired N images {I1, I2, …, IN};
computing, from each aligned image, the local variance of the images {I1, I2, …, IN}, respectively, to obtain the local variance maps corresponding to the images {I1, I2, …, IN}.
3. A depth map generation method, characterized by comprising:
calculating the depth estimate of each pixel, including: obtaining N images {I1, I2, …, IN} shot at the same viewing angle and the same focal length but at different image distances, where N ≥ 3 and N is a positive integer; generating the corresponding local variance maps from the images {I1, I2, …, IN}, respectively; searching the local variance maps for the image in which each pixel's variance is maximal, and taking the object distance corresponding to that image as the depth estimate of the pixel;
obtaining a corresponding original depth estimation map from the depth estimates;
removing smooth regions from the original depth estimation map to obtain a corresponding sparse depth estimation map;
performing filling processing on the sparse depth estimation map to obtain a complete depth map.
4. The depth map generation method according to claim 3, characterized in that generating the corresponding local variance maps from the images {I1, I2, …, IN}, respectively, comprises:
performing image alignment processing on the acquired N images {I1, I2, …, IN};
computing, from each aligned image, the local variance of the images {I1, I2, …, IN}, respectively, to obtain the local variance maps corresponding to the images {I1, I2, …, IN}.
5. The depth map generation method according to claim 4, characterized in that removing smooth regions from the original depth estimation map to obtain the corresponding sparse depth estimation map comprises:
averaging the variances of each pixel, respectively, to obtain an average local variance map corresponding to the local variance maps;
selecting, in the average local variance map, the region corresponding to pixels whose value is below a preset value, and taking the selected region as the smooth region;
removing the smooth region from the original depth estimation map, and taking the original depth estimation map after removal as the sparse depth estimation map.
6. The depth map generation method according to claim 4, characterized in that performing filling processing on the sparse depth estimation map to obtain the complete depth map comprises:
averaging the aligned images to obtain a mean image of the aligned images;
using the mean image to perform filling processing on the sparse depth estimation map to obtain the complete depth map.
7. The depth map generation method according to claim 6, characterized in that performing filling processing on the sparse depth estimation map comprises: filling the sparse depth estimation map using 2×2 overlapping windows.
8. An image depth estimation device, characterized by comprising:
an image acquisition unit, adapted to obtain N images {I1, I2, …, IN} shot at the same viewing angle and the same focal length but at different image distances, where N ≥ 3 and N is a positive integer;
a local variance map generation unit, adapted to generate the corresponding local variance maps from the images {I1, I2, …, IN}, respectively;
a depth calculation unit, adapted to search the local variance maps for the image in which each pixel's variance is maximal, and to take the object distance corresponding to that image as the depth estimate of the pixel.
9. The image depth estimation device according to claim 8, characterized in that the local variance map generation unit includes:
an alignment subunit, adapted to perform image alignment processing on the acquired N images {I1, I2, …, IN};
a computation subunit, adapted to compute, from each aligned image, the local variance of the images {I1, I2, …, IN}, respectively, to obtain the local variance maps corresponding to the images {I1, I2, …, IN}.
10. A depth map generation device, characterized by comprising:
an image depth estimation unit, adapted to calculate the depth estimate of each pixel, including: an image acquisition subunit, adapted to obtain N images {I1, I2, …, IN} shot at the same viewing angle and the same focal length but at different image distances, where N ≥ 3 and N is a positive integer; a local variance map generation subunit, adapted to generate the corresponding local variance maps from the images {I1, I2, …, IN}, respectively; a depth calculation subunit, adapted to search the local variance maps for the image in which each pixel's variance is maximal, and to take the object distance corresponding to that image as the depth estimate of the pixel;
a first image generation unit, adapted to obtain a corresponding original depth estimation map from the depth estimates;
a second image generation unit, adapted to remove smooth regions from the original depth estimation map to obtain a corresponding sparse depth estimation map;
a third image generation unit, adapted to perform filling processing on the sparse depth estimation map to obtain a complete depth map.
11. The depth map generation device according to claim 10, characterized in that the local variance map generation subunit includes:
an alignment module, adapted to perform image alignment processing on the acquired N images {I1, I2, …, IN};
a computation module, adapted to compute, from each aligned image, the local variance of the images {I1, I2, …, IN}, respectively, to obtain the local variance maps corresponding to the images {I1, I2, …, IN}.
12. The depth map generation device according to claim 11, characterized in that the second image generation unit includes:
a computation subunit, adapted to average the variances of each pixel, respectively, to obtain an average local variance map corresponding to the local variance maps;
a selection subunit, adapted to select, in the original depth estimation map, the region corresponding to pixels whose value is below a preset value, and to take the selected region as the smooth region;
a processing subunit, adapted to remove the smooth region from the original depth estimation map, and to take the original depth estimation map after removal as the sparse depth estimation map.
13. The depth map generation device according to claim 11, characterized in that the third image generation unit includes:
an averaging subunit, adapted to average the aligned images to obtain a mean image of the aligned images;
a filling subunit, adapted to use the mean image to perform filling processing on the sparse depth estimation map to obtain the complete depth map.
14. The depth map generation device according to claim 13, characterized in that the filling subunit is adapted to use 2×2 overlapping windows to perform filling processing on the sparse depth estimation map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510859849.9A CN106815865A (en) | 2015-11-30 | 2015-11-30 | Image depth estimation method, depth drawing generating method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510859849.9A CN106815865A (en) | 2015-11-30 | 2015-11-30 | Image depth estimation method, depth drawing generating method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106815865A (en) | 2017-06-09 |
Family
ID=59155834
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510859849.9A Pending CN106815865A (en) | 2015-11-30 | 2015-11-30 | Image depth estimation method, depth drawing generating method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106815865A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101312539A (en) * | 2008-07-03 | 2008-11-26 | 浙江大学 | Hierarchical image depth extracting method for three-dimensional television |
CN102103248A (en) * | 2009-12-21 | 2011-06-22 | 索尼公司 | Autofocus with confidence measure |
US20110181770A1 (en) * | 2010-01-27 | 2011-07-28 | Zoran Corporation | Depth from defocus calibration |
CN102509294A (en) * | 2011-11-08 | 2012-06-20 | 清华大学深圳研究生院 | Single-image-based global depth estimation method |
CN103049906A (en) * | 2012-12-07 | 2013-04-17 | 清华大学深圳研究生院 | Image depth extraction method |
CN103559701A (en) * | 2013-09-26 | 2014-02-05 | 哈尔滨商业大学 | Two-dimensional single-view image depth estimation method based on DCT coefficient entropy |
- 2015-11-30: Application filed in China — CN201510859849.9A, patent CN106815865A (en), status: Pending
Non-Patent Citations (2)
Title |
---|
KROTKOV E P: "Focusing", International Journal of Computer Vision * |
JIANG Jing, ZHANG Xuesong: "Depth Estimation Methods Based on Computer Vision", Electro-Optic Technology Application * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10755428B2 (en) | Apparatuses and methods for machine vision system including creation of a point cloud model and/or three dimensional model | |
TWI567693B (en) | Method and system for generating depth information | |
US10129455B2 (en) | Auto-focus method and apparatus and electronic device | |
KR102149276B1 (en) | Method of image registration | |
JP5197279B2 (en) | Method for tracking the 3D position of an object moving in a scene implemented by a computer | |
US20160173869A1 (en) | Multi-Camera System Consisting Of Variably Calibrated Cameras | |
CN107316326B (en) | Edge-based disparity map calculation method and device applied to binocular stereo vision | |
Alagoz | Obtaining depth maps from color images by region based stereo matching algorithms | |
US9807372B2 (en) | Focused image generation single depth information from multiple images from multiple sensors | |
Im et al. | High quality structure from small motion for rolling shutter cameras | |
US20150178595A1 (en) | Image processing apparatus, imaging apparatus, image processing method and program | |
CN111127522B (en) | Depth optical flow prediction method, device, equipment and medium based on monocular camera | |
JP7378219B2 (en) | Imaging device, image processing device, control method, and program | |
CN104506775A (en) | Image collection jitter removing method and device based on stereoscopic visual matching | |
CN103177432A (en) | Method for obtaining panorama by using code aperture camera | |
US20080226159A1 (en) | Method and System For Calculating Depth Information of Object in Image | |
CN110443228B (en) | Pedestrian matching method and device, electronic equipment and storage medium | |
KR100943635B1 (en) | Method and apparatus for generating disparity map using digital camera image | |
CA2796543A1 (en) | System and method for performing depth estimation utilizing defocused pillbox images | |
CN105335959B (en) | Imaging device quick focusing method and its equipment | |
CN107909611A (en) | A kind of method using differential geometric theory extraction space curve curvature feature | |
US20090316994A1 (en) | Method and filter for recovery of disparities in a video stream | |
CN104754316A (en) | 3D imaging method and device and imaging system | |
CN106815865A (en) | Image depth estimation method, depth drawing generating method and device | |
JP2016148588A (en) | Depth estimation model generation device and depth estimation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170609 |