CN102509294A - Single-image-based global depth estimation method - Google Patents
- Legal status: Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a single-image-based global depth estimation method, which comprises the following steps: 1) applying Gaussian blurring to a single original image to be processed to obtain a blurred image; 2) converting the original image and the blurred image from the three-channel red, green and blue (RGB) color space to a single-channel gray-value space; 3) computing, from the gray values, the gray variance value of each pixel in the original image and in the blurred image; 4) calculating, from the gray variance values, the variance ratio corresponding to each pixel of the original image, thereby obtaining a depth information value at each pixel; 5) normalizing the per-pixel depth information values and converting them into an initial global depth map of the original image; and 6) applying bilateral filtering to the initial global depth map obtained in step 5) to obtain the final global depth map of the original image. Because the method draws on the information of every pixel, it effectively gathers information over the global range, ensuring the accuracy of the resulting final global depth map.
Description
Technical field
The present invention relates to depth estimation methods in the field of computer vision, and in particular to a global depth estimation method based on a single image.
Background technology
A depth estimation method estimates the depth information of each pixel in an image to be processed and produces a global depth map of that image; such maps play an important role in computer vision and computer graphics applications. Some depth estimation approaches depend on processing several out-of-focus images: the same scene is shot under different camera parameters to obtain a set of images, and blur parameters are then measured through various blur-processing steps. In practice, however, these methods suffer from problems such as occlusion and the requirement that the scene be static, which greatly limits their range of application in real scenes.
Other methods extract a depth map from a single image. For example, one class of methods performs edge detection on the original image and on a Gaussian-blurred version of it, computes the first-order gradient distribution of the corresponding edge images, takes the ratio of the edge gradients under the corresponding gradient operator to obtain an initial sparse depth map, applies joint bilateral filtering, and then uses Matting-Laplacian interpolation to obtain a global depth map. In such methods only the edge pixels contribute measured depth information; the depth of the large central portions of the image is filled in by interpolation, so the resulting global depth map underuses the information of the individual pixels and is inaccurate. Another class of methods partitions the image into 16 x 16 macroblocks, applies a wavelet transform to obtain 256 coefficient values, and estimates relative depth from the number of non-zero wavelet coefficients. The depth map obtained this way assigns depth values per macroblock rather than per pixel; it still estimates the global depth from only a subset of the pixels, likewise underuses per-pixel information, and yields discontinuous depth information rather than a truly complete global depth map. Moreover, the algorithms involved in this method are relatively complex and computationally expensive.
Summary of the invention
The technical problem to be solved by the present invention is to remedy the deficiencies of the prior art described above by proposing a global depth estimation method based on a single image that makes full use of the information of every pixel in the image, thereby ensuring that the final global depth map is more accurate.
The technical problem of the present invention is solved by the following technical scheme:
A global depth estimation method based on a single image comprises the following steps: 1) apply Gaussian blurring to the single original image to be processed to obtain a blurred image; 2) convert the original image and the blurred image obtained in step 1) from the three-channel RGB color space to a single-channel gray-value space; 3) from the gray values obtained in step 2), compute the gray variance value of each pixel in the original image and in the blurred image; 4) from the gray variance values obtained in step 3), calculate the variance ratio at each pixel of the original image, thereby obtaining the depth information value at each pixel of the original image; 5) normalize the per-pixel depth information values obtained in step 4) and convert them into an initial global depth map of the original image; 6) apply bilateral filtering to the initial depth map obtained in step 5) to obtain the final global depth map of the original image.
The beneficial effects of the present invention over the prior art are:
The single-image global depth estimation method of the present invention uses blur processing and obtains depth information through a ratio. When building the depth map, the depth at each pixel is estimated from that pixel's own information, rather than estimating the depth of all pixels from only a subset of them; the resulting global depth map uses the information of every pixel in the image and effectively gathers information over the global range, which ensures that the final global depth map is more accurate. At the same time, the method computes with image gray values: the gray values yield the gray variance values, which in turn yield the variance ratio that reflects depth. The computation involved is therefore simple and the computational cost low.
Description of drawings
Fig. 1 is the flowchart of the global depth estimation method in embodiment one of the present invention;
Fig. 2 is the flowchart of the global depth estimation method in embodiment two of the present invention.
Embodiment
The present invention is explained in further detail below in conjunction with the embodiments and the accompanying drawings.
Embodiment one
As shown in Fig. 1, the global depth estimation method in this embodiment comprises the following steps:
U1) Apply Gaussian blurring to the single original image to be processed to obtain a blurred image. As shown in Fig. 1, in this embodiment the Gaussian blur is applied directly along one direction, yielding a single blurred image. The direction can be set arbitrarily by the user.
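Step U1) can be sketched as a 1-D Gaussian convolution applied along the chosen direction. The following is a minimal NumPy illustration for an axis-aligned direction (the patent allows an arbitrary one); the function names and the sigma/radius choices are illustrative, not taken from the patent.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Sampled 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_along_axis(img, sigma=1.5, axis=1):
    """Blur a 2-D image along one axis by convolving each line
    with a 1-D Gaussian kernel; borders are edge-replicated so
    the output has the same shape as the input."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))

    def conv_line(line):
        padded = np.pad(line, len(k) // 2, mode="edge")
        return np.convolve(padded, k, mode="valid")

    return np.apply_along_axis(conv_line, axis, img.astype(np.float64))
```

Blurring along `axis=1` corresponds to the horizontal x direction, `axis=0` to the vertical y direction used in embodiment two.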
U2) Convert the original image and the directly blurred image from the three-channel RGB color space to a single-channel gray-value space.
In this embodiment the gray conversion is performed according to formula 5:
G(x,y) = 0.11 × R(x,y) + 0.59 × G'(x,y) + 0.3 × B(x,y);
where G(x,y) denotes the gray value of the pixel at coordinate (x,y) on the image after conversion, and R(x,y), G'(x,y), B(x,y) denote the luminance values of the R, G and B components of that pixel in the RGB color space before conversion. Of course, the gray-space conversion can also be performed in other ways, for instance with a formula whose coefficients are more precise; the formula chosen in this embodiment is merely a common and fairly suitable conversion formula.
Specifically, for the original image, substitute the luminance values of the R, G and B components of each pixel into the above formula to obtain the gray value of each pixel in the original image. For the blurred image obtained in step U1), do the same to obtain the gray value of each pixel in the blurred image.
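The conversion of step U2) can be sketched directly from formula 5. Note that the patent's coefficients (0.11 for R, 0.59 for G, 0.3 for B) differ from the common BT.601 weights (0.299/0.587/0.114, with R and B apparently swapped); the sketch keeps the coefficients exactly as the patent states them.

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB image to single-channel gray
    values using the patent's formula 5:
    G = 0.11*R + 0.59*G' + 0.3*B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.11 * r + 0.59 * g + 0.3 * b
```

Applied to the original image and to each blurred image, this yields the gray-value arrays used in the later steps.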
U3) From the gray values obtained in step U2), compute the gray variance value of each pixel in the original image and in the blurred image.
In this embodiment the gray variance value is computed over a diamond block. The chosen diamond block takes the current pixel to be computed as its center and the four adjacent pixels around the current pixel as its vertices.
For a current pixel at coordinate (x, y) in the central part of the image, which can form a diamond block, select the four neighbouring points (x-1, y), (x+1, y), (x, y-1) and (x, y+1) — five pixels in total — and compute the gray variance value Var_0 of the current pixel according to formula 6:
Var_0 = [(G1 - E)² + (G2 - E)² + (G3 - E)² + (G4 - E)² + (G5 - E)²] / 5;
where G1, G2, G3, G4 and G5 denote the gray values of the five pixels above, and E denotes their mean gray value, E = (G1 + G2 + G3 + G4 + G5) / 5.
For a current pixel at coordinate (x1, y1) on a horizontal edge line, which cannot form a diamond block, the gray variance value of the vertically adjacent pixel (x1, y1-1) or (x1, y1+1) is used instead as the gray variance value of the current pixel (x1, y1).
For a current pixel at coordinate (x2, y2) on a vertical edge line, which cannot form a diamond block, the gray variance value of the horizontally adjacent pixel (x2-1, y2) or (x2+1, y2) is used instead as the gray variance value of the current pixel (x2, y2).
Specifically, for the original image, each pixel either forms a diamond block or takes a neighbouring pixel's value in the manner described above, yielding the gray variance value of every pixel in the original image. The same procedure applied to the blurred image yields the gray variance value of every pixel in the blurred image.
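Step U3) can be sketched as follows, assuming an image of at least 3 x 3 pixels and a population variance (division by 5, per the reconstructed formula 6). The handling of border pixels follows the edge-line rules above; which of the two admissible neighbours is copied is an implementation choice.

```python
import numpy as np

def diamond_variance(gray):
    """Per-pixel gray variance over the 5-point diamond
    (the pixel plus its 4 neighbours). Pixels on edge lines,
    which cannot form a diamond, copy the variance of the
    adjacent interior pixel as described in the patent."""
    g = np.asarray(gray, dtype=np.float64)
    var = np.empty_like(g)
    # interior: stack the five diamond members, variance per pixel
    c = g[1:-1, 1:-1]
    stack = np.stack([c, g[:-2, 1:-1], g[2:, 1:-1],
                      g[1:-1, :-2], g[1:-1, 2:]])
    var[1:-1, 1:-1] = stack.var(axis=0)      # mean squared deviation / 5
    # vertical edge lines: copy the horizontally adjacent variance
    var[1:-1, 0] = var[1:-1, 1]
    var[1:-1, -1] = var[1:-1, -2]
    # horizontal edge lines (and corners): copy the vertical neighbour
    var[0, :] = var[1, :]
    var[-1, :] = var[-2, :]
    return var
```

Running it once on the original gray image and once on the blurred gray image gives the two variance maps that step U4) divides.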
U4) From the gray variance values obtained in step U3), calculate the variance ratio at each pixel of the original image, thereby obtaining the depth information value at each pixel of the original image.
In this embodiment, step U1) applies Gaussian blurring directly along an arbitrary direction, yielding one original image and one blurred image. After steps U2) and U3) have produced the gray variance value of each pixel in the original image and in the blurred image, substituting the gray variance values of corresponding pixels of the two images into formula 1 gives the variance ratio R of each pixel of the original image. Formula 1 is:
R = Var / Var_1;
where Var is the variance value of a pixel in the original image and Var_1 is the variance value of the corresponding pixel in the blurred image. Substituting the variance values of each pixel in turn yields the variance ratio R of every pixel.
Within a (block) window of a given size, the luminance values of the pixels in a blurred region are close to one another — the region appears as coarse structure — and the variance is low; in a sharp region the luminance values of the pixels differ — the region appears as fine structure — and the variance is high. Thus, after blur processing, comparing the variance of each pixel in the original image with the variance of the corresponding pixel in the blurred image yields a smaller variance ratio in regions where the original image is already blurry and a larger ratio in regions where it is sharp. This also means that the smaller the variance ratio R, the blurrier the image, the farther the scene is from the lens, and the greater the depth; conversely, the larger R, the sharper the image, the closer the scene is to the lens, and the smaller the depth value. The variance ratio R at each pixel is therefore in an approximately linear relationship with the depth at the corresponding pixel of the original image: the larger R, the smaller the depth; the smaller R, the greater the depth. Obtaining the variance ratios thus amounts to obtaining the depth information value at each pixel.
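The ratio of formula 1 can be sketched as an elementwise division of the two variance maps. The small `eps` guard against division by zero in perfectly flat regions is an implementation choice, not part of the patent.

```python
import numpy as np

def variance_ratio(var_orig, var_blur, eps=1e-8):
    """Per-pixel variance ratio R = Var / Var_1 (formula 1).
    Large R -> sharp region -> scene near the lens (small depth);
    small R -> blurry region -> scene far away (large depth)."""
    var_orig = np.asarray(var_orig, dtype=np.float64)
    var_blur = np.asarray(var_blur, dtype=np.float64)
    return var_orig / (var_blur + eps)
```

Because blurring lowers variance, R is at least about 1 in sharp regions and close to 1 in regions that were already blurry.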
U5) Normalize the per-pixel depth information values obtained in step U4) and convert them into the initial global depth map of the original image.
In this embodiment the normalization is performed according to formulas 7 and 8.
Formula 7 is: y = (x - MinValue) / (MaxValue - MinValue);
Formula 8 is: Y = y × 255;
where the depth information value of the pixel to be converted is substituted for x in formula 7, y denotes the conversion intermediate value, and Y denotes the converted depth information value of the pixel, i.e. the gray value in the initial global depth map; MaxValue and MinValue are respectively the maximum and minimum of the depth information values over all pixels. The gray values Y obtained from the conversion turn the depth information values into a depth information map, i.e. the initial global depth map of the original image.
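The normalization of step U5) can be sketched as below, under the min-max reading of formula 7 (the formula body is reconstructed from the MaxValue/MinValue definitions) and assuming the depth values are not all equal.

```python
import numpy as np

def depth_to_gray(depth):
    """Formulas 7 and 8: y = (x - MinValue) / (MaxValue - MinValue),
    then Y = y * 255, mapping the per-pixel depth information values
    onto the 8-bit gray levels of the initial global depth map."""
    depth = np.asarray(depth, dtype=np.float64)
    lo, hi = depth.min(), depth.max()
    y = (depth - lo) / (hi - lo)   # formula 7
    return y * 255.0               # formula 8
```

The pixel with the minimum depth value maps to gray level 0 and the maximum to 255.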
U6) Apply bilateral filtering to the initial depth map obtained in step U5) to obtain the final global depth map of the original image. The bilateral filter is based on spatially distributed Gaussian functions and has an edge-preserving denoising property: it effectively filters out noise while preserving edge information well, ensuring that the final global depth map contains little noise and that the depth information it reflects is more accurate.
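Step U6) can be sketched as a brute-force bilateral filter; the radius and the two sigmas are illustrative defaults, not values from the patent (a library routine such as OpenCV's bilateral filter would serve the same purpose).

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Each output pixel is an average of its neighbourhood,
    weighted by a spatial Gaussian (sigma_s) and an intensity
    Gaussian (sigma_r): noise is smoothed, edges are kept."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-((patch - img[i, j]) ** 2)
                            / (2 * sigma_r ** 2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a sharp depth discontinuity the intensity term suppresses contributions from across the edge, which is why the edge survives the smoothing.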
In this embodiment, the depth at each pixel is estimated from that pixel's own information when building the depth map, rather than estimating the global depth from only a subset of the pixels; information over the global range is thus gathered effectively, ensuring that the final global depth map is more accurate. At the same time, the depth information is obtained by computing with image gray values: the gray values yield the gray variance values, which in turn yield the variance ratio that reflects depth, so the computation involved is simple and the computational cost low.
Embodiment two
This embodiment differs from embodiment one as follows: here the Gaussian blurring is applied along both the horizontal x direction and the vertical y direction, and when the variance ratio is calculated, the variance ratio R at each pixel of the original image is obtained by weighting the x-direction variance ratio Rx and the y-direction variance ratio Ry.
As shown in Fig. 2, the global depth estimation method in this embodiment comprises the following steps:
W1) Apply Gaussian blurring to the single original image to be processed to obtain blurred images. As shown in Fig. 2, in this embodiment the blur is applied separately along the horizontal x direction and the vertical y direction, yielding two blurred images: an x-direction blurred image and a y-direction blurred image.
W2) Convert the original image, the x-direction blurred image and the y-direction blurred image from the three-channel RGB color space to a single-channel gray-value space.
The gray conversion in this embodiment is the same as in embodiment one, i.e. it is performed according to formula 5; the specific formula and conversion process are not repeated here. After the conversion, the gray value of each pixel in the original image, in the x-direction blurred image and in the y-direction blurred image is obtained.
W3) From the gray values obtained in step W2), compute the gray variance value of each pixel in the original image, the x-direction blurred image and the y-direction blurred image.
The gray variance values in this embodiment are obtained in the same way as in embodiment one: pixels that can form a diamond block do so, and the variance is computed according to formula 6; pixels on an edge line that cannot form a diamond block take the gray variance value of a neighbouring pixel. The specific formula and computation are not repeated here. After step W3), the gray variance value of each pixel in the original image, in the x-direction blurred image and in the y-direction blurred image is obtained.
W4) From the gray variance values of each pixel in the three images obtained in step W3), calculate the variance ratio at each pixel of the original image, thereby obtaining the depth information value at each pixel of the original image.
In this embodiment, step W1) applies Gaussian blurring along the horizontal x direction and the vertical y direction, yielding the original image and two blurred images. After steps W2) and W3) have produced the gray variance value of each pixel in the original image, the x-direction blurred image and the y-direction blurred image, substituting the corresponding gray variance values of each pixel of the three images into formula 2 gives the x-direction variance ratio Rx and the y-direction variance ratio Ry. The variance ratio R of each pixel of the original image is then obtained by weighting Rx with the x-direction weight wx and Ry with the y-direction weight wy.
Formula 2 is:
Rm = Var / Var_1m;
where Var is the variance value of a pixel in the original image; when the variance value Var_1x of the corresponding pixel in the x-direction blurred image is substituted for Var_1m, the resulting ratio Rm is the x-direction variance ratio Rx; when the variance value Var_1y of the corresponding pixel in the y-direction blurred image is substituted for Var_1m, the resulting ratio Rm is the y-direction variance ratio Ry.
In this embodiment the x-direction weight wx and the y-direction weight wy are calculated from the gray values according to formula 3, where N denotes the total number of pixels in the original image and Gi denotes the gray value of the i-th pixel in the original image. When m takes the value x, Gmi = Gxi, i.e. the N gray values of the pixels in the x-direction blurred image are substituted in turn, and the resulting value wm is the x-direction weight wx; when m takes the value y, Gmi = Gyi, i.e. the N gray values of the pixels in the y-direction blurred image are substituted in turn, and the resulting value wm is the y-direction weight wy.
After the x-direction variance ratio Rx, the y-direction variance ratio Ry, the x-direction weight wx and the y-direction weight wy have been obtained, the variance ratio R of each pixel in the original image is obtained by weighted combination according to formula 4.
As in embodiment one, the magnitude of the variance ratio R at each pixel reflects the depth information value at that pixel. In addition, because the blur processing in this embodiment is carried out along both the x and y directions and the variance ratio is obtained by weighting the ratios of the two directions, the information of the image in each direction is fully used, ensuring that the final global depth map is more accurate than the result obtained from the single-direction blur processing of embodiment one.
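The weighting step of W4) might be sketched as follows. Note the assumptions: the weights wx and wy are taken as given (formula 3 is not reproduced in this text), and formula 4 is assumed to be a normalized weighted sum, which is one plausible reading of "weighted combination" but is not confirmed by the text.

```python
import numpy as np

def combine_ratios(rx, ry, wx, wy):
    """Combine the x- and y-direction variance ratios with their
    directional weights. A normalized weighted sum is assumed
    here, since formula 4 is not reproduced in the source text."""
    rx = np.asarray(rx, dtype=np.float64)
    ry = np.asarray(ry, dtype=np.float64)
    return (wx * rx + wy * ry) / (wx + wy)
```

With equal weights this reduces to the plain average of Rx and Ry; a direction with a larger weight dominates the combined ratio.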
W5) Normalize the per-pixel depth information values obtained in step W4) and convert them into the initial global depth map of the original image.
The normalization in this embodiment is the same as in embodiment one, i.e. the depth information values are converted into the gray values of the depth map according to formulas 7 and 8; the specific formulas and computation are not repeated here.
W6) Apply bilateral filtering to the initial depth map obtained in step W5) to obtain the final global depth map of the original image. The bilateral filtering in this embodiment is the same as in embodiment one, likewise performing edge-preserving denoising: it effectively filters out noise while preserving edge information well.
In this embodiment, as in embodiment one, the depth at each pixel is estimated from that pixel's own information when building the depth map, so information over the global range is gathered effectively and the final global depth map is more accurate; likewise, the computation involved is simple and the computational cost low. In addition, because the blur processing here is carried out along both the x and y directions and the variance ratio is obtained by weighting the ratios of the two directions, this embodiment makes fuller use of the image's information in each direction than the single-direction processing of embodiment one, and the resulting global depth map is more accurate.
The above further explains the present invention in conjunction with specific preferred embodiments, but the concrete implementation of the present invention cannot be considered limited to these descriptions. For those of ordinary skill in the technical field to which the present invention belongs, substitutions or obvious variations made without departing from the concept of the present invention, and having the same performance or purpose, should all be regarded as falling within the protection scope of the present invention.
Claims (9)
1. A global depth estimation method based on a single image, characterized by comprising the following steps:
1) applying Gaussian blurring to the single original image to be processed to obtain a blurred image;
2) converting the original image and the blurred image obtained in step 1) from the three-channel RGB color space to a single-channel gray-value space;
3) obtaining, from the gray values obtained in step 2), the gray variance value of each pixel in the original image and in the blurred image;
4) calculating, from the gray variance values obtained in step 3), the variance ratio at each pixel of the original image, thereby obtaining the depth information value at each pixel of the original image;
5) normalizing the per-pixel depth information values obtained in step 4) and converting them into an initial global depth map of the original image;
6) applying bilateral filtering to the initial depth map obtained in step 5) to obtain the final global depth map of the original image.
2. The global depth estimation method based on a single image according to claim 1, characterized in that: in said step 1) the Gaussian blurring is applied directly along a direction set arbitrarily by the user; in said step 4) the variance ratio R at each pixel of the original image is calculated according to formula 1:
R = Var / Var_1;
where Var is the variance value of a pixel in the original image and Var_1 is the variance value of the corresponding pixel in the blurred image.
3. The global depth estimation method based on a single image according to claim 1, characterized in that: in said step 1) the Gaussian blurring is applied separately along the horizontal x direction and the vertical y direction, obtaining an x-direction blurred image and a y-direction blurred image; said step 4) specifically comprises: obtaining, from the original-image variance values, x-direction blurred-image variance values and y-direction blurred-image variance values obtained in step 3), the x-direction variance ratio Rx and the y-direction variance ratio Ry respectively, and weighting Rx with the x-direction weight wx and Ry with the y-direction weight wy to obtain the variance ratio R at each pixel of the original image.
4. The global depth estimation method according to claim 3, characterized in that: the x-direction variance ratio Rx and the y-direction variance ratio Ry in said step 4) are calculated according to formula 2:
Rm = Var / Var_1m;
where Var is the variance value of a pixel in the original image; when the variance value Var_1x of the corresponding pixel in the x-direction blurred image is substituted for Var_1m, the resulting ratio Rm is the x-direction variance ratio Rx; when the variance value Var_1y of the corresponding pixel in the y-direction blurred image is substituted for Var_1m, the resulting ratio Rm is the y-direction variance ratio Ry.
5. The global depth estimation method according to claim 3, characterized in that: the x-direction weight wx and the y-direction weight wy in said step 4) are calculated from the gray values according to formula 3, where N denotes the total number of pixels in the original image and Gi denotes the gray value of the i-th pixel in the original image; when Gmi is substituted with the gray value Gxi of the i-th pixel in the x-direction blurred image, the resulting value wm is the x-direction weight wx; when Gmi is substituted with the gray value Gyi of the i-th pixel in the y-direction blurred image, the resulting value wm is the y-direction weight wy.
6. The global depth estimation method according to claim 3, characterized in that: the variance ratio R at each pixel of the original image in said step 4) is obtained by weighted combination according to formula 4.
7. The global depth estimation method according to any one of claims 1-6, characterized in that: the gray conversion in said step 2) is performed according to formula 5: G(x,y) = 0.11 × R(x,y) + 0.59 × G'(x,y) + 0.3 × B(x,y), where G(x,y) denotes the gray value of the pixel at coordinate (x,y) on the image after conversion, and R(x,y), G'(x,y), B(x,y) respectively denote the luminance values of the R, G and B components of the pixel at coordinate (x,y) before conversion.
8. The global depth estimation method according to any one of claims 1-6, characterized in that: the gray variance value of a pixel in the image in said step 3) is obtained as follows: a diamond block is chosen that takes the current pixel to be computed as its center and the four adjacent pixels around the current pixel as its vertices; for a current pixel that can form a diamond block, the gray values of the five pixels — the current pixel and its four adjacent pixels — are substituted into formula 6 to calculate the gray variance value Var_0 of the current pixel, formula 6 being:
Var_0 = [(G1 - E)² + (G2 - E)² + (G3 - E)² + (G4 - E)² + (G5 - E)²] / 5;
where G1, G2, G3, G4 and G5 denote the gray values of said five pixels and E denotes their mean gray value; for a current pixel on a horizontal edge line that cannot form a diamond block, the gray variance value of the vertically adjacent pixel is taken as the gray variance value of the current pixel; for a current pixel on a vertical edge line that cannot form a diamond block, the gray variance value of the horizontally adjacent pixel is taken as the gray variance value of the current pixel.
9. The global depth estimation method according to any one of claims 1-6, characterized in that: the normalization in said step 5) is performed according to formulas 7 and 8;
Formula 7 is: y = (x - MinValue) / (MaxValue - MinValue);
Formula 8 is: Y = y × 255;
where the depth information value of the pixel to be converted is substituted for x in formula 7, y denotes the conversion intermediate value, and Y denotes the converted depth information value of the pixel, i.e. the gray value in the initial global depth map; MaxValue and MinValue are respectively the maximum and minimum of the depth information values over all pixels.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110350229 CN102509294B (en) | 2011-11-08 | 2011-11-08 | Single-image-based global depth estimation method |
HK12107226.7A HK1166545A1 (en) | 2011-11-08 | 2012-07-23 | Overall depth estimation method based on the single image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110350229 CN102509294B (en) | 2011-11-08 | 2011-11-08 | Single-image-based global depth estimation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102509294A true CN102509294A (en) | 2012-06-20 |
CN102509294B CN102509294B (en) | 2013-09-25 |
Family
ID=46221372
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110350229 Active CN102509294B (en) | 2011-11-08 | 2011-11-08 | Single-image-based global depth estimation method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN102509294B (en) |
HK (1) | HK1166545A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101960860A (*) | 2007-11-09 | 2011-01-26 | Thomson Licensing | System and method for depth map extraction using region-based filtering |
CN102073993A (*) | 2010-12-29 | 2011-05-25 | Tsinghua University | Camera self-calibration-based jittering video deblurring method and device |
- 2011-11-08: CN application 201110350229 published as patent CN102509294B (status: active)
- 2012-07-23: HK application HK12107226.7A published as HK1166545A1 (status: not active, IP right cessation)
Non-Patent Citations (4)
Title |
---|
He Shuzhen: "Research on Ranging Algorithms for Defocused Images Based on Gray-Level Gradient", China Master's Theses Full-text Database, Information Science and Technology series, no. 02, 28 February 2009 (2009-02-28) * |
Zhang Shufang, Li Hua: "A New Depth Estimation Algorithm Based on a Single Defocused Image", Journal of Optoelectronics · Laser, vol. 17, no. 3, 31 March 2006 (2006-03-31) * |
Jiang Jing, Zhang Xuesong: "Depth Estimation Methods Based on Computer Vision", Electro-Optic Technology Application, vol. 26, no. 1, 28 February 2011 (2011-02-28) * |
Ma Xiangyin, Zha Hongbin: "Depth Discontinuity Estimation Based on a Single Image", Journal of Computer Applications, vol. 30, 31 December 2010 (2010-12-31) * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102903098A (en) * | 2012-08-28 | 2013-01-30 | 四川虹微技术有限公司 | Depth estimation method based on image definition difference |
CN102968814A (en) * | 2012-11-22 | 2013-03-13 | 华为技术有限公司 | Image rendering method and equipment |
CN102968814B (en) * | 2012-11-22 | 2015-11-25 | 华为技术有限公司 | A kind of method and apparatus of image rendering |
CN103049906B (en) * | 2012-12-07 | 2015-09-30 | 清华大学深圳研究生院 | A kind of image depth extracting method |
CN103049906A (en) * | 2012-12-07 | 2013-04-17 | 清华大学深圳研究生院 | Image depth extraction method |
EP2747028A1 (en) | 2012-12-18 | 2014-06-25 | Universitat Pompeu Fabra | Method for recovering a relative depth map from a single image or a sequence of still images |
CN103177440B (en) * | 2012-12-20 | 2015-09-16 | 香港应用科技研究院有限公司 | The system and method for synthetic image depth map |
CN103177440A (en) * | 2012-12-20 | 2013-06-26 | 香港应用科技研究院有限公司 | System and method of generating image depth map |
CN104537637A (en) * | 2014-11-11 | 2015-04-22 | 清华大学深圳研究生院 | Method and device for estimating depth of single static image |
CN104537637B (en) * | 2014-11-11 | 2017-06-16 | 清华大学深圳研究生院 | A kind of single width still image depth estimation method and device |
CN106815865A (en) * | 2015-11-30 | 2017-06-09 | 展讯通信(上海)有限公司 | Image depth estimation method, depth drawing generating method and device |
CN105957053A (en) * | 2016-04-19 | 2016-09-21 | 深圳创维-Rgb电子有限公司 | Two-dimensional image depth-of-field generating method and two-dimensional image depth-of-field generating device |
CN105957053B (en) * | 2016-04-19 | 2019-01-01 | 深圳创维-Rgb电子有限公司 | Two dimensional image depth of field generation method and device |
US10796442B2 (en) | 2016-04-19 | 2020-10-06 | Shenzhen Skyworth-Rgb Electronic Co., Ltd. | Two-dimensional image depth of field generation method and device |
CN105979244A (en) * | 2016-05-31 | 2016-09-28 | 十二维度(北京)科技有限公司 | Method and system used for converting 2D image to 3D image based on deep learning |
WO2022041506A1 (en) * | 2020-08-25 | 2022-03-03 | 中国科学院深圳先进技术研究院 | Image depth estimation method, terminal device, and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN102509294B (en) | 2013-09-25 |
HK1166545A1 (en) | 2012-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102509294B (en) | Single-image-based global depth estimation method | |
CN102063706B (en) | Rapid defogging method | |
US8059911B2 (en) | Depth-based image enhancement | |
JP4253655B2 (en) | Color interpolation method for digital camera | |
US20070247530A1 (en) | Interpolation Method for Captured Color Image Data | |
CN104574277A (en) | Image interpolation method and image interpolation device | |
CN100568913C (en) | Edge crispening color interpolation method based on gradient | |
CN106204441B (en) | Image local amplification method and device | |
TW201127057A (en) | Image processing method for enhancing the resolution of image boundary | |
CN110852953B (en) | Image interpolation method and device, storage medium, image signal processor and terminal | |
CN110268712A (en) | Method and apparatus for handling image attributes figure | |
TWI546777B (en) | Image processing apparatus and method | |
CN101873509A (en) | Method for eliminating background and edge shake of depth map sequence | |
CN108734668A (en) | Image color restoration methods, device, computer readable storage medium and terminal | |
CN101227621A (en) | Method of performing interpolation for CFA in CMOS sensor and circuit thereof | |
CN106683063A (en) | Method and device of image denoising | |
CN102722902B (en) | Anti-aliasing the improving one's methods of rasterization stage in a kind of graph rendering streamline | |
Wu et al. | Color demosaicking with sparse representations | |
JP2008234130A (en) | Picture quality improvement processing method and program corresponding to two or more regions | |
KR101907451B1 (en) | Filter based high resolution color image restoration and image quality enhancement apparatus and method | |
WO2023078015A1 (en) | Intra-prediction mode screening method and apparatus for video frame | |
CN113793249B (en) | Method for converting Pentile image into RGB image and related equipment |
CN103366343A (en) | Bitmap scaling method and system | |
JP3959547B2 (en) | Image processing apparatus, image processing method, and information terminal apparatus | |
CN104506784A (en) | Bayer-format image broken-line elimination method based on directional interpolation correction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1166545; Country of ref document: HK |
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: GR; Ref document number: 1166545; Country of ref document: HK |