CN105374039A - Monocular image depth information estimation method based on contour acuity - Google Patents
Abstract
The invention provides a monocular image depth information estimation method based on contour sharpness. The method uses edge contour sharpness as the blur-estimation feature and extracts depth information from low-level image cues. Edge detection is first performed on the image. For each edge, edge energy and contour sharpness are computed and serve as the external energy of the contour; combined with the internal energy (the contour-line characteristic energy and the contour-line distance energy), they form a contour-tracking model. Minimizing this energy function locates the image contours. A depth-gradient hypothesis then serves as the prior gradient model: regions between different contour lines are filled with depth values, yielding the depth distribution. Finally, the original image and the estimated depth map are used to refine the depth map, producing the final disparity map. Experimental results show that the depth-estimation algorithm is simple and can quickly and accurately estimate the depth map of a monocular image.
Description
Technical Field
The invention relates to a method for estimating the depth information of a monocular image; the depth of the objects shown in the image is obtained from low-level image cues.
Background
Depth perception is a precondition for stereoscopic vision, and applications built on depth information play an important role in scene understanding, three-dimensional reconstruction, pattern recognition, target tracking, and other fields. In practice, either a monocular camera or a depth camera can be used to acquire depth information. Depth estimation based on a monocular camera has clear advantages: it is simple to operate, its hardware cost is low, and depth information can be extracted directly from existing monocular image material.
Monocular image depth estimation extracts two- and three-dimensional geometric information of a target, such as color, shape, and coplanarity, from a single image, and thus recovers the spatial three-dimensional information of the target from a small number of known conditions. Most current algorithms rely on high- and middle-level image cues. For example, semantic labels learned from reference images can be transferred to a target image to obtain relative depth. Alternatively, the target image can be segmented using structural information (color, texture, shape) trained on ground-truth disparity maps, and depth derived by applying a discriminatively trained Markov random field. In contrast, methods based on low-level cues apply local information directly, recover depth from the image without analyzing its content, and are comparatively simple.
Blur can serve as an important feature in monocular depth estimation. Blur mostly arises during imaging, from defocus or from objects lying at different depths in the imaged area. Exploiting this property, defocus-based monocular depth estimation determines the foreground and background of an image from its blur and thereby estimates scene depth. One approach starts from the blur values at edge locations and spreads the blur across the entire image with a Markov random field to extract relative depth. Another models the defocus blur process as heat diffusion and recovers scene depth by estimating the blur amount at edge positions with an inhomogeneous inverse heat-diffusion equation. Such blur-diffusion methods, however, are complex, inefficient, and of limited practicality.
Therefore, the invention simplifies monocular image depth estimation by exploiting low-level image cues and a new blur feature.
Disclosure of Invention
The invention provides a monocular image depth information estimation method based on contour sharpness.
The purpose of the invention is realized by the following technical scheme:
The monocular image depth information estimation method based on contour sharpness uses gradient magnitude and contour sharpness to characterize object contours at different depths in an image. During depth estimation it considers not only the gradient magnitude of edges but also their spatial information, which reflects the blur trend of object edges more fully. The method comprises the following steps:
(1) Perform edge detection on the input image to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
(2) According to the prior depth-gradient model, define a group of mutually parallel, equally spaced contour lines as the initial contour lines V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of the contour is defined as:
E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge-energy function, E_sharp the contour sharpness-energy function, E_s the contour-line characteristic-energy function, and E_d the contour-line distance-energy function; w1, w2, w3, w4 are the weight-control parameters of the respective terms and can be assigned according to the characteristics of the particular image.
The contour edge-energy function E_edge represents the gradient magnitude of the contour: it is defined from the magnitude of the gradient |∇I(x, y)| of the image I(x, y) along the gradient direction.
The contour sharpness-energy function E_sharp represents the degree of blur of the contour and is obtained from the gradient-profile sharpness. The gradient profile starts at an edge pixel q0(x0, y0) and traces along the gradient direction to the boundary of the edge, until the gradient magnitude no longer changes; the gradient-magnitude curve along the resulting one-dimensional path p(q0) is the gradient profile. The gradient-profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient-profile variables:

σ(p(q0)) = sqrt( Σ_{s∈p(q0)} (G(s)/G(q0)) · dc(s, q0)² )

where dc(q, q0) is the curve length between points q and q0 on the gradient profile, G(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the profile, s is any point on the profile, and b is a weight-control parameter used in E_sharp.
The contour-line characteristic-energy function E_s constrains the smoothness of the contour;
the contour-line distance-energy function E_d keeps the contour-tracking curve from leaving the search area;
(3) Starting from the left side of the image, update the position of each contour point from the bottom to the top of each initial contour line, computing the contour-tracking energy of each contour point according to the total-energy definition of step (2);
(4) Solve for the minimum of the energy function: in the pixel column adjacent to the current contour point, search the candidate pixels and the edge points P = {p1, p2, …, pn} that satisfy the energy-function definition, and select the pixel position with the minimum energy as the new contour point; repeat the minimum search from the left side of the image to the right to obtain the final contour-search result, i.e., the target contour lines V;
(5) Taking the depth-gradient hypothesis as the prior gradient model, fill the regions between different contour lines with depth values and compute the depth distribution.
Each contour line vi in the set V = {v0, v1, …, vm} is assigned a corresponding depth value Depth.
(6) Refine the obtained depth map using the gray-level information of the original image together with the depth map itself:
where Depth(xi) is the input depth map, Ω(xi) is the neighborhood centered on pixel xi, I(xi) is the luminance (I) component of pixel xi, xj is a pixel in the neighborhood Ω(xi), and W(xi) is the normalization factor of the filter; ||xi − xj|| is the spatial Euclidean distance between the two pixels, and I(xi) − I(xj) represents their luminance similarity; pixels xi and xj have spatial coordinates (xix, xiy) and (xjx, xjy), respectively. The spatial weight coefficient and the chroma weight coefficient are defined in terms of these two quantities, where σs is the variance of the spatial weight and σr is the variance of the chroma weight.
(7) The disparity map estimated from the depth information of the input monocular image is obtained.
The advantage of the invention is a monocular image depth-estimation method based on contour sharpness. Traditional monocular depth-estimation methods require steps such as learning, training, and image understanding based on high- and middle-level image cues, and their algorithms are complex. This method uses low-level image cues and is computationally simple. Unlike traditional methods that separate object depths through blur information alone, it uses contour sharpness when computing the blur information to distinguish object contours at different depths effectively, avoiding steps such as blur diffusion and improving the method's practicality. By using both gradient magnitude and contour sharpness to characterize object contours at different depths, the method considers not only the edge gradient magnitude during depth estimation but also the spatial information of the edges, reflecting the blur trend of object edges more fully.
Drawings
FIG. 1 is a flow chart of the method.
Fig. 2 shows the definition of contour sharpness.
FIG. 3 illustrates the relative depth relationships assigned by the contour lines.
Detailed Description
The following detailed description of the embodiments of the invention is provided in connection with the accompanying drawings.
(1) Perform edge detection on the input image with the Canny edge-detection algorithm to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
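Step (1) relies on a standard Canny detector (e.g. OpenCV's implementation). As a dependency-light sketch of the idea, the toy detector below substitutes a plain Sobel gradient-magnitude threshold for Canny; the function name and the threshold value are illustrative assumptions, not part of the patent:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Toy stand-in for the Canny detector of step (1): Sobel gradient
    magnitude followed by a global threshold. Returns the edge-point set
    P = {p1, ..., pn} as (row, col) coordinates."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # valid cross-correlation with both kernels
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    mag /= mag.max() if mag.max() > 0 else 1.0
    rows, cols = np.nonzero(mag > thresh)
    return list(zip(rows + 1, cols + 1))  # offset back to image coordinates

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
P = sobel_edges(img)
```

On this synthetic image the detector marks the two pixel columns straddling the brightness step, which is the edge-point set the later contour-tracking steps consume.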
(2) According to the prior depth-gradient model, define a group of mutually parallel, equally spaced contour lines as the initial contour lines V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of the contour line is defined as
E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge-energy function, E_sharp the contour sharpness-energy function, E_s the contour-line characteristic-energy function, and E_d the contour-line distance-energy function; w1, w2, w3, w4 are the weight-control parameters of the respective terms, which for generic images can be set to w1 = 0.25, w2 = 0.5, w3 = 0.125, w4 = 0.125.
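The weighted combination of the four energy terms can be sketched directly; the default weights below are the generic-image values quoted above, and the four energy arguments are assumed to be pre-computed, normalized scalars for one candidate contour point:

```python
def total_energy(E_edge, E_sharp, E_s, E_d,
                 w=(0.25, 0.5, 0.125, 0.125)):
    """E_total = w1*E_edge + w2*E_sharp + w3*E_s + w4*E_d.
    Default weights are the generic-image setting quoted in the text."""
    w1, w2, w3, w4 = w
    return w1 * E_edge + w2 * E_sharp + w3 * E_s + w4 * E_d

# Example: a candidate point with strong edge and sharpness responses.
e = total_energy(0.4, 0.2, 0.8, 0.0)
```

During contour tracking this scalar is what the minimum search of step (8) compares between candidate pixels.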
(3) Contour edge energy function EedgeThe magnitude of the gradient amplitude of the image I (x, y) along the gradient direction is defined as
Wherein, solving gradient amplitude of I (x, y);
wherein the parameter a is a weight control parameter.
(4) The method uses the gradient-profile sharpness of an edge to represent the degree of contour blur, so the contour sharpness-energy function E_sharp, obtained from the gradient-profile sharpness, serves as the characteristic value of contour blur. As shown in FIG. 2, starting from an edge pixel q0(x0, y0), trace along the gradient direction to both sides of the edge until the gradient magnitude no longer changes, giving the path p(q0). The gradient-magnitude curve along the one-dimensional path p(q0) is called the gradient profile. The sharpness of the profile is defined as the root mean square of the variance of the gradient-profile variables:

σ(p(q0)) = sqrt( Σ_{s∈p(q0)} (G(s)/G(q0)) · dc(s, q0)² )

where dc(q, q0) is the curve length between points q and q0 on the gradient profile, G(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the profile, and s is any point on the profile.
The sharpness-energy function E_sharp is then defined from σ(p(q0)), where the parameter b is a weight-control parameter.
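Under the definitions above, the gradient-profile sharpness can be sketched as a gradient-weighted root-mean-square of the curve-length distances along the profile; reading σ this way is an assumption consistent with the stated roles of dc(q, q0), G(q), and the profile gradient sum:

```python
import math

def profile_sharpness(dists, grads):
    """Gradient-profile sharpness sigma(p(q0)): gradient-weighted RMS of
    the curve lengths d_c(s, q0) along the profile. `dists` are curve
    lengths from q0, `grads` the gradient magnitudes at the same points."""
    total = sum(grads)  # G(q0): sum of gradient magnitudes on the profile
    var = sum(g / total * d * d for d, g in zip(dists, grads))
    return math.sqrt(var)

# A sharp edge concentrates gradient mass near q0 (small sigma); a blurred
# edge spreads it out along the profile (large sigma).
sharp = profile_sharpness([0, 1, 2], [10, 1, 1])
blurred = profile_sharpness([0, 1, 2], [4, 4, 4])
```

This matches the intuition of the paragraph above: the blurrier the contour, the wider the gradient profile and the larger σ.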
(5) The contour-line characteristic-energy function E_s acts as a smoothness constraint on contour tracking: it keeps the traced curve smooth and prevents the solution from falling into local extrema.
The contour points of a contour line are defined as N = {n0, n1, …, nn}, where n0 is the starting point of the contour. The contour-line characteristic-energy function is defined in terms of ds(ni, ni−1), the curve length between points ni and ni−1, with the parameter c as a weight-control parameter.
(6) The contour-line distance-energy function E_d is an elastic constraint on the distance of each contour point during contour-line tracking: it keeps the tracking curve inside the search area and ensures that contour lines do not cross. It is defined in terms of de(ni, n0), the distance between points ni and n0, with d as a weight-control parameter.
(7) Starting from the left side of the image, update the position of each contour point from the bottom to the top of each initial contour line, computing the contour-tracking energy of each contour point according to the total-energy computation of steps (2)–(6);
(8) Solve for the minimum of the energy function: in the pixel column adjacent to the current contour point, search the candidate pixels and the edge points P = {p1, p2, …, pn} that satisfy the energy-function definition, and select the point with the smallest energy value as the new contour point; repeat the minimum search from the left side of the image to the right to obtain the final contour-search result, i.e., the target contour lines V.
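The column-by-column minimum search of step (8) can be sketched as a greedy tracker over a precomputed energy map; the one-pixel search window and the greedy (rather than globally optimal) update are simplifying assumptions:

```python
def trace_contour(energy, start_row, window=1):
    """Greedy left-to-right contour tracking over an energy map
    (list of rows, each a list of per-column energies): in each successive
    pixel column, keep the row within +/-window of the current contour
    point that minimises the energy, mirroring the column-by-column
    minimum search of step (8)."""
    rows, cols = len(energy), len(energy[0])
    r = start_row
    path = [r]
    for c in range(1, cols):
        candidates = range(max(0, r - window), min(rows, r + window + 1))
        r = min(candidates, key=lambda k: energy[k][c])  # minimum-energy row
        path.append(r)
    return path

# Energy map with a low-energy ridge (zeros) drifting downward.
E = [[9, 9, 9, 9],
     [0, 9, 9, 9],
     [2, 0, 0, 9],
     [9, 9, 9, 0]]
path = trace_contour(E, start_row=1)
```

The returned row indices, one per column, form one traced contour line; running the tracker once per initial contour line yields the set V.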
(9) Taking as the prior gradient model a depth-gradient hypothesis that deepens gradually from the bottom of the image to the top, fill the regions between different contour lines with depth values as shown in FIG. 3, and compute the depth distribution.
Each contour line vi in the set V = {v0, v1, …, vm} is assigned a corresponding depth value.
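A minimal sketch of the depth filling of step (9), assuming horizontal contour lines and an even spread of depth values over [0, 255]; the patent's exact assignment formula is given only as a figure, so the linear banding used here is illustrative:

```python
def fill_depth(height, width, contour_rows):
    """Fill a depth map from horizontal contour lines (row indices, top to
    bottom): every band between consecutive lines gets a constant depth,
    increasing from the bottom of the image (near, 0) to the top (far, 255),
    following the bottom-to-top depth-gradient prior of step (9). The even
    spread over [0, 255] is an assumption, not the patent's exact formula."""
    bounds = [0] + sorted(contour_rows) + [height]
    n_bands = len(bounds) - 1
    depth = [[0] * width for _ in range(height)]
    for band in range(n_bands):
        value = round(255 * (n_bands - 1 - band) / max(n_bands - 1, 1))
        for r in range(bounds[band], bounds[band + 1]):
            for c in range(width):
                depth[r][c] = value
    return depth

# Two contour lines split a 6x4 image into three constant-depth bands.
depth = fill_depth(6, 4, contour_rows=[2, 4])
```

In the real method the band boundaries are the traced contour lines V rather than straight rows, but the per-region constant assignment is the same idea.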
(10) Refine the obtained depth map using the original image information together with the depth map itself:
where Depth(xi) is the input depth map, Ω(xi) is the neighborhood centered on pixel xi, I(xi) is the luminance (I) component of pixel xi, xj is a pixel in the neighborhood Ω(xi), and W(xi) is the normalization factor of the filter; ||xi − xj|| is the spatial Euclidean distance between the two pixels, and I(xi) − I(xj) represents their luminance similarity; pixels xi and xj have spatial coordinates (xix, xiy) and (xjx, xjy), respectively. The spatial weight coefficient and the chroma weight coefficient are defined in terms of these two quantities, where σs is the variance of the spatial weight and σr is the variance of the chroma weight.
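The refinement of step (10) matches the shape of a joint (cross) bilateral filter: each depth value is replaced by a weighted average over Ω(xi), with a spatial weight on ||xi − xj|| and a range weight on the luminance difference of the guidance image I. The Gaussian kernel forms below are assumptions, since the patent gives the weight formulas only as figures:

```python
import math

def joint_bilateral(depth, lum, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Joint bilateral refinement in the spirit of step (10): average depth
    over the neighbourhood Omega(x_i) with a spatial Gaussian on the pixel
    distance and a range Gaussian on the luminance difference of the
    guidance image. Gaussian kernel shapes are an assumption."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc, norm = 0.0, 0.0        # norm plays the role of W(x_i)
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        dl = lum[i][j] - lum[ni][nj]
                        wr = math.exp(-(dl * dl) / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ni][nj]
                        norm += ws * wr
            out[i][j] = acc / norm
    return out

# Depth noise inside a flat-luminance region is smoothed, while the sharp
# luminance step keeps the depth discontinuity from bleeding across it.
lum = [[0, 0, 100, 100] for _ in range(4)]
noisy = [[10, 10, 200, 200],
         [10, 30, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
smoothed = joint_bilateral(noisy, lum)
```

The guidance image keeps depth edges aligned with luminance edges, which is exactly why the original image is fed back into the refinement.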
(11) The disparity map estimated from the depth information of the input monocular image is obtained.
Claims (3)
1. A monocular image depth information estimation method based on contour sharpness, characterized by comprising the following steps:
(1) performing edge detection on the input image to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
(2) according to the prior depth-gradient model, defining a group of mutually parallel, equally spaced contour lines as the initial contour lines V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of the contour line is defined as
E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge-energy function, representing the gradient magnitude of the contour;
E_sharp is the contour sharpness-energy function, representing the degree of blur of the contour;
E_s is the contour-line characteristic-energy function, used to constrain the smoothness of the contour;
E_d is the contour-line distance-energy function, used to keep the contour-tracking curve from leaving the search area;
w1, w2, w3, w4 are the weight-control parameters of the respective terms;
(3) starting from the left side of the image, updating the position of each contour point from the bottom to the top of each initial contour line, and computing the contour-tracking energy of each contour point according to the total-energy definition of step (2);
(4) solving for the minimum of the energy function: in the pixel column adjacent to the current contour point, searching the candidate pixels and the edge points P = {p1, p2, …, pn} that satisfy the energy-function definition, and selecting the pixel position with the minimum energy as the new contour point; repeating the minimum search from the left side of the image to the right to obtain the final contour-search result, i.e., the target contour lines V;
(5) taking the depth-gradient hypothesis as the prior gradient model, filling the regions between different contour lines with depth values, and computing the depth distribution;
each contour line vi in the set V = {v0, v1, …, vm} being assigned a corresponding depth value Depth for i = 0 … m;
(6) refining the obtained depth map using the gray-level information of the original image together with the depth map itself:
where Depth(xi) is the input depth map, Ω(xi) is the neighborhood centered on pixel xi, I(xi) is the luminance (I) component of pixel xi, xj is a pixel in the neighborhood Ω(xi), and W(xi) is the normalization factor of the filter; ||xi − xj|| is the spatial Euclidean distance between the two pixels, and I(xi) − I(xj) represents their luminance similarity; pixels xi and xj have spatial coordinates (xix, xiy) and (xjx, xjy), respectively; the spatial weight coefficient and the chroma weight coefficient are defined in terms of these two quantities, where σs is the variance of the spatial weight and σr is the variance of the chroma weight;
(7) obtaining the disparity map estimated from the depth information of the input monocular image.
2. The method of claim 1, characterized in that the contour edge-energy function E_edge is defined from the magnitude of the gradient |∇I(x, y)| of the image I(x, y) along the gradient direction, where the parameter a is a weight-control parameter.
3. The method of claim 1, characterized in that the contour sharpness-energy function E_sharp is obtained from the gradient-profile sharpness. The gradient profile starts at an edge pixel q0(x0, y0) and traces along the gradient direction to the boundary of the edge, until the gradient magnitude no longer changes; the gradient-magnitude curve along the resulting one-dimensional path p(q0) is the gradient profile. The gradient-profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient-profile variables:

σ(p(q0)) = sqrt( Σ_{s∈p(q0)} (G(s)/G(q0)) · dc(s, q0)² )

where dc(q, q0) is the curve length between points q and q0 on the gradient profile, G(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the profile, s is any point on the profile, and b is a weight-control parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510786727.1A CN105374039B (en) | 2015-11-16 | 2015-11-16 | Monocular image depth information method of estimation based on contour acuity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105374039A true CN105374039A (en) | 2016-03-02 |
CN105374039B CN105374039B (en) | 2018-09-21 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100033617A1 (en) * | 2008-08-05 | 2010-02-11 | Qualcomm Incorporated | System and method to generate depth data using edge detection |
US20100141651A1 (en) * | 2008-12-09 | 2010-06-10 | Kar-Han Tan | Synthesizing Detailed Depth Maps from Images |
CN101840574A (en) * | 2010-04-16 | 2010-09-22 | 西安电子科技大学 | Depth estimation method based on edge pixel features |
CN102883175A (en) * | 2012-10-23 | 2013-01-16 | 青岛海信信芯科技有限公司 | Methods for extracting depth map, judging video scene change and optimizing edge of depth map |
CN103793918A (en) * | 2014-03-07 | 2014-05-14 | 深圳市辰卓科技有限公司 | Image definition detecting method and device |