CN105374039A - Monocular image depth information estimation method based on contour acuity - Google Patents
- Publication number: CN105374039A (application CN201510786727.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- contour
- depth
- gradient
- profile
- Prior art date
- Legal status: Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a monocular image depth information estimation method based on contour sharpness. The method uses edge contour sharpness as a blur-estimation feature and extracts depth information from low-level cue information. Edge detection is first performed on the image; edge energy and contour sharpness are computed for the detected edges and serve as the external energy of the contour, which is combined with the internal energy (contour-line characteristic energy and contour-line distance energy) to build a contour tracking model; the minimum of the energy function is solved to search for the image contours. A depth-gradient hypothesis then serves as the prior gradient model: depth values are filled into the regions delimited by different contour lines, and the depth distribution is computed. Finally, the original image information and the obtained depth image are used to optimize the depth map, yielding the final disparity map. Experimental results show that the depth estimation algorithm is simple and can quickly and accurately estimate the depth map of a monocular image.
Description
Technical field
The present invention relates to a method for estimating depth information from a monocular image; the method obtains the depth information of the objects shown in the monocular image from low-level image cues.
Background technology
Depth perception is a prerequisite for producing stereoscopic vision, and it plays a significant role in practical applications of depth information such as scene segmentation, three-dimensional reconstruction, pattern recognition, and target tracking. In practice, monocular cameras, multi-view cameras, or depth cameras can be used to extract depth information. Among these, depth estimation based on a monocular camera has clear advantages: it is simple to operate, its hardware cost is low, and depth information can be extracted directly from existing monocular image material.
Monocular depth estimation extracts two-dimensional cues such as color, shape, and coplanarity, together with three-dimensional geometric information, from a single image, and thus recovers the spatial 3D information of the target from a small set of known conditions. Most current algorithms rely on high- or mid-level image cues. For example, semantic labels learned from reference images can be transferred to the target image to define its semantic labels and thereby obtain relative depth; or image structure information (color, texture, shape, etc.) learned from ground-truth disparity maps can be used to over-segment the target image, with a discriminatively trained Markov random field deriving the depth. Approaches that use low-level cue information, by contrast, need no analysis of image content: depth can be recovered directly from local information, and the algorithms are comparatively simple.
Blur can be exploited as a key feature in the depth estimation of monocular images. Blur mostly arises during imaging, either from inaccurate focusing or from targets at different depths within the imaged region. Accordingly, monocular depth can be estimated by analyzing defocus: the foreground and background of the image are determined from its degree of blur, and the scene depth is estimated from them. For example, the blur values at edge positions can be taken as initial values, and matting together with a Markov random field used to diffuse the blur over the entire image, extracting relative depth; or the defocus blur process can be modeled as heat diffusion, with an inhomogeneous inverse heat diffusion equation used to estimate the blur amount at edge locations and restore the scene depth. Such blur-diffusion methods, however, are complex and inefficient, so their practicality is poor.
The present invention therefore uses low-level image cues and a new blur characteristic to simplify monocular image depth estimation.
Summary of the invention
The present invention proposes a monocular image depth estimation method based on contour sharpness. The method uses low-level image cue information: the contour sharpness of edges serves as the blur feature for extracting object contours; depth values are then assigned to different objects according to the relation between object contours and object depth edges, yielding the depth information of the different objects in the image.
The object of the invention is to be achieved through the following technical solutions:
The monocular image depth estimation method based on contour sharpness is characterized in that gradient magnitude and contour sharpness information are used to characterize object contours at different depths in the image. During depth estimation, not only the gradient magnitude of the edge is considered; the spatial information of the edge is added as well, so that the blurring trend of object edges is captured more fully. The method comprises the following steps:
(1) For the input image, perform edge detection to obtain the edge points of the objects in the image, P = {p1, p2, ..., pn};
(2) Define the initial contour lines according to the prior depth-gradient model: a set of mutually parallel, equally spaced contour lines serves as the initial contours V = {v0, v1, ..., vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
The total energy of a contour line is defined as:

E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge energy function, E_sharp is the contour sharpness energy function, E_s is the contour characteristic energy function, E_d is the contour distance energy function, and w1, w2, w3, w4 are the weight control parameters of the respective terms, which may be assigned according to the characteristics of the image.
The contour edge energy function E_edge represents the gradient magnitude of the contour and is defined from the gradient magnitude of the image I(x, y) along the gradient direction, where ∇I(x, y) denotes the gradient magnitude of I(x, y);
The contour sharpness energy function E_sharp represents the degree of blur of the contour and is obtained by solving the gradient profile sharpness. The gradient profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient profile. Here the gradient profile is the gradient magnitude curve formed along the one-dimensional path p(q0) obtained by starting from an edge pixel q0(x0, y0) and tracing along the gradient direction to both sides of the edge until the gradient magnitude no longer changes. The gradient profile sharpness is defined as

σ(p(q0)) = sqrt( Σ_{s ∈ p(q0)} (g(s)/G(q0)) · d_c(s, q0)² )

where d_c(q, q0) is the curve length between points q and q0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes over all points of the gradient profile, s is any point on the gradient profile, and parameter b is a weight control parameter.
The contour characteristic energy function E_s constrains the smoothness of the contour; the contour distance energy function E_d keeps the contour tracking curve from leaving the search region;
(3) For each initial contour line, starting from the left side of the image, update the positions of its points from the bottom of the image to the top, computing the contour tracking energy of each point according to the total energy defined in step (2);
(4) Solve for the minimum of the energy function: search the pixels in the pixel column adjacent to the current contour point, find, among the edge points P = {p1, p2, ..., pn} satisfying the energy function definition, the point with the minimum energy value, and take the pixel position with the lowest energy as the new contour point; repeat the minimum search from the left side of the image to the right to obtain the final contour search result, i.e., the target contour V;
(5) Using the depth-gradient hypothesis as the prior gradient model, fill depth values into the regions separated by different contour lines and compute the depth distribution. For the contour lines V = {v0, v1, ..., vm} in the contour set, the corresponding assigned depth value Depth is:
(6) Use the gray-level information of the original image together with the obtained depth image to optimize the depth image:

where Depth(x_i) is the input depth image, Ω(x_i) is the neighborhood centered at pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a neighboring pixel of x_i within Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i − x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) − I(x_j) measures their luminance similarity; the spatial coordinates of pixels x_i and x_j are (x_ix, x_iy) and (x_jx, x_jy) respectively. The spatial weight coefficient w_s and the luminance weight coefficient w_r are defined as

w_s(x_i, x_j) = exp(−||x_i − x_j||² / (2σ_s²)),  w_r(x_i, x_j) = exp(−(I(x_i) − I(x_j))² / (2σ_r²))

where σ_s is the variance of the spatial weight and σ_r is the variance of the luminance weight.
(7) Obtain the disparity map resulting from depth estimation of the input monocular image.
The advantage of the present invention is that it proposes a monocular image depth estimation method based on contour sharpness. Traditional monocular depth estimation methods rely on high- and mid-level image cues and require steps such as training and image understanding, making the algorithms complex; our method uses low-level image cues and is computationally simple. Unlike traditional methods that distinguish object depth by blur information alone, the present invention uses contour sharpness information, when computing the blur feature, to distinguish object contours effectively, thereby obtaining object contours at different depths while avoiding steps such as blur diffusion, which improves the practicality of the method. The method uses gradient magnitude and contour sharpness to characterize object contours at different depths: during depth estimation, not only the gradient magnitude of the edge is considered, but also the spatial information of the edge, so the blurring trend of object edges is captured more fully.
Accompanying drawing explanation
Fig. 1 is the flow chart of the method.
Fig. 2 shows the definition of contour sharpness.
Fig. 3 is a schematic diagram of the relative depth relations used in contour-line depth assignment.
Embodiment
The implementation of the present invention is described in detail below with reference to the accompanying drawings and a concrete example.
(1) For the input image, apply the Canny edge detection algorithm to obtain the edge points of the objects in the image, P = {p1, p2, ..., pn};
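Step (1) can be sketched with a plain gradient-magnitude edge detector standing in for Canny (the function name, Sobel kernels, and threshold are illustrative assumptions, not the patent's exact detector):

```python
import numpy as np

def sobel_edges(img, thresh=0.2):
    """Detect edge points via Sobel gradient magnitude (simplified stand-in for Canny)."""
    img = img.astype(float)
    # Sobel kernels approximate the partial derivatives dI/dx and dI/dy
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12           # normalize to [0, 1]
    ys, xs = np.nonzero(mag > thresh)  # edge points P = {p1 ... pn}
    return list(zip(xs, ys)), mag
```

A full Canny detector (e.g. OpenCV's cv2.Canny) adds the non-maximum suppression and hysteresis thresholding steps this sketch omits.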
(2) Define the initial contour lines according to the prior depth-gradient model: a set of mutually parallel, equally spaced contour lines serves as the initial contours V = {v0, v1, ..., vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
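The initial contour set of step (2) — mutually parallel, equally spaced lines — can be generated as below (a sketch; uniform spacing over the image height is the reading implied by the text, and the names are illustrative):

```python
def init_contours(height, width, m):
    """Return V = {v0 .. vm}: equally spaced horizontal polylines.
    Each contour is a list of (x, y) points, one per image column."""
    step = height / (m + 1)                  # uniform vertical spacing
    contours = []
    for k in range(m + 1):
        y = int(round((k + 1) * step)) - 1   # row of contour vk
        y = min(max(y, 0), height - 1)
        contours.append([(x, y) for x in range(width)])
    return contours
```

Each v_k is a horizontal polyline with one point per column, which the tracking step then deforms toward nearby edges.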
The total energy of a contour line is defined as

E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d

where E_edge is the contour edge energy function, E_sharp the contour sharpness energy function, E_s the contour characteristic energy function, and E_d the contour distance energy function; w1, w2, w3, w4 are the weight control parameters, which for a typical image can be set to w1 = 0.25, w2 = 0.5, w3 = 0.125, w4 = 0.125.
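With the quoted default weights the weighted total is straightforward; the four term values are assumed to be computed elsewhere:

```python
def total_energy(e_edge, e_sharp, e_s, e_d,
                 w=(0.25, 0.5, 0.125, 0.125)):
    """E_total = w1*E_edge + w2*E_sharp + w3*E_s + w4*E_d,
    with the default weights quoted in the text."""
    return w[0] * e_edge + w[1] * e_sharp + w[2] * e_s + w[3] * e_d
```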
(3) The contour edge energy function E_edge is the gradient magnitude of the image I(x, y) along the gradient direction, defined from ∇I(x, y), the gradient magnitude of I(x, y); parameter a is a weight control parameter.
(4) In this method the gradient profile sharpness of an edge represents the degree of blur of the contour, so the contour sharpness energy function E_sharp, obtained by solving the gradient profile sharpness, serves as the characterization of edge blur. As shown in Fig. 2, starting from an edge pixel q0(x0, y0), trace along the gradient direction to both sides of the edge until the gradient magnitude no longer changes, obtaining the path p(q0). The gradient magnitude curve formed along the one-dimensional path p(q0) is called the gradient profile. The contour sharpness is defined as the root mean square of the variance of the gradient profile:

σ(p(q0)) = sqrt( Σ_{s ∈ p(q0)} (g(s)/G(q0)) · d_c(s, q0)² )

Here d_c(q, q0) is the curve length between points q and q0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes over all points of the gradient profile, and s is any point on the gradient profile.

The sharpness energy function E_sharp is then defined from σ(p(q0)), where parameter b is a weight control parameter.
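The gradient profile sharpness can be sketched directly from its terms: taking σ(p(q0)) = sqrt(Σ_s (g(s)/G(q0))·d_c(s, q0)²) — the standard gradient-profile form consistent with the quantities d_c, g, and G named above — a minimal implementation is (the sampled profiles in the test are illustrative, not from a real image):

```python
import math

def profile_sharpness(gmags, dists):
    """sigma(p(q0)) = sqrt( sum_s (g(s)/G(q0)) * d_c(s, q0)^2 ).
    gmags: gradient magnitude g(s) at each profile point;
    dists: curve length d_c(s, q0) from the edge point q0."""
    G = sum(gmags)                      # G(q0): sum of magnitudes along the profile
    if G == 0:
        return 0.0
    return math.sqrt(sum(g / G * d * d for g, d in zip(gmags, dists)))
```

A sharp edge concentrates gradient magnitude at q0, giving a small σ; a blurred edge spreads it out, giving a large σ — which is what lets E_sharp separate in-focus contours from defocused ones.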
(5) The contour characteristic energy function E_s controls contour tracking as a smoothness constraint: it ensures that the tracked curve is smooth and prevents the solution from being trapped in local extrema. Define the points on a contour line as N = {n0, n1, ..., nn}, where n0 is the contour starting point; the contour characteristic energy function is then defined in terms of d_s(n_i, n_{i−1}), the curve length between points n_i and n_{i−1}, with parameter c a weight control parameter.
(6) The contour distance energy function E_d is the elastic constraint term of contour tracking: it constrains the distance of each point on the contour line during tracking, so that the tracking curve does not leave the search region and contour lines do not intersect. Here d_e(n_i, n_0) represents the vertical distance between point n_i and point n_0, and d is a weight control parameter.
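The internal terms of steps (5) and (6) can be sketched as follows. The exact formulas are elided in this text, so squared deviations are used as an assumed illustrative form: E_s penalizes uneven spacing of consecutive points, and E_d penalizes vertical drift away from the contour's starting row.

```python
def smoothness_energy(points, c=1.0):
    """E_s (sketch): penalize uneven spacing between consecutive contour points."""
    seg = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
           for (x1, y1), (x2, y2) in zip(points, points[1:])]
    mean = sum(seg) / len(seg)
    return c * sum((l - mean) ** 2 for l in seg)

def distance_energy(points, d=1.0):
    """E_d (sketch): penalize vertical drift of each point from the start point n0,
    keeping the tracked curve inside its search band."""
    y0 = points[0][1]
    return d * sum((y - y0) ** 2 for (x, y) in points[1:])
```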
(7) For each initial contour line, starting from the left side of the image, update the positions of its points from the bottom of the image to the top, computing the contour tracking energy of each point according to the total energy computation of steps (2)–(6);
(8) Solve for the minimum of the energy function. Search the pixels in the pixel column adjacent to the current contour point, find, among the edge points P = {p1, p2, ..., pn} satisfying the energy function definition, the point with the minimum energy value, and take the pixel position with the lowest energy as the new contour point; repeat the minimum search from the left side of the image to the right to obtain the final contour search result, i.e., the target contour V.
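Steps (7)–(8) amount to a column-by-column greedy search: at each step the adjacent column is scanned within a small band and the candidate with the lowest tracking energy becomes the new contour point. A sketch, with the per-pixel energy supplied as a precomputed cost map (computing that map from the four energy terms is assumed done separately; the band width is illustrative):

```python
def track_contour(cost, start_row, band=2):
    """Greedy left-to-right minimum-energy contour tracking.
    cost: 2-D list (rows x cols) of per-pixel tracking energy;
    band: half-height of the search window in the adjacent column."""
    rows, cols = len(cost), len(cost[0])
    r = start_row
    path = [(0, r)]
    for c in range(1, cols):
        lo, hi = max(0, r - band), min(rows - 1, r + band)
        # pick the row with minimum energy among candidates in the adjacent column
        r = min(range(lo, hi + 1), key=lambda rr: cost[rr][c])
        path.append((c, r))
    return path
```

A dynamic-programming pass over all columns would find the global minimum; the greedy update above mirrors the point-by-point search the text describes.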
(9) Taking as the prior gradient model the hypothesis of a depth gradient that deepens progressively from bottom to top, fill depth values into the regions separated by different contour lines according to Fig. 3 and compute the depth distribution. For the contour lines V = {v0, v1, ..., vm} in the contour set, the corresponding assigned depth values are:
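Step (9)'s assignment can be sketched as follows: every band between consecutive contour lines shares one depth value, deepening progressively from the image bottom toward the top. The linear ramp used here, round(max_depth·(m−k)/m), is an illustrative assumption, since the actual assignment formula is not reproduced in this text:

```python
def fill_depth(height, width, contour_ys, max_depth=255):
    """Sketch of step (9): each band between consecutive contour lines gets one
    depth value, deepening from the image bottom toward the top (relative-height
    cue). contour_ys: one representative row per tracked contour line."""
    m = len(contour_ys)
    bounds = sorted(contour_ys) + [height]   # band boundaries, top to bottom
    depth = [[0] * width for _ in range(height)]
    top = 0
    for k, bottom in enumerate(bounds):
        # illustrative linear ramp: top band deepest, bottom band nearest
        val = round(max_depth * (m - k) / m) if m else max_depth
        for y in range(top, bottom):
            for x in range(width):
                depth[y][x] = val
        top = bottom
    return depth
```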
(10) Use the original image information together with the obtained depth image to optimize the depth image:

where Depth(x_i) is the input depth image, Ω(x_i) is the neighborhood centered at pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a neighboring pixel of x_i within Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i − x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) − I(x_j) measures their luminance similarity; the spatial coordinates of pixels x_i and x_j are (x_ix, x_iy) and (x_jx, x_jy) respectively. The spatial weight coefficient w_s and the luminance weight coefficient w_r are defined as

w_s(x_i, x_j) = exp(−||x_i − x_j||² / (2σ_s²)),  w_r(x_i, x_j) = exp(−(I(x_i) − I(x_j))² / (2σ_r²))

where σ_s is the variance of the spatial weight and σ_r is the variance of the luminance weight.
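Step (10) is a joint (cross) bilateral filter: each depth value is replaced by an average of neighboring depths, weighted by Gaussian spatial and range (luminance) coefficients — the standard joint-bilateral form, assumed here since the filter equation itself is not reproduced in this text. A direct, unoptimized sketch (parameter values are illustrative):

```python
import math

def joint_bilateral(depth, lum, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral refinement of a depth map guided by image luminance.
    depth, lum: 2-D lists of equal size; radius defines the neighborhood Omega."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))            # spatial weight
                        wr = math.exp(-(lum[i][j] - lum[ni][nj]) ** 2 / (2 * sigma_r ** 2)) # luminance weight
                        num += ws * wr * depth[ni][nj]
                        den += ws * wr
            out[i][j] = num / den       # den is W(x_i), the normalization factor
    return out
```

Because the range weight is computed on the luminance image rather than on the depth map itself, depth edges snap to intensity edges while flat regions are smoothed — the intended refinement.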
(11) Obtain the disparity map resulting from depth estimation of the input monocular image.
Claims (3)
1. A monocular image depth information estimation method based on contour sharpness, characterized by comprising the steps of:
(1) for the input image, performing edge detection to obtain the edge points of the objects in the image, P = {p1, p2, ..., pn};
(2) defining the initial contour lines according to the prior depth-gradient model: a set of mutually parallel, equally spaced contour lines serves as the initial contours V = {v0, v1, ..., vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of a contour line being defined as

E_total = w1·E_edge + w2·E_sharp + w3·E_s + w4·E_d
where E_edge is the contour edge energy function, representing the gradient magnitude of the contour; E_sharp is the contour sharpness energy function, representing the degree of blur of the contour; E_s is the contour characteristic energy function, constraining the smoothness of the contour; E_d is the contour distance energy function, keeping the contour tracking curve within the search region; and w1, w2, w3, w4 are the weight control parameters of the respective terms;
(3) for each initial contour line, starting from the left side of the image, updating the positions of its points from the bottom of the image to the top, and computing the contour tracking energy of each point according to the total energy defined in step (2);
(4) solving for the minimum of the energy function: searching the pixels in the pixel column adjacent to the current contour point, finding, among the edge points P = {p1, p2, ..., pn} satisfying the energy function definition, the point with the minimum energy value, and taking the pixel position with the lowest energy as the new contour point; repeating the minimum search from the left side of the image to the right to obtain the final contour search result, i.e., the target contour V;
(5) using the depth-gradient hypothesis as the prior gradient model, filling depth values into the regions separated by different contour lines and computing the depth distribution; for the contour lines V = {v0, v1, ..., vm} in the contour set, the assigned depth value Depth for i = 0, ..., m being:
(6) using the gray-level information of the original image together with the obtained depth image to optimize the depth image:

where Depth(x_i) is the input depth image, Ω(x_i) is the neighborhood centered at pixel x_i, I(x_i) is the luminance (I) component of pixel x_i, x_j is a neighboring pixel of x_i within Ω(x_i), and W(x_i) is the normalization factor of the filter; ||x_i − x_j|| is the spatial Euclidean distance between the two pixels, and I(x_i) − I(x_j) measures their luminance similarity; the spatial coordinates of pixels x_i and x_j are (x_ix, x_iy) and (x_jx, x_jy) respectively; the spatial weight coefficient w_s and the luminance weight coefficient w_r being defined as

w_s(x_i, x_j) = exp(−||x_i − x_j||² / (2σ_s²)),  w_r(x_i, x_j) = exp(−(I(x_i) − I(x_j))² / (2σ_r²))

where σ_s is the variance of the spatial weight and σ_r is the variance of the luminance weight;
(7) obtaining the disparity map resulting from depth estimation of the input monocular image.
2. The monocular image depth information estimation method based on contour sharpness according to claim 1, characterized in that the contour edge energy function E_edge is the gradient magnitude of the image I(x, y) along the gradient direction, defined from ∇I(x, y), the gradient magnitude of I(x, y), where parameter a is a weight control parameter.
3. The monocular image depth information estimation method based on contour sharpness according to claim 1, characterized in that the contour sharpness energy function E_sharp is obtained by solving the gradient profile sharpness, defined as follows: the gradient profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient profile, where the gradient profile is the gradient magnitude curve formed along the one-dimensional path p(q0) obtained by starting from an edge pixel q0(x0, y0) and tracing along the gradient direction to both sides of the edge until the gradient magnitude no longer changes; the gradient profile sharpness is defined as

σ(p(q0)) = sqrt( Σ_{s ∈ p(q0)} (g(s)/G(q0)) · d_c(s, q0)² )

where d_c(q, q0) is the curve length between points q and q0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes over all points of the gradient profile, s is any point on the gradient profile, and parameter b is a weight control parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510786727.1A CN105374039B (en) | 2015-11-16 | 2015-11-16 | Monocular image depth information method of estimation based on contour acuity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105374039A true CN105374039A (en) | 2016-03-02 |
CN105374039B CN105374039B (en) | 2018-09-21 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107204010A (en) * | 2017-04-28 | 2017-09-26 | 中国科学院计算技术研究所 | A kind of monocular image depth estimation method and system |
CN107582001A (en) * | 2017-10-20 | 2018-01-16 | 珠海格力电器股份有限公司 | Dish-washing machine and its control method, device and system |
CN108647713A (en) * | 2018-05-07 | 2018-10-12 | 宁波华仪宁创智能科技有限公司 | Embryo's Boundary Recognition and laser trace approximating method |
CN109087346A (en) * | 2018-09-21 | 2018-12-25 | 北京地平线机器人技术研发有限公司 | Training method, training device and the electronic equipment of monocular depth model |
JP2020524355A (en) * | 2018-05-23 | 2020-08-13 | 浙江商▲湯▼科技▲開▼▲発▼有限公司Zhejiang Sensetime Technology Development Co., Ltd. | Method and apparatus for recovering depth of monocular image, computer device |
US10769805B2 (en) | 2018-05-15 | 2020-09-08 | Wistron Corporation | Method, image processing device, and system for generating depth map |
CN112396645A (en) * | 2020-11-06 | 2021-02-23 | 华中科技大学 | Monocular image depth estimation method and system based on convolution residual learning |
CN112446946A (en) * | 2019-08-28 | 2021-03-05 | 深圳市光鉴科技有限公司 | Depth reconstruction method, system, device and medium based on sparse depth and boundary |
CN116503821A (en) * | 2023-06-19 | 2023-07-28 | 成都经开地理信息勘测设计院有限公司 | Road identification recognition method and system based on point cloud data and image recognition |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100033617A1 (en) * | 2008-08-05 | 2010-02-11 | Qualcomm Incorporated | System and method to generate depth data using edge detection |
US20100141651A1 (en) * | 2008-12-09 | 2010-06-10 | Kar-Han Tan | Synthesizing Detailed Depth Maps from Images |
CN101840574A (en) * | 2010-04-16 | 2010-09-22 | 西安电子科技大学 | Depth estimation method based on edge pixel features |
CN102883175A (en) * | 2012-10-23 | 2013-01-16 | 青岛海信信芯科技有限公司 | Methods for extracting depth map, judging video scene change and optimizing edge of depth map |
CN103793918A (en) * | 2014-03-07 | 2014-05-14 | 深圳市辰卓科技有限公司 | Image definition detecting method and device |
Non-Patent Citations (2)
Title |
---|
NATALIA NEVEROVA ET AL.: "Edge Based Method for Sharp Region Extraction from Low Depth of Field Images", 《VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2012 IEEE》 * |
YONG JU JUNG ET AL.: "A novel 2D-to-3D conversion technique based on relative height depth cue", 《PROCEEDINGS OF THE SPIE》 * |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |