CN106780588A - Image depth estimation method based on sparse laser observations - Google Patents

Image depth estimation method based on sparse laser observations

Info

Publication number
CN106780588A
Authority
CN
China
Prior art keywords
depth
laser
sparse
estimation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611126056.7A
Other languages
Chinese (zh)
Inventor
刘勇 (Yong Liu)
廖依伊 (Yiyi Liao)
王越 (Yue Wang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201611126056.7A priority Critical patent/CN106780588A/en
Publication of CN106780588A publication Critical patent/CN106780588A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

This invention discloses an image depth estimation method based on sparse laser observations. The method uses the sparse observations of a single-line or multi-line laser, together with a monocular image, to reconstruct a dense depth map. By constructing a reference depth map and a residual depth map, a deep neural network is trained to make full use of the sparse partial depth observations. Compared with methods that estimate depth from the monocular image alone, the proposed method shows a clear advantage.

Description

Image depth estimation method based on sparse laser observations
Technical field
The present invention relates to the field of scene depth estimation, and in particular to a dense scene depth estimation method based on a monocular image and sparse laser observations.
Background art
Humans, drawing on rich experience and continual learning, can estimate how far away objects in an image are from a single monocular view; that is, humans possess a degree of depth estimation ability. In recent years, machine learning methods have made remarkable progress in imitating this ability, with data-driven deep learning techniques performing especially well. Such techniques avoid hand-crafted feature design: they learn features directly from the raw monocular RGB image and output a prediction of the corresponding depth map.
Eigen et al. first proposed monocular depth estimation based on deep learning. They constructed a two-stage depth estimation network, in which the first stage produces a coarse estimate and the second stage refines it. They later extended this work to estimate scene depth, surface normals, and scene semantics simultaneously, and showed that jointly estimating normals and semantics improves depth estimation performance. Liu et al. explored depth estimation combining deep learning with a conditional random field: the image is segmented into superpixels, and a conditional random field constructed over all superpixels is optimized. Li and Wang respectively extended this approach with hierarchical conditional random fields, optimizing layer by layer from the superpixel level down to the pixel level.
Although these methods have verified the possibility of estimating depth from a monocular image, a monocular image by itself lacks scale information. As Eigen et al. also note, depth estimation based on a monocular image may suffer from a global bias.
Summary of the invention
The object of the present invention is to combine sparse single-line laser information with dense image depth estimation, so as to reduce the global bias of scene depth estimation and obtain a more reliable scene depth estimate.
To achieve the above object, the present invention is based on a deep learning method that takes a monocular image and a sparse single-line laser scan as input, learns features autonomously, and produces a dense depth estimate. The training procedure consists of the following steps:
An image depth estimation method based on sparse laser observations, characterized in that it comprises the following steps:
Step one: densify the sparse single-line laser information. The sparse laser comprises a single-line laser or a multi-line laser; the single-line laser in the sparse scan is used to construct a reference depth map and a residual depth map. In three-dimensional space, each laser point of the single-line scan is stretched in the direction perpendicular to the ground, yielding a reference depth plane perpendicular to the ground. According to the calibration information of the monocular camera and the single-line laser, the reference depth plane obtained in three-dimensional space is projected onto the image plane of the image captured by the monocular camera, giving a reference depth map corresponding to said image. The absolute depth map obtained by a depth sensor is differenced with the reference depth map to obtain the residual depth map.
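The densification and residual construction of step one can be sketched as follows. This is a minimal sketch assuming a pinhole camera with known intrinsic matrix K and laser points already expressed in camera coordinates; all function and variable names are illustrative and not taken from the patent:

```python
import numpy as np

def build_reference_depth(laser_pts, K, h, w):
    """Densify a single-line laser scan into a reference depth map.

    Each 3-D laser point is stretched in the direction perpendicular to
    the ground, so every image column hit by a laser point receives that
    point's depth along its full height.

    laser_pts: (N, 3) points in camera coordinates (X right, Y down, Z forward)
    K: (3, 3) camera intrinsic matrix
    h, w: output image height and width
    """
    ref = np.zeros((h, w), dtype=np.float64)
    for X, Y, Z in laser_pts:
        if Z <= 0:
            continue  # point behind the camera, cannot be projected
        u = int(round(K[0, 0] * X / Z + K[0, 2]))  # image column of the point
        if 0 <= u < w:
            ref[:, u] = Z  # vertical stretch: the whole column gets this depth
    return ref

def residual_depth(gt_depth, ref_depth):
    """Residual map: absolute (sensor) depth minus reference depth."""
    return gt_depth - ref_depth
```

A single point at (0, 0, 5) with principal point (2, 2) fills image column 2 with depth 5, illustrating how one laser return becomes a full column of the reference map.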
Step two: the monocular image captured by the monocular camera and the reference depth map obtained in step one are used as training data to train a convolutional neural network that estimates the corresponding residual depth map.
Step three: the residual depth map estimated by the convolutional neural network is added to the reference depth map to obtain the estimated absolute depth map, referred to as the absolute depth estimate. On this basis, a further optimization convolutional neural network is constructed to reduce the difference between this absolute depth estimate and the absolute depth map obtained by the depth sensor. The optimization convolutional neural network is stacked with the convolutional neural network of step two used for estimating the residual depth, and the two are optimized end to end: the monocular image and the reference depth map are input, and the optimized absolute depth estimate is output.
On the basis of the above technical solution, the present invention may further adopt the following technical measures:
The absolute depth estimate output end to end by the deep neural network is fused with the sparse laser depth map through a conditional random field, so as to ensure that, at each position where a single-line laser observation exists, the depth value of the absolute depth estimate is consistent with the laser-observed depth value.
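The patent performs this fusion with a conditional random field. As a deliberately simplified stand-in (not the patent's CRF), the consistency constraint alone can be illustrated by overwriting the estimate at laser-observed pixels; a real CRF would additionally propagate these corrections to neighboring pixels:

```python
import numpy as np

def fuse_with_laser(depth_est, laser_depth, mask):
    """Crude stand-in for the patent's CRF fusion: force the dense
    estimate to agree with the laser wherever an observation exists.

    depth_est: dense absolute depth estimate
    laser_depth: sparse laser depth map (valid where mask is True)
    mask: boolean map of laser-observed pixels
    """
    fused = depth_est.copy()
    fused[mask] = laser_depth[mask]  # hard consistency at observed pixels
    return fused
```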
In step two, the convolutional neural network is trained to estimate the corresponding residual depth map as follows: the residual depth value of each pixel of the residual depth map to be fitted is discretized onto a set of natural numbers, so that the estimation of the residual depth is realized in the form of classification.
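The discretization turns residual regression into per-pixel classification. A sketch of the mapping and its inverse; the bin width and class count here are illustrative choices, not values given in the patent:

```python
import numpy as np

def discretize_residual(residual, step=0.5, n_classes=41):
    """Map continuous residual depths to integer class labels.

    Residuals can be negative, so the label range is centered: class
    (n_classes - 1) // 2 corresponds to a zero residual.
    """
    offset = (n_classes - 1) // 2
    labels = np.round(residual / step).astype(np.int64) + offset
    return np.clip(labels, 0, n_classes - 1)

def labels_to_residual(labels, step=0.5, n_classes=41):
    """Inverse mapping used at inference: class label -> residual depth."""
    offset = (n_classes - 1) // 2
    return (labels - offset) * step
```

Round-tripping a residual through both functions recovers it up to the bin width, which bounds the quantization error of the classification formulation.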
Owing to the adoption of the above technical solution, the beneficial effects of the present invention are: the invention can incorporate sparse observations of true depth, such as those of a single-line laser radar, to obtain more accurate depth estimation; it reduces the global bias of scene depth estimation and yields a more reliable scene depth estimate.
Brief description of the drawings
Fig. 1a is the input monocular image;
Fig. 1b is an example of the depth map to be estimated;
Fig. 2a shows the sparse laser observations;
Fig. 2b is the reference depth map;
Fig. 2c is an example of the residual depth map;
Fig. 3a is the ground-truth depth map;
Fig. 3b is the depth estimate before optimization;
Fig. 3c is the depth estimate after optimization.
Detailed description of the embodiments
To better understand the technical solution of the present invention, it is further described below with reference to the accompanying drawings. Fig. 1 illustrates an example of depth estimation: given the input monocular image of Fig. 1a, the goal is to estimate the scene depth shown in Fig. 1b.
Step one: construct the reference depth map and the residual depth map from the single-line laser. Fig. 2a shows the single-line laser information known for Fig. 1; it is clearly very sparse and limited. To densify this sparse single-line laser information, each laser point is stretched in three-dimensional space in the direction perpendicular to the ground, yielding a reference depth plane perpendicular to the ground. Using the calibration between the monocular camera and the single-line laser, the reference depth plane obtained in three dimensions is projected onto the image, giving a dense reference depth map aligned with the image, as shown in Fig. 2b. The true depth map is differenced with the reference depth map to obtain the residual depth map, as shown in Fig. 2c.
Step two: based on deep learning, take the monocular image and the reference depth map as input and regress the residual depth map. The residual depth value of each pixel is discretized into one of several integer values, so that depth estimation is realized in the form of classification. A fully convolutional deep neural network is constructed to estimate the depth class at each pixel. To obtain better fitting performance and larger capacity, the 50-layer Deep Residual Network proposed by He et al. is adopted, initialized with weights pre-trained on ImageNet.
Step three: the residual depth map estimated by the network is added to the reference depth map to obtain the estimated absolute depth map, and a further optimization network is constructed on this basis to reduce the difference between this estimate and the actual true depth map. The optimization network is stacked on the residual estimation network and optimized end to end. Fig. 3 compares the ground-truth depth with the depth estimates before and after optimization.
The validity of the invention is verified by experiments on the NYUD2 dataset. NYUD2 is an indoor RGB-D dataset; single-line laser data are simulated from its RGB-D data. The main advantage of the invention lies in the generation and estimation of the reference depth map and the residual depth map. The experiments therefore compare, under an identical network structure: predicting the true depth with only the RGB image as input (scheme one); estimating the true depth with the RGB image and the reference depth map as input (scheme two); and estimating the residual depth map from the RGB image and the reference depth map and then recovering the true depth map (scheme three). The results are also compared against the world's leading monocular depth estimation methods; the detailed comparison is shown in Table 1.
To assess the depth estimation from all aspects, Table 1 uses six evaluation metrics. Let $\hat{y}$ be the estimated depth of a pixel, $y$ its true depth, and $T$ the set of all pixels. The six metrics are:
1. absolute relative error (rel): $\frac{1}{|T|}\sum_{y \in T} |\hat{y} - y| / y$;
2. mean log10 error (log10): $\frac{1}{|T|}\sum_{y \in T} |\log_{10}\hat{y} - \log_{10}y|$;
3. root mean squared error (rms): $\sqrt{\frac{1}{|T|}\sum_{y \in T}(\hat{y} - y)^2}$;
4-6. three threshold accuracies ($\delta_i$, $i = 1, 2, 3$): the ratio of all pixels satisfying $\max(\hat{y}/y,\, y/\hat{y}) < 1.25^i$.
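The six metrics above are straightforward to compute over the valid pixels. A sketch (function name illustrative):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Compute rel, log10, rms, and the three threshold accuracies
    over all pixels of a predicted and a ground-truth depth map."""
    pred, gt = pred.ravel(), gt.ravel()
    rel = np.mean(np.abs(pred - gt) / gt)                       # abs. relative error
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))      # mean log10 error
    rms = np.sqrt(np.mean((pred - gt) ** 2))                    # root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)]    # threshold accuracies
    return rel, log10, rms, deltas
```

A perfect prediction gives zero error and threshold accuracies of 1.0; a uniform 30% overestimate fails the delta_1 threshold (1.3 > 1.25) but passes delta_2.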
As the results in Table 1 show, directly feeding the densified laser as an additional input already improves depth estimation performance, and the residual estimation together with the subsequent optimization lifts performance further. Compared with the other leading monocular depth estimation algorithms, the method has a clear advantage on every metric.
Table 1. Comparison of depth estimation on the NYUD2 dataset.
The above embodiment is a description of the invention, not a limitation of it; any solution obtained by simple transformation of the present invention falls within the scope of protection of the invention.

Claims (3)

1. An image depth estimation method based on sparse laser observations, characterized in that it comprises the following steps:
Step one: densify the sparse single-line laser information, the sparse laser comprising a single-line laser or a multi-line laser, wherein the single-line laser in the sparse scan is used to construct a reference depth map and a residual depth map; in three-dimensional space, each laser point of the single-line scan is stretched in the direction perpendicular to the ground to obtain a reference depth plane perpendicular to the ground; according to the calibration information of the monocular camera and the single-line laser, the reference depth plane obtained in three-dimensional space is projected onto the image plane of the image captured by the monocular camera to obtain a reference depth map corresponding to said image; the absolute depth map obtained by a depth sensor is differenced with the reference depth map to obtain the residual depth map;
Step two: the monocular image captured by the monocular camera and the reference depth map obtained in step one are used as training data to train a convolutional neural network to estimate the corresponding residual depth map;
Step three: the residual depth map estimated by the convolutional neural network is added to the reference depth map to obtain the estimated absolute depth map, referred to as the absolute depth estimate, and a further optimization convolutional neural network is constructed on this basis; the optimization convolutional neural network can be stacked with the convolutional neural network of step two used for estimating the residual depth and optimized end to end, that is, the monocular image and the reference depth map are input, and the optimized absolute depth estimate is output.
2. The image depth estimation method based on sparse laser observations according to claim 1, characterized in that the absolute depth estimate output end to end by the deep neural network is fused with the sparse laser depth map through a conditional random field, so as to ensure that the depth value at each position observed by the single-line laser in the absolute depth estimate is consistent with the laser-observed depth value.
3. The image depth estimation method based on sparse laser observations according to claim 1, characterized in that in step two the convolutional neural network is trained to estimate the corresponding residual depth map as follows: the residual depth value of each pixel of the residual depth map to be fitted is discretized onto a set of natural numbers, so that estimation of the residual depth is realized in the form of classification.
CN201611126056.7A 2016-12-09 2016-12-09 Image depth estimation method based on sparse laser observations Pending CN106780588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611126056.7A CN106780588A (en) 2016-12-09 2016-12-09 A kind of image depth estimation method based on sparse laser observations


Publications (1)

Publication Number Publication Date
CN106780588A true CN106780588A (en) 2017-05-31

Family

ID=58877585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611126056.7A Pending CN106780588A (en) 2016-12-09 2016-12-09 A kind of image depth estimation method based on sparse laser observations

Country Status (1)

Country Link
CN (1) CN106780588A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992848A (en) * 2017-12-19 2018-05-04 北京小米移动软件有限公司 Obtain the method, apparatus and computer-readable recording medium of depth image
CN108416840A (en) * 2018-03-14 2018-08-17 大连理工大学 A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera
CN108489496A (en) * 2018-04-28 2018-09-04 北京空间飞行器总体设计部 Noncooperative target Relative Navigation method for estimating based on Multi-source Information Fusion and system
CN108510535A (en) * 2018-03-14 2018-09-07 大连理工大学 A kind of high quality depth estimation method based on depth prediction and enhancing sub-network
CN109035319A (en) * 2018-07-27 2018-12-18 深圳市商汤科技有限公司 Monocular image depth estimation method and device, equipment, program and storage medium
CN109087349A (en) * 2018-07-18 2018-12-25 亮风台(上海)信息科技有限公司 A kind of monocular depth estimation method, device, terminal and storage medium
CN109146944A (en) * 2018-10-30 2019-01-04 浙江科技学院 A kind of space or depth perception estimation method based on the revoluble long-pending neural network of depth
CN109300151A (en) * 2018-07-02 2019-02-01 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment
CN109325972A (en) * 2018-07-25 2019-02-12 深圳市商汤科技有限公司 Processing method, device, equipment and the medium of laser radar sparse depth figure
CN109461178A (en) * 2018-09-10 2019-03-12 中国科学院自动化研究所 A kind of monocular image depth estimation method and device merging sparse known label
CN110232361A (en) * 2019-06-18 2019-09-13 中国科学院合肥物质科学研究院 Human body behavior intension recognizing method and system based on the dense network of three-dimensional residual error
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Polyphaser solid matching method and device
CN110992271A (en) * 2020-03-04 2020-04-10 腾讯科技(深圳)有限公司 Image processing method, path planning method, device, equipment and storage medium
CN111062981A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN111680554A (en) * 2020-04-29 2020-09-18 北京三快在线科技有限公司 Depth estimation method and device for automatic driving scene and autonomous vehicle
US10810754B2 (en) 2018-04-24 2020-10-20 Ford Global Technologies, Llc Simultaneous localization and mapping constraints in generative adversarial networks for monocular depth estimation
CN112712017A (en) * 2020-12-29 2021-04-27 上海智蕙林医疗科技有限公司 Robot, monocular depth estimation method and system and storage medium
CN113034562A (en) * 2019-12-09 2021-06-25 百度在线网络技术(北京)有限公司 Method and apparatus for optimizing depth information
CN113219475A (en) * 2021-07-06 2021-08-06 北京理工大学 Method and system for correcting monocular distance measurement by using single line laser radar

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
CN104346608A (en) * 2013-07-26 2015-02-11 株式会社理光 Sparse depth map densing method and device
CN106157307A (en) * 2016-06-27 2016-11-23 浙江工商大学 A kind of monocular image depth estimation method based on multiple dimensioned CNN and continuous CRF

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method
CN104346608A (en) * 2013-07-26 2015-02-11 株式会社理光 Sparse depth map densing method and device
CN106157307A (en) * 2016-06-27 2016-11-23 浙江工商大学 A kind of monocular image depth estimation method based on multiple dimensioned CNN and continuous CRF

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FAYAO LIU 等: "Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
YIYI LIAO 等: "Parse Geometry from a Line: Monocular Depth Estimation with Partial Laser Observation", 《ARXIV:1611.02174V1 [CS.CV]》 *

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992848B (en) * 2017-12-19 2020-09-25 北京小米移动软件有限公司 Method and device for acquiring depth image and computer readable storage medium
CN107992848A (en) * 2017-12-19 2018-05-04 北京小米移动软件有限公司 Obtain the method, apparatus and computer-readable recording medium of depth image
CN108416840B (en) * 2018-03-14 2020-02-18 大连理工大学 Three-dimensional scene dense reconstruction method based on monocular camera
CN108416840A (en) * 2018-03-14 2018-08-17 大连理工大学 A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera
CN108510535A (en) * 2018-03-14 2018-09-07 大连理工大学 A kind of high quality depth estimation method based on depth prediction and enhancing sub-network
CN108510535B (en) * 2018-03-14 2020-04-24 大连理工大学 High-quality depth estimation method based on depth prediction and enhancer network
US10810754B2 (en) 2018-04-24 2020-10-20 Ford Global Technologies, Llc Simultaneous localization and mapping constraints in generative adversarial networks for monocular depth estimation
CN108489496A (en) * 2018-04-28 2018-09-04 北京空间飞行器总体设计部 Noncooperative target Relative Navigation method for estimating based on Multi-source Information Fusion and system
CN109300151A (en) * 2018-07-02 2019-02-01 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment
CN109300151B (en) * 2018-07-02 2021-02-12 浙江商汤科技开发有限公司 Image processing method and device and electronic equipment
CN109087349A (en) * 2018-07-18 2018-12-25 亮风台(上海)信息科技有限公司 A kind of monocular depth estimation method, device, terminal and storage medium
CN109087349B (en) * 2018-07-18 2021-01-26 亮风台(上海)信息科技有限公司 Monocular depth estimation method, device, terminal and storage medium
CN109325972A (en) * 2018-07-25 2019-02-12 深圳市商汤科技有限公司 Processing method, device, equipment and the medium of laser radar sparse depth figure
CN109325972B (en) * 2018-07-25 2020-10-27 深圳市商汤科技有限公司 Laser radar sparse depth map processing method, device, equipment and medium
KR102292559B1 (en) * 2018-07-27 2021-08-24 선전 센스타임 테크놀로지 컴퍼니 리미티드 Monocular image depth estimation method and apparatus, apparatus, program and storage medium
WO2020019761A1 (en) * 2018-07-27 2020-01-30 深圳市商汤科技有限公司 Monocular image depth estimation method and apparatus, device, program and storage medium
TWI766175B (en) * 2018-07-27 2022-06-01 大陸商深圳市商湯科技有限公司 Method, device and apparatus for monocular image depth estimation, program and storage medium thereof
CN109035319B (en) * 2018-07-27 2021-04-30 深圳市商汤科技有限公司 Monocular image depth estimation method, monocular image depth estimation device, monocular image depth estimation apparatus, monocular image depth estimation program, and storage medium
KR20200044108A (en) * 2018-07-27 2020-04-28 선전 센스타임 테크놀로지 컴퍼니 리미티드 Method and apparatus for estimating monocular image depth, device, program and storage medium
US11443445B2 (en) 2018-07-27 2022-09-13 Shenzhen Sensetime Technology Co., Ltd. Method and apparatus for depth estimation of monocular image, and storage medium
JP2021500689A (en) * 2018-07-27 2021-01-07 深▲せん▼市商▲湯▼科技有限公司Shenzhen Sensetime Technology Co., Ltd. Monocular image depth estimation method and equipment, equipment, programs and storage media
CN109035319A (en) * 2018-07-27 2018-12-18 深圳市商汤科技有限公司 Monocular image depth estimation method and device, equipment, program and storage medium
CN109461178A (en) * 2018-09-10 2019-03-12 中国科学院自动化研究所 A kind of monocular image depth estimation method and device merging sparse known label
CN109146944B (en) * 2018-10-30 2020-06-26 浙江科技学院 Visual depth estimation method based on depth separable convolutional neural network
CN109146944A (en) * 2018-10-30 2019-01-04 浙江科技学院 A kind of space or depth perception estimation method based on the revoluble long-pending neural network of depth
CN110232361A (en) * 2019-06-18 2019-09-13 中国科学院合肥物质科学研究院 Human body behavior intension recognizing method and system based on the dense network of three-dimensional residual error
CN110232361B (en) * 2019-06-18 2021-04-02 中国科学院合肥物质科学研究院 Human behavior intention identification method and system based on three-dimensional residual dense network
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Polyphaser solid matching method and device
CN110428462B (en) * 2019-07-17 2022-04-08 清华大学 Multi-camera stereo matching method and device
CN113034562A (en) * 2019-12-09 2021-06-25 百度在线网络技术(北京)有限公司 Method and apparatus for optimizing depth information
CN113034562B (en) * 2019-12-09 2023-05-12 百度在线网络技术(北京)有限公司 Method and apparatus for optimizing depth information
CN111062981A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN111062981B (en) * 2019-12-13 2023-05-05 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN110992271A (en) * 2020-03-04 2020-04-10 腾讯科技(深圳)有限公司 Image processing method, path planning method, device, equipment and storage medium
CN110992271B (en) * 2020-03-04 2020-07-07 腾讯科技(深圳)有限公司 Image processing method, path planning method, device, equipment and storage medium
CN111680554A (en) * 2020-04-29 2020-09-18 北京三快在线科技有限公司 Depth estimation method and device for automatic driving scene and autonomous vehicle
CN112712017A (en) * 2020-12-29 2021-04-27 上海智蕙林医疗科技有限公司 Robot, monocular depth estimation method and system and storage medium
CN113219475A (en) * 2021-07-06 2021-08-06 北京理工大学 Method and system for correcting monocular distance measurement by using single line laser radar
CN113219475B (en) * 2021-07-06 2021-10-22 北京理工大学 Method and system for correcting monocular distance measurement by using single line laser radar

Similar Documents

Publication Publication Date Title
CN106780588A (en) A kind of image depth estimation method based on sparse laser observations
CN110555434B (en) Method for detecting visual saliency of three-dimensional image through local contrast and global guidance
CN103824050B (en) A kind of face key independent positioning method returned based on cascade
CN107204010A (en) A kind of monocular image depth estimation method and system
CN110419049A (en) Room layout estimation method and technology
US11182644B2 (en) Method and apparatus for pose planar constraining on the basis of planar feature extraction
CN109377530A (en) A kind of binocular depth estimation method based on deep neural network
CN108648161A (en) The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN108537837A (en) A kind of method and relevant apparatus of depth information determination
CN108510535A (en) A kind of high quality depth estimation method based on depth prediction and enhancing sub-network
CN105956597A (en) Binocular stereo matching method based on convolution neural network
CN101394573B (en) Panoramagram generation method and system based on characteristic matching
CN106940704A (en) A kind of localization method and device based on grating map
CN111832655A (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN106127690A (en) A kind of quick joining method of unmanned aerial vehicle remote sensing image
CN101866497A (en) Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system
RU2476825C2 (en) Method of controlling moving object and apparatus for realising said method
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
CN111998862B (en) BNN-based dense binocular SLAM method
CN111402311A (en) Knowledge distillation-based lightweight stereo parallax estimation method
CN107944459A (en) A kind of RGB D object identification methods
CN111127522B (en) Depth optical flow prediction method, device, equipment and medium based on monocular camera
CN104156957A (en) Stable and high-efficiency high-resolution stereo matching method
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN103955942A (en) SVM-based depth map extraction method of 2D image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531