CN109461178A - Monocular image depth estimation method and device fusing sparse known labels - Google Patents

Monocular image depth estimation method and device fusing sparse known labels

Info

Publication number
CN109461178A
CN109461178A
Authority
CN
China
Prior art keywords
depth
sparse
rgb image
estimated
known label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811050407.XA
Other languages
Chinese (zh)
Inventor
张帆
张一帆
李耀宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences, and Institute of Automation, Chinese Academy of Sciences
Priority to CN201811050407.XA
Publication of CN109461178A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention proposes a monocular image depth estimation method and device fusing sparse known labels, comprising: obtaining an RGB image to be estimated; obtaining a sparse known label using a single-line lidar; inputting the RGB image to be estimated into a pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated; and fusing the first depth map with the sparse known label through a fully connected layer to obtain the final depth map of the RGB image to be estimated. By fusing the sparse known label, the technical solution provided by the invention reduces the uncertainty of mapping a monocular image to a depth map, and thus effectively estimates a more reliable scene depth.

Description

Monocular image depth estimation method and device fusing sparse known labels
Technical field
The present invention relates to the field of image processing, and in particular to a monocular image depth estimation method and device fusing sparse known labels.
Background art
Estimating scene depth from a monocular image is an important way to understand the geometry of a scene. Moreover, for many other computer vision problems, incorporating depth information can improve the performance of algorithms, for example in semantic segmentation, pose estimation, and object detection. Depth sensors that provide RGB-D depth images already exist (such as Microsoft's Kinect), but their depth perception range is limited (less than 4 m) and they produce a large amount of noise under strong light, so they have limitations in many application scenarios.
Existing monocular image depth estimation suffers from many problems. For example, a single two-dimensional image corresponds to infinitely many real 3D scenes, so mapping a single image to a depth map is inherently ambiguous, and this ambiguity means that a computer vision model cannot, in principle, estimate accurate depth values from a single image alone.
The present invention therefore provides a monocular image depth estimation method and device fusing sparse known labels to remedy the deficiencies of the prior art.
Summary of the invention
The present invention aims to provide a monocular image depth estimation method and device fusing sparse known labels, solving the problem that current image depth estimation is inaccurate.
According to one aspect of the present invention, a monocular image depth estimation method fusing sparse known labels is provided, comprising:
obtaining an RGB image to be estimated;
obtaining a sparse known label using a single-line lidar;
inputting the RGB image to be estimated into a pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated;
fusing the first depth map with the sparse known label through a fully connected layer to obtain the final depth map of the RGB image to be estimated.
Further, obtaining the sparse known label using the single-line lidar comprises:
projecting the points scanned by the single-line lidar onto the two-dimensional image plane to obtain the sparse known label.
Further, establishing the depth estimation model comprises:
obtaining multiple RGB images;
extracting features of the RGB images using a fully convolutional deep residual network;
converting the features into feature vectors using a fully connected layer;
training the fully convolutional deep residual network according to a loss function to obtain optimized network parameters;
constructing the depth estimation model from the optimized network parameters.
Further, training the fully convolutional deep residual network according to the loss function to obtain the optimized network parameters comprises:
The loss function is shown below:

$$B(x_i)=\begin{cases}|x_i|, & |x_i|\le c\\[4pt]\dfrac{x_i^2+c^2}{2c}, & |x_i|>c\end{cases}$$

where $\tilde{y}_i$ and $y_i$ are respectively the real depth value and the predicted depth value of pixel i; $x_i=\tilde{y}_i-y_i$ is the difference between the real depth value and the predicted depth value of pixel i; and $c$ is a threshold.
Further, fusing the first depth map with the sparse known label through the fully connected layer to obtain the final depth map of the RGB image to be estimated comprises:
obtaining, from the features of the first depth map and the features of the sparse known label, a feature vector fusing the sparse known label using the fully connected layer;
converting the feature vector fusing the sparse known label into the final depth map of the RGB image to be estimated.
According to a further aspect of the invention, a monocular image depth estimation device fusing sparse known labels is disclosed, comprising:
a first obtaining module for obtaining an RGB image to be estimated;
a second obtaining module for obtaining a sparse known label using a single-line lidar;
a processing module for inputting the RGB image to be estimated into a pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated;
a fusion module for fusing the first depth map with the sparse known label through a fully connected layer to obtain the final depth map of the RGB image to be estimated.
Further, the second obtaining module is configured to:
project the points scanned by the single-line lidar onto the two-dimensional image plane to obtain the sparse known label.
Further, the processing module comprises a model construction submodule, which comprises:
an acquiring unit for obtaining multiple RGB images;
an extraction unit for extracting features of the RGB images using a fully convolutional deep residual network;
a converting unit for converting the features into feature vectors using a fully connected layer;
an optimization unit for training the fully convolutional deep residual network according to a loss function to obtain optimized network parameters;
a construction unit for constructing the depth estimation model from the optimized network parameters.
Further, the fusion module is configured to:
obtain, from the features of the first depth map and the features of the sparse known label, a feature vector fusing the sparse known label using the fully connected layer;
convert the feature vector fusing the sparse known label into the final depth map of the RGB image to be estimated.
Compared with the closest prior art, the beneficial effects of this technical solution are:
The technical solution provided by the invention inputs the RGB image to be estimated into a pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated; the first depth map is then fused with the sparse known label obtained by a single-line lidar to obtain the final depth map of the RGB image to be estimated. By fusing the model's output with the sparse known label, the uncertainty of mapping a monocular image to a depth map is reduced, so a more reliable scene depth can be estimated effectively.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a flow chart of the depth estimation model construction method in an embodiment of the present application;
Fig. 3 is a schematic diagram of the Encoder-Decoder structure in an embodiment of the present application;
Fig. 4 is a partial structural diagram of the Decoder in an embodiment of the present application.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the present invention provides a monocular image depth estimation method fusing sparse known labels. The flow is as follows:
S101, obtaining an RGB image to be estimated;
S102, obtaining a sparse known label using a single-line lidar;
S103, inputting the RGB image to be estimated into the pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated;
S104, fusing the first depth map with the sparse known label through a fully connected layer to obtain the final depth map of the RGB image to be estimated.
In an embodiment of the present application, the RGB image to be estimated is first input into the pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated; a sparse known label is then obtained using a single-line lidar; and the first depth map is fused with the sparse known label to obtain a depth map with pixel-level depth labels. By fusing the sparse known label, the uncertainty of mapping a monocular image to a depth map is reduced, so a more reliable scene depth can be estimated effectively.
A single-line two-dimensional lidar has the advantages of simple structure, low power consumption, and low cost. For example, Sick's LMS111 costs about 5 percent of the price of Velodyne's HDL-64. Because of these characteristics, such sensors are widely installed on robots and driverless cars for range measurement. Exploiting these features of single-line two-dimensional lidar, in some embodiments of the present application, obtaining the sparse known label using the single-line lidar comprises:
projecting the points scanned by the single-line lidar onto the two-dimensional image plane to obtain the sparse known label.
That is, the points scanned by the single-line lidar are projected onto the two-dimensional image plane; these points then all lie on an approximately horizontal line, so the pixels on that line are given depth values while pixels off the line are zero-filled. Since only this line in the whole image plane has reliable depth values, the prior information obtained by the single-line lidar is called a sparse known label in the present invention.
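As a concrete illustration, this projection step could be sketched as follows. This is a minimal NumPy sketch under the assumption of known lidar-to-camera extrinsics (R, t) and camera intrinsics K; the function name and interface are illustrative, not part of the patent.

```python
import numpy as np

def lidar_scan_to_sparse_label(points_lidar, K, R, t, h, w):
    """Project one single-line lidar scan into the image plane and return an
    (h, w) sparse known label: depth on the scan line, zero elsewhere."""
    # Transform the scanned 3-D points into the camera frame.
    pts_cam = points_lidar @ R.T + t              # (N, 3)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    # Perspective projection onto the two-dimensional image plane.
    uvw = pts_cam @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    depth = pts_cam[:, 2]
    # Pixels on the (approximately horizontal) line get depth values;
    # all other pixels stay zero-filled.
    label = np.zeros((h, w), dtype=np.float32)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    label[v[inside], u[inside]] = depth[inside]
    return label
```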
In some embodiments of the present application, as shown in Fig. 2, establishing the depth estimation model comprises:
S201, obtaining multiple RGB images;
S202, extracting features of the RGB images using a fully convolutional deep residual network;
S203, converting the features into feature vectors using a fully connected layer;
S204, training the fully convolutional deep residual network according to a loss function to obtain optimized network parameters;
S205, constructing the depth estimation model from the optimized network parameters.
The model in this embodiment is built on an Encoder-Decoder structure, as shown in Fig. 3. The Encoder uses a 152-layer fully convolutional deep residual network, which progressively extracts low-resolution, high-dimensional features from the input image. The Decoder consists mainly of deconvolution layers, which progressively upsample the Encoder output over 5 scales; the output size at the final scale equals the input image size.
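The Encoder side can be sketched in a few lines. This is a minimal PyTorch sketch under the assumption that torchvision's ResNet-152 stands in for the 152-layer network named in the embodiment; dropping the global pooling and classification head makes it fully convolutional.

```python
import torch
import torch.nn as nn
import torchvision

# ResNet-152 made fully convolutional: keep everything up to the last
# residual stage, drop the average pooling and the fully connected head.
resnet = torchvision.models.resnet152(weights=None)
encoder = nn.Sequential(*list(resnet.children())[:-2])

x = torch.randn(1, 3, 224, 224)   # a dummy input RGB image
features = encoder(x)             # low-resolution, high-dimensional features
print(features.shape)             # torch.Size([1, 2048, 7, 7])
```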
The optimization operation performed in the Decoder is as follows:
First, the feature map output by the previous deconvolution layer, the Encoder feature map of the corresponding size, and the output of the previous scale are spliced into one high-dimensional tensor;
Then a deconvolution operation is applied to it, which both fuses the information of the three tensors and enlarges the output size of this layer;
Finally, the prediction output of this scale is obtained through a convolution layer. Predictions are output at all 5 scales during the Decoder's upsampling process, so a supervised training loss layer is attached to the prediction output of each scale. The structure of this part of the Decoder is shown in Fig. 4.
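One decoder scale could be sketched as below; a hedged PyTorch sketch, since Fig. 4 is not reproduced here: the 4x4 stride-2 transposed convolution and the channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderScale(nn.Module):
    """One decoder scale: splice the previous deconv output, the same-size
    Encoder feature map, and the previous scale's prediction, then deconvolve
    (fusing the three and doubling the size) and emit a side prediction.
    All three inputs are assumed to share the same spatial size."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        # +1 channel for the previous scale's one-channel depth prediction.
        self.deconv = nn.ConvTranspose2d(in_ch + skip_ch + 1, out_ch,
                                         kernel_size=4, stride=2, padding=1)
        self.pred = nn.Conv2d(out_ch, 1, kernel_size=3, padding=1)

    def forward(self, x, skip, prev_pred):
        fused = torch.cat([x, skip, prev_pred], dim=1)  # splice the three tensors
        out = F.relu(self.deconv(fused))                # fuse and enlarge
        return out, self.pred(out)  # features for the next scale and a side output
```

Returning both the fused features and the side prediction lets each scale feed its own supervised loss layer, matching the multi-scale supervision described above.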
The model uses the berHu function as the supervised training loss function; its concrete form is:

$$B(x_i)=\begin{cases}|x_i|, & |x_i|\le c\\[4pt]\dfrac{x_i^2+c^2}{2c}, & |x_i|>c\end{cases}$$

where $\tilde{y}_i$ and $y_i$ are respectively the real depth value and the predicted depth value of pixel i; $x_i=\tilde{y}_i-y_i$ is the difference between the real depth value and the predicted depth value of pixel i; and $c$ is a threshold, whose concrete form is:

$$c=\frac{1}{5}\max_i|x_i|$$
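For concreteness, the berHu loss might be implemented as follows; a minimal PyTorch sketch, assuming the common per-batch threshold c = max_i|x_i| / 5 (the patent gives the formula for c only as an image, so this standard choice is an assumption).

```python
import torch

def berhu_loss(pred, target, mask=None):
    """berHu (reverse Huber) loss: L1 inside the threshold c, scaled L2 outside."""
    x = target - pred                 # x_i: real depth minus predicted depth
    if mask is not None:
        x = x[mask]                   # ignore pixels without a ground-truth depth
    abs_x = x.abs()
    # Assumed threshold: one fifth of the maximum absolute residual in the batch.
    c = (0.2 * abs_x.max()).detach().clamp(min=1e-6)
    l2 = (x ** 2 + c ** 2) / (2 * c)
    return torch.where(abs_x <= c, abs_x, l2).mean()
```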
In some embodiments of the present application, fusing the first depth map with the sparse known label through the fully connected layer to obtain the final depth map of the RGB image to be estimated comprises:
obtaining, from the features of the first depth map and the features of the sparse known label, a feature vector fusing the sparse known label using the fully connected layer;
converting the feature vector fusing the sparse known label into the final depth map of the RGB image to be estimated.
In this embodiment, the sparse known label is regarded as a single-channel feature map in which the pixels without a known depth value are zero-filled. Since the convolution operation is translation-invariant over a feature map, a convolution kernel would treat the zero-valued pixels in the sparse known label exactly the same as the pixels with known depth values.
To fuse the sparse known label, the final output of the Decoder and the sparse known label are therefore spliced into one tensor, and a fully connected layer regresses the final depth map from this tensor. Because a fully connected layer does not share weights, the weight on each edge is different; it can therefore distinguish the zero and non-zero values on the neuron nodes and learn, from the single line of provided depth values, the absolute reference information the model needs.
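A minimal sketch of this fusion step, assuming for illustration that both the Decoder output and the sparse known label are single-channel maps of a fixed size (the patent does not fix the resolution at which the fusion happens):

```python
import torch
import torch.nn as nn

class SparseLabelFusion(nn.Module):
    """Splice the Decoder's depth map with the sparse known label and regress
    the final depth map through a fully connected layer."""

    def __init__(self, h=64, w=64):
        super().__init__()
        self.h, self.w = h, w
        # Unshared weights: every edge has its own weight, so zero-filled
        # pixels and pixels carrying a lidar depth are treated differently.
        self.fc = nn.Linear(2 * h * w, h * w)

    def forward(self, depth_pred, sparse_label):
        b = depth_pred.size(0)
        x = torch.cat([depth_pred.reshape(b, -1),     # first depth map
                       sparse_label.reshape(b, -1)],  # sparse known label
                      dim=1)
        return self.fc(x).reshape(b, 1, self.h, self.w)
```

Usage would be, e.g., final_depth = SparseLabelFusion()(first_depth, sparse_label), which corresponds to step S104.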
Based on the same inventive concept, the present invention also provides a monocular image depth estimation device fusing sparse known labels, comprising:
a first obtaining module for obtaining an RGB image to be estimated;
a second obtaining module for obtaining a sparse known label using a single-line lidar;
a processing module for inputting the RGB image to be estimated into the pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated;
a fusion module for fusing the first depth map with the sparse known label through a fully connected layer to obtain the final depth map of the RGB image to be estimated.
Optionally, the second obtaining module is configured to:
project the points scanned by the single-line lidar onto the two-dimensional image plane to obtain the sparse known label.
Optionally, the processing module comprises a model construction submodule, which comprises:
an acquiring unit for obtaining multiple RGB images;
an extraction unit for extracting features of the RGB images using a fully convolutional deep residual network;
a converting unit for converting the features into feature vectors using a fully connected layer;
an optimization unit for training the fully convolutional deep residual network according to a loss function to obtain optimized network parameters;
a construction unit for constructing the depth estimation model from the optimized network parameters.
Optionally, the fusion module is configured to:
obtain, from the features of the first depth map and the features of the sparse known label, a feature vector fusing the sparse known label using the fully connected layer;
convert the feature vector fusing the sparse known label into the final depth map of the RGB image to be estimated.
The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A monocular image depth estimation method fusing sparse known labels, characterized by comprising:
obtaining an RGB image to be estimated;
obtaining a sparse known label using a single-line lidar;
inputting the RGB image to be estimated into a pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated;
fusing the first depth map with the sparse known label through a fully connected layer to obtain a final depth map of the RGB image to be estimated.
2. The method according to claim 1, characterized in that obtaining the sparse known label using the single-line lidar comprises:
projecting the points scanned by the single-line lidar onto the two-dimensional image plane to obtain the sparse known label.
3. The method according to claim 1, characterized in that establishing the depth estimation model comprises:
obtaining multiple RGB images;
extracting features of the RGB images using a fully convolutional deep residual network;
converting the features into feature vectors using a fully connected layer;
training the fully convolutional deep residual network according to a loss function to obtain optimized network parameters;
constructing the depth estimation model from the optimized network parameters.
4. The method according to claim 3, characterized in that training the fully convolutional deep residual network according to the loss function to obtain the optimized network parameters comprises:
The loss function is shown below:

$$B(x_i)=\begin{cases}|x_i|, & |x_i|\le c\\[4pt]\dfrac{x_i^2+c^2}{2c}, & |x_i|>c\end{cases}$$

where $\tilde{y}_i$ and $y_i$ are respectively the real depth value and the predicted depth value of pixel i; $x_i$ is the difference between the real depth value and the predicted depth value of pixel i; and $c$ is a threshold.
5. The method according to claim 1, characterized in that fusing the first depth map with the sparse known label through the fully connected layer to obtain the final depth map of the RGB image to be estimated comprises:
obtaining, from the features of the first depth map and the features of the sparse known label, a feature vector fusing the sparse known label using the fully connected layer;
converting the feature vector fusing the sparse known label into the final depth map of the RGB image to be estimated.
6. A monocular image depth estimation device fusing sparse known labels, characterized by comprising:
a first obtaining module for obtaining an RGB image to be estimated;
a second obtaining module for obtaining a sparse known label using a single-line lidar;
a processing module for inputting the RGB image to be estimated into a pre-established depth estimation model to obtain a first depth map of the RGB image to be estimated;
a fusion module for fusing the first depth map with the sparse known label through a fully connected layer to obtain a final depth map of the RGB image to be estimated.
7. The device according to claim 6, characterized in that the second obtaining module is configured to:
project the points scanned by the single-line lidar onto the two-dimensional image plane to obtain the sparse known label.
8. The device according to claim 6, characterized in that the processing module comprises a model construction submodule, which comprises:
an acquiring unit for obtaining multiple RGB images;
an extraction unit for extracting features of the RGB images using a fully convolutional deep residual network;
a converting unit for converting the features into feature vectors using a fully connected layer;
an optimization unit for training the fully convolutional deep residual network according to a loss function to obtain optimized network parameters;
a construction unit for constructing the depth estimation model from the optimized network parameters.
9. The device according to claim 6, characterized in that the fusion module is configured to:
obtain, from the features of the first depth map and the features of the sparse known label, a feature vector fusing the sparse known label using the fully connected layer;
convert the feature vector fusing the sparse known label into the final depth map of the RGB image to be estimated.
CN201811050407.XA 2018-09-10 2018-09-10 Monocular image depth estimation method and device fusing sparse known labels Pending CN109461178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811050407.XA CN109461178A (en) 2018-09-10 2018-09-10 Monocular image depth estimation method and device fusing sparse known labels

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811050407.XA CN109461178A (en) 2018-09-10 2018-09-10 Monocular image depth estimation method and device fusing sparse known labels

Publications (1)

Publication Number Publication Date
CN109461178A (en) 2019-03-12

Family

ID=65606646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811050407.XA Pending CN109461178A (en) 2018-09-10 2018-09-10 Monocular image depth estimation method and device fusing sparse known labels

Country Status (1)

Country Link
CN (1) CN109461178A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414418A (en) * 2019-07-25 2019-11-05 电子科技大学 A kind of Approach for road detection of image-lidar image data Multiscale Fusion
CN111179331A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
CN111340864A (en) * 2020-02-26 2020-06-26 浙江大华技术股份有限公司 Monocular estimation-based three-dimensional scene fusion method and device
CN111476190A (en) * 2020-04-14 2020-07-31 上海眼控科技股份有限公司 Target detection method, apparatus and storage medium for unmanned driving
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN112712017A (en) * 2020-12-29 2021-04-27 上海智蕙林医疗科技有限公司 Robot, monocular depth estimation method and system and storage medium
CN113269118A (en) * 2021-06-07 2021-08-17 重庆大学 Monocular vision forward vehicle distance detection method based on depth estimation
CN114627351A (en) * 2022-02-18 2022-06-14 电子科技大学 Fusion depth estimation method based on vision and millimeter wave radar
CN114782782A (en) * 2022-06-20 2022-07-22 武汉大学 Uncertainty quantification method for learning performance of monocular depth estimation model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 A kind of image depth estimation method based on sparse laser observations
CN107204010A (en) * 2017-04-28 2017-09-26 中国科学院计算技术研究所 A kind of monocular image depth estimation method and system
CN107578436A (en) * 2017-08-02 2018-01-12 南京邮电大学 A kind of monocular image depth estimation method based on full convolutional neural networks FCN

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 A kind of image depth estimation method based on sparse laser observations
CN107204010A (en) * 2017-04-28 2017-09-26 中国科学院计算技术研究所 A kind of monocular image depth estimation method and system
CN107578436A (en) * 2017-08-02 2018-01-12 南京邮电大学 A kind of monocular image depth estimation method based on full convolutional neural networks FCN

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李耀宇: "Monocular Image Depth Estimation Based on Deep Learning" (基于深度学习的单目图像深度估计), 15 March 2018 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414418B (en) * 2019-07-25 2022-06-03 电子科技大学 Road detection method for multi-scale fusion of image-laser radar image data
CN110414418A (en) * 2019-07-25 2019-11-05 电子科技大学 A kind of Approach for road detection of image-lidar image data Multiscale Fusion
CN111179331A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
CN111179331B (en) * 2019-12-31 2023-09-08 智车优行科技(上海)有限公司 Depth estimation method, depth estimation device, electronic equipment and computer readable storage medium
CN111340864A (en) * 2020-02-26 2020-06-26 浙江大华技术股份有限公司 Monocular estimation-based three-dimensional scene fusion method and device
CN111340864B (en) * 2020-02-26 2023-12-12 浙江大华技术股份有限公司 Three-dimensional scene fusion method and device based on monocular estimation
CN111476190A (en) * 2020-04-14 2020-07-31 上海眼控科技股份有限公司 Target detection method, apparatus and storage medium for unmanned driving
CN111583663B (en) * 2020-04-26 2022-07-12 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN112712017A (en) * 2020-12-29 2021-04-27 上海智蕙林医疗科技有限公司 Robot, monocular depth estimation method and system and storage medium
CN113269118A (en) * 2021-06-07 2021-08-17 重庆大学 Monocular vision forward vehicle distance detection method based on depth estimation
CN114627351A (en) * 2022-02-18 2022-06-14 电子科技大学 Fusion depth estimation method based on vision and millimeter wave radar
CN114782782A (en) * 2022-06-20 2022-07-22 武汉大学 Uncertainty quantification method for learning performance of monocular depth estimation model
CN114782782B (en) * 2022-06-20 2022-10-04 武汉大学 Uncertainty quantification method for learning performance of monocular depth estimation model

Similar Documents

Publication Publication Date Title
CN109461178A (en) Monocular image depth estimation method and device fusing sparse known labels
US11361456B2 (en) Systems and methods for depth estimation via affinity learned with convolutional spatial propagation networks
KR102126724B1 (en) Method and apparatus for restoring point cloud data
CN113902897B (en) Training of target detection model, target detection method, device, equipment and medium
CN110910437B (en) Depth prediction method for complex indoor scene
CN110689562A Trajectory loop closure detection optimization method based on a generative adversarial network
CN111507222B (en) Three-dimensional object detection frame based on multisource data knowledge migration
CN112435267B (en) Disparity map calculation method for high-resolution urban satellite stereo image
CN111160293A Small-target ship detection method and system based on a feature pyramid network
WO2023155387A1 (en) Multi-sensor target detection method and apparatus, electronic device and storage medium
CN110349186A Large-displacement optical flow computation method based on deep matching
CN116385660A (en) Indoor single view scene semantic reconstruction method and system
Nguyen et al. ROI-based LiDAR sampling algorithm in on-road environment for autonomous driving
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN114358133A (en) Method for detecting looped frames based on semantic-assisted binocular vision SLAM
Wang et al. Pedestrian detection based on YOLOv3 multimodal data fusion
Le Besnerais et al. Dense height map estimation from oblique aerial image sequences
CN117808689A Depth completion method based on fusion of millimeter-wave radar and camera
CN116912645A (en) Three-dimensional target detection method and device integrating texture and geometric features
CN112561979A Self-supervised monocular depth estimation method based on deep learning
Hirata et al. Real-time dense depth estimation using semantically-guided LIDAR data propagation and motion stereo
CN116778266A (en) Multi-scale neighborhood diffusion remote sensing point cloud projection image processing method
CN116503686A (en) Training method of image correction model, image correction method, device and medium
CN115731273A (en) Pose graph optimization method and device, electronic equipment and storage medium
CN116129422A (en) Monocular 3D target detection method, monocular 3D target detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190312)