CN109003276A - Fusion correction method based on binocular stereo vision and low-beam lidar - Google Patents
Fusion correction method based on binocular stereo vision and low-beam lidar
- Publication number
- CN109003276A (application CN201810575904.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- laser radar
- low beam
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to a fusion correction method based on binocular stereo vision and low-beam lidar, comprising: (1) in the network structure, the data input layer's channel count is increased from the original 3 to 4; the added channel stores the registered low-beam lidar disparity data, obtained by substituting the lidar's raw depth values into the stereo model to back-calculate disparity; (2) for the data set, the training set is modified from the raw data set: the input data become the left and right images with the low-beam lidar data superimposed; in the initial network training stage, data in the same format as the low-beam lidar can be converted directly from the training-set input data for use at the input end; (3) training: the initial base network is a modified disparity network based on DispNet, whose input end is improved. Compared with the prior art, the present invention has advantages such as low cost and high accuracy.
Description
Technical field
The present invention relates to the field of unmanned driving, and in particular to a fusion correction method for binocular stereo vision and low-beam lidar in autonomous driving.
Background art
Binocular stereo vision: as a branch of computer vision, binocular stereo vision is a method that, based on the parallax principle, acquires two images of the measured object from different positions and computes the positional deviation between corresponding image points to obtain three-dimensional information.
As shown in Figure 1, a point A in the real scene has coordinates (X, Y, Z). When it projects onto the left and right image planes of the binocular stereo vision system as al(ul, vl) and ar(ur, vr), the disparity d = ul − ur is known. By the binocular triangulation principle Z = B·f/d (where the baseline B and focal length f are determined by the binocular camera structure), the depth Z can be obtained.
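As a quick illustration of the triangulation principle above, the following sketch computes depth from disparity; the baseline and focal-length values are hypothetical examples, not taken from the patent.

```python
def depth_from_disparity(d, baseline_m, focal_px):
    """Binocular triangulation: Z = B * f / d.

    d          -- disparity in pixels (u_l - u_r)
    baseline_m -- baseline B between the two cameras, in meters
    focal_px   -- focal length f, in pixels
    """
    if d <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / d

# Hypothetical stereo rig: B = 0.54 m, f = 721 px
Z = depth_from_disparity(36.05, 0.54, 721.0)  # roughly 10.8 m
```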
Binocular stereo matching: the process of matching pixels between the left and right images is called stereo matching. It is the key technology and critical problem of stereo vision; matching errors translate into large errors in the measured depth.
Lidar: its working principle is to emit a detection signal (a laser beam) toward the target and then compare the signal reflected back from the target (the target echo) with the emitted signal; after suitable processing, relevant information about the target can be obtained, such as range, bearing, height, speed, attitude, and even shape parameters. However, its manufacturing cost is currently high: a 64-beam lidar costs hundreds of thousands of RMB on the market and cannot be deployed at scale on vehicles.
The prior art related to the present invention:
1. Before deep learning flourished, researchers matched pixels in the left and right images using traditional constraint methods and matching algorithms. Because of their different ways of partitioning the problem, conventional stereo matching algorithms come in many varieties, including region-based matching, feature-based matching, and phase-based matching. Traditional stereo matching methods have low accuracy, are strongly affected by the environment, and involve heavy computation, so real-time performance cannot be guaranteed.
2. The rapid development of deep learning has brought great leaps to related fields; many researchers have studied various deep learning network structures to realize end-to-end mapping from the left and right images to the disparity map, including DispNet. However, the drawback of such structures is that matching accuracy is strongly affected by the environment, and under a given systematic error, the larger the depth (i.e., the farther the target), the larger the absolute error.
3. Semantic segmentation algorithms based on deep learning networks can semantically segment real scenes at the image level; a general segmentation network framework is shown in Figure 2.
In autonomous vehicles, because of the inherent physical limitations of each sensor, a single sensor cannot complete the perception task alone; fusion of multiple sensors therefore becomes necessary.
Multi-beam lidar is expensive on the market and cannot be deployed at scale on autonomous vehicles; the alternative, binocular stereo vision, lacks accuracy and is strongly affected by the environment: it cannot produce dense, high-accuracy disparity maps, its effective range is short, and its absolute disparity error becomes very large at slightly longer distances.
Binocular stereo vision can provide dense pixel-level 3D point clouds, but it is visibly affected by the environment, and the accuracy of the point clouds it generates cannot be fully controlled. Although the point cloud obtainable from a low-beam lidar is too sparse, its accuracy is high and its cost is very low, so it can provide an accuracy guarantee for the fused point cloud.
A simple sensor coupling is shown schematically in Figure 3. However, simple coupling cannot make the sensors properly complement each other, so the fusion of binocular stereo vision and low-beam lidar needs to be studied.
Summary of the invention
The object of the present invention is to overcome the above drawbacks of the prior art and provide a fusion correction method for binocular stereo vision and low-beam lidar in autonomous driving.
The purpose of the present invention can be achieved through the following technical solutions:
A fusion correction method based on binocular stereo vision and low-beam lidar, which fuses low-beam lidar with stereo vision on the basis of deep learning to improve point cloud accuracy, the method comprising:
(1) in the network structure, the data input layer's channel count is increased from the original 3 to 4; the added channel stores the registered low-beam lidar disparity data, obtained by substituting the lidar's raw depth values into the stereo model to back-calculate disparity;
(2) for the data set, the training set is modified from the raw data set: the input data become the left and right images with the low-beam lidar data superimposed; in the initial network training stage, data in the same format as the low-beam lidar can be converted directly from the training-set input data for use at the input end;
(3) training: the initial base network is a modified disparity network based on DispNet, whose input end is improved so that the results meet the requirements.
Preferably, a systematic error compensation module is added to the method to reduce the stereo vision system error.
Preferably, the solution procedure of the systematic error compensation module comprises the following steps:
Step 1: taking the lidar data as ground truth, study the distribution of the absolute error of the true measured depth Z over the disparity d within a set scale;
Step 2: collect a large amount of real-scene data;
Step 3: analyze the collected data, i.e., a large number of stereo vision disparities d_c and low-beam lidar data d_l, to obtain their mapping relation and the distribution error function J(d) = (d_c − d_l)²;
Step 4: establish the fitting function between the lidar data set (the disparity ground-truth set) and the stereo vision disparity map, i.e., the compensation function y(d_c) = Σ_{i=1}^{j} K_i·d_c^i, where the K_i are the compensation function coefficients to be solved and j is the highest degree of the fitting function;
Step 5: solve the K_i, i = 1, 2, …, j, by gradient descent:
Step 5.1: the error function is J(K_1, …, K_j) = (1/2m)·Σ_{k=1}^{m} (y(d_c^(k)) − d_l^(k))², where m is the number of regions produced by the semantic segmentation part;
Step 5.2: update the compensation function coefficients by gradient descent:
the gradient of K_i is the partial derivative of the error function J with respect to K_i: ∂J/∂K_i = (1/m)·Σ_{k=1}^{m} (y(d_c^(k)) − d_l^(k))·(d_c^(k))^i;
the update of K_i: K_i ← K_i − α·∂J/∂K_i, where α is the step size.
Step 5.3: substitute the updated K_i into J(K_1, K_2, …, K_j); if J < e, where e is the set error threshold, the computation ends and the compensation function is y(d_c); otherwise repeat Step 3 and continue updating the K_i until J is sufficiently small.
Preferably, semantic segmentation is fused in as a constraint to reduce the set of drifting erroneous points in the stereo vision point cloud.
Preferably, the semantic segmentation module can be placed at the data input layer, i.e., the data input layer of the fusion network extends one more channel to store the semantic segmentation result as input.
Preferably, after all modules are initialized, the overall network needs retraining and tuning, comprising the following steps:
Step 1: register the low-beam lidar and the binocular camera system;
Step 2: collect a large data set in real scenes for training and later system adjustment and optimization;
Step 3: retrain and tune the overall framework on the basis of the initialized network model.
Compared with the prior art, the present invention has the following advantages:
1. The invention proposes a low-cost, high-accuracy point cloud generation that is less affected by the environment;
2. The invention uses the low-beam lidar to design a binocular stereo matching error compensation algorithm, solving the problem that, for a given systematic error in the stereo vision disparity, the absolute depth error grows with distance, i.e., solving the problem that the effective range is too short;
3. The invention pioneeringly fuses low-beam lidar data into the original binocular stereo vision deep learning network framework, solving the problem that stereo vision is excessively affected by environmental factors such as illumination while keeping cost in mind;
4. To further reduce the stereo matching error rate, eliminate problematic drifting 3D points, and accelerate training, the invention integrates the semantic segmentation network results into the binocular stereo vision system as constraints.
Brief description of the drawings
Fig. 1 is the schematic diagram of binocular stereo vision;
Fig. 2 is the general segmentation network framework;
Fig. 3 is the schematic diagram of simple sensor coupling;
Fig. 4 is the schematic diagram of the overall network structure of the invention;
Fig. 5 is the schematic diagram of the input data structure of the training set of the invention;
Fig. 6 is the initial base network structure;
Fig. 7 is the improved base network structure;
Fig. 8 is the distribution of the absolute error of Z over d within a certain scale;
Fig. 9 is the map of the distribution error function J(d) = (d_c − d_l)²;
Fig. 10 is a schematic image semantic segmentation result;
Fig. 11 is the deep learning network structure of the semantic segmentation module.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
The technical problem solved by the invention is as follows: multi-beam lidar is expensive on the market and cannot be deployed at scale on autonomous vehicles; the alternative, binocular stereo vision, lacks accuracy, is strongly affected by the environment, cannot produce dense and high-accuracy disparity maps, has a short effective range, and suffers very large absolute disparity errors at slightly longer distances. This patent solves the problems that the stereo vision effective range is too short and that the disparity accuracy is excessively affected by the environment.
Summary of the main inventive content of the invention:
1) use the low-beam lidar to design a binocular stereo matching error compensation algorithm, solving the problem that, for a given systematic error in the stereo vision disparity, the absolute depth error grows with distance, i.e., solving the problem that the effective range is too short;
2) pioneeringly fuse low-beam lidar data into the original binocular stereo vision deep learning network framework, solving the problem that stereo vision is excessively affected by environmental factors such as illumination while keeping cost in mind;
3) to further reduce the stereo matching error rate, eliminate problematic drifting 3D points, and accelerate training, integrate the semantic segmentation network results into the binocular stereo vision system as constraints.
The core method flow of the present invention:
Obtaining the sample set / building the initial network model
Since the present invention is based on a deep learning stereo matching algorithm, the support of data sets is inevitably needed. An initial network model can first be trained with data from the major public data set websites, but since we need to fuse the low-beam lidar data with vision:
1) in the network structure, the left-image data input layer's channel count is increased from the original 3 to 4; the added channel stores the registered low-beam lidar disparity data (obtained by substituting the lidar's raw depth values into the stereo model to back-calculate the disparity);
2) for the data set, our training set is also modified from the raw data set: the input data become the left and right images with the low-beam lidar data superimposed, with the data format shown in Figure 5. In the initial network training stage, data in the same format as the low-beam lidar can be converted directly from the training-set input data for use at the input end.
3) training: the initial base network structure is shown in Figure 6. This network structure is a modified disparity network based on DispNet; we improve it so that the results meet our needs.
The input end is changed as shown in Figure 7.
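The channel extension in 1) and 2) above can be sketched as follows, assuming NumPy arrays for the left image and the registered lidar depth map. The function names and the zero-fill convention for pixels without a lidar return are illustrative assumptions, not part of the patent.

```python
import numpy as np

def lidar_depth_to_disparity(depth, baseline_m, focal_px):
    """Back-calculate disparity from lidar depth via the stereo model d = B*f/Z.
    Pixels without a lidar return (depth == 0) stay 0 in the disparity channel."""
    disp = np.zeros_like(depth, dtype=np.float32)
    hit = depth > 0
    disp[hit] = baseline_m * focal_px / depth[hit]
    return disp

def make_four_channel_input(left_rgb, lidar_depth, baseline_m, focal_px):
    """Stack the registered lidar disparity as a 4th channel onto the left image,
    matching the extended (3 -> 4 channel) data input layer described above."""
    disp = lidar_depth_to_disparity(lidar_depth, baseline_m, focal_px)
    return np.dstack([left_rgb.astype(np.float32), disp])  # H x W x 4

# Hypothetical toy example: a 2x2 image with one lidar hit at 10.8 m
left = np.zeros((2, 2, 3), np.uint8)
depth = np.array([[0.0, 10.8], [0.0, 0.0]])
x = make_four_channel_input(left, depth, 0.54, 721.0)
```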
Design of the disparity systematic error compensation algorithm
Binocular stereo vision uses the triangulation principle Z = B·f/d, where the baseline B and focal length f are determined by the binocular camera structure and their exact values can be determined by calibration, so the main error source is the disparity d. The disparity d is mainly obtained by the left-right image pixel matching algorithm, but because factors such as the actual illumination conditions vary in complex ways, it cannot be guaranteed that every pixel is matched accurately. When the measured object is far away, i.e., the true Z is large, a small deviation in the disparity d brings a huge absolute error in the measured Z.
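The sensitivity described above can be checked numerically: for Z = B·f/d, a fixed disparity error produces an absolute depth error that grows roughly with Z². The numbers below (baseline, focal length, 0.5 px disparity error) are hypothetical examples.

```python
# Hypothetical rig: B = 0.54 m, f = 721 px, constant disparity error of 0.5 px
B, f, derr = 0.54, 721.0, 0.5

def abs_depth_error(Z):
    """Absolute depth error at range Z when the disparity is off by 'derr' px.
    Since Z = B*f/d, the error scales roughly as (Z**2 / (B*f)) * derr."""
    d = B * f / Z                       # true disparity at range Z
    return abs(B * f / (d - derr) - Z)  # depth after perturbing the disparity

for Z in (5.0, 20.0, 50.0):
    print(f"Z = {Z:5.1f} m -> depth error of about {abs_depth_error(Z):.2f} m")
```

The same 0.5 px matching error that is negligible at 5 m grows to several meters at 50 m, which is exactly the short-effective-range problem the compensation algorithm targets.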
1) To solve this problem, make up for the large error of binocular stereo vision when measuring distant objects, and improve its overall accuracy, we study the error distribution of d within a certain scale, taking the lidar data as ground truth, and design a corresponding compensation algorithm according to this error distribution, realizing pixel-level fusion of the low-beam lidar and binocular stereo vision point clouds. The distribution of the absolute error of Z is shown in Figure 8.
2) collect a large amount of real-scene data;
3) analyze the collected data, i.e., a large number of stereo vision disparities d_c and low-beam lidar data d_l, to obtain their mapping relation and the distribution error function J(d) = (d_c − d_l)²; the error distribution is shown in Figure 9;
4) establish the fitting function between the lidar data set (the disparity ground-truth set) and the stereo vision disparity map, i.e., the compensation function y(d_c) = Σ_{i=1}^{j} K_i·d_c^i;
5) solve the K_i by gradient descent:
① the error function is J(K_1, …, K_j) = (1/2m)·Σ_{k=1}^{m} (y(d_c^(k)) − d_l^(k))²;
② update the compensation function coefficients by gradient descent:
the gradient of K_i: ∂J/∂K_i = (1/m)·Σ_{k=1}^{m} (y(d_c^(k)) − d_l^(k))·(d_c^(k))^i;
the update of K_i: K_i ← K_i − α·∂J/∂K_i;
③ substitute the updated K_i into J(K_1, K_2, …, K_j); if J < e, the computation ends and the compensation function is y(d_c); otherwise repeat 3) and continue updating the K_i until J is sufficiently small.
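The gradient-descent solution in 5) can be sketched as below. Since the original formula images are not reproduced in the text, the polynomial form of the compensation function and the mean-squared error function are reconstructions consistent with the surrounding definitions (coefficients K_i, top degree j, sample count m, step size α, threshold e), not the patent's exact formulas.

```python
import numpy as np

def fit_compensation(d_c, d_l, degree_j=3, alpha=1e-4, e=1e-3, max_iter=10000):
    """Solve the coefficients K_i of the compensation function
    y(d_c) = sum_i K_i * d_c**i by gradient descent, assuming a
    mean-squared error J over the m collected sample pairs."""
    d_c = np.asarray(d_c, float)
    d_l = np.asarray(d_l, float)
    m = len(d_c)
    K = np.zeros(degree_j)                                       # K_1 .. K_j
    powers = np.stack([d_c**i for i in range(1, degree_j + 1)])  # j x m
    for _ in range(max_iter):
        y = K @ powers                  # y(d_c) for every sample
        r = y - d_l
        J = (r**2).sum() / (2 * m)      # error function
        if J < e:                       # stop at the set threshold e
            break
        grad = powers @ r / m           # dJ/dK_i for each coefficient
        K -= alpha * grad               # K_i update with step size alpha
    return K, J

# Toy check on data generated (hypothetically) by y = 0.9 * d_c
d = np.linspace(1, 5, 50)
K, J = fit_compensation(d, 0.9 * d, degree_j=1, alpha=1e-2)
```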
Joint design of the semantic segmentation module
At the image object level, we can use some existing algorithms and techniques to provide valuable priors in support of our point cloud fusion and correction; these existing priors then provide the corresponding fusion boundaries and constraints when performing the point cloud fusion correction of stereo vision and the low-beam lidar, giving more grounds and guarantees for fusion accuracy.
Through semantic segmentation, we can know at the image level what each region of the scene is; transferred to the stereo vision 3D point cloud, we can know in advance what positional relations and boundaries a certain cluster of points has in reality, and adjust erroneous point clouds accordingly to improve their accuracy. Figure 10 shows an image semantic segmentation result, in which each color represents a class; the deep learning network structure is shown in Figure 11. It can be placed at the data input layer, i.e., the data input layer of the fusion network extends one more channel to store the semantic segmentation result as input.
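One way the segmentation prior could constrain the point cloud, as described above, is to treat points whose disparity deviates strongly from their semantic region's statistics as drifting points. The region-median rule below is an illustrative assumption, not the patent's exact constraint.

```python
import numpy as np

def suppress_drifting_points(disparity, seg_labels, max_dev=3.0):
    """For each semantic region, pull points whose disparity deviates from the
    region median by more than max_dev median-absolute-deviation units back to
    that median. Illustrative outlier rule, not the patent's exact method."""
    out = disparity.astype(np.float32).copy()
    for label in np.unique(seg_labels):
        mask = seg_labels == label
        vals = out[mask]
        med = float(np.median(vals))
        mad = float(np.median(np.abs(vals - med))) + 1e-6
        drift = mask & (np.abs(out - med) > max_dev * mad)
        out[drift] = med        # correct the drifting points in this region
    return out

# Toy example: a single semantic region containing one drifting pixel
disp = np.array([[10.0, 10.1], [10.2, 55.0]])
labels = np.zeros((2, 2), int)
fixed = suppress_drifting_points(disp, labels)
```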
Overall tuning of the fusion network
After all modules are initialized, the overall network needs retraining and tuning.
1) register the low-beam lidar and the binocular camera system;
2) collect a large data set in real scenes for training and later system adjustment and optimization;
3) retrain and tune the overall framework on the basis of the initialized network model.
The overall deep learning network framework is as follows:
Step 1, stereo vision fusion data set part: for the data set, our training set is also modified from the raw data set; the input data become the left and right images with the low-beam lidar data superimposed, with the data format shown in Figure 5; the KITTI data set, after format processing, is used as the training data for initializing the fused stereo vision part;
Step 2, initialization training of the fused stereo vision part: the initial base network structure is shown in Figure 6; it is a modified disparity network based on DispNet, which we improve so that the results meet our needs; the input end is changed as shown in Figure 7;
Step 3, design of the disparity systematic error compensation algorithm, yielding the compensation function y(d_c); once determined, the compensation function does not participate in the later retraining and tuning of the overall framework;
Step 4, first train the semantic segmentation part, using the left images of the stereo vision training set as its training set; the semantic segmentation sub-network is an independent part and does not participate in the later overall tuning, i.e., it performs no backward weight updates, and it is trained independently if its performance needs improving;
Step 5, retrain and tune the overall network, and obtain indoor and outdoor test results.
Summary of the key technical points of the invention:
Core innovation 1: fuse low-beam lidar with stereo vision on the basis of deep learning to improve point cloud accuracy and reduce the influence of environmental changes on vision, thereby guaranteeing practicality;
Core innovation 2: fuse semantic segmentation in as a constraint to reduce the set of drifting erroneous points in the stereo vision point cloud;
Core innovation 3: design an error compensation algorithm to reduce the stereo vision system error;
Core innovation 4: based on innovations 1, 2 and 3, propose a complete fusion scheme for low-beam lidar and stereo vision.
Beneficial effects corresponding to the key technical points of the invention:
Beneficial effect 1: low cost and high accuracy; the generated point cloud is less affected by the environment.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. A fusion correction method based on binocular stereo vision and low-beam lidar, characterized in that low-beam lidar is fused with stereo vision on the basis of deep learning to improve point cloud accuracy, the method comprising:
(1) in the network structure, the data input layer's channel count is increased from the original 3 to 4; the added channel stores the registered low-beam lidar disparity data, obtained by substituting the lidar's raw depth values into the stereo model to back-calculate disparity;
(2) for the data set, the training set is modified from the raw data set: the input data become the left and right images with the low-beam lidar data superimposed; in the initial network training stage, data in the same format as the low-beam lidar can be converted directly from the training-set input data for use at the input end;
(3) training: the initial base network is a modified disparity network based on DispNet, whose input end is improved so that the results meet the requirements.
2. The method according to claim 1, characterized in that a systematic error compensation module is added to the method to reduce the stereo vision system error.
3. The method according to claim 2, characterized in that the solution procedure of the systematic error compensation module comprises the following steps:
Step 1: taking the lidar data as ground truth, study the distribution of the absolute error of the true measured depth Z over the disparity d within a set scale;
Step 2: collect a large amount of real-scene data;
Step 3: analyze the collected data, i.e., a large number of stereo vision disparities d_c and low-beam lidar data d_l, to obtain their mapping relation and the distribution error function J(d) = (d_c − d_l)²;
Step 4: establish the fitting function between the lidar data set (the disparity ground-truth set) and the stereo vision disparity map, i.e., the compensation function y(d_c) = Σ_{i=1}^{j} K_i·d_c^i, where the K_i are the compensation function coefficients to be solved and j is the highest degree of the fitting function;
Step 5: solve the K_i, i = 1, 2, …, j, by gradient descent:
Step 5.1: the error function is J(K_1, …, K_j) = (1/2m)·Σ_{k=1}^{m} (y(d_c^(k)) − d_l^(k))², where m is the number of regions produced by the semantic segmentation part;
Step 5.2: update the compensation function coefficients by gradient descent:
the gradient of K_i is the partial derivative of the error function J with respect to K_i: ∂J/∂K_i = (1/m)·Σ_{k=1}^{m} (y(d_c^(k)) − d_l^(k))·(d_c^(k))^i;
the update of K_i: K_i ← K_i − α·∂J/∂K_i, where α is the step size;
Step 5.3: substitute the updated K_i into J(K_1, K_2, …, K_j); if J < e, where e is the set error threshold, the computation ends and the compensation function is y(d_c); otherwise repeat Step 3 and continue updating the K_i until J is sufficiently small.
4. The method according to any one of claims 1 to 3, characterized in that semantic segmentation is fused in as a constraint to reduce the set of drifting erroneous points in the stereo vision point cloud.
5. The method according to claim 4, characterized in that the semantic segmentation module can be placed at the data input layer, i.e., the data input layer of the fusion network extends one more channel to store the semantic segmentation result as input.
6. The method according to any one of claims 1, 2, 3 or 5, characterized in that after all modules are initialized, the overall network needs retraining and tuning, comprising the following steps:
Step 1: register the low-beam lidar and the binocular camera system;
Step 2: collect a large data set in real scenes for training and later system adjustment and optimization;
Step 3: retrain and tune the overall framework on the basis of the initialized network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810575904.5A CN109003276A (en) | 2018-06-06 | 2018-06-06 | Fusion correction method based on binocular stereo vision and low-beam lidar |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810575904.5A CN109003276A (en) | 2018-06-06 | 2018-06-06 | Fusion correction method based on binocular stereo vision and low-beam lidar |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109003276A true CN109003276A (en) | 2018-12-14 |
Family
ID=64599982
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810575904.5A Pending CN109003276A (en) | 2018-06-06 | 2018-06-06 | Antidote is merged based on binocular stereo vision and low line beam laser radar |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109003276A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110675436A (en) * | 2019-09-09 | 2020-01-10 | 中国科学院微小卫星创新研究院 | Laser radar and stereoscopic vision registration method based on 3D feature points |
CN110888144A (en) * | 2019-12-04 | 2020-03-17 | 吉林大学 | Laser radar data synthesis method based on sliding window |
CN112313534A (en) * | 2019-05-31 | 2021-02-02 | 深圳市大疆创新科技有限公司 | Method for multi-channel laser radar point cloud interpolation and distance measuring device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104899855A (en) * | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN107862287A (en) * | 2017-11-08 | 2018-03-30 | 吉林大学 | A kind of front zonule object identification and vehicle early warning method |
CN107886477A (en) * | 2017-09-20 | 2018-04-06 | 武汉环宇智行科技有限公司 | Unmanned neutral body vision merges antidote with low line beam laser radar |
-
2018
- 2018-06-06 CN CN201810575904.5A patent/CN109003276A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104899855A (en) * | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN107886477A (en) * | 2017-09-20 | 2018-04-06 | 武汉环宇智行科技有限公司 | Unmanned neutral body vision merges antidote with low line beam laser radar |
CN107862287A (en) * | 2017-11-08 | 2018-03-30 | 吉林大学 | A kind of front zonule object identification and vehicle early warning method |
Non-Patent Citations (1)
Title |
---|
Wang Ling: "Data Mining Learning Methods", 31 August 2017, Metallurgical Industry Press *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112313534A (en) * | 2019-05-31 | 2021-02-02 | 深圳市大疆创新科技有限公司 | Method for multi-channel laser radar point cloud interpolation and distance measuring device |
CN110675436A (en) * | 2019-09-09 | 2020-01-10 | 中国科学院微小卫星创新研究院 | Laser radar and stereoscopic vision registration method based on 3D feature points |
CN110888144A (en) * | 2019-12-04 | 2020-03-17 | 吉林大学 | Laser radar data synthesis method based on sliding window |
CN110888144B (en) * | 2019-12-04 | 2023-04-07 | 吉林大学 | Laser radar data synthesis method based on sliding window |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107886477B (en) | Fusion correction method for three-dimensional vision and low-beam laser radar in unmanned driving | |
CN111951305B (en) | Target detection and motion state estimation method based on vision and laser radar | |
CN108229366B (en) | Deep learning vehicle-mounted obstacle detection method based on radar and image data fusion | |
CN107194989B (en) | Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography | |
CN110288659B (en) | Depth imaging and information acquisition method based on binocular vision | |
WO2024114119A1 (en) | Sensor fusion method based on binocular camera guidance | |
CN109003276A (en) | Fusion correction method based on binocular stereo vision and low-beam lidar | |
CN111429528A (en) | Large-scale distributed high-precision map data processing system | |
CN110889899B (en) | Digital earth surface model generation method and device | |
CN102778224B (en) | Method for aerophotogrammetric bundle adjustment based on parameterization of polar coordinates | |
CN104076817A (en) | High-definition video aerial photography multimode sensor self-outer-sensing intelligent navigation system and method | |
WO2020215254A1 (en) | Lane line map maintenance method, electronic device and storage medium | |
CN109407115B (en) | Laser radar-based pavement extraction system and extraction method thereof | |
CN104567801B (en) | High-precision laser measuring method based on stereoscopic vision | |
CN114719848B (en) | Unmanned aerial vehicle height estimation method based on vision and inertial navigation information fusion neural network | |
CN114758504B (en) | Online vehicle overspeed early warning method and system based on filtering correction | |
CN114463303B (en) | Road target detection method based on fusion of binocular camera and laser radar | |
CN115272596A (en) | Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene | |
CN106969721A (en) | A kind of method for three-dimensional measurement and its measurement apparatus | |
CN110610650A (en) | Point cloud semantic map construction method based on deep learning and depth camera | |
CN114279434A (en) | Picture construction method and device, electronic equipment and storage medium | |
CN112712566B (en) | Binocular stereo vision sensor measuring method based on structure parameter online correction | |
CN111709998B (en) | ELM space registration model method for TOF camera depth data measurement error correction | |
CN116824433A (en) | Visual-inertial navigation-radar fusion self-positioning method based on self-supervision neural network | |
CN117058366A (en) | Large aircraft large part point cloud semantic segmentation method based on pre-training large model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181214 |