CN109002800A - Multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method - Google Patents
- Publication number: CN109002800A
- Application number: CN201810800374.XA
- Authority
- CN
- China
- Prior art keywords
- real-time
- three-dimensional
- information
- sensor fusion
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method. The recognition apparatus comprises a housing, a laser radar, monocular high-definition RGB cameras, and a computing mainboard. The recognition method is as follows: analyze the laser radar data and output the three-dimensional information of the object; based on the result of step two, process and analyze the data of the monocular high-definition RGB cameras and output the two-dimensional information of the recognized object; map the two-dimensional information into the three-dimensional information, fuse the two- and three-dimensional information with Kalman filtering, and robustly obtain the real-time recognition information of the object with a three-dimensional Kalman constant-acceleration model. The multi-sensor-fusion-based real-time three-dimensional target recognition apparatus of the invention uses multiple RGB cameras and a laser radar device to realize multi-sensor-fusion real-time target recognition, obtaining in real time the path, position, speed, and direction of the target object in the recognition-system coordinate system, with visualized results.
Description
Technical field
The present invention relates to the field of computer vision, and more particularly to a multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method.
Background technique
Existing multi-sensor fusion systems only provide the mapping between two-dimensional and three-dimensional data; they do not combine the hardware with algorithms into a complete system dedicated to real-time recognition, so in practical applications they suffer from long deployment cycles and are difficult to use. Moreover, existing target recognition methods are all based on a single sensor and cannot fuse the range information of a laser radar with the colour information of RGB camera images, so their real-time recognition accuracy is low.
Summary of the invention
The present invention provides a multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method. Using multiple RGB cameras and a laser radar device, it realizes multi-sensor-fusion real-time target recognition: by processing and fusing two-dimensional images and three-dimensional point cloud data, it obtains in real time the path, position, speed, and direction of the target object in the recognition-system coordinate system, and the results can be visualized as polygon bounding boxes.
The multi-sensor-fusion-based real-time three-dimensional target recognition apparatus comprises a housing; the housing is a regular-pentagon box. A laser radar is mounted on top of the housing for real-time three-dimensional target recognition in the point cloud data (PCD). A monocular high-definition RGB camera is mounted at the middle of each of the five sides of the housing for real-time two-dimensional target recognition in the RGB images. A computing mainboard is mounted inside the housing to collect, record, and process the data of the monocular high-definition RGB cameras and the laser radar and to output real-time recognition results.
The monocular high-definition RGB cameras generate images at a rate of 30 Hz, compressed to a resolution of 1920 × 1080.
The laser radar generates data at a frequency of 10 Hz with a 360-degree field of view.
The capture range of the laser radar is 0.9–130 m.
The computing mainboard receives the data of the monocular high-definition RGB cameras and the laser radar at 2 GB per minute.
The multi-sensor-fusion-based real-time three-dimensional target recognition method is carried out according to the following steps:
One, connect the laser radar and the monocular high-definition RGB cameras to the computing mainboard, and configure the laser radar and camera calibration files;
Two, analyze the laser radar data: from the input PCD, exclude the ground point cloud, then detect and localize targets in the PCD with mean shift, and output the three-dimensional bounding-box position and tracking information of the object;
Three, based on the recognition result of step two, process and analyze the two-dimensional image data of the monocular high-definition RGB cameras, detect and localize targets in the RGB images with mean shift, and output the two-dimensional bounding-box position and tracking information of the recognized object;
Four, map the two-dimensional information of step three into the three-dimensional information of step two, fuse the two- and three-dimensional information with Kalman filtering, and robustly obtain the real-time recognition information of the object with a three-dimensional Kalman constant-acceleration model, thereby completing real-time recognition.
In step two, the ground point cloud is excluded by removing ground points with kernel density estimation. A Kalman-filter constant-acceleration model is combined to increase the robustness of ground-point removal, and an angle search threshold defined from engineering experience reduces the influence of outliers. The processed point cloud is then passed through the mean-shift detection and tracking algorithm, which outputs the three-dimensional bounding-box position and tracking information of the recognized object.
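The kernel-density-estimation ground removal described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the point format, the 0.3 m cut-off, the default Scott-rule bandwidth of `gaussian_kde`, and the assumption that the ground forms the dominant mode of the height distribution are all choices made here for the sketch.

```python
import numpy as np
from scipy.stats import gaussian_kde

def remove_ground(points, cutoff=0.3):
    """Drop points near the dominant height mode (assumed to be the ground).

    points: (N, 3) array of x, y, z lidar returns.
    cutoff: height band (metres) around the ground mode to discard (illustrative).
    """
    z = points[:, 2]
    kde = gaussian_kde(z)                    # kernel density estimate of heights
    grid = np.linspace(z.min(), z.max(), 200)
    ground_z = grid[np.argmax(kde(grid))]    # dominant mode ~ ground height
    keep = np.abs(z - ground_z) > cutoff     # keep points off the ground band
    return points[keep]

# toy cloud: flat ground at z ~ 0 plus an object around z ~ 1.0
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(-5, 5, (1000, 2)),
                          rng.normal(0.0, 0.02, 1000)])
obstacle = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                            rng.normal(1.0, 0.05, 200)])
cloud = np.vstack([ground, obstacle])
filtered = remove_ground(cloud)
```

In practice the remaining off-ground points would then be clustered and handed to the mean-shift detector described in the text.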
In step three, the obtained three-dimensional bounding-box information is mapped into the two-dimensional image to obtain two-dimensional convex-hull information. The two-dimensional convex hull is enlarged by a ratio, and a log-likelihood-ratio computation yields a discriminative colour model of the object. Within the convex-hull region, the mean-shift detection and tracking algorithm outputs the two-dimensional bounding-box position and tracking information of the recognized object.
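The mapping of a three-dimensional box into an enlarged two-dimensional convex hull might look like the sketch below. It assumes a pinhole camera model with intrinsic matrix `K`; the box corners, focal length, and 1.2 enlargement ratio are illustrative assumptions, not the patent's values.

```python
import numpy as np
from scipy.spatial import ConvexHull

def box_to_hull(corners_3d, K, scale=1.2):
    """Project a 3-D bounding box into the image and return its enlarged 2-D hull.

    corners_3d: (8, 3) box corners in the camera frame (z > 0).
    K: (3, 3) pinhole intrinsic matrix.
    scale: enlargement ratio applied around the hull centroid (illustrative).
    """
    uvw = corners_3d @ K.T                   # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]            # divide by depth
    hull = ConvexHull(uv)
    poly = uv[hull.vertices]                 # hull vertices in order
    center = poly.mean(axis=0)
    return center + scale * (poly - center)  # enlarged search region

# unit cube 5 m in front of a camera with a 700 px focal length
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5) for z in (4.5, 5.5)])
K = np.array([[700., 0., 640.], [0., 700., 360.], [0., 0., 1.]])
region = box_to_hull(corners, K)
```

The colour histogram inside `region` versus outside it would then feed the log-likelihood-ratio colour model mentioned in the text.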
In step three, the influence of brightness on the image is reduced with a one-dimensional Kalman-filter constant-acceleration model.
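A one-dimensional constant-acceleration Kalman filter over per-frame brightness can be sketched like so. The process and measurement noise values `q` and `r` are assumptions for the sketch, not values stated in the patent.

```python
import numpy as np

def smooth_brightness(measurements, dt=1/30, q=1e-3, r=4.0):
    """1-D constant-acceleration Kalman filter over frame brightness.

    State: [brightness, rate, acceleration]; measurements are the mean
    frame intensity per 30 Hz frame. q, r are illustrative noise levels.
    """
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = q * np.eye(3)
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0, 0.0])
    P = np.eye(3)
    out = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # update with the new frame
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(3) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(1)
meas = 100.0 + rng.normal(0.0, 2.0, 300)   # flickering but constant scene
smoothed = smooth_brightness(meas)
```

The smoothed brightness would then be used to normalise the frames before the colour model is evaluated.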
In step three, for the mean-shift-based RGB image detection and localization, the detection and localization of the object in the next frame starts from the centroid mapped from the current frame.
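Seeding the next frame's mean-shift search from the centroid carried over from the current frame can be illustrated as follows, over a generic per-pixel weight map (e.g. a colour back-projection). The window size and convergence threshold are illustrative assumptions.

```python
import numpy as np

def mean_shift(weights, start, win=20, iters=30, eps=0.5):
    """Mean-shift over a per-pixel weight map, seeded at `start`.

    weights: (H, W) likelihood map, e.g. a colour-model back-projection.
    start: (x, y) centroid mapped in from the previous frame.
    """
    x, y = start
    h, w = weights.shape
    for _ in range(iters):
        x0, x1 = max(0, int(x - win)), min(w, int(x + win) + 1)
        y0, y1 = max(0, int(y - win)), min(h, int(y + win) + 1)
        patch = weights[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx = (xs * patch).sum() / total        # weighted centroid of the window
        ny = (ys * patch).sum() / total
        if np.hypot(nx - x, ny - y) < eps:     # converged
            return nx, ny
        x, y = nx, ny
    return x, y

# blob of weight centred at (70, 40); seed from the "previous frame" at (60, 35)
yy, xx = np.mgrid[0:100, 0:100]
w = np.exp(-((xx - 70) ** 2 + (yy - 40) ** 2) / 50.0)
cx, cy = mean_shift(w, (60, 35))
```

The converged centroid becomes both the two-dimensional detection for this frame and the seed for the next one.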
In step four, Kalman filtering is used to fuse and track the object centroids obtained from the images and the PCD: a Kalman-filter-based fusion model integrates the computed centroids, and a three-dimensional Kalman constant-acceleration model is used to track and fuse the centroids stably.
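The step-four fusion, a three-dimensional constant-acceleration Kalman filter updated with both the lidar centroid and the image-derived centroid, might be sketched as below. The noise covariances and the assumption that both centroids arrive in each 10 Hz cycle are illustrative, not taken from the patent.

```python
import numpy as np

DT = 0.1  # fusion cycle, set here by the 10 Hz lidar (assumed)

def make_f(dt):
    """Constant-acceleration transition for [pos, vel, acc] on each axis."""
    f1 = np.array([[1.0, dt, 0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    return np.kron(np.eye(3), f1)          # 9x9; state = [p, v, a] per axis

H = np.kron(np.eye(3), np.array([[1.0, 0.0, 0.0]]))   # observe 3-D position

def predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Fuse one 3-D centroid measurement z with noise covariance R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(9) - K @ H) @ P
    return x, P

F = make_f(DT)
Q = 1e-4 * np.eye(9)
R_lidar = 0.01 * np.eye(3)   # lidar centroid noise (illustrative)
R_cam = 0.25 * np.eye(3)     # image-derived centroid noise (illustrative)
x, P = np.zeros(9), 10.0 * np.eye(9)
for k in range(50):          # target moving at 0.5 m/s along x
    z = np.array([1.0 + 0.5 * DT * k, 2.0, 0.0])
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, z, R_lidar)   # lidar centroid
    x, P = update(x, P, z, R_cam)     # camera centroid, same instant
```

Sequential updates with per-sensor covariances are one standard way to weight a precise lidar centroid above a noisier image centroid; the filter state directly yields the position, speed, and direction reported by the system.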
Advantages of the present invention: the multi-sensor-fusion-based real-time three-dimensional target recognition apparatus of the invention uses multiple RGB cameras and a laser radar device to realize multi-sensor-fusion real-time target recognition, and the recognition method, by processing and fusing two-dimensional images and three-dimensional point cloud data, obtains in real time the path, position, speed, and direction of the target object in the recognition-system coordinate system; the results can be visualized as polygon bounding boxes.
Detailed description of the invention
Fig. 1 is a schematic diagram of the multi-sensor-fusion-based real-time three-dimensional target recognition apparatus of the embodiment;
Fig. 2 is a schematic diagram of the apparatus from another angle;
Fig. 3 is a schematic diagram of the internal structure of the apparatus;
Fig. 4 is a flowchart of the multi-sensor-fusion-based real-time three-dimensional target recognition method of the embodiment.
Specific embodiment
To deepen understanding of the present invention, it is further described in detail below with reference to the drawings and an embodiment. The embodiment is only for explaining the invention and does not limit its scope of protection.
Embodiment
As shown in Figs. 1 to 3, the present embodiment provides a multi-sensor-fusion-based real-time three-dimensional target recognition apparatus comprising a housing 1; the housing 1 is a regular-pentagon box. A laser radar 2 is mounted on top of the housing 1 for real-time three-dimensional target recognition in the PCD. A monocular high-definition RGB camera 3 is mounted at the middle of each of the five sides of the housing 1 for real-time two-dimensional target recognition in the RGB images. A computing mainboard 4 is mounted inside the housing 1 to collect, record, and process the data of the monocular high-definition RGB cameras 3 and the laser radar 2 and to output real-time recognition results. The monocular high-definition RGB cameras 3 generate images at a rate of 30 Hz, compressed to a resolution of 1920 × 1080. The laser radar 2 generates data at a frequency of 10 Hz with a 360-degree field of view; its capture range is 0.9–130 m. The computing mainboard 4 receives the data of the monocular high-definition RGB cameras 3 and the laser radar 2 at 2 GB per minute.
The real-time three-dimensional target recognition apparatus of the present embodiment uses multiple RGB cameras and a laser radar device to realize multi-sensor-fusion real-time target recognition.
As shown in Fig. 4, the present embodiment further provides a multi-sensor-fusion-based real-time three-dimensional target recognition method, carried out according to the following steps:
One, connect the laser radar and the monocular high-definition RGB cameras to the computing mainboard, and configure the laser radar and camera calibration files;
Two, analyze the laser radar data: from the input PCD, exclude the ground point cloud, then detect and localize targets in the PCD with mean shift, and output the three-dimensional bounding-box position and tracking information of the object;
Three, based on the recognition result of step two, process and analyze the two-dimensional image data of the monocular high-definition RGB cameras, detect and localize targets in the RGB images with mean shift, and output the two-dimensional bounding-box position and tracking information of the recognized object;
Four, map the two-dimensional information of step three into the three-dimensional information of step two, fuse the two- and three-dimensional information with Kalman filtering, and robustly obtain the real-time recognition information of the object with a three-dimensional Kalman constant-acceleration model, thereby completing real-time recognition.
In the method of the present embodiment, in step two, the ground point cloud is excluded by removing ground points with kernel density estimation; a Kalman-filter constant-acceleration model is combined to increase the robustness of ground-point removal; an angle search threshold defined from engineering experience reduces the influence of outliers; and the processed point cloud is passed through the mean-shift detection and tracking algorithm, which outputs the three-dimensional bounding-box position and tracking information of the recognized object.
In the method of the present embodiment, in step three, the obtained three-dimensional bounding-box information is mapped into the two-dimensional image to obtain two-dimensional convex-hull information; the two-dimensional convex hull is enlarged by a ratio, and a log-likelihood-ratio computation yields a discriminative colour model of the object; within the convex-hull region, the mean-shift detection and tracking algorithm outputs the two-dimensional bounding-box position and tracking information of the recognized object.
In the method of the present embodiment, in step three, the influence of brightness on the image is reduced with a one-dimensional Kalman-filter constant-acceleration model.
In the method of the present embodiment, in step three, for the mean-shift-based RGB image detection and localization, the detection and localization of the object in the next frame starts from the centroid mapped from the current frame.
In the method of the present embodiment, in step four, Kalman filtering is used to fuse and track the object centroids obtained from the images and the PCD: a Kalman-filter-based fusion model integrates the computed centroids, and a three-dimensional Kalman constant-acceleration model tracks and fuses the centroids stably.
By processing and fusing two-dimensional images and three-dimensional point cloud data, the method of the present embodiment obtains in real time the path, position, speed, and direction of the target object in the recognition-system coordinate system, and the results can be visualized as polygon bounding boxes.
The real-time three-dimensional target recognition apparatus and method of the present embodiment, using the RGB cameras and laser radar of a driving vehicle, realize multi-sensor-fusion real-time target recognition. By combining algorithms with hardware, they obtain in real time the path, position, speed, and direction of the target object in the recognition-system coordinate system; the results can be visualized as polygon bounding boxes, can be quickly applied on site to obstacle avoidance, path planning, and the like in the unmanned-driving field, and can use common cameras and laser radars, which considerably reduces cost.
The above embodiment does not limit the present invention in any way; all technical solutions obtained by means of equivalent replacement or equivalent transformation fall within the scope of protection of the present invention.
Claims (10)
1. A multi-sensor-fusion-based real-time three-dimensional target recognition apparatus, characterized in that it comprises a housing; the housing is a regular-pentagon box; a laser radar is mounted on top of the housing for real-time three-dimensional target recognition in the PCD; a monocular high-definition RGB camera is mounted at the middle of each of the five sides of the housing for real-time two-dimensional target recognition in the RGB images; and a computing mainboard is mounted inside the housing to collect, record, and process the data of the monocular high-definition RGB cameras and the laser radar and to output real-time recognition results.
2. The multi-sensor-fusion-based real-time three-dimensional target recognition apparatus according to claim 1, characterized in that the monocular high-definition RGB cameras generate images at a rate of 30 Hz, compressed to a resolution of 1920 × 1080.
3. The multi-sensor-fusion-based real-time three-dimensional target recognition apparatus according to claim 1, characterized in that the laser radar generates data at a frequency of 10 Hz with a 360-degree field of view.
4. The multi-sensor-fusion-based real-time three-dimensional target recognition apparatus according to claim 1, characterized in that the capture range of the laser radar is 0.9–130 m.
5. A multi-sensor-fusion-based real-time three-dimensional target recognition method, characterized in that it is carried out according to the following steps:
One, connect the laser radar and the monocular high-definition RGB cameras to the computing mainboard, and configure the laser radar and camera calibration files;
Two, analyze the laser radar data: from the input PCD, exclude the ground point cloud, then detect and localize targets in the PCD with mean shift, and output the three-dimensional bounding-box position and tracking information of the object;
Three, based on the recognition result of step two, process and analyze the two-dimensional image data of the monocular high-definition RGB cameras, detect and localize targets in the RGB images with mean shift, and output the two-dimensional bounding-box position and tracking information of the recognized object;
Four, map the two-dimensional information of step three into the three-dimensional information of step two, fuse the two- and three-dimensional information with Kalman filtering, and robustly obtain the real-time recognition information of the object with a three-dimensional Kalman constant-acceleration model, thereby completing real-time recognition.
6. The multi-sensor-fusion-based real-time three-dimensional target recognition method according to claim 5, characterized in that in step two, the ground point cloud is excluded by removing ground points with kernel density estimation; a Kalman-filter constant-acceleration model increases the robustness of ground-point removal; an angle search threshold defined from engineering experience reduces the influence of outliers; and the processed point cloud is passed through the mean-shift detection and tracking algorithm, which outputs the three-dimensional bounding-box position and tracking information of the recognized object.
7. The multi-sensor-fusion-based real-time three-dimensional target recognition method according to claim 5, characterized in that in step three, the obtained three-dimensional bounding-box information is mapped into the two-dimensional image to obtain two-dimensional convex-hull information; the two-dimensional convex hull is enlarged by a ratio, and a log-likelihood-ratio computation yields a discriminative colour model of the object; and within the convex-hull region, the mean-shift detection and tracking algorithm outputs the two-dimensional bounding-box position and tracking information of the recognized object.
8. The multi-sensor-fusion-based real-time three-dimensional target recognition method according to claim 5, characterized in that in step three, the influence of brightness on the image is reduced with a one-dimensional Kalman-filter constant-acceleration model.
9. The multi-sensor-fusion-based real-time three-dimensional target recognition method according to claim 5, characterized in that in step three, for the mean-shift-based RGB image detection and localization, the detection and localization of the object in the next frame starts from the centroid mapped from the current frame.
10. The multi-sensor-fusion-based real-time three-dimensional target recognition method according to claim 5, characterized in that in step four, Kalman filtering is used to fuse and track the object centroids obtained from the images and the PCD; a Kalman-filter-based fusion model integrates the computed centroids; and a three-dimensional Kalman constant-acceleration model is used to track and fuse the centroids stably.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810800374.XA CN109002800A (en) | 2018-07-20 | 2018-07-20 | Multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109002800A true CN109002800A (en) | 2018-12-14 |
Family
ID=64597141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810800374.XA Pending CN109002800A (en) | 2018-07-20 | 2018-07-20 | Multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109002800A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070025A (en) * | 2019-04-17 | 2019-07-30 | 上海交通大学 | Monocular-image-based three-dimensional target detection system and method |
CN110070025B (en) * | 2019-04-17 | 2023-03-31 | 上海交通大学 | Monocular-image-based three-dimensional target detection system and method |
CN110261869A (en) * | 2019-05-15 | 2019-09-20 | 深圳市速腾聚创科技有限公司 | Target detection system and data fusion method |
CN110147106A (en) * | 2019-05-29 | 2019-08-20 | 福建(泉州)哈工大工程技术研究院 | Intelligent mobile service robot with a laser-and-vision fusion obstacle avoidance system |
CN110550072A (en) * | 2019-08-29 | 2019-12-10 | 北京博途智控科技有限公司 | Method, system, medium and equipment for identifying obstacles in railway shunting operation |
CN110928414A (en) * | 2019-11-22 | 2020-03-27 | 上海交通大学 | Three-dimensional virtual-real fusion experimental system |
CN111563457A (en) * | 2019-12-31 | 2020-08-21 | 成都理工大学 | Road scene segmentation method for unmanned automobiles |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102508246A (en) * | 2011-10-13 | 2012-06-20 | 吉林大学 | Method for detecting and tracking obstacles in front of vehicle |
US20120176271A1 (en) * | 2011-01-12 | 2012-07-12 | Dai Liwen L | Navigation System and Method for Resolving Integer Ambiguities Using Double Difference Ambiguity Constraints |
CN106096516A (en) * | 2016-06-01 | 2016-11-09 | 常州漫道罗孚特网络科技有限公司 | The method and device that a kind of objective is followed the tracks of |
CN106778907A (en) * | 2017-01-11 | 2017-05-31 | 张军 | A kind of intelligent travelling crane early warning system based on multi-sensor information fusion |
CN106878687A (en) * | 2017-04-12 | 2017-06-20 | 吉林大学 | A kind of vehicle environment identifying system and omni-directional visual module based on multisensor |
CN106908783A (en) * | 2017-02-23 | 2017-06-30 | 苏州大学 | Obstacle detection method based on multi-sensor information fusion |
CN107015238A (en) * | 2017-04-27 | 2017-08-04 | 睿舆自动化(上海)有限公司 | Unmanned vehicle autonomic positioning method based on three-dimensional laser radar |
CN107235044A (en) * | 2017-05-31 | 2017-10-10 | 北京航空航天大学 | It is a kind of to be realized based on many sensing datas to road traffic scene and the restoring method of driver driving behavior |
US20180158200A1 (en) * | 2016-12-07 | 2018-06-07 | Hexagon Technology Center Gmbh | Scanner vis |
- 2018-07-20: CN CN201810800374.XA patent/CN109002800A/en, status Pending
Non-Patent Citations (2)
Title |
---|
Zhang Chunlin, "Mobile Robot Localization and Tracking Based on Multi-sensor Information Fusion", China Master's Theses Full-text Database, Information Science and Technology |
Yang Xin et al., "Research on Radar and Vision Sensor Information Fusion Algorithms for Advanced Driver Assistance", Automobile Applied Technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109002800A (en) | Multi-sensor-fusion-based real-time three-dimensional target recognition apparatus and recognition method | |
KR102109941B1 (en) | Method and Apparatus for Vehicle Detection Using Lidar Sensor and Camera | |
Dhiman et al. | Pothole detection using computer vision and learning | |
CN113255481B (en) | Crowd state detection method based on unmanned patrol car | |
CN105866790B (en) | A kind of laser radar obstacle recognition method and system considering lasing intensity | |
CN104183127B (en) | Traffic surveillance video detection method and device | |
CN106681353A (en) | Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion | |
CN107139666B (en) | Obstacle detouring identifying system and method | |
US10922824B1 (en) | Object tracking using contour filters and scalers | |
CN110472553A (en) | Target tracking method, computing device and the medium of image and laser point cloud fusion | |
CN106295459A (en) | Based on machine vision and the vehicle detection of cascade classifier and method for early warning | |
CN104715471A (en) | Target positioning and tracking method and device | |
CN115113206B (en) | Pedestrian and obstacle detection method for assisting driving of underground rail car | |
US11875524B2 (en) | Unmanned aerial vehicle platform based vision measurement method for static rigid object | |
McManus et al. | Distraction suppression for vision-based pose estimation at city scales | |
CN111913177A (en) | Method and device for detecting target object and storage medium | |
KR20180098945A (en) | Method and apparatus for measuring speed of vehicle by using fixed single camera | |
Bartl et al. | Planecalib: Automatic camera calibration by multiple observations of rigid objects on plane | |
CN108388854A (en) | A kind of localization method based on improvement FAST-SURF algorithms | |
CN118411507A (en) | Semantic map construction method and system for scene with dynamic target | |
KR20230101560A (en) | Vehicle lidar system and object detecting method thereof | |
CN112699748B (en) | Human-vehicle distance estimation method based on YOLO and RGB image | |
CN105631431B (en) | The aircraft region of interest that a kind of visible ray objective contour model is instructed surveys spectral method | |
Giosan et al. | Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information | |
KR101392222B1 (en) | Laser radar for calculating the outline of the target, method for calculating the outline of the target |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181214 |