CN110097047A - Deep-learning-based vehicle detection method using a single-line laser radar - Google Patents

Deep-learning-based vehicle detection method using a single-line laser radar

Info

Publication number
CN110097047A
CN110097047A (application CN201910206463.6A)
Authority
CN
China
Prior art keywords
vehicle
laser radar
convolutional neural networks
single line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910206463.6A
Other languages
Chinese (zh)
Other versions
CN110097047B (en)
Inventor
瞿三清
卢凡
董金虎
陈广
许仲聪
陈凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN201910206463.6A
Publication of CN110097047A
Application granted
Publication of CN110097047B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a deep-learning-based vehicle detection method using a single-line laser radar, comprising the following steps: 1) acquiring vehicle feature point cloud data with a single-line laser radar and preprocessing it to obtain a binary image; 2) annotating the binary image to obtain the positions of vehicles, generating a training dataset, and generating regions of interest with a clustering method; 3) constructing a deep convolutional neural network suitable for target vehicle detection, together with its loss function; 4) augmenting the regions of interest and feeding them into the deep convolutional neural network, updating the network parameters according to the difference between the outputs and the training ground truth to obtain optimal network parameters, and performing vehicle position detection with the trained deep convolutional neural network to obtain vehicle positions. Compared with the prior art, the present invention offers high robustness, low detection cost, and related advantages.

Description

Deep-learning-based vehicle detection method using a single-line laser radar
Technical field
The present invention relates to the field of intelligent driving technology, and in particular to a deep-learning-based vehicle detection method using a single-line laser radar.
Background technique
In the field of intelligent driving, vehicle detection is one of the key tasks for ensuring the safe operation of autonomous vehicles. Current vehicle detection mostly uses 3D laser radar or cameras as sensors; because a single-line laser radar provides relatively little data, it is rarely used as a vehicle detection sensor. However, 3D laser radar is costly, and the point clouds it acquires are so large that processing them consumes enormous computing resources. A camera as the detection sensor can achieve high detection accuracy, but cameras are strongly affected by lighting; at night or in dense fog, sandstorms, and similar conditions in particular, the captured images make it difficult to complete the detection of surrounding vehicles.
Summary of the invention
It is an object of the present invention to overcome the above-mentioned drawbacks of the prior art and to provide a deep-learning-based vehicle detection method using a single-line laser radar.
The object of the present invention can be achieved through the following technical solution:
A deep-learning-based vehicle detection method using a single-line laser radar, comprising the following steps:
1) acquiring vehicle feature point cloud data with a single-line laser radar, and preprocessing it to obtain a binary image;
2) annotating the binary image to obtain the positions of vehicles in it, generating a training dataset, and generating regions of interest with a clustering method;
3) constructing a deep convolutional neural network suitable for target vehicle detection, together with its loss function;
4) augmenting the regions of interest and feeding them into the deep convolutional neural network, updating the network parameters according to the difference between the outputs and the training ground truth to obtain optimal network parameters, and performing vehicle position detection with the trained deep convolutional neural network to obtain vehicle positions.
Said step 1) specifically comprises the following steps:
11) converting the collected feature point cloud data from the polar coordinate system with the single-line laser radar as origin into a globally unified Cartesian coordinate system;
12) rasterizing the coordinate-converted point cloud data onto a grid and converting it into a binary image.
In said step 11), the transformation is expressed as:

(x_j0, y_j0) = (x_j1, y_j1) R + t

where (r_j, θ_j) are the polar coordinates of point j in the original point cloud data, (x_j1, y_j1) are the coordinates of point j after conversion into the Cartesian coordinate system with the laser radar as origin, (x_j0, y_j0) are the unified global Cartesian coordinates, R is the rotation matrix of the transformation, and t is the translation vector.
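As a concrete illustration of step 11), the polar-to-Cartesian conversion followed by the rigid transform can be sketched in a few lines of NumPy. This is a sketch only: the function name and the row-vector convention (x, y)R + t are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def polar_to_global(r, theta, R, t):
    """Convert lidar polar returns (r_j, theta_j) to lidar-frame Cartesian
    coordinates, then apply the rigid transform (x0, y0) = (x1, y1) R + t.

    r, theta : (N,) arrays of ranges and bearings
    R        : (2, 2) rotation matrix; t : (2,) translation vector
    """
    # lidar-frame Cartesian coordinates (x_j1, y_j1)
    xy1 = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    # globally unified coordinates, row-vector convention
    return xy1 @ R + t

# identity transform leaves the lidar-frame coordinates unchanged
pts = polar_to_global(np.array([1.0]), np.array([0.0]), np.eye(2), np.zeros(2))
```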
In said step 2), the annotation content includes pixel-level annotation of the image and bounding-box annotation of the target vehicles.
The deep convolutional neural network is a Faster R-CNN convolutional neural network, which takes the binary image of a given size as input and outputs the positions and confidences of the target vehicles in the input binary image.
In said step 3), the loss function of the deep convolutional neural network is expressed as:

Loss = L_cls(p, u) + λ[u = 1] L_loc(t_u, v)
L_cls(p, u) = -log(p)
L_loc(t_u, v) = smooth_L1(x), x = t_u - v

where L_cls(p, u) is the target classification loss sub-function, L_loc(t_u, v) is the localization loss sub-function, p is the prediction factor for the target class, u is the actual class factor, λ is the weighting coefficient of the loss function, u = 1 indicates that the region of interest contains a target vehicle, u = 0 indicates that the region of interest is background, t_u is the predicted location factor, v is the true location factor in the training sample, and x is the deviation between the predicted and true values.
In said step 2), the clustering method includes density-based clustering, k-means clustering, and mean-shift clustering.
In said step 4), augmenting the regions of interest specifically comprises:
randomly flipping the region-of-interest images horizontally, cropping them, and uniformly scaling them to a fixed size, with the annotation data flipped, cropped, and scaled accordingly.
In said step 4), training the deep convolutional neural network specifically comprises:
iteratively updating the parameters of the deep convolutional neural network with gradient-descent backpropagation according to the loss function; the network parameters obtained after the maximum set number of iterations are taken as the optimal network parameters, completing the training.
Compared with the prior art, the invention has the following advantages:
1. High robustness: a single-line laser radar can acquire accurate vehicle point cloud data under a variety of complex operating conditions, and it is combined with a highly robust deep-learning vehicle detection algorithm; the vehicle detection method of the invention therefore has very high robustness and maintains detection accuracy even under complex conditions.
2. Low detection cost: the proposed vehicle detection algorithm uses a single-line laser radar as its sensor, whose detection cost is far lower than that of a 3D laser radar.
Detailed description of the invention
Fig. 1 is a flowchart of the detection method of the invention.
Fig. 2 is a schematic diagram of the vehicle deep convolutional network structure in an embodiment of the invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
Embodiment
The present invention provides a deep-learning-based vehicle detection method using a single-line laser radar: the point cloud data of vehicles are obtained with a single-line laser radar as the detection sensor, preprocessed, and fed into a deep convolutional neural network, which finally outputs the positions and confidences of the vehicles. As shown in Fig. 1, the method comprises the following steps:
(1) acquiring vehicle feature point cloud data with a single-line laser radar, and preprocessing the collected point cloud data;
(2) constructing the training dataset by manually annotating the vehicle positions in the acquired data;
(3) constructing a deep convolutional neural network suitable for vehicle detection, together with its loss function;
(4) generating regions of interest from the training data of step (2) with a clustering method, augmenting the training data and regions of interest, and feeding them into the deep convolutional neural network constructed in step (3); the network parameters are updated according to the difference between the outputs and the training ground truth, finally yielding the desired network parameters.
In the present embodiment, the preprocessing of the point cloud data in step (1) comprises the following two sub-steps, coordinate conversion and image conversion:
(1-1) The single-line laser radar in this embodiment is mounted at the front of the vehicle and acquires the surrounding point cloud data at a fixed frame rate; the results are stored in polar form with the laser radar as the coordinate origin. The acquired point cloud data are transformed into a Cartesian coordinate system.
The transformation is expressed as follows:
(x_j1, y_j1) = (r_j cos θ_j, r_j sin θ_j)
where (r_j, θ_j) is a point in the acquired original point cloud data and (x_j1, y_j1) is its representation in the Cartesian coordinate system with the laser radar as the coordinate origin.
(1-2) Converting the point cloud data into an image. In this embodiment, the acquisition range limit of the laser radar is 25 m and the acquisition field of view of the single-line laser radar is 180°; the coordinate-converted point cloud data are rasterized onto a grid and converted into a 250 × 250 binary image, where a pixel is set to 1 if its grid cell contains at least one data point and to 0 otherwise.
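A minimal sketch of the rasterization in (1-2) follows, assuming a forward-facing sensor whose 25 m range and 180° field of view map onto a 250 × 250 grid. The exact axis ranges and cell mapping are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def pointcloud_to_binary_image(xy, x_range=(-25.0, 25.0),
                               y_range=(0.0, 25.0), size=250):
    """Rasterize 2-D lidar points into a size x size binary occupancy image.
    A cell is 1 if at least one point falls inside it, 0 otherwise.
    The axis ranges are assumed values consistent with a 25 m range limit
    and a 180-degree forward field of view."""
    img = np.zeros((size, size), dtype=np.uint8)
    xs = ((xy[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * size).astype(int)
    ys = ((xy[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * size).astype(int)
    keep = (xs >= 0) & (xs < size) & (ys >= 0) & (ys < size)  # drop out-of-range points
    img[ys[keep], xs[keep]] = 1
    return img

# one in-range point, one point beyond the range limit
grid = pointcloud_to_binary_image(np.array([[0.0, 10.0], [100.0, 100.0]]))
```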
In the present embodiment, step (2) constructs the dataset required for deep-learning training. After the acquired laser radar point cloud data have been processed, the training data must be annotated manually to form the dataset required for training. The annotation modes include, but are not limited to, pixel-level annotation of the image and bounding-box annotation of the target vehicles. The annotation must at least include the positions of the target vehicles, but may be extended with pose information of the target vehicles and the like.
In the present embodiment, step (3) constructs the deep convolutional neural network and the loss function suitable for target vehicle detection. The construction of the deep convolutional neural network is directly related to the training dataset prepared in step (2); since step (2) of this embodiment uses bounding-box annotation of the target vehicles, the structure of the deep convolutional neural network in this embodiment is similar to Fast R-CNN, with the main structure built with reference to Fast R-CNN; the convolutional neural network structure is shown in Fig. 2.
In the present embodiment, the loss function of the deep convolutional neural network in step (3) is a weighted sum of two parts:

Loss = L_cls(p, u) + λ[u = 1] L_loc(t_u, v)

(3-1) Constructing the target classification loss function L_cls(p, u), where p is the prediction factor for the target class and u denotes the actual class factor. A log loss is generally used, where p denotes the predicted probability of a class: the closer p is to 1, the higher the confidence and the smaller the loss.

L_cls(p, u) = -log(p)

(3-2) Constructing the target localization loss L_loc(t_u, v). Here λ denotes the weighting coefficient of the loss function, and usually λ = 1 is taken. [u = 1] takes the value 1 when the region of interest contains a target vehicle and 0 when the region of interest is background; that is, if the current region of interest contains only irrelevant environment, its localization loss is not considered. In the formula, t_u denotes the predicted location factor and v denotes the true location factor in the training sample. The localization loss sub-function is usually constructed with the smooth Manhattan distance smooth_L1(x):

smooth_L1(x) = 0.5 x² if |x| < 1, and |x| − 0.5 otherwise

where x = t_u − v denotes the deviation between the predicted and true values.
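The weighted loss of (3-1) and (3-2) can be written out directly. The piecewise smooth-L1 form used below is the standard Fast R-CNN definition, which the embodiment references; the function names are illustrative.

```python
import numpy as np

def smooth_l1(x):
    """Standard Fast R-CNN smooth-L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x * x, x - 0.5)

def detection_loss(p, u, t_u, v, lam=1.0):
    """Loss = L_cls(p, u) + lam * [u == 1] * L_loc(t_u, v).

    p   : predicted probability of the true class (1 = vehicle, 0 = background)
    u   : actual class factor; the localization term only counts when u == 1
    t_u : predicted location factors; v : ground-truth location factors
    """
    l_cls = -np.log(p)
    l_loc = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum() if u == 1 else 0.0
    return l_cls + lam * l_loc
```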
In the present embodiment, step (4) generates the regions of interest with a density-based clustering (DBSCAN) algorithm. Specifically, the point cloud data pairs before conversion are fed into the density clustering function, which divides them into categories; the categories are then mapped to the corresponding points in the binary image, and a minimum rectangle is drawn around the binary-image points of each category; the image inside this rectangle is a region of interest. When performing density clustering, the minimum number of data-point pairs per generated cluster and the neighbourhood distance threshold of each cluster must be specified; in this embodiment, the minimum number of data-point pairs is set to 5 and the neighbourhood distance threshold is set to 0.5 m.
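The density clustering step can be sketched with a minimal DBSCAN-style routine followed by per-cluster bounding rectangles. This is a self-contained illustration using the embodiment's parameters (neighbourhood threshold 0.5 m, minimum of 5 points), not the patent's implementation; in practice a library implementation such as scikit-learn's DBSCAN would likely be used.

```python
import numpy as np

def dbscan_rois(xy, eps=0.5, min_pts=5):
    """Minimal DBSCAN-style density clustering, then one axis-aligned
    bounding rectangle (xmin, ymin, xmax, ymax) per cluster.
    Points left at label -1 are treated as noise and ignored."""
    n = len(xy)
    labels = np.full(n, -1)  # -1 = noise / unvisited
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        queue = [i]
        while queue:  # grow the cluster from core points
            j = queue.pop()
            for k in neighbours[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if len(neighbours[k]) >= min_pts:  # only cores expand
                        queue.append(k)
        cluster += 1
    return [(xy[labels == c][:, 0].min(), xy[labels == c][:, 1].min(),
             xy[labels == c][:, 0].max(), xy[labels == c][:, 1].max())
            for c in range(cluster)]

# five tightly grouped points form one cluster; the far point is noise
pts = np.array([[0.00, 0.00], [0.10, 0.00], [0.00, 0.10],
                [0.10, 0.10], [0.05, 0.05], [10.0, 10.0]])
rois = dbscan_rois(pts)
```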
In the present embodiment, when a region of interest is fed into the deep convolutional neural network, its size depends on the clustering result; therefore, to ensure the normal operation of the deep convolutional network, convolutional features are first extracted from the image data, after which the feature maps of the regions of interest are converted to a uniform size by means of spatial ROI pooling.
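The spatial ROI pooling that converts variable-sized region features to a uniform size can be illustrated with a simple max-pooling grid. The bin edges produced by `array_split` are a simplifying assumption, and the sketch assumes the ROI is at least as large as the output grid.

```python
import numpy as np

def roi_max_pool(feature, roi, out_h=7, out_w=7):
    """Pool an ROI of a 2-D feature map to a fixed out_h x out_w grid by
    taking the max over each bin (the usual ROI-pooling scheme).
    roi = (r0, c0, r1, c1) with end-exclusive row/column bounds."""
    r0, c0, r1, c1 = roi
    crop = feature[r0:r1, c0:c1]
    rows = np.array_split(np.arange(crop.shape[0]), out_h)
    cols = np.array_split(np.arange(crop.shape[1]), out_w)
    out = np.empty((out_h, out_w), dtype=feature.dtype)
    for i, rs in enumerate(rows):
        for j, cs in enumerate(cols):
            out[i, j] = crop[np.ix_(rs, cs)].max()  # max over the bin
    return out

# pool a 10x10 map of the values 0..99 down to a 2x2 grid
pooled = roi_max_pool(np.arange(100.0).reshape(10, 10), (0, 0, 10, 10), 2, 2)
```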
In the present embodiment, the augmentation of the training data in step (4) mainly comprises randomly flipping the images horizontally, cropping them, and uniformly scaling them to a fixed size, with the annotation data flipped, cropped, and scaled accordingly; on this basis, the resulting images are normalized per channel. The fixed size used in this embodiment is 250 × 250.
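The horizontal-flip part of the augmentation, with the annotation boxes mirrored consistently, might look like the sketch below. Cropping, scaling, and per-channel normalization are omitted for brevity; box coordinates are assumed to be inclusive pixel indices, and the random 50% flip decision of the embodiment is left to the caller.

```python
import numpy as np

def hflip(img, boxes):
    """Mirror an image and its (x0, y0, x1, y1) boxes about the vertical axis.
    Box coordinates are assumed inclusive pixel indices."""
    w = img.shape[1]
    flipped = img[:, ::-1].copy()
    # a box's left edge becomes w-1-x1 and its right edge becomes w-1-x0
    mirrored = [(w - 1 - x1, y0, w - 1 - x0, y1) for (x0, y0, x1, y1) in boxes]
    return flipped, mirrored

img = np.zeros((4, 4), dtype=np.uint8)
img[0, 0] = 1
fimg, fboxes = hflip(img, [(0, 0, 1, 1)])
```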
In the present embodiment, to initialize the network model for training in step (4), the feature-extraction network is first pre-trained on ImageNet or another image classification dataset with a SoftMax loss function, and the resulting parameter values are used as the initial parameters of the network.
In the present embodiment, when training the network in step (4), the overall loss value is computed with the weighted loss function, backpropagation is then performed to compute the gradients, and the network parameters are updated with an optimizer such as Adam; after a fixed number of iterations the final result is obtained. The final parameter values are set as the network model parameters of the target vehicle detector, for use in detecting target vehicles.
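A single Adam update, the optimizer named in the embodiment, can be written out explicitly. The toy loop below merely minimizes θ² for a fixed number of iterations and then freezes the result, mirroring the train-then-fix-parameters flow of step (4); it is a sketch, not the patent's training code.

```python
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update; state holds the running first and second
    moments m, v and the step count t."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m_hat = state["m"] / (1 - b1 ** state["t"])  # bias-corrected moments
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# toy run: minimise f(theta) = theta^2, gradient 2*theta,
# for a fixed number of iterations, then keep the final parameter
theta = np.float64(5.0)
state = {"m": 0.0, "v": 0.0, "t": 0}
for _ in range(2000):
    theta = adam_step(theta, 2.0 * theta, state, lr=0.05)
```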
In short, the present invention provides a deep-learning-based target vehicle detection method using a single-line laser radar: the point cloud data of the target vehicles are obtained with a single-line laser radar as the detection sensor, preprocessed, and fed into a deep convolutional neural network, which finally outputs the positions and confidences of the target vehicles. The method has excellent detection performance, high robustness, and low implementation cost, and is easy to deploy for target vehicle detection in existing intelligent parking robots.
A person skilled in the art can obviously make various modifications to these embodiments and apply the general principles described herein to other embodiments without inventive effort. The present invention is therefore not limited to the embodiments herein; improvements and modifications made by those skilled in the art according to the disclosure of the present invention without departing from its scope shall all fall within the protection scope of the present invention.

Claims (9)

1. A deep-learning-based vehicle detection method using a single-line laser radar, characterized by comprising the following steps:
1) acquiring vehicle feature point cloud data with a single-line laser radar, and preprocessing it to obtain a binary image;
2) annotating the binary image to obtain the positions of vehicles in it, generating a training dataset, and generating regions of interest with a clustering method;
3) constructing a deep convolutional neural network suitable for target vehicle detection, together with its loss function;
4) augmenting the regions of interest and feeding them into the deep convolutional neural network, updating the network parameters according to the difference between the outputs and the training ground truth to obtain optimal network parameters, and performing vehicle position detection with the trained deep convolutional neural network to obtain vehicle positions.
2. The deep-learning-based vehicle detection method using a single-line laser radar according to claim 1, characterized in that said step 1) specifically comprises the following steps:
11) converting the collected feature point cloud data from the polar coordinate system with the single-line laser radar as origin into a globally unified Cartesian coordinate system;
12) rasterizing the coordinate-converted point cloud data onto a grid and converting it into a binary image.
3. The deep-learning-based vehicle detection method using a single-line laser radar according to claim 1, characterized in that in said step 11), the transformation is expressed as:
(x_j0, y_j0) = (x_j1, y_j1) R + t
where (r_j, θ_j) are the polar coordinates of point j in the original point cloud data, (x_j1, y_j1) are the coordinates of point j after conversion into the Cartesian coordinate system with the laser radar as origin, (x_j0, y_j0) are the unified global Cartesian coordinates, R is the rotation matrix of the transformation, and t is the translation vector.
4. The deep-learning-based vehicle detection method using a single-line laser radar according to claim 1, characterized in that in said step 2), the annotation content includes pixel-level annotation of the image and bounding-box annotation of the target vehicles.
5. The deep-learning-based vehicle detection method using a single-line laser radar according to claim 1, characterized in that the deep convolutional neural network is a Faster R-CNN convolutional neural network, which takes the binary image of a given size as input and outputs the positions and confidences of the target vehicles in the input binary image.
6. The deep-learning-based target vehicle detection method according to claim 1, characterized in that in said step 3), the loss function of the deep convolutional neural network is expressed as:
Loss = L_cls(p, u) + λ[u = 1] L_loc(t_u, v)
L_cls(p, u) = -log(p)
x = t_u - v
where L_cls(p, u) is the target classification loss sub-function, L_loc(t_u, v) is the localization loss sub-function, p is the prediction factor for the target class, u is the actual class factor, λ is the weighting coefficient of the loss function, u = 1 indicates that the region of interest contains a target vehicle, u = 0 indicates that the region of interest is background, t_u is the predicted location factor, v is the true location factor in the training sample, and x is the deviation between the predicted and true values.
7. The deep-learning-based target vehicle detection method according to claim 1, characterized in that in said step 2), the clustering method includes density-based clustering, k-means clustering, and mean-shift clustering.
8. The deep-learning-based target vehicle detection method according to claim 1, characterized in that in said step 4), augmenting the regions of interest specifically comprises:
randomly flipping the region-of-interest images horizontally, cropping them, and uniformly scaling them to a fixed size, with the annotation data flipped, cropped, and scaled accordingly.
9. The deep-learning-based target vehicle detection method according to claim 1, characterized in that in said step 4), training the deep convolutional neural network specifically comprises:
iteratively updating the parameters of the deep convolutional neural network with gradient-descent backpropagation according to the loss function; the network parameters obtained after the maximum set number of iterations are taken as the optimal network parameters, completing the training.
CN201910206463.6A 2019-03-19 2019-03-19 Vehicle detection method based on deep learning and adopting single line laser radar Active CN110097047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910206463.6A CN110097047B (en) 2019-03-19 2019-03-19 Vehicle detection method based on deep learning and adopting single line laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910206463.6A CN110097047B (en) 2019-03-19 2019-03-19 Vehicle detection method based on deep learning and adopting single line laser radar

Publications (2)

Publication Number Publication Date
CN110097047A true CN110097047A (en) 2019-08-06
CN110097047B CN110097047B (en) 2021-10-08

Family

ID=67443320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910206463.6A Active CN110097047B (en) 2019-03-19 2019-03-19 Vehicle detection method based on deep learning and adopting single line laser radar

Country Status (1)

Country Link
CN (1) CN110097047B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992731A (en) * 2019-12-12 2020-04-10 苏州智加科技有限公司 Laser radar-based 3D vehicle detection method and device and storage medium
CN112444784A (en) * 2019-08-29 2021-03-05 北京市商汤科技开发有限公司 Three-dimensional target detection and neural network training method, device and equipment
CN113655477A (en) * 2021-06-11 2021-11-16 成都圭目机器人有限公司 Method for automatically detecting geological diseases of land radar by adopting shallow layer
US11586925B2 (en) * 2017-09-29 2023-02-21 Samsung Electronics Co., Ltd. Neural network recogntion and training method and apparatus

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN107239794A (en) * 2017-05-18 2017-10-10 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal
CN108830188A (en) * 2018-05-30 2018-11-16 西安理工大学 Vehicle checking method based on deep learning
CN109063753A (en) * 2018-07-18 2018-12-21 北方民族大学 A kind of three-dimensional point cloud model classification method based on convolutional neural networks
CN109118500A (en) * 2018-07-16 2019-01-01 重庆大学产业技术研究院 A kind of dividing method of the Point Cloud Data from Three Dimension Laser Scanning based on image
CN109212541A (en) * 2018-09-20 2019-01-15 同济大学 High-precision vehicle detecting system based on vehicle perpendicular type feature and laser radar
CN109270544A (en) * 2018-09-20 2019-01-25 同济大学 Mobile robot self-localization system based on shaft identification
CN109270543A (en) * 2018-09-20 2019-01-25 同济大学 A kind of system and method for pair of target vehicle surrounding vehicles location information detection
CN109324616A (en) * 2018-09-20 2019-02-12 同济大学 Nobody based on onboard sensor parks the alignment method of transfer robot
CN109344786A (en) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 Target identification method, device and computer readable storage medium
CN109386155A (en) * 2018-09-20 2019-02-26 同济大学 Nobody towards automated parking ground parks the alignment method of transfer robot
CN109389053A (en) * 2018-09-20 2019-02-26 同济大学 High performance vehicle detection system based on vehicle perpendicular type feature

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN107239794A (en) * 2017-05-18 2017-10-10 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal
CN108830188A (en) * 2018-05-30 2018-11-16 西安理工大学 Vehicle checking method based on deep learning
CN109118500A (en) * 2018-07-16 2019-01-01 重庆大学产业技术研究院 A kind of dividing method of the Point Cloud Data from Three Dimension Laser Scanning based on image
CN109063753A (en) * 2018-07-18 2018-12-21 北方民族大学 A kind of three-dimensional point cloud model classification method based on convolutional neural networks
CN109212541A (en) * 2018-09-20 2019-01-15 同济大学 High-precision vehicle detecting system based on vehicle perpendicular type feature and laser radar
CN109270544A (en) * 2018-09-20 2019-01-25 同济大学 Mobile robot self-localization system based on shaft identification
CN109270543A (en) * 2018-09-20 2019-01-25 同济大学 A kind of system and method for pair of target vehicle surrounding vehicles location information detection
CN109324616A (en) * 2018-09-20 2019-02-12 同济大学 Nobody based on onboard sensor parks the alignment method of transfer robot
CN109386155A (en) * 2018-09-20 2019-02-26 同济大学 Nobody towards automated parking ground parks the alignment method of transfer robot
CN109389053A (en) * 2018-09-20 2019-02-26 同济大学 High performance vehicle detection system based on vehicle perpendicular type feature
CN109344786A (en) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 Target identification method, device and computer readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AMIR KOTB 等: "Smart Parking Guidance, Monitoring and Reservations: A Review", 《 IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE》 *
企鹅号 - AI科技大本营: "Introduction to driverless car systems: real-time lidar point-cloud object detection based on deep learning and its ROS implementation", 《HTTPS://CLOUD.TENCENT.COM/DEVELOPER/NEWS/339676》 *
李游: "Research on urban street information extraction from vehicle-mounted laser scanning data", 《China Doctoral Dissertations Full-text Database, Basic Sciences》 *
罗海峰 et al.: "DBN-based extraction of multiple roadside targets from vehicle-mounted laser point clouds", 《Acta Geodaetica et Cartographica Sinica》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11586925B2 (en) * 2017-09-29 2023-02-21 Samsung Electronics Co., Ltd. Neural network recogntion and training method and apparatus
CN112444784A (en) * 2019-08-29 2021-03-05 北京市商汤科技开发有限公司 Three-dimensional target detection and neural network training method, device and equipment
CN112444784B (en) * 2019-08-29 2023-11-28 北京市商汤科技开发有限公司 Three-dimensional target detection and neural network training method, device and equipment
CN110992731A (en) * 2019-12-12 2020-04-10 苏州智加科技有限公司 Laser radar-based 3D vehicle detection method and device and storage medium
CN113655477A (en) * 2021-06-11 2021-11-16 成都圭目机器人有限公司 Method for automatically detecting geological diseases of land radar by adopting shallow layer
CN113655477B (en) * 2021-06-11 2023-09-01 成都圭目机器人有限公司 Method for automatically detecting geological diseases by adopting shallow layer ground radar

Also Published As

Publication number Publication date
CN110097047B (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN110069993A (en) A kind of target vehicle detection method based on deep learning
CN110163187B (en) F-RCNN-based remote traffic sign detection and identification method
CN110929578B (en) Anti-shielding pedestrian detection method based on attention mechanism
Wang et al. Data-driven based tiny-YOLOv3 method for front vehicle detection inducing SPP-net
CN110097047A (en) A kind of vehicle checking method using single line laser radar based on deep learning
CN109711295B (en) Optical remote sensing image offshore ship detection method
CN108491854B (en) Optical remote sensing image target detection method based on SF-RCNN
CN112766087A (en) Optical remote sensing image ship detection method based on knowledge distillation
CN107862293A (en) Radar based on confrontation generation network generates colored semantic image system and method
CN114565860B (en) Multi-dimensional reinforcement learning synthetic aperture radar image target detection method
CN107016357A (en) A kind of video pedestrian detection method based on time-domain convolutional neural networks
CN110414509B (en) Port docking ship detection method based on sea-land segmentation and characteristic pyramid network
KR20210043516A (en) Method and apparatus for training trajectory planning model, electronic device, storage medium and program
CN113160062A (en) Infrared image target detection method, device, equipment and storage medium
CN111898432A (en) Pedestrian detection system and method based on improved YOLOv3 algorithm
Golovko et al. Development of solar panels detector
CN117975436A (en) Three-dimensional target detection method based on multi-mode fusion and deformable attention
CN113591617B (en) Deep learning-based water surface small target detection and classification method
CN107944354A (en) A kind of vehicle checking method based on deep learning
CN116188999A (en) Small target detection method based on visible light and infrared image data fusion
CN112348758A (en) Optical remote sensing image data enhancement method and target identification method
CN113011338A (en) Lane line detection method and system
CN116129234A (en) Attention-based 4D millimeter wave radar and vision fusion method
CN116503602A (en) Unstructured environment three-dimensional point cloud semantic segmentation method based on multi-level edge enhancement
Zhang et al. Nearshore vessel detection based on Scene-mask R-CNN in remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant