CN110069993B - Target vehicle detection method based on deep learning - Google Patents

Target vehicle detection method based on deep learning

Info

Publication number
CN110069993B
CN110069993B
Authority
CN
China
Prior art keywords
target vehicle
neural network
convolutional neural
training
deep convolutional
Prior art date
Legal status
Active
Application number
CN201910206458.5A
Other languages
Chinese (zh)
Other versions
CN110069993A (en)
Inventor
瞿三清
许仲聪
卢凡
陈广
董金虎
陈凯
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University
Priority to CN201910206458.5A
Publication of CN110069993A
Application granted
Publication of CN110069993B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Abstract

The invention relates to a target vehicle detection method based on deep learning, which comprises the following steps: 1) acquiring tail feature point cloud data of a target vehicle through two single-line laser radars mounted at the rear of a parking robot, and preprocessing the data to obtain a binary image; 2) labeling the binary image with the position of the tail of the target vehicle, thereby generating a training data set; 3) constructing a deep convolutional neural network suitable for target vehicle detection, together with its loss function; 4) augmenting the training data set and inputting it into the deep convolutional neural network, training and updating the network parameters according to the difference between the output values and the training ground truth to obtain the optimal network parameters, and performing detection with the trained network. Compared with the prior art, the method offers high robustness, independence from hand-crafted features, and low detection cost.

Description

Target vehicle detection method based on deep learning
Technical Field
The invention relates to the technical field of intelligent parking, in particular to a target vehicle detection method based on deep learning.
Background
In the field of intelligent driving, detection of the target vehicle is one of the key tasks for guaranteeing the safe driving of an unmanned vehicle. In the technical field of intelligent parking, detecting the position of the target vehicle is a key step in enabling a parking robot to align precisely with it. Because laser radar is only slightly affected by the environment and can acquire accurate point cloud data of a target vehicle, it has become the most important sensor for vehicle detection and positioning in the field of intelligent parking.
At present, target vehicle detection in the technical field of intelligent parking mainly relies on traditional algorithms based on hand-crafted features of the target vehicle. Although such algorithms are computationally light and fast, in many scenes the actual features of the target vehicle cannot be well matched by hand-crafted features, so traditional target vehicle detection algorithms suffer from low recall and poor robustness.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a target vehicle detection method based on deep learning.
The purpose of the invention can be realized by the following technical scheme:
a target vehicle detection method based on deep learning is used for realizing intelligent parking, and comprises the following steps:
1) acquiring tail characteristic point cloud data of a target vehicle through two single-line laser radars arranged at the tail of the parking robot, and preprocessing the data to obtain a binary image;
2) labeling the binary image to obtain the position of the tail of the target vehicle, so as to generate a training data set;
3) constructing a deep convolutional neural network suitable for target vehicle detection and a loss function thereof;
4) augmenting the training data set and inputting it into the deep convolutional neural network, training and updating the parameters of the convolutional neural network according to the difference between the output values and the training ground truth to obtain the optimal network parameters, and performing detection with the trained deep convolutional neural network.
The step 1) specifically comprises the following steps:
11) converting the collected point cloud data from the polar coordinate system, which takes the single-line laser radar as coordinate origin, into a globally unified Cartesian coordinate system;
12) gridding the coordinate-converted point cloud data and converting it into a binary image.
In step 11), the conversion expressions are:

(x_j1, y_j1) = (r_j cos θ_j, r_j sin θ_j)

(x_j0, y_j0) = (x_j1, y_j1) R + t

where (r_j, θ_j) are the position coordinates of point j in the original point cloud data, (x_j1, y_j1) are the position coordinates of point j in the Cartesian coordinate system with the laser radar as coordinate origin, (x_j0, y_j0) are the coordinates in the globally unified Cartesian coordinate system, R is the rotation matrix of the conversion, and t is the translation vector of the conversion.
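As an illustration, this two-stage conversion could be written as follows in Python with numpy; this is a minimal sketch under the assumption that R and t come from the robot's extrinsic calibration, and all variable names are hypothetical:

```python
import numpy as np

def polar_to_global(r, theta, R, t):
    """Convert laser radar points from polar coordinates (r_j, theta_j)
    to the globally unified frame via (x_j0, y_j0) = (x_j1, y_j1) R + t."""
    # Polar -> Cartesian in the radar's own frame
    xy_lidar = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)  # (N, 2)
    # Rigid transform into the parking robot's global frame
    return xy_lidar @ R + t

# Example with identity extrinsics (radar frame already aligned with the global frame)
points = polar_to_global(np.array([1.0, 2.0]), np.array([0.0, np.pi / 2]),
                         R=np.eye(2), t=np.zeros(2))
```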
In step 2), the labeled content includes pixel-level labeling of the image and bounding box labeling of the target vehicle.
The deep convolutional neural network is a Faster R-CNN convolutional neural network; it takes a binary image of set size as input, and outputs the position and confidence of the corresponding target vehicle on the input binary image.
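For illustration only, a detector with this input/output contract could be instantiated with torchvision's off-the-shelf Faster R-CNN; the patent does not prescribe an implementation, and repeating the single-channel binary image to three channels is an assumption made to fit the standard backbone:

```python
import torch
import torchvision

# Two classes: background (0) and target vehicle (1)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

# A 250x250 binary occupancy image, repeated to 3 channels for the backbone
binary_image = torch.randint(0, 2, (1, 250, 250)).float()
with torch.no_grad():
    output = model([binary_image.repeat(3, 1, 1)])[0]
# output['boxes'] holds predicted vehicle positions,
# output['scores'] the corresponding confidences
```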
The loss function of the deep convolutional neural network is expressed as:
Loss = L_cls(p, u) + λ[u = 1] L_loc(t^u, v)

L_cls(p, u) = -log(p)

L_loc(t^u, v) = smooth_L1(x), where smooth_L1(x) = 0.5 x² if |x| < 1, and |x| - 0.5 otherwise

x = t^u - v

where L_cls(p, u) is the target classification loss subfunction, L_loc(t^u, v) is the distance loss subfunction, p is the prediction factor for the target class, u is the actual factor of the corresponding class, λ is the weighting of the loss terms, u = 1 indicates that the region of interest contains the target vehicle while u = 0 indicates that the region of interest is background, t^u is the predicted position factor, v is the true position factor in the training sample, and x is the deviation of the predicted value from the true value.
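A minimal sketch of this weighted loss in PyTorch, under the assumption that p is the predicted probability of the true class and that t^u and v are four-dimensional position factors; all names are hypothetical:

```python
import torch

def smooth_l1(x):
    """Smoothed Manhattan distance: 0.5*x^2 if |x| < 1, else |x| - 0.5."""
    absx = x.abs()
    return torch.where(absx < 1, 0.5 * x ** 2, absx - 0.5)

def detection_loss(p, u, t_u, v, lam=1.0):
    """Loss = L_cls(p, u) + lam * [u == 1] * L_loc(t_u, v)."""
    l_cls = -torch.log(p)                    # classification log-loss
    l_loc = smooth_l1(t_u - v).sum(dim=-1)   # distance loss over the position factors
    return l_cls + lam * (u == 1).float() * l_loc

# One region of interest containing the target vehicle (u = 1)
p = torch.tensor([0.9])                      # predicted class probability
u = torch.tensor([1])
t_u = torch.tensor([[0.1, 0.2, 0.0, -0.1]])  # predicted position factors
v = torch.zeros((1, 4))                      # true position factors
loss = detection_loss(p, u, t_u, v)
```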
The specific steps for augmenting the training data set are:
and performing random horizontal turning, cutting and unified zooming on the image to a fixed size, and performing corresponding turning, cutting and zooming on the marking data.
Training the deep convolutional neural network specifically comprises the following steps:
According to the loss function, iteratively updating the parameters of the deep convolutional neural network by gradient descent with back-propagation, and taking the network parameters obtained after the maximum set number of iterations as the optimal network parameters, thereby completing training.
Compared with the prior art, the invention has the following advantages:
First, high robustness: the laser radar can acquire accurate point cloud data of the target vehicle under a variety of complex working conditions, and combined with a highly robust target vehicle detection algorithm, the method maintains accurate detection results under such conditions.
Second, no reliance on hand-crafted features: compared with traditional target vehicle detection algorithms, the method learns the target features through a deep neural network, does not depend on hand-crafted features, and achieves a high recall rate.
Third, low detection cost: the invention uses two single-line laser radars as sensors, which cost far less than a multi-line laser radar.
Drawings
FIG. 1 is a flow chart of the detection method of the present invention.
FIG. 2 is a schematic structural diagram of the intelligent parking robot in an embodiment of the invention.
FIG. 3 is a schematic diagram of the deep convolutional network structure for the target vehicle in an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
The invention provides a deep-learning-based method for detecting a target vehicle using single-line laser radar. As shown in FIG. 1, the method comprises the following steps:
(1) acquiring tail feature point cloud data of the target vehicle with 2 single-line laser radars, and preprocessing the collected point cloud data;
(2) manually marking the position of the tail of the target vehicle in the acquired data to construct a training data set;
(3) constructing a deep convolutional neural network and loss function suitable for target vehicle detection;
(4) augmenting the training data from step (2) and inputting it into the deep convolutional neural network constructed in step (3), then training and updating the network parameters according to the difference between the output values and the training ground truth until satisfactory network parameters are obtained.
In this embodiment, the preprocessing of the point cloud data in step (1) includes two steps of coordinate transformation and image transformation of the point cloud data, as follows:
(1-1) The 2 single-line laser radars in this embodiment are mounted on the two sides of the rear of the intelligent parking robot, whose structure is shown in FIG. 2. The two laser radars collect surrounding point cloud data at a fixed frame rate; the results are stored in polar form, with each laser radar as its own coordinate origin. The collected point cloud data are then converted into the globally unified Cartesian coordinate system of the parking robot.
The conversion expression is as follows:
(x_j1, y_j1) = (r_j cos θ_j, r_j sin θ_j)

(x_j0, y_j0) = (x_j1, y_j1) R + t

In the above formulas, (r_j, θ_j) are the polar coordinates of a point in the acquired raw point cloud data; (x_j1, y_j1) is the same point expressed in the Cartesian coordinate system with the laser radar as coordinate origin; (x_j0, y_j0) is the point expressed in the globally unified Cartesian coordinate system of the parking robot; R is the rotation matrix and t the translation vector of the conversion.
(1-2) Converting the point cloud data to an image. In this embodiment, the upper limit of the laser radar acquisition range is set to 10 m. The coordinate-converted point cloud data are gridded into a binary image of size 250 × 250: a pixel is set to 1 if at least one data point falls in the corresponding grid cell, and to 0 otherwise.
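A sketch of this gridding step (assuming numpy and a square region of ±10 m around the robot mapped onto the 250 × 250 grid; the exact region layout is not specified in the patent):

```python
import numpy as np

def points_to_binary_image(points, max_range=10.0, size=250):
    """Grid 2-D points of shape (N, 2) into a size x size binary image:
    a cell is 1 if at least one point falls inside it, 0 otherwise."""
    img = np.zeros((size, size), dtype=np.uint8)
    # Keep points within the acquisition range limit
    mask = np.all(np.abs(points) < max_range, axis=1)
    # Map [-max_range, max_range) onto grid indices [0, size)
    idx = ((points[mask] + max_range) / (2 * max_range) * size).astype(int)
    img[idx[:, 1], idx[:, 0]] = 1
    return img
```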
In the present embodiment, step (2) constructs the data set required for deep learning training. After the collected laser radar point cloud data have been processed, the training data must be manually annotated to form the training data set. The annotation mode includes, but is not limited to, pixel-level labeling of the image and bounding box labeling of the target vehicle. The annotation must at least include the position of the target vehicle, and may optionally be extended with the target vehicle's attitude information.
In this embodiment, step (3) constructs a deep convolutional neural network and loss function suitable for target vehicle detection. The construction of the network is directly tied to the training data set prepared in step (2). Since step (2) in this embodiment uses bounding box annotation of the target vehicle, the deep convolutional neural network follows the structure of Faster R-CNN; its main structure is built with reference to Faster R-CNN and is shown in FIG. 3.
In this embodiment, the loss function of the deep convolutional neural network in step (3) is constructed as a weighted sum of two parts:

Loss = L_cls(p, u) + λ[u = 1] L_loc(t^u, v)

(3-1) Construction of the target classification loss function L_cls(p, u), where p is the prediction factor for the target class and u is the actual factor of the corresponding class. It is usually constructed as a log-loss function, with p the predicted probability of a given class; the closer p is to 1, the higher the confidence and the smaller the loss:

L_cls(p, u) = -log(p)

(3-2) Construction of the target detection distance loss L_loc(t^u, v). Here λ is the weighting of the loss terms and may generally be taken as λ = 1. The expression [u = 1] takes the value 1 when the region of interest is the target vehicle and 0 when the region of interest is the background; that is, if the current region of interest is something irrelevant in the environment, its distance loss is not considered. t^u denotes the predicted position factor and v the true position factor in the training sample. The distance loss subfunction is usually expressed as a smoothed Manhattan (smooth L1) distance, with x = t^u - v denoting the deviation of the predicted value from the true value:

L_loc(t^u, v) = smooth_L1(x), where smooth_L1(x) = 0.5 x² if |x| < 1, and |x| - 0.5 otherwise
In this embodiment, augmenting the training data in step (4) mainly comprises randomly flipping the images horizontally, cropping them, and scaling them uniformly to a fixed size, applying the corresponding flipping, cropping, and scaling to the annotation data, and on this basis normalizing the resulting images per channel; the fixed size adopted in this embodiment is 250 × 250.
In this embodiment, when initializing the training network model in step (4), the object feature extraction network is pre-trained with a SoftMax loss function on ImageNet or another image classification data set, and the resulting parameter values are used as the initial parameters of the network.
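With recent torchvision, for example, this initialization might amount to loading an ImageNet-pretrained backbone into the detector; this is an illustrative sketch, not the patent's prescribed procedure:

```python
import torchvision

# Feature-extraction backbone pre-trained on ImageNet with a SoftMax
# (cross-entropy) classification loss; its weights initialize the detector
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    num_classes=2,
    weights_backbone=torchvision.models.ResNet50_Weights.IMAGENET1K_V1,
)
```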
In this embodiment, when the network is trained in step (4), the weighted loss function is used to compute a combined loss value, gradients are computed by back-propagation, and an optimizer such as Adam updates the network parameters; the final result is obtained after a set number of iterations. The final parameters are then set as the network model parameters of the target vehicle detector and used for target vehicle detection.
In summary, the invention provides a deep-learning-based method for detecting a target vehicle with single-line laser radar. The method offers excellent detection performance, high robustness, and low implementation cost, and is easy to deploy on existing intelligent parking robots for target vehicle detection.
It will be readily apparent to those skilled in the art that various modifications may be made to these embodiments, and the general principles described herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the embodiments described herein; improvements and modifications made by those skilled in the art on the basis of this disclosure fall within the scope of the present invention.

Claims (1)

1. A target vehicle detection method based on deep learning is used for realizing intelligent parking, and is characterized by comprising the following steps:
1) collecting tail feature point cloud data of a target vehicle through two single-line laser radars arranged at the tail of a parking robot, and preprocessing the data to obtain a binary image, specifically comprising:
11) converting the collected point cloud data into a globally uniform Cartesian coordinate system according to a polar coordinate system taking a single-line laser radar as a coordinate origin, wherein the conversion expression is as follows:
(x_j1, y_j1) = (r_j cos θ_j, r_j sin θ_j)

(x_j0, y_j0) = (x_j1, y_j1) R + t

wherein (r_j, θ_j) are the position coordinates of point j in the original point cloud data, (x_j1, y_j1) are the position coordinates of point j converted into the Cartesian coordinate system with the laser radar as coordinate origin, (x_j0, y_j0) are the coordinates in the globally unified Cartesian coordinate system, R is the rotation matrix of the conversion, and t is the translation vector of the conversion;
12) gridding the coordinate-converted point cloud data and converting it into a binary image;
2) labeling the binary image to obtain the position of the tail of the target vehicle so as to generate a training data set, wherein the labeled content comprises pixel-level labeling of the image and bounding box labeling of the target vehicle;
3) constructing a deep convolutional neural network suitable for target vehicle detection and its loss function, wherein the deep convolutional neural network is a Faster R-CNN convolutional neural network that takes a binary image of set size as input and outputs the position and confidence of the corresponding target vehicle on the input binary image, and the loss function of the deep convolutional neural network is expressed as:
Loss = L_cls(p, u) + λ[u = 1] L_loc(t^u, v)

L_cls(p, u) = -log(p)

L_loc(t^u, v) = smooth_L1(x), wherein smooth_L1(x) = 0.5 x² if |x| < 1, and |x| - 0.5 otherwise

x = t^u - v

wherein L_cls(p, u) is the target classification loss subfunction, L_loc(t^u, v) is the distance loss subfunction, p is the prediction factor for the target class, u is the actual factor of the corresponding class, λ is the weighting of the loss terms, u = 1 indicates that the region of interest is the target vehicle and u = 0 indicates that the region of interest is the background, t^u is the predicted position factor, v is the true position factor in the training sample, and x is the deviation of the predicted value from the true value;
4) augmenting the training data set and inputting it into the deep convolutional neural network, training and updating the parameters of the convolutional neural network according to the difference between the output values and the training ground truth to obtain the optimal network parameters, and performing detection with the trained deep convolutional neural network, wherein augmenting the training data set specifically comprises:
performing random horizontal flipping, cropping, and uniform scaling of the images to a fixed size, and applying the corresponding flipping, cropping, and scaling to the annotation data;
and training the deep convolutional neural network specifically comprises:
according to the loss function, iteratively updating the parameters of the deep convolutional neural network by gradient descent with back-propagation, and taking the network parameters obtained after the maximum set number of iterations as the optimal network parameters to complete training.
CN201910206458.5A 2019-03-19 2019-03-19 Target vehicle detection method based on deep learning Active CN110069993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910206458.5A CN110069993B (en) 2019-03-19 2019-03-19 Target vehicle detection method based on deep learning


Publications (2)

Publication Number Publication Date
CN110069993A CN110069993A (en) 2019-07-30
CN110069993B (en) 2021-10-08

Family

ID=67366360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910206458.5A Active CN110069993B (en) 2019-03-19 2019-03-19 Target vehicle detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN110069993B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560834A (en) * 2019-09-26 2021-03-26 武汉金山办公软件有限公司 Coordinate prediction model generation method and device and graph recognition method and device
CN111178213B (en) * 2019-12-23 2022-11-18 大连理工大学 Aerial photography vehicle detection method based on deep learning
CN111177297B (en) * 2019-12-31 2022-09-02 信阳师范学院 Dynamic target speed calculation optimization method based on video and GIS
CN111523403B (en) * 2020-04-03 2023-10-20 咪咕文化科技有限公司 Method and device for acquiring target area in picture and computer readable storage medium
CN111539347B (en) * 2020-04-27 2023-08-08 北京百度网讯科技有限公司 Method and device for detecting target
CN111653103A (en) * 2020-05-07 2020-09-11 浙江大华技术股份有限公司 Target object identification method and device
CN111783844A (en) * 2020-06-10 2020-10-16 东莞正扬电子机械有限公司 Target detection model training method and device based on deep learning and storage medium
US11834066B2 (en) * 2020-12-29 2023-12-05 GM Global Technology Operations LLC Vehicle control using neural network controller in combination with model-based controller
CN113313201A (en) * 2021-06-21 2021-08-27 南京挥戈智能科技有限公司 Multi-target detection and distance measurement method based on Swin Transformer and ZED camera
CN114692720B (en) * 2022-02-25 2023-05-23 广州文远知行科技有限公司 Image classification method, device, equipment and storage medium based on aerial view

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050240323A1 (en) * 2004-03-31 2005-10-27 Honda Motor Co., Ltd. Parking lot attendant robot system
CN106355194A (en) * 2016-08-22 2017-01-25 广东华中科技大学工业技术研究院 Treatment method for surface target of unmanned ship based on laser imaging radar
CN106650809A (en) * 2016-12-20 2017-05-10 福州大学 Method and system for classifying vehicle-borne laser-point cloud targets
CN107239794A (en) * 2017-05-18 2017-10-10 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal
CN108009509A (en) * 2017-12-12 2018-05-08 河南工业大学 Vehicle target detection method
CN108830188A (en) * 2018-05-30 2018-11-16 西安理工大学 Vehicle checking method based on deep learning
US20190004508A1 (en) * 2017-07-03 2019-01-03 Volvo Car Corporation Method and system for automatic parking of a vehicle
CN109270543A (en) * 2018-09-20 2019-01-25 同济大学 A kind of system and method for pair of target vehicle surrounding vehicles location information detection
CN109270544A (en) * 2018-09-20 2019-01-25 同济大学 Mobile robot self-localization system based on shaft identification
CN109324616A (en) * 2018-09-20 2019-02-12 同济大学 Nobody based on onboard sensor parks the alignment method of transfer robot
CN109386155A (en) * 2018-09-20 2019-02-26 同济大学 Nobody towards automated parking ground parks the alignment method of transfer robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710714B2 (en) * 2015-08-03 2017-07-18 Nokia Technologies Oy Fusion of RGB images and LiDAR data for lane classification
CN109118500B (en) * 2018-07-16 2022-05-10 重庆大学产业技术研究院 Image-based three-dimensional laser scanning point cloud data segmentation method
CN109063753B (en) * 2018-07-18 2021-09-14 北方民族大学 Three-dimensional point cloud model classification method based on convolutional neural network
CN109344786A (en) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 Target identification method, device and computer readable storage medium


Also Published As

Publication number Publication date
CN110069993A (en) 2019-07-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant