CN110568445A - Laser radar and vision fusion perception method of lightweight convolutional neural network - Google Patents

Laser radar and vision fusion perception method of lightweight convolutional neural network

Info

Publication number
CN110568445A
CN110568445A
Authority
CN
China
Prior art keywords
network
fusion
laser radar
final
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910814535.5A
Other languages
Chinese (zh)
Inventor
宋春毅
章叶
宋钰莹
李俊杰
徐志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910814535.5A
Publication of CN110568445A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses a laser radar and vision fusion perception method based on a lightweight convolutional neural network. While the average accuracy is preserved, the network running time is measured under different lightweight methods and different model compression methods, and the perception method with the shortest running time is selected as the optimal one. The method can keep the target perception performance while reducing the running time as much as possible, thereby lowering the hardware requirements that the recognition task places on the execution platform.

Description

Laser radar and vision fusion perception method of lightweight convolutional neural network
Technical Field
The invention belongs to the field of target detection, and particularly relates to a laser radar and vision fusion perception method of a lightweight convolutional neural network.
Background
Laser radar (lidar) is used to accurately measure the position (distance and angle), shape (size) and state (speed and attitude) of a target, so as to detect, identify and track it. One of the most widespread applications today combines lidar with vision sensors: because vision sensors are far cheaper than lidar, they have become indispensable in target detection applications.
Since AlexNet won the ImageNet challenge (ILSVRC 2012) and popularized deep convolutional neural networks, convolutional neural networks have become very popular in the field of computer vision. The current general trend is to build deeper and more complex networks for better accuracy. However, these accuracy gains do not necessarily make the network efficient in size or speed. In many real-world applications, such as robotics, self-driving cars and augmented reality (AR), recognition tasks need to be performed in a timely manner on computationally limited platforms.
With the development of the technology, convolutional neural networks are widely applied in image classification, image segmentation, target detection and other fields. In pursuit of higher performance, the deep learning field often uses powerful and complex network models and experimental methods; such complex models naturally perform better, but they come with high storage requirements and heavy consumption of computing resources. Although network performance has improved, efficiency problems have followed, chiefly the storage footprint of the model and the speed of its predictions, so these algorithms are difficult to deploy effectively on every hardware platform and cannot be widely applied on mobile terminals.
Disclosure of Invention
The invention aims to provide a laser radar and vision fusion perception method based on a lightweight convolutional neural network that addresses the defects of the prior art. To tackle the efficiency problem, one approach is to design a lightweight model, the main idea being a more efficient network computing mode (mainly targeting the convolution operation), so that network parameters are reduced without losing network performance; another approach is Model Compression, i.e. compressing a trained model so that the network carries fewer parameters, thereby solving both the storage problem and the speed problem.
The invention is realized by the following technical scheme. A laser radar and vision fusion perception method of a lightweight convolutional neural network specifically comprises the following steps:
Step one: acquire data simultaneously with a laser radar sensor and a vision sensor, and divide the acquired data into training data and test data.
Step two: select a laser radar and vision fusion network composed mainly of convolutional layers to process the training data and the test data, and obtain the test result of the original fusion network. The test result includes the average accuracy and the running time.
Step three: optimize the original fusion network with a lightweight method to obtain an optimized fusion network.
Step four: train the optimized fusion network obtained in step three with the training data to obtain the network connections, parameter values and so on of the fusion network.
Step five: process the fusion network from step four with a model compression method to obtain the final laser radar and vision fusion perception network.
Step six: test the final laser radar and vision fusion perception network obtained in step five with the test data to obtain the final average accuracy and the final running time.
Step seven: if the final running time is not reduced by more than 10% relative to that of the original fusion network in step two, repeat steps three to six until the reduction exceeds 10%.
Further, the lightweight method comprises one or more of: adopting an FPN structure, changing the elementwise operation mode, and replacing standard convolution with depthwise separable convolution.
Further, the model compression method comprises one or more of model pruning, weight sharing, quantization, merging of convolutional layers with BN layers, and the like.
Different from the prior art, the method optimizes the fusion network with a lightweight method, i.e. it designs a more efficient network computing mode, so that network parameters are reduced without losing network performance. When the lightweight convolution method and the model compression method are combined effectively, redundancy can be further removed and numerical precision reduced, thereby shrinking the storage space. The laser radar and vision fusion perception method of the lightweight convolutional neural network can keep the target perception performance while reducing the running time as much as possible, thereby lowering the hardware requirements that the recognition task places on the execution platform.
Drawings
FIG. 1 is a flow chart of the laser radar and vision fusion perception method of a lightweight convolutional neural network of the present invention;
FIG. 2 is a schematic diagram of the FPN structure.
Detailed Description
The invention provides a laser radar and vision fusion perception method based on a lightweight convolutional neural network. On the basis of a convolutional neural network that fuses laser radar point clouds with visual image data, the network is optimized so that the recognition task can be executed in a timely manner on a platform with limited computation; the resulting network can classify and detect targets for subsequent use in target recognition and tracking.
As shown in fig. 1, the method of the present invention comprises the steps of:
Step one: acquire data simultaneously with a laser radar sensor and a vision sensor, and divide the acquired data into training data and test data.
Step two: select a laser radar and vision fusion network composed mainly of convolutional layers to process the training data and the test data, and obtain the test result of the original fusion network. The test result includes the average accuracy and the running time.
Step three: optimize the original fusion network with a lightweight method to obtain an optimized fusion network. The lightweight method comprises one or more of: adopting an FPN structure, changing the elementwise operation mode, and replacing standard convolution with depthwise separable convolution.
(1) FPN structure method
As shown in FIG. 2, on top of the original bottom-up pre-trained convolutional backbone, a corresponding top-down pathway is constructed: each deep feature map is upsampled and added to the corresponding shallow feature map, so that information from multiple feature levels is fused.
After the top-down pathway, the FPN structure can exploit context: the detail information that shallow layers attend to is combined with the semantic information that deep layers attend to, which increases the resolution of the feature maps and makes small targets easier to handle.
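For illustration only, the following is a minimal PyTorch-style sketch of such a top-down pathway; the patent does not specify an implementation, so the framework, module names, channel counts and nearest-neighbour upsampling are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    """Top-down pathway: deep feature maps are upsampled and added to shallow ones."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 lateral convolutions bring every backbone stage to one channel width
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)
        # 3x3 convolutions smooth each merged map
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels)

    def forward(self, feats):
        # feats: bottom-up feature maps ordered shallow -> deep
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        for i in range(len(laterals) - 1, 0, -1):
            # upsample the deeper map and add it to the next shallower one
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        # return the fused multi-level feature maps
        return [sm(lat) for sm, lat in zip(self.smooth, laterals)]
```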
(2) Replacing standard convolution with depthwise separable convolution
Depthwise separable convolution is a factorized form of convolution: a standard convolution is decomposed into a depthwise convolution followed by a 1 × 1 convolution, the latter also called pointwise convolution. Instead of the conventional convolution, which handles channels and spatial regions at the same time, the spatial region is processed first and the channels afterwards, realizing a separation of channel mixing from spatial filtering.
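As a hedged sketch of this factorization (framework, kernel size and layer parameters are assumptions, not taken from the patent):

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    """A depthwise 3x3 convolution (one filter per channel, groups=in_ch)
    followed by a pointwise 1x1 convolution that mixes the channels.
    For a 3x3 kernel this needs 9*in_ch + in_ch*out_ch weights instead of
    the 9*in_ch*out_ch weights of a standard convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride, padding=1,
                  groups=in_ch, bias=False),              # spatial ("region") step
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # channel step
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```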
(3) Elementwise operation method
Elementwise operations are an important part of network structure design: at the points where features are combined, the computation cost and the information loss are balanced to select the most suitable operation. Common choices include: channel concatenation, which stacks channels and is used to fuse features of different kinds, giving high accuracy but greatly increasing memory; elementwise sum, which suits feature maps whose corresponding channels have similar semantics and saves parameters and computation; and elementwise product, which suppresses or highlights features of specific regions and benefits the detection of small targets.
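The three fusion operations can be illustrated as follows; the tensor shapes and the assignment of branches to lidar and image are purely illustrative assumptions:

```python
import torch

# two feature maps to be fused, e.g. one from the lidar branch and one
# from the image branch (batch, channels, height, width are illustrative)
a = torch.randn(1, 64, 32, 32)
b = torch.randn(1, 64, 32, 32)

fused_concat = torch.cat([a, b], dim=1)  # channel concat: 128 channels, most memory
fused_sum = a + b                        # elementwise sum: cheap, assumes similar semantics
fused_product = a * b                    # elementwise product: gates/suppresses regions
```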
Step four: and training the optimized fusion network obtained in the step three by using the training data to obtain network connection, parameter values and the like in the fusion network.
Step five: and processing the fusion network in the fourth step by using a compressed sensing method to obtain the final laser radar and visual fusion sensing network. The compressed sensing method comprises one or more of model clipping, weight sharing, quantization, convolution layer BN layer combination and the like.
(1) Model pruning
Model pruning mainly compresses a trained network: a more effective importance measure is found, and unimportant connections or filters are pruned away to reduce the redundancy of the model, thereby reducing the storage space of the model and accelerating computation.
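A minimal sketch of magnitude-based pruning, one common instance of such an importance measure; the patent does not fix the criterion, so the threshold rule below is an assumption:

```python
import numpy as np

def prune_by_magnitude(weights, keep_ratio=0.7):
    """Zero out the weights whose absolute value falls below a global
    threshold, keeping roughly keep_ratio of the connections; the zeroed
    weights can later be stored in a sparse format."""
    threshold = np.quantile(np.abs(weights), 1.0 - keep_ratio)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask
```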
(2) Weight sharing
Weight sharing clusters the weights and replaces every weight in a cluster with the cluster mean, so that many connections belonging to the same cluster share a single weight value. This reduces the number of distinct parameters that must be stored and accelerates computation.
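A sketch of weight sharing via k-means clustering; the clustering algorithm and the cluster count are assumptions made for illustration:

```python
from sklearn.cluster import KMeans

def share_weights(weights, n_clusters=16):
    """Cluster the weights and replace each one by its cluster centroid,
    so only the centroid table and per-weight indices need to be stored."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(weights.reshape(-1, 1))
    centroids = km.cluster_centers_.ravel()
    shared = centroids[km.labels_].reshape(weights.shape)
    return shared, centroids, km.labels_.reshape(weights.shape)
```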
(3) Quantization
Quantization reduces the number of bits used to store data in memory. The parameters of a typical neural network model are represented as 32-bit floating-point numbers, yet such high precision is rarely needed. For example, if the weights are mostly concentrated around 0, the range originally represented with 32 bits can be mapped to 8-bit integers (0–255); by sacrificing a little precision, the space occupied by each weight is reduced, which shrinks the storage space of the model and speeds up computation.
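A sketch of such 8-bit affine quantization; the mapping below is one common choice and is assumed, not taken from the patent:

```python
import numpy as np

def quantize_uint8(weights):
    """Map 32-bit float weights onto 8-bit integers (0-255); only the uint8
    array plus (scale, offset) have to be stored."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, w_min):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale + w_min
```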
(4) Merging the BN layer into the convolutional layer
When training a deep network model, the BN (Batch Normalization) layer, typically placed after a convolutional layer, accelerates convergence and helps control overfitting. Although the BN layer plays a positive role during training, it adds extra per-layer operations to forward inference, degrades runtime performance and occupies additional memory or video memory. Since both the convolutional layer and the BN layer are purely linear transformations, they can be merged. In the convolutional layer, let the input be X, the convolution weight W and the convolution bias B; the convolutional layer then computes:
Y = W × X + B
In the BN layer, let the mean be μ, the variance σ², the scaling factor γ, the offset β, and let ε be a small constant that prevents the denominator from being 0; the BN layer then computes:

YBN = γ × (Y − μ) / √(σ² + ε) + β
By derivation, the BN parameters can be folded into the convolutional layer. Writing α = γ / √(σ² + ε), the merged weight and bias are:
Wmerged = W × α
Bmerged = B × α + (β − μ × α)
Here γ, μ, σ² and β are quantities fixed by training and are therefore constants at prediction time, so folding the BN parameters into the convolutional layer speeds up forward inference and reduces test time.
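A sketch of this folding for a Conv2d/BatchNorm2d pair, assuming a PyTorch implementation (the patent gives only the formulas above); the fused module can replace the original conv+BN pair at inference time without changing the output up to floating-point error:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a trained BatchNorm2d into the preceding Conv2d:
    alpha = gamma / sqrt(var + eps); Wmerged = W * alpha;
    Bmerged = B * alpha + (beta - mu * alpha)."""
    alpha = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    fused.weight.copy_(conv.weight * alpha.reshape(-1, 1, 1, 1))
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_(b * alpha + (bn.bias - bn.running_mean * alpha))
    return fused
```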
In particular, for some model compression algorithms, step four and step five may be interleaved: the optimized fusion network is first trained with part of the training data, the partially trained network is then processed with the compression method to form a new fusion network, the new network is trained on the remaining, previously unused training data, and this alternation is repeated until the final laser radar and vision fusion perception network is obtained.
Step six: and testing the final laser radar and vision fusion perception network obtained in the step five by using the test data to obtain the final average accuracy and the final running time.
Step seven: and if the final running time is less than 10% higher than that in the original converged network in the step two, repeating the steps three to six until the final running time is higher than 10%.
The criterion for network optimization is as follows. When the final average accuracy is unchanged or improved relative to the average accuracy of the original fusion network, step seven only checks whether the running time meets the threshold; among the networks obtained with different lightweight methods and different model compression methods, the model with the shortest running time is selected as the optimal fusion perception method. When the final average accuracy is lower than that of the original fusion network, a comprehensive trade-off is needed, i.e. whether the reduction in running time matters more than the requirement on average accuracy.
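A sketch of this selection rule; the candidate record format below ("name", "accuracy", "runtime" measured on the test data) is an assumption made for illustration:

```python
def select_best(candidates, baseline_accuracy):
    """Among candidate configurations, keep those whose average accuracy is
    at least the baseline of the original fusion network and return the
    fastest one."""
    eligible = [c for c in candidates if c["accuracy"] >= baseline_accuracy]
    if not eligible:
        return None  # accuracy dropped everywhere: trade speed gain against accuracy loss
    return min(eligible, key=lambda c: c["runtime"])
```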
Example 1
Step one: acquire data simultaneously with a laser radar sensor and a vision sensor, and divide the acquired data into training data and test data.
Step two: select a laser radar and vision fusion network composed mainly of convolutional layers to process the training data and the test data, and obtain the test result of the original fusion network. The test result includes the average accuracy and the running time.
Step three: optimize the original fusion network with the FPN structure method to obtain the optimized fusion network.
Step four: train the optimized fusion network with part of the training data to obtain the network connections, parameter values and so on, then process it with the model pruning method to form a new fusion network, train the new network on the remaining training data, and repeat this alternation to obtain the final laser radar and vision fusion perception network.
Step five: test the final laser radar and vision fusion perception network obtained in step four with the test data to obtain the final average accuracy and the final running time.
In this embodiment the final average accuracy is essentially the same as that of the original fusion network, while the final running time is reduced by more than 10% compared with the original fusion network, showing that the method reduces network parameters without losing network performance.
Example 2
Step one: acquire data simultaneously with a laser radar sensor and a vision sensor, and divide the acquired data into training data and test data.
Step two: select a laser radar and vision fusion network composed mainly of convolutional layers to process the training data and the test data, and obtain the test result of the original fusion network. The test result includes the average accuracy and the running time.
Step three: optimize the original fusion network by replacing standard convolution with depthwise separable convolution, obtaining the optimized fusion network.
Step four: train the optimized fusion network obtained in step three with the training data to obtain the network connections, parameter values and so on of the fusion network.
Step five: process the fusion network from step four jointly with weight sharing and quantization to obtain the final laser radar and vision fusion perception network.
Step six: test the final laser radar and vision fusion perception network obtained in step five with the test data to obtain the final average accuracy and the final running time.
In this embodiment the final average accuracy is slightly lower than that of the original fusion network, while the final running time is reduced by more than 10% compared with the original fusion network, showing that the method reduces redundancy, thereby shrinking the storage space, lowering the running time and relaxing the hardware requirements that the recognition task places on the execution platform.
Example 3
Step one: acquire data simultaneously with a laser radar sensor and a vision sensor, and divide the acquired data into training data and test data.
Step two: select AVOD, a laser radar and vision fusion network composed mainly of convolutional layers, to process the training data and the test data, and obtain the test result of the original fusion network. The test result includes the average accuracy and the running time.
Step three: optimize the feature extraction module of the AVOD network: first apply the FPN structure on the VGG backbone to fuse information from multiple feature levels, choosing the elementwise product among the elementwise operations as the fusion mode to improve the detection accuracy for small targets, and then replace all ordinary convolutions with depthwise separable convolutions, obtaining the optimized AVOD network.
Step four: train the optimized AVOD network obtained in step three with the training data to obtain the network connections, parameter values and so on of the AVOD network.
Step five: apply convolutional-layer/BN-layer merging to the AVOD network from step four to obtain the final laser radar and vision fusion perception network.
Step six: test the final laser radar and vision fusion perception network obtained in step five with the test data to obtain the final average accuracy and the final running time.
In this embodiment the final average accuracy is slightly lower than that of the original fusion network, while the final running time is reduced by more than 10% compared with the original fusion network, showing that the method solves the storage problem and the speed problem.

Claims (3)

1. A laser radar and vision fusion perception method of a lightweight convolutional neural network, characterized by comprising the following steps:
Step one: acquiring data simultaneously with a laser radar sensor and a vision sensor, and dividing the acquired data into training data and test data.
Step two: selecting a laser radar and vision fusion network composed mainly of convolutional layers to process the training data and the test data, and obtaining the test result of the original fusion network, the test result including the average accuracy and the running time.
Step three: optimizing the original fusion network with a lightweight method to obtain an optimized fusion network.
Step four: training the optimized fusion network obtained in step three with the training data to obtain the network connections, parameter values and so on of the fusion network.
Step five: processing the fusion network from step four with a model compression method to obtain the final laser radar and vision fusion perception network.
Step six: testing the final laser radar and vision fusion perception network obtained in step five with the test data to obtain the final average accuracy and the final running time.
Step seven: if the final running time is not reduced by more than 10% relative to that of the original fusion network in step two, repeating steps three to six until the reduction exceeds 10%.
2. The laser radar and vision fusion perception method of a lightweight convolutional neural network according to claim 1, characterized in that the lightweight method consists of one or more of: a method using an FPN structure, a method of changing the elementwise operation mode, and a method of replacing standard convolution with depthwise separable convolution.
3. The laser radar and vision fusion perception method of a lightweight convolutional neural network according to claim 1, characterized in that the model compression method consists of one or more of model pruning, weight sharing, quantization, merging of convolutional layers with BN layers, and the like.
CN201910814535.5A 2019-08-30 2019-08-30 Laser radar and vision fusion perception method of lightweight convolutional neural network Pending CN110568445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910814535.5A CN110568445A (en) 2019-08-30 2019-08-30 Laser radar and vision fusion perception method of lightweight convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910814535.5A CN110568445A (en) 2019-08-30 2019-08-30 Laser radar and vision fusion perception method of lightweight convolutional neural network

Publications (1)

Publication Number Publication Date
CN110568445A true CN110568445A (en) 2019-12-13

Family

ID=68777199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910814535.5A Pending CN110568445A (en) 2019-08-30 2019-08-30 Laser radar and vision fusion perception method of lightweight convolutional neural network

Country Status (1)

Country Link
CN (1) CN110568445A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111198496A (en) * 2020-01-03 2020-05-26 浙江大学 Target following robot and following method
CN111814769A (en) * 2020-09-02 2020-10-23 深圳市城市交通规划设计研究中心股份有限公司 Information acquisition method and device, terminal equipment and storage medium
CN112396178A (en) * 2020-11-12 2021-02-23 江苏禹空间科技有限公司 Method for improving CNN network compression efficiency
CN112767475A (en) * 2020-12-30 2021-05-07 重庆邮电大学 Intelligent roadside sensing system based on C-V2X, radar and vision
CN113449632A (en) * 2021-06-28 2021-09-28 重庆长安汽车股份有限公司 Vision and radar perception algorithm optimization method and system based on fusion perception and automobile

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104898089A (en) * 2015-04-03 2015-09-09 西北大学 Device-free localization method based on space migration compressive sensing
CN106772365A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of multipath based on Bayes's compressed sensing utilizes through-wall radar imaging method
CN107704866A (en) * 2017-06-15 2018-02-16 清华大学 Multitask Scene Semantics based on new neural network understand model and its application
US10043113B1 (en) * 2017-10-04 2018-08-07 StradVision, Inc. Method and device for generating feature maps by using feature upsampling networks
CN109145769A (en) * 2018-08-01 2019-01-04 辽宁工业大学 The target detection network design method of blending image segmentation feature
CN109902623A (en) * 2019-02-27 2019-06-18 浙江大学 A kind of gait recognition method based on perception compression
CN109934230A (en) * 2018-09-05 2019-06-25 浙江大学 A kind of radar points cloud dividing method of view-based access control model auxiliary
CN109977981A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene analytic method, robot and storage device based on binocular vision
CN110059675A (en) * 2019-06-21 2019-07-26 南京擎盾信息科技有限公司 A kind of robot identifies road traffic law enforcement behavior and provides the method for standardization auxiliary
CN110070072A (en) * 2019-05-05 2019-07-30 厦门美图之家科技有限公司 A method of generating object detection model

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104898089A (en) * 2015-04-03 2015-09-09 西北大学 Device-free localization method based on space migration compressive sensing
CN106772365A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of multipath based on Bayes's compressed sensing utilizes through-wall radar imaging method
CN107704866A (en) * 2017-06-15 2018-02-16 清华大学 Multitask Scene Semantics based on new neural network understand model and its application
US10043113B1 (en) * 2017-10-04 2018-08-07 StradVision, Inc. Method and device for generating feature maps by using feature upsampling networks
CN109977981A (en) * 2017-12-27 2019-07-05 深圳市优必选科技有限公司 Scene analytic method, robot and storage device based on binocular vision
CN109145769A (en) * 2018-08-01 2019-01-04 辽宁工业大学 The target detection network design method of blending image segmentation feature
CN109934230A (en) * 2018-09-05 2019-06-25 浙江大学 A kind of radar points cloud dividing method of view-based access control model auxiliary
CN109902623A (en) * 2019-02-27 2019-06-18 浙江大学 A kind of gait recognition method based on perception compression
CN110070072A (en) * 2019-05-05 2019-07-30 厦门美图之家科技有限公司 A method of generating object detection model
CN110059675A (en) * 2019-06-21 2019-07-26 南京擎盾信息科技有限公司 A kind of robot identifies road traffic law enforcement behavior and provides the method for standardization auxiliary

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QIANG ZHANG: "Fine-grained Vehicle Recognition Using Lightweight Convolutional Neural Network with Combined Learning Strategy", 《2018 IEEE FOURTH INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM)》 *
唐溯: "Research on Finger Vein Recognition Algorithms Based on Deep Learning", 《China Master's Theses Full-text Database, Information Science and Technology》 *
杨李杰: "Key Technologies of Wideband Digital Phased Arrays for Ocean Monitoring", 《Science & Technology Review》 *
许庆勇: 《Research on Tattoo Image Recognition and Detection Based on Deep Learning Theory》, 31 December 2018, Huazhong University of Science and Technology Press *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111198496A (en) * 2020-01-03 2020-05-26 浙江大学 Target following robot and following method
CN111814769A (en) * 2020-09-02 2020-10-23 深圳市城市交通规划设计研究中心股份有限公司 Information acquisition method and device, terminal equipment and storage medium
CN112396178A (en) * 2020-11-12 2021-02-23 江苏禹空间科技有限公司 Method for improving CNN network compression efficiency
CN112767475A (en) * 2020-12-30 2021-05-07 重庆邮电大学 Intelligent roadside sensing system based on C-V2X, radar and vision
CN113449632A (en) * 2021-06-28 2021-09-28 重庆长安汽车股份有限公司 Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN113449632B (en) * 2021-06-28 2023-04-07 重庆长安汽车股份有限公司 Vision and radar perception algorithm optimization method and system based on fusion perception and automobile

Similar Documents

Publication Publication Date Title
CN110568445A (en) Laser radar and vision fusion perception method of lightweight convolutional neural network
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN111462126B (en) Semantic image segmentation method and system based on edge enhancement
CN108133188B (en) Behavior identification method based on motion history image and convolutional neural network
CN110991311B (en) Target detection method based on dense connection deep network
CN109919032B (en) Video abnormal behavior detection method based on motion prediction
CN112561910A (en) Industrial surface defect detection method based on multi-scale feature fusion
CN111914924B (en) Rapid ship target detection method, storage medium and computing equipment
CN112163628A (en) Method for improving target real-time identification network structure suitable for embedded equipment
CN111126359A (en) High-definition image small target detection method based on self-encoder and YOLO algorithm
CN110533022B (en) Target detection method, system, device and storage medium
WO2021088101A1 (en) Insulator segmentation method based on improved conditional generative adversarial network
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN112036475A (en) Fusion module, multi-scale feature fusion convolutional neural network and image identification method
CN112365511B (en) Point cloud segmentation method based on overlapped region retrieval and alignment
CN111079539A (en) Video abnormal behavior detection method based on abnormal tracking
CN116030237A (en) Industrial defect detection method and device, electronic equipment and storage medium
CN111104855B (en) Workflow identification method based on time sequence behavior detection
CN110443279B (en) Unmanned aerial vehicle image vehicle detection method based on lightweight neural network
CN116912796A (en) Novel dynamic cascade YOLOv 8-based automatic driving target identification method and device
CN111563525A (en) Moving target detection method based on YOLOv3-Tiny
CN112132207A (en) Target detection neural network construction method based on multi-branch feature mapping
CN112464982A (en) Target detection model, method and application based on improved SSD algorithm
CN109657577B (en) Animal detection method based on entropy and motion offset
CN113592885B (en) SegNet-RS network-based large obstacle contour segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191213