CN110378398B - Deep learning network improvement method based on multi-scale feature map jump fusion - Google Patents

Deep learning network improvement method based on multi-scale feature map jump fusion

Info

Publication number
CN110378398B
CN110378398B (application CN201910566224.1A)
Authority
CN
China
Prior art keywords
fusion
convolution layer
layer
convolution
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910566224.1A
Other languages
Chinese (zh)
Other versions
CN110378398A (en)
Inventor
Zhang Xiaoguo (张小国)
Ye Fei (叶绯)
Zheng Bingqing (郑冰清)
Zhang Kaixin (张开心)
Wang Huiqing (王慧青)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201910566224.1A
Publication of CN110378398A
Application granted
Publication of CN110378398B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a deep learning network improvement method based on multi-scale feature map jump fusion. Feature fusion is performed through skip connections between multi-scale feature map layers; by fusing high-level semantics with low-level information, the network makes full use of both high-level and low-level features, improving the model's sensitivity and perceptibility to small targets as well as its overall detection performance. In addition, a multi-view multi-classification strategy enables accurate detection of target categories in highly dynamic scenes. The method improves the detection performance of the SSD algorithm in highly dynamic scenes in terms of speed, practicality and robustness.

Description

Deep learning network improvement method based on multi-scale feature map jump fusion
Technical Field
The application relates to a deep learning network improvement method based on multi-scale feature map jump fusion, belonging to the technical field of target detection.
Background
The deep neural network structure comprises many feature extraction operations. With each layer of convolution the network grows deeper: the feature maps retain less contour and detail information, carry richer semantic information, and the model's receptive field grows larger. The model thus learns to focus on larger objects in the image, while its recognition of small targets is poor. Small target detection is a technical difficulty of target detection, and the original SSD algorithm (Single Shot MultiBox Detector) does not perform well at it.
Meanwhile, the rapid viewpoint changes of a mobile vehicle-mounted platform make target detection prone to missed detections and false detections.
Disclosure of Invention
The application aims to provide a deep learning network improvement method based on multi-scale feature map jump fusion that improves the detection performance of the SSD algorithm on small targets and in highly dynamic scenes.
The technical scheme is as follows: the technical scheme adopted by the application is a deep learning network improvement method based on multi-scale feature map jump fusion, comprising the following steps:
constructing a feature fusion network based on convolution layers;
designing a feature fusion connection module;
selecting a fusion strategy and an upsampling mode to obtain the SSD-based multi-scale feature map layer skip fusion structure;
training the multi-scale feature map layer skip fusion structure with an integrated multi-view strategy.
The feature fusion network is a feature map layer skip connection. The skip connection comprises, in order, the fused convolution layer Conv4_3_ff, the fused fully connected layer fc7_ff, the fused convolution layers Conv6_2_ff and Conv7_2_ff, and the convolution layers Conv8_2 and Conv9_2.
The feature fusion connection module first upsamples the high-level feature map to obtain an upsampled high-level feature map. The low-level feature map is reduced in dimension by a 1×1 convolution and activated with a linear rectification function (ReLU) to obtain a dimension-reduced low-level feature map. A feature fusion operation, namely concatenation or element-wise summation, then yields a fused high/low-level feature map; finally, a 3×3 convolution reduces aliasing effects and a ReLU activation produces the fully fused high/low-level feature map.
The fusion strategy is element-wise summation followed by batch normalization.
The upsampling mode is bilinear interpolation.
Beneficial effects: the application performs feature fusion through skip connections between the multi-scale feature maps; by fusing high-level semantics with low-level position information, the network makes full use of both high-level and low-level features, improving the model's sensitivity and perceptibility to small targets as well as its overall detection performance. In addition, a multi-view multi-classification strategy enables accurate detection of target categories in highly dynamic scenes. The method improves the detection performance of the SSD algorithm in highly dynamic scenes in terms of speed, practicality and robustness.
Drawings
FIG. 1 is a schematic flow diagram of the method of the present application;
FIG. 2 (a) is a relationship graph of the multi-scale prediction feature map layer skip connections;
FIG. 2 (b) is a schematic structural diagram of the multi-scale prediction feature map layer skip connection;
FIG. 3 is a flowchart of fusion module a;
FIG. 4 is a flowchart of fusion module b;
FIG. 5 is the framework of the multi-view feature map layer skip connection detection model.
Detailed Description
The present application is further illustrated by the accompanying drawings and the following detailed description, which should be understood as merely illustrative of the application and not limiting of its scope. After reading this application, modifications of equivalent forms made by those skilled in the art fall within the scope defined by the appended claims.
As shown in fig. 1, the multi-scale network improvement method of this embodiment comprises the following steps:
1) Construct a feature fusion network.
A feature pyramid is first fused into the SSD algorithm to form the multi-scale prediction feature pyramid network FPNSSD. The features of the convolution layers Conv8_2 and Conv9_2 in the SSD are fused to obtain the fused convolution layer Conv8_2_ff; Conv8_2_ff is upsampled and fused with Conv7_2 to obtain Conv7_2_ff; Conv7_2_ff is upsampled and fused with Conv6_2 to obtain Conv6_2_ff; Conv6_2_ff is upsampled and fused with the fully connected layer fc7 to obtain fc7_ff; and fc7_ff is upsampled and fused with Conv4_3 to obtain Conv4_3_ff. Finally, Conv4_3_ff, fc7_ff, Conv6_2_ff, Conv7_2_ff, Conv8_2_ff and Conv9_2 serve as the multi-scale prediction feature maps.
Next, a multi-scale prediction feature map adjacent connection, AdjacentSSD, is designed. fc7 in the SSD is upsampled and fused with Conv4_3 to obtain Conv4_3_ff; Conv6_2 is upsampled and fused with fc7 to obtain fc7_ff; Conv7_2 is upsampled and fused with Conv6_2 to obtain Conv6_2_ff; Conv8_2 is upsampled and fused with Conv7_2 to obtain Conv7_2_ff; and Conv9_2 is upsampled and fused with Conv8_2 to obtain Conv8_2_ff. Finally, Conv4_3_ff, fc7_ff, Conv6_2_ff, Conv7_2_ff, Conv8_2_ff and Conv9_2 serve as the multi-scale prediction feature maps.
As shown in fig. 2 (a) and (b), the multi-scale prediction feature map layer skip connection SKIPSSD is then designed. Conv9_2 in the SSD is upsampled and fused with Conv7_2 to obtain the fused convolution layer Conv7_2_ff; Conv8_2 is upsampled and fused with Conv6_2 to obtain Conv6_2_ff; Conv7_2 is upsampled and fused with fc7 to obtain the fused fully connected layer fc7_ff; and Conv6_2 is upsampled and fused with Conv4_3 to obtain Conv4_3_ff. The fused layers Conv4_3_ff, fc7_ff, Conv6_2_ff and Conv7_2_ff, together with Conv8_2 and Conv9_2 from the original SSD, serve as the multi-scale prediction feature maps.
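For clarity, the skip pairing above can be written out as simple bookkeeping. The following Python snippet is purely illustrative; the layer names follow the SSD convention used in this description, and the data structures themselves are not part of the patent:

    # SKIPSSD skip pairing: fused layer -> (high-level source, low-level target).
    SKIP_PAIRS = {
        "Conv7_2_ff": ("Conv9_2", "Conv7_2"),  # Conv9_2 upsampled, fused into Conv7_2
        "Conv6_2_ff": ("Conv8_2", "Conv6_2"),  # Conv8_2 upsampled, fused into Conv6_2
        "fc7_ff":     ("Conv7_2", "fc7"),      # Conv7_2 upsampled, fused into fc7
        "Conv4_3_ff": ("Conv6_2", "Conv4_3"),  # Conv6_2 upsampled, fused into Conv4_3
    }

    # The six multi-scale prediction feature maps: the four fused layers plus
    # Conv8_2 and Conv9_2 taken unchanged from the original SSD.
    PREDICTION_LAYERS = ["Conv4_3_ff", "fc7_ff", "Conv6_2_ff",
                         "Conv7_2_ff", "Conv8_2", "Conv9_2"]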
The multi-scale prediction partial connection Part-SKIPSSD is designed next. Compared with the skip connection, the partial connection fuses fewer layers, using only the fused layers Conv4_3_ff, fc7_ff and Conv6_2_ff together with Conv7_2, Conv8_2 and Conv9_2 of the original SSD.
The bidirectional skip connection Bi-SKIPSSD is then designed. Compared with the skip connection, the bidirectional variant adds feature fusion for Conv8_2 and Conv9_2: Conv6_2, after a convolution-pooling operation, is fused with Conv8_2 to obtain the fused convolution layer Conv8_2_ff, and Conv7_2, after a convolution-pooling operation, is fused with Conv9_2 to obtain Conv9_2_ff. The bidirectional skip connection uses Conv4_3_ff, fc7_ff, Conv6_2_ff, Conv7_2_ff, Conv8_2_ff and Conv9_2_ff as the multi-scale prediction feature maps.
Finally, the skip connection Base-SKIPSSD, which fuses feature maps from the base network, is designed. Instead of fusing the six prediction feature maps of the original SSD with one another, it skip-connects the prediction layers to the base network. For example, Conv4_1, after a convolution-pooling operation, is fused with the Conv4_3 feature map to obtain Conv4_3_ff; fc7, Conv6_2, Conv7_2, Conv8_2 and Conv9_2 are fused in the same manner with their corresponding base feature layers. The resulting Conv4_3_ff, fc7_ff, Conv6_2_ff, Conv7_2_ff, Conv8_2_ff and Conv9_2_ff serve as the multi-scale prediction feature maps.
The six feature fusion networks above were tested on the VOC2007 dataset. The layer skip connection raises accuracy from 77.2% to 79.0%, the best detection performance among them, so the layer skip connection shown in fig. 2 is selected as the feature fusion network structure in this embodiment.
2) Design the feature fusion connection module.
First, fusion module a is designed. As shown in fig. 3, fusion module a upsamples the high-level feature map (i.e. the layer containing high-level features) to obtain an upsampled high-level feature map, then applies a 3×3 convolution and a linear rectification (ReLU) activation to obtain the high-level feature map to be fused. The low-level feature map likewise passes through a 3×3 convolution and a ReLU activation to obtain the low-level feature map to be fused. A feature fusion operation, concatenation or element-wise summation, then yields the fused high/low-level feature map; finally, a 1×1 convolution for dimension reduction and a ReLU activation produce the fully fused high/low-level feature map. A minimal sketch follows.
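The sketch below is a hypothetical PyTorch rendering of fusion module a using the element-wise-sum variant; the class name, channel parameters, and the assumption that both 3×3 convolutions project to a common channel count (needed for a shape-compatible sum) are illustrative, not patent text:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FusionModuleA(nn.Module):
        """Fusion module a: 3x3 conv + ReLU on both inputs, fuse, then 1x1 + ReLU."""
        def __init__(self, high_channels: int, low_channels: int, out_channels: int):
            super().__init__()
            # 3x3 convolutions applied before fusion; both project to out_channels
            # so that element-wise summation is shape-compatible (an assumption)
            self.conv_high = nn.Conv2d(high_channels, out_channels, 3, padding=1)
            self.conv_low = nn.Conv2d(low_channels, out_channels, 3, padding=1)
            # 1x1 dimension-reduction convolution applied after fusion
            self.reduce = nn.Conv2d(out_channels, out_channels, 1)

        def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
            # upsample the high-level map to the low-level spatial size
            high = F.interpolate(high, size=low.shape[-2:],
                                 mode="bilinear", align_corners=False)
            high = F.relu(self.conv_high(high))      # high-level map to be fused
            low = F.relu(self.conv_low(low))         # low-level map to be fused
            return F.relu(self.reduce(high + low))   # element-wise sum, 1x1, ReLU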
Fusion module b is then designed. As shown in fig. 4, fusion module b upsamples the high-level feature map to obtain an upsampled high-level feature map. The low-level feature map is reduced in dimension by a 1×1 convolution and activated with a ReLU to obtain the dimension-reduced low-level feature map. A feature fusion operation, namely concatenation or element-wise summation, then yields the fused high/low-level feature map; finally, a 3×3 convolution reduces the aliasing effect and a ReLU activation produces the fully fused high/low-level feature map. A corresponding sketch follows.
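A hypothetical PyTorch sketch of fusion module b under the same caveats, wired for the element-wise-sum-plus-batch-normalization strategy selected below; the channel counts in the usage example assume the standard SSD300 configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FusionModuleB(nn.Module):
        """Fusion module b: upsample high, 1x1-reduce low, sum + BN, 3x3 + ReLU."""
        def __init__(self, high_channels: int, low_channels: int):
            super().__init__()
            # 1x1 convolution reduces the low-level map to the high-level width
            self.reduce = nn.Conv2d(low_channels, high_channels, 1)
            # batch normalization after the element-wise sum (fusion strategy)
            self.bn = nn.BatchNorm2d(high_channels)
            # 3x3 convolution to reduce the aliasing introduced by upsampling
            self.smooth = nn.Conv2d(high_channels, high_channels, 3, padding=1)

        def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
            high = F.interpolate(high, size=low.shape[-2:],
                                 mode="bilinear", align_corners=False)
            low = F.relu(self.reduce(low))       # dimension-reduced low-level map
            fused = self.bn(high + low)          # element-wise sum, then batch norm
            return F.relu(self.smooth(fused))    # fully fused high/low-level map

    # Example: fuse Conv7_2 (256 ch, 5x5 in SSD300) into fc7 (1024 ch, 19x19) -> fc7_ff
    fuse = FusionModuleB(high_channels=256, low_channels=1024)
    fc7_ff = fuse(torch.randn(2, 256, 5, 5), torch.randn(2, 1024, 19, 19))
    print(fc7_ff.shape)  # torch.Size([2, 256, 19, 19])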
Fusion modules a and b were tested on the VOC2007 test dataset. With fusion module b as the feature fusion connection module, accuracy improves by 0.3% over fusion module a, giving the network better performance, so fusion module b is selected as the feature fusion connection module in this embodiment.
3) Select a fusion strategy, comprising the following steps:
S3.1, use concatenation or element-wise summation as the fusion mode during feature fusion; element-wise summation gives the network better performance;
S3.2, apply batch normalization after the concatenation/summation so that the feature fusion operation is more thorough; a minimal sketch of the chosen strategy follows this list.
4) Select an upsampling mode, comprising the following steps:
S4.1, choose deconvolution, dilated convolution, or bilinear interpolation as the upsampling mode;
S4.2, these upsampling modes were tested on the VOC2007 test dataset; bilinear interpolation improves accuracy by 0.6% over deconvolution and dilated convolution, so bilinear interpolation is selected as the upsampling mode (see the sketch below).
5) The preceding four steps yield the SSD-based multi-scale feature map layer skip fusion structure.
6) Train the model from step 5) with an integrated multi-view strategy to obtain the multi-view SSD improved model structure based on multi-scale feature map jump fusion shown in fig. 5, comprising the following steps:
s6.1, taking samples of all angles of a target as different categories, selecting three typical angles which are respectively a front face, a side face and a back face, and adding background categories at the same time to achieve the effects of reducing false detection rate and increasing model robustness;
s6.2, using the multi-view multi-classification training sample for training of SKIPSSD to obtain a multi-view SSD improved model based on multi-scale feature map jump fusion.
In summary, the application performs feature fusion through skip connections between the multi-scale feature maps; by fusing high-level semantics with low-level information, the network makes full use of both high-level and low-level features, improving the model's sensitivity and perceptibility to small targets and its overall detection performance. Through the multi-view multi-classification strategy, accurate detection of target categories in highly dynamic scenes is achieved. The method improves the detection performance of the SSD algorithm in highly dynamic scenes in terms of speed, practicality and robustness; it is also applicable to other deep learning networks such as YOLO, and has high practical value and broad application prospects.

Claims (3)

1. A deep learning network improvement method based on multi-scale feature map jump fusion, applied to the field of target detection, characterized by comprising the following steps:
constructing a feature fusion network based on convolution layers, wherein the feature fusion network is a feature map layer skip connection comprising, in order, the fused convolution layer Conv4_3_ff, the fused fully connected layer fc7_ff, the fused convolution layers Conv6_2_ff and Conv7_2_ff, and the convolution layers Conv8_2 and Conv9_2;
the method comprises the steps of designing a feature fusion connection module, wherein the feature fusion connection module carries out up-sampling on a high-level feature image to obtain an up-sampled high-level feature image, then carries out 1X 1 convolution kernel dimension reduction and linear rectification function activation on a low-level feature image to obtain a dimension reduced low-level feature image, then carries out feature fusion operation, namely splicing or element summation to obtain a spliced/element summation high-low-level feature image, and finally carries out 3X 3 convolution kernel convolution operation to reduce an aliasing effect, and then activates a linear rectification function to obtain a completely fused high-low-level feature image;
selecting a fusion strategy and an upsampling mode to obtain the SSD-based multi-scale feature map layer skip fusion structure SKIPSSD, wherein the convolution layer Conv9_2 in the SSD is upsampled and then fused with Conv7_2 to obtain the fused convolution layer Conv7_2_ff, Conv8_2 is upsampled and then fused with Conv6_2 to obtain the fused convolution layer Conv6_2_ff, Conv7_2 is upsampled and then fused with the fully connected layer fc7 to obtain the fused fully connected layer fc7_ff, and Conv6_2 is upsampled and then fused with Conv4_3 to obtain the fused convolution layer Conv4_3_ff; the fused layers Conv4_3_ff, fc7_ff, Conv6_2_ff and Conv7_2_ff, together with Conv8_2 and Conv9_2 of the original SSD, serve as the multi-scale prediction feature maps of SKIPSSD;
training the multi-scale feature map layer skip fusion structure with an integrated multi-view strategy, comprising the following steps:
S6.1, treating samples of the target at different view angles as different categories, selecting three typical angles, namely front, side and back, and adding a background category to reduce the false detection rate and increase model robustness;
S6.2, using the multi-view multi-classification training samples to train SKIPSSD, obtaining a multi-view SSD improved model based on multi-scale feature map jump fusion.
2. The method for improving a deep learning network based on multi-scale feature map jump fusion according to claim 1, wherein the fusion strategy is element-wise summation followed by batch normalization.
3. The method for improving a deep learning network based on multi-scale feature map jump fusion according to claim 1, wherein the upsampling mode is bilinear interpolation.
CN201910566224.1A 2019-06-27 2019-06-27 Deep learning network improvement method based on multi-scale feature map jump fusion Active CN110378398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910566224.1A CN110378398B (en) 2019-06-27 2019-06-27 Deep learning network improvement method based on multi-scale feature map jump fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910566224.1A CN110378398B (en) 2019-06-27 2019-06-27 Deep learning network improvement method based on multi-scale feature map jump fusion

Publications (2)

Publication Number Publication Date
CN110378398A CN110378398A (en) 2019-10-25
CN110378398B (en) 2023-08-25

Family

ID=68250974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910566224.1A Active CN110378398B (en) 2019-06-27 2019-06-27 Deep learning network improvement method based on multi-scale feature map jump fusion

Country Status (1)

Country Link
CN (1) CN110378398B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222534B (en) * 2019-11-15 2022-10-11 重庆邮电大学 Single-shot multi-frame detector optimization method based on bidirectional feature fusion and more balanced L1 loss
CN110751134B (en) * 2019-12-23 2020-05-12 长沙智能驾驶研究院有限公司 Target detection method, target detection device, storage medium and computer equipment
CN113496158A (en) * 2020-03-20 2021-10-12 中移(上海)信息通信科技有限公司 Object detection model optimization method, device, equipment and storage medium
CN111476249B (en) * 2020-03-20 2021-02-23 华东师范大学 Construction method of multi-scale large-receptive-field convolutional neural network
CN112070070B (en) * 2020-11-10 2021-02-09 南京信息工程大学 LW-CNN method and system for urban remote sensing scene recognition
CN117828407B (en) * 2024-03-04 2024-05-14 江西师范大学 Double-stage gating attention time sequence classification method and system for bidirectional jump storage

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190156144A1 (en) * 2017-02-23 2019-05-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device
CN109118539A (en) * 2018-07-16 2019-01-01 深圳辰视智能科技有限公司 Point cloud and picture fusion method, device and its equipment based on Analysis On Multi-scale Features
CN109800628A (en) * 2018-12-04 2019-05-24 华南理工大学 A kind of network structure and detection method for reinforcing SSD Small object pedestrian detection performance

Also Published As

Publication number Publication date
CN110378398A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN110378398B (en) Deep learning network improvement method based on multi-scale feature map jump fusion
CN110188705B (en) Remote traffic sign detection and identification method suitable for vehicle-mounted system
CN111524135B (en) Method and system for detecting defects of tiny hardware fittings of power transmission line based on image enhancement
CN110738697A (en) Monocular depth estimation method based on deep learning
CN113392960B (en) Target detection network and method based on mixed hole convolution pyramid
CN111461083A (en) Rapid vehicle detection method based on deep learning
CN111611861B (en) Image change detection method based on multi-scale feature association
CN111523439B (en) Method, system, device and medium for target detection based on deep learning
CN113436210B (en) Road image segmentation method fusing context progressive sampling
CN116503318A (en) Aerial insulator multi-defect detection method, system and equipment integrating CAT-BiFPN and attention mechanism
CN113222824B (en) Infrared image super-resolution and small target detection method
CN115908772A (en) Target detection method and system based on Transformer and fusion attention mechanism
CN111860411A (en) Road scene semantic segmentation method based on attention residual error learning
CN113052057A (en) Traffic sign identification method based on improved convolutional neural network
CN116597326A (en) Unmanned aerial vehicle aerial photography small target detection method based on improved YOLOv7 algorithm
CN112699889A (en) Unmanned real-time road scene semantic segmentation method based on multi-task supervision
CN114529462A (en) Millimeter wave image target detection method and system based on improved YOLO V3-Tiny
CN113066089A (en) Real-time image semantic segmentation network based on attention guide mechanism
Meng et al. A block object detection method based on feature fusion networks for autonomous vehicles
CN116645598A (en) Remote sensing image semantic segmentation method based on channel attention feature fusion
CN115082798A (en) Power transmission line pin defect detection method based on dynamic receptive field
CN112633123B (en) Heterogeneous remote sensing image change detection method and device based on deep learning
CN113111740A (en) Characteristic weaving method for remote sensing image target detection
CN117456330A (en) MSFAF-Net-based low-illumination target detection method
CN116452900A (en) Target detection method based on lightweight neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant