CN111160100A - Lightweight depth model aerial photography vehicle detection method based on sample generation

Lightweight depth model aerial photography vehicle detection method based on sample generation

Info

Publication number
CN111160100A
Authority
CN
China
Prior art keywords
vehicle
network
image
lightweight
aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911200419.0A
Other languages
Chinese (zh)
Inventor
刘宁钟
白瑜颖
沈家全
后弘毅
陆保国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201911200419.0A
Publication of CN111160100A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Abstract

The invention discloses a lightweight depth model aerial photography vehicle detection method based on sample generation, which belongs to the field of computer vision and can reduce the amount of sample data required for aerial vehicle detection while improving precision and speed. The method comprises the following steps: first, a number of aerial sample pictures are collected and the vehicles in the images are labeled; then random noise is added to real background images, and a multi-condition-constrained generative adversarial network generates new aerial vehicle picture samples fused with the background, which together with the original data form the aerial vehicle sample data; the generated vehicle sample data is then fed into a lightweight convolutional neural network designed for small targets and trained until the network converges, yielding a weight file; finally, the trained lightweight convolutional neural network and weight file can be used to detect vehicles in aerial images.

Description

Lightweight depth model aerial photography vehicle detection method based on sample generation
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a lightweight depth model aerial photography vehicle detection method based on sample generation.
Background
As productivity has developed, the number of vehicles has grown day by day and traffic problems have become more serious, severely affecting residents' normal travel. Traffic-dispersal work is therefore increasingly important, and its most critical part is monitoring road traffic conditions. Although fixed electronic cameras can currently alleviate traffic problems to a certain extent, such equipment cannot be moved and therefore cannot visually display the traffic condition of an entire street. The flexibility of aerial photography can instead be used to identify and locate vehicles, which offers great advantages in monitoring road traffic conditions.
Current vehicle detection algorithms fall mainly into feature-based algorithms and algorithms based on optical flow or inter-frame difference. Feature-based algorithms are further divided into those using hand-crafted features and those using deep-learning features, and they apply to both moving and stationary vehicles, whereas optical-flow and inter-frame-difference algorithms are mainly used to detect moving vehicles. Detectors based on hand-crafted features mostly use a sliding window to search the image for targets matching the features. Liu Kang et al. proposed an efficient multi-class aerial vehicle detection algorithm that locates vehicles with a fast binary sliding window and then refines the vehicle class with an AdaBoost classifier using a soft-connection structure; it detects vehicles effectively, but the hand-crafted features limit the model's generalization ability, and the sliding window consumes a large amount of computation. Deep-learning-based methods, in contrast, mostly adopt deep convolutional neural networks and rely on large amounts of labeled data. Shaoqing Ren et al. proposed the Faster R-CNN detection algorithm, which combines feature extraction, ROI extraction, coordinate regression, and classification in a single convolutional neural network and greatly improves detection speed; however, it is difficult to apply directly to aerial vehicle detection, where images are large and targets small, and it also requires a large amount of labeled data. Existing methods therefore suffer mainly from the following drawbacks: methods based on hand-crafted features have weak generalization ability and high time consumption, while deep-learning methods depend on large numbers of samples and involve heavy computation.
Disclosure of Invention
The invention provides a lightweight depth model aerial photography vehicle detection method based on sample generation, which reduces the required amount of sample data and improves detection speed.
In order to achieve the purpose, the invention adopts the following technical scheme:
a lightweight depth model aerial photography vehicle detection method based on sample generation comprises the following steps:
(1) An image acquisition process: acquiring aerial images of vehicles, and preprocessing the images and labeling the vehicles in them;
(2) A generative adversarial network generation process: generating new vehicle images through a generative adversarial network, which together with the pictures obtained in step (1) form a vehicle database;
(3) A lightweight network training process: feeding the vehicle database obtained in step (2) into a lightweight convolutional neural network for training until the network converges;
(4) A test image detection process: detecting vehicle targets in the test images using the lightweight network and weight file trained in step (3), and outputting the detection results.
Further, the preprocessing and labeling in step (1) comprises: cleaning the acquired images by filtering out pictures that are blurred or overexposed, contain no vehicle targets, or otherwise fail to meet the requirements, and then labeling the vehicle targets in the remaining images;
Further, the generative adversarial network in step (2) is a multi-condition-constrained generative adversarial network, Mc-GAN, which, compared with other generative adversarial networks, constrains the generator's image generation by adding discriminators and applying a different loss function to each discriminator;
Further, generating a new vehicle image through the generative adversarial network in step (2) specifically comprises the following steps:
(21) adding random noise to the aerial vehicle image, and generating a vehicle image at the noise position through the generator in the Mc-GAN;
(22) forming image pairs from the generated vehicle images and the real images, and feeding them into a vehicle discriminator and a background discriminator to judge how realistic the generated image is and how well it fuses with the background;
(23) training the background discriminator with a least-squares loss function
L_{lsgan}(D_b, G) = E[(D_b(y) - 1)^2] + E[(D_b(G(x)))^2];
training the vehicle discriminator with a cross-entropy loss function
L_{gan}(D_c, G) = E[\log D_c(y_c)] + E[\log(1 - D_c(G(z)))];
adding an L1 regularization constraint to the generator
L_{l1}(G) = E(\|y - G(x)\|_1);
the final loss function of the generator being
L(G, D_b, D_c) = L_{lsgan}(D_b, G) + L_{gan}(D_c, G) + \lambda L_{l1}(G);
(24) iteratively optimizing the discriminators and the generator through the loss functions until they reach an equilibrium point, i.e. until the generator can produce sufficiently realistic pictures;
(25) adding a large amount of random noise to the aerial images and feeding them into the trained generator to generate a large number of vehicle sample images fused with the background;
Further, the generator in step (21) uses U-Net as its network structure; the whole network is a fully convolutional neural network with 23 convolutional layers, its input and output are both images, and it follows an encoder-decoder structure;
Further, the discriminator in step (22) uses a five-layer convolutional network for feature extraction, with a spatial pyramid pooling layer to obtain pooled features of fixed length;
Further, step (3) specifically comprises the following steps:
(31) feeding the vehicle database into the lightweight network, using ImageNet pre-trained weights as the initial weights, with the learning rate set to 0.0001, the number of iterations set to 200,000, and the batch size set to 256;
(32) performing feature extraction on the training samples through the lightweight network to obtain a convolution feature map;
(33) predicting on the convolution feature map with an improved Faster R-CNN algorithm to obtain bounding-box and classification predictions;
(34) stopping training when the loss function converges or the maximum number of iterations is reached, obtaining a network file and a weight file usable for aerial vehicle detection.
Further, the lightweight network used in step (32) is composed of a stem block and stack blocks; the stem block uses a propagation mechanism with two parallel channels, one channel using a 1 × 1 convolution kernel and the other using 1 × 1 and 3 × 3 convolution kernels, after which the results of the two channels are feature-fused and fed into the stack block, which uses three-way feature fusion to fuse feature maps of different scales.
Further, step (4) specifically comprises the following steps:
(41) feeding the test image into the lightweight network to obtain a convolution feature map;
(42) processing the convolution feature map with the Faster R-CNN algorithm and outputting predicted bounding values and classification values;
(43) setting a threshold and filtering out the final detection results through non-maximum suppression.
Beneficial effects: the invention provides a lightweight depth model aerial photography vehicle detection method based on sample generation; the multi-condition-constrained generative adversarial network can generate a large number of vehicle pictures fused with a real background, reducing the required amount of sample data. The method is simple, accurate, fast, and robust; it solves the problem of fusing targets generated by a generative adversarial network with the background, and effectively mitigates the impact of heavy neural-network computation and large sample demand on aerial-image vehicle detection.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a flow chart of step 2 of the present invention;
FIG. 3 is a flow chart of step 3 of the present invention;
FIG. 4 is a flow chart of step 4 of the present invention;
FIG. 5 is a diagram of vehicles generated by the generative adversarial network in a real aerial photography scene;
FIG. 6 is an aerial vehicle image;
FIG. 7 is a graph of vehicle test results using the method of the present invention.
Detailed Description
The invention is described in detail below with reference to the following figures and specific examples:
the method for detecting the aerial photography vehicle based on the lightweight depth model generated by the samples as shown in FIG. 1 comprises the following steps:
step 1: acquiring aerial images aiming at vehicles, and preprocessing and labeling the vehicles in the aerial images;
step 2: generating a new vehicle image through a generation countermeasure network, and forming a vehicle database together with the acquired pictures;
and step 3: sending the obtained vehicle database into a lightweight convolution neural network for training until the network is converged;
and 4, step 4: and detecting the vehicle target in the test image by using the trained lightweight network and the weight file, and outputting a detection result.
The following scheme is adopted in step 1:
Preprocessing the acquired aerial images: cleaning the acquired images by filtering out photos that are blurred or overexposed, contain no vehicle targets, show incomplete vehicles, or otherwise fail to meet the requirements, and then labeling the vehicle targets in the remaining images.
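A minimal sketch of such a cleaning pass is given below. The patent does not specify the filtering criteria, so the variance-of-Laplacian blur measure, the mean-brightness overexposure check, and both thresholds are assumptions for illustration; photos with missing or incomplete vehicles would still need manual review.

```python
# Hypothetical cleaning pass; the blur and exposure criteria below are
# assumptions, as the patent does not state how unusable photos are identified.
import cv2
import numpy as np

def is_usable(path, blur_thresh=100.0, bright_thresh=230.0):
    """Reject blurred or overexposed photos before labeling."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False
    blur_score = cv2.Laplacian(img, cv2.CV_64F).var()  # low variance => blurry
    mean_brightness = float(np.mean(img))              # near 255 => overexposed
    return blur_score >= blur_thresh and mean_brightness <= bright_thresh
```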
The following scheme is adopted in step 2:
First, the generative adversarial network is a multi-condition-constrained generative adversarial network, Mc-GAN, which, compared with other generative adversarial networks, constrains the generator's image generation by adding discriminators and applying a different loss function to each discriminator.
As shown in FIG. 2, step 2 specifically comprises the following steps:
Step 201: adding random noise to the aerial vehicle image, and generating a vehicle image at the noise position through the generator in the Mc-GAN, where the generator uses U-Net as its network structure and follows an encoder-decoder structure;
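A compact PyTorch sketch of such an encoder-decoder generator follows. The depth and channel widths here are illustrative assumptions kept small for readability; the patent's generator is a fully convolutional U-Net with 23 convolutional layers in total.

```python
# Sketch of a U-Net-style generator (PyTorch); depth and widths are illustrative.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UNetGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.out = nn.Conv2d(64, 3, 1)   # image in, image out: fully convolutional

    def forward(self, x):                # x: aerial image with injected noise
        e1 = self.enc1(x)                # H x W
        e2 = self.enc2(self.pool(e1))    # H/2 x W/2
        m = self.mid(self.pool(e2))      # H/4 x W/4
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.tanh(self.out(d1))  # generated image in [-1, 1]
```

The skip connections concatenate encoder features with decoder features at matching resolutions, which helps the generator preserve the surrounding background while synthesizing a vehicle at the noise position.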
Step 202: forming image pairs from the generated vehicle images and the real images, and feeding them into a vehicle discriminator and a background discriminator to judge how realistic the generated image is and how well it fuses with the background.
The background discriminator is trained with a least-squares loss function:
L_{lsgan}(D_b, G) = E[(D_b(y) - 1)^2] + E[(D_b(G(x)))^2];
the vehicle discriminator is trained with a cross-entropy loss function:
L_{gan}(D_c, G) = E[\log D_c(y_c)] + E[\log(1 - D_c(G(z)))];
an L1 regularization constraint is added to the generator:
L_{l1}(G) = E(\|y - G(x)\|_1);
the final loss function of the generator is:
L(G, D_b, D_c) = L_{lsgan}(D_b, G) + L_{gan}(D_c, G) + \lambda L_{l1}(G);
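The PyTorch sketch below instantiates these loss terms. How the combined objective is split between discriminator and generator updates is not spelled out above, so the split follows standard GAN practice and is an assumption, as are the variable names (x: noised input, y: real image, y_c: real vehicle patch) and the premise that both discriminators output probabilities.

```python
# Sketch of the multi-condition losses; the generator/discriminator split of the
# min-max objective is an assumption based on standard GAN practice.
import torch

def mc_gan_losses(G, D_b, D_c, x, y, y_c, lam=100.0):
    """x: noised aerial image, y: real target image, y_c: real vehicle patch."""
    fake = G(x)
    # L_lsgan: least-squares loss for the background discriminator D_b.
    loss_db = torch.mean((D_b(y) - 1) ** 2) + torch.mean(D_b(fake.detach()) ** 2)
    # L_gan: cross-entropy loss for the vehicle discriminator D_c
    # (D_c is assumed to output probabilities in (0, 1)).
    loss_dc = -(torch.log(D_c(y_c) + 1e-8).mean()
                + torch.log(1 - D_c(fake.detach()) + 1e-8).mean())
    # Generator objective: fool both discriminators (non-saturating form for
    # D_c), plus the L1 term L_l1 weighted by lambda.
    loss_g = (torch.mean((D_b(fake) - 1) ** 2)
              - torch.log(D_c(fake) + 1e-8).mean()
              + lam * torch.mean(torch.abs(y - fake)))
    return loss_g, loss_db, loss_dc
```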
The vehicle discriminator uses a five-layer convolutional network for feature extraction, with a spatial pyramid pooling layer to obtain pooled features of fixed length.
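Spatial pyramid pooling pools the feature map over several fixed grids and concatenates the results, so the output length does not depend on the input resolution. In the sketch below the pyramid levels (1, 2, 4) are an assumption; the description states only that the layer yields fixed-length pooled features.

```python
# Spatial pyramid pooling sketch; the pyramid levels (1, 2, 4) are assumed.
import torch
import torch.nn.functional as F

def spp(features, levels=(1, 2, 4)):
    """(N, C, H, W) -> (N, C * sum(l * l for l in levels)), for any H, W."""
    n = features.size(0)
    pooled = [F.adaptive_max_pool2d(features, l).reshape(n, -1) for l in levels]
    return torch.cat(pooled, dim=1)
```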
Step 203: iterating continuously until the arbiter and the generator reach a balance point through the loss function iteration optimization arbiter and the generator, namely the generator can generate a sufficiently real picture;
Step 204: adding a large amount of random noise to the aerial images and feeding them into the trained generator, generating a large number of vehicle sample images fused with the background, which together with the acquired images form the image database.
As shown in FIG. 3, step 3 specifically comprises the following steps:
Step 301: feeding the vehicle database into the lightweight network, using ImageNet pre-trained weights as the initial weights, with the learning rate set to 0.0001, the number of iterations set to 200,000, and the batch size set to 256;
Step 302: performing feature extraction on the training samples through the lightweight network to obtain a convolution feature map. The lightweight network is composed of a stem block and stack blocks: the stem block uses a propagation mechanism with two parallel channels, one channel using a 1 × 1 convolution kernel and the other using 1 × 1 and 3 × 3 convolution kernels; the results of the two channels are then feature-fused and fed into the stack block, which uses three-way feature fusion to fuse feature maps of different scales;
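A PyTorch sketch of these two blocks follows. The channel counts, the fusion convolutions, and the choice of pooling/upsampling for aligning scales in the stack block are assumptions; the description fixes only the kernel layout of the two stem channels and the three-way fusion.

```python
# Sketch of the stem and stack blocks; channel counts and scale alignment are assumed.
import torch
import torch.nn as nn

class StemBlock(nn.Module):
    """Two parallel channels (1x1, and 1x1 followed by 3x3), then feature fusion."""
    def __init__(self, cin=3, cout=32):
        super().__init__()
        self.branch1 = nn.Conv2d(cin, cout, kernel_size=1)
        self.branch2 = nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=1),
            nn.Conv2d(cout, cout, kernel_size=3, padding=1))
        self.fuse = nn.Conv2d(2 * cout, cout, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.branch1(x), self.branch2(x)], dim=1))

class StackBlock(nn.Module):
    """Three-way fusion of feature maps at different scales (all with c channels)."""
    def __init__(self, c):
        super().__init__()
        self.down = nn.MaxPool2d(2)                            # shrink the finer map
        self.up = nn.Upsample(scale_factor=2, mode='nearest')  # grow the coarser map
        self.fuse = nn.Conv2d(3 * c, c, kernel_size=1)

    def forward(self, f_coarse, f_mid, f_fine):
        # Resample the coarse and fine maps to the middle scale, then fuse.
        return self.fuse(torch.cat(
            [self.up(f_coarse), f_mid, self.down(f_fine)], dim=1))
```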
Step 303: predicting on the convolution feature map with an improved Faster R-CNN algorithm to obtain bounding-box and classification predictions;
Step 304: stopping training when the loss function converges or the maximum number of iterations is reached, obtaining a network file and a weight file usable for aerial vehicle detection.
As shown in FIG. 4, step 4 specifically comprises the following steps:
Step 401: feeding the test image into the lightweight network to obtain a convolution feature map;
Step 402: processing the convolution feature map with the Faster R-CNN algorithm and outputting predicted bounding values and classification values;
Step 403: setting the threshold to 0.5 and filtering out the final detection results through non-maximum suppression.
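A sketch of this post-processing step, assuming the 0.5 threshold applies to the classification score and using torchvision's standard NMS routine (the IoU threshold is a further assumption):

```python
# Score thresholding followed by non-maximum suppression; score_thresh follows
# step 403, while iou_thresh is an assumed value.
import torch
from torchvision.ops import nms

def filter_detections(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,). Returns kept boxes/scores."""
    keep = scores >= score_thresh            # drop low-confidence predictions
    boxes, scores = boxes[keep], scores[keep]
    kept = nms(boxes, scores, iou_thresh)    # suppress overlapping boxes
    return boxes[kept], scores[kept]
```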
FIG. 5 shows vehicles generated by the generative adversarial network in a real aerial photography scene; FIG. 6 and FIG. 7 show an aerial vehicle image and the detection results obtained with the method of the invention, respectively. Testing shows that the invention can generate vehicle images fused into a real environment, achieves 89.7% vehicle detection accuracy while requiring only a small number of samples, can detect and identify vehicles of many different types and brands, is robust to external influences such as illumination and distance scale, and is suitable for detecting multiple vehicles at once.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A lightweight depth model aerial photography vehicle detection method based on sample generation, characterized by comprising the following steps:
(1) an image acquisition process: acquiring aerial images of vehicles, and preprocessing the images and labeling the vehicles in them;
(2) a generative adversarial network generation process: generating new vehicle images through a generative adversarial network, which together with the pictures obtained in step (1) form a vehicle database;
(3) a lightweight network training process: feeding the vehicle database obtained in step (2) into a lightweight convolutional neural network for training until the network converges;
(4) a test image detection process: detecting vehicle targets in the test images using the lightweight network and weight file trained in step (3), and outputting the detection results.
2. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 1, wherein the preprocessing and labeling in step (1) comprises: cleaning the acquired images by filtering out pictures that are blurred or overexposed, contain no vehicle targets, or otherwise fail to meet the requirements, and then labeling the vehicle targets in the remaining images.
3. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 1, wherein the generative adversarial network in step (2) is a multi-condition-constrained generative adversarial network, Mc-GAN, which constrains the generator's image generation by adding discriminators and applying a different loss function to each discriminator.
4. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 1 or 3, wherein generating a new vehicle image through the generative adversarial network in step (2) specifically comprises the following steps:
(21) adding random noise to the aerial vehicle image, and generating a vehicle image at the noise position through the generator in the Mc-GAN;
(22) forming image pairs from the generated vehicle images and the real images, and feeding them into a vehicle discriminator and a background discriminator to judge how realistic the generated image is and how well it fuses with the background;
(23) training the background discriminator with a least-squares loss function
L_{lsgan}(D_b, G) = E[(D_b(y) - 1)^2] + E[(D_b(G(x)))^2];
training the vehicle discriminator with a cross-entropy loss function
L_{gan}(D_c, G) = E[\log D_c(y_c)] + E[\log(1 - D_c(G(z)))];
adding an L1 regularization constraint to the generator
L_{l1}(G) = E(\|y - G(x)\|_1);
the final loss function of the generator being
L(G, D_b, D_c) = L_{lsgan}(D_b, G) + L_{gan}(D_c, G) + \lambda L_{l1}(G);
(24) iteratively optimizing the discriminators and the generator through the loss functions until they reach an equilibrium point, i.e. until the generator can produce sufficiently realistic pictures;
(25) adding a large amount of random noise to the aerial images and feeding them into the trained generator to generate a large number of vehicle sample images fused with the background.
5. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 4, wherein the generator in step (21) uses U-Net as its network structure; the whole network is a fully convolutional neural network with 23 convolutional layers, its input and output are both images, and it follows an encoder-decoder structure.
6. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 4, wherein the discriminator in step (22) uses a five-layer convolutional network for feature extraction, with a spatial pyramid pooling layer to obtain pooled features of fixed length.
7. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 1, wherein step (3) specifically comprises the following steps:
(31) feeding the vehicle database into the lightweight network, using ImageNet pre-trained weights as the initial weights, with the learning rate set to 0.0001, the number of iterations set to 200,000, and the batch size set to 256;
(32) performing feature extraction on the training samples through the lightweight network to obtain a convolution feature map;
(33) predicting on the convolution feature map with an improved Faster R-CNN algorithm to obtain bounding-box and classification predictions;
(34) stopping training when the loss function converges or the maximum number of iterations is reached, obtaining a network file and a weight file usable for aerial vehicle detection.
8. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 7, wherein the lightweight network in step (32) is composed of a stem block and stack blocks; the stem block uses a propagation mechanism with two parallel channels, one channel using a 1 × 1 convolution kernel and the other using 1 × 1 and 3 × 3 convolution kernels, after which the results of the two channels are feature-fused and fed into the stack block, which uses three-way feature fusion to fuse feature maps of different scales.
9. The lightweight depth model aerial photography vehicle detection method based on sample generation according to claim 1, wherein step (4) specifically comprises the following steps:
(41) feeding the test image into the lightweight network to obtain a convolution feature map;
(42) processing the convolution feature map with the Faster R-CNN algorithm and outputting predicted bounding values and classification values;
(43) setting a threshold and filtering out the final detection results through non-maximum suppression.
CN201911200419.0A 2019-11-29 2019-11-29 Lightweight depth model aerial photography vehicle detection method based on sample generation Pending CN111160100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911200419.0A CN111160100A (en) 2019-11-29 2019-11-29 Lightweight depth model aerial photography vehicle detection method based on sample generation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911200419.0A CN111160100A (en) 2019-11-29 2019-11-29 Lightweight depth model aerial photography vehicle detection method based on sample generation

Publications (1)

Publication Number Publication Date
CN111160100A (en) 2020-05-15

Family

ID=70556262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911200419.0A Pending CN111160100A (en) 2019-11-29 2019-11-29 Lightweight depth model aerial photography vehicle detection method based on sample generation

Country Status (1)

Country Link
CN (1) CN111160100A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190236411A1 (en) * 2016-09-14 2019-08-01 Konica Minolta Laboratory U.S.A., Inc. Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
AU2018102037A4 (en) * 2018-12-09 2019-01-17 Ge, Jiahao Mr A method of recognition of vehicle type based on deep learning
CN110175524A (en) * 2019-04-26 2019-08-27 南京航空航天大学 A kind of quick vehicle checking method of accurately taking photo by plane based on lightweight depth convolutional network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陶晓力 (Tao Xiaoli), 刘宁钟 (Liu Ningzhong): "Vehicle Generation in Aerial Photography Scenes" (航拍场景下的车辆生成) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614053A (en) * 2020-12-25 2021-04-06 哈尔滨市科佳通用机电股份有限公司 Method and system for generating multiple images based on single image of antagonistic neural network
WO2022179088A1 (en) * 2021-02-25 2022-09-01 华为技术有限公司 Data processing method and apparatus, and system
CN117218613A (en) * 2023-11-09 2023-12-12 中远海运特种运输股份有限公司 Vehicle snapshot recognition system and method
CN117218613B (en) * 2023-11-09 2024-03-19 中远海运特种运输股份有限公司 Vehicle snapshot recognition system and method

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN106875373B (en) Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm
Rijal et al. Ensemble of deep neural networks for estimating particulate matter from images
CN106096561B (en) Infrared pedestrian detection method based on image block deep learning features
CN109241982B (en) Target detection method based on deep and shallow layer convolutional neural network
CN111583229B (en) Road surface fault detection method based on convolutional neural network
CN113436169B (en) Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN111723748A (en) Infrared remote sensing image ship detection method
CN107123111B (en) Deep residual error network construction method for mobile phone screen defect detection
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
CN109376580B (en) Electric power tower component identification method based on deep learning
CN113536972B (en) Self-supervision cross-domain crowd counting method based on target domain pseudo label
CN113111727A (en) Method for detecting rotating target in remote sensing scene based on feature alignment
CN111223087B (en) Automatic bridge crack detection method based on generation countermeasure network
CN114627383A (en) Small sample defect detection method based on metric learning
CN110033481A (en) Method and apparatus for carrying out image procossing
CN114972316A (en) Battery case end surface defect real-time detection method based on improved YOLOv5
CN111563525A (en) Moving target detection method based on YOLOv3-Tiny
Babu et al. An efficient image dahazing using Googlenet based convolution neural networks
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN114596244A (en) Infrared image identification method and system based on visual processing and multi-feature fusion
CN112488043A (en) Unmanned aerial vehicle target detection method based on edge intelligence
CN116363535A (en) Ship detection method in unmanned aerial vehicle aerial image based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination