CN111126359A - High-definition image small target detection method based on self-encoder and YOLO algorithm - Google Patents

High-definition image small target detection method based on self-encoder and YOLO algorithm

Info

Publication number
CN111126359A
CN111126359A (application CN202010143805.7A)
Authority
CN
China
Prior art keywords
network
data
yolo
encoder
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010143805.7A
Other languages
Chinese (zh)
Other versions
CN111126359B (en)
Inventor
吴宪云
孙力
李云松
王柯俨
刘凯
雷杰
郭杰
苏丽雪
王康
司鹏辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yixin Yiyi Information Technology Co ltd
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Publication of CN111126359A publication Critical patent/CN111126359A/en
Application granted granted Critical
Publication of CN111126359B publication Critical patent/CN111126359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-definition image small-target detection method based on an autoencoder and the YOLO algorithm, mainly solving the problem that the prior art cannot achieve both accuracy and speed when detecting small targets in high-definition images. The implementation steps are: 1) acquire and label high-definition images to obtain a training set and a test set; 2) perform data expansion on the labeled training set; 3) generate corresponding Mask data from the labeling information; 4) build a self-encoder model; 5) train it with the training set; 6) splice the coding network of the trained self-encoder with a YOLO-V3 detection network to obtain a hybrid network and train the hybrid network with the training set; 7) perform target detection on the test set with the trained hybrid network. The method reduces the computation required for target detection, raises the detection speed, and improves the detection precision of small targets in high-definition images while maintaining that speed; it can be used for target recognition in aerial images captured by unmanned aerial vehicles.

Description

High-definition image small target detection method based on self-encoder and YOLO algorithm
Technical Field
The invention belongs to the technical field of target detection, and particularly relates to a method for detecting small targets in high-definition images, which can be used for target recognition in aerial images captured by unmanned aerial vehicles.
Background Art
Currently, with the development of target detection technology, and especially in recent years, deep-learning-based target detection algorithms such as Faster-RCNN, the SSD series and the YOLO series have been proposed; compared with conventional target detection algorithms, they greatly exceed the conventional methods in both accuracy and efficiency. However, current algorithms are optimized for existing data sets such as ImageNet and COCO. In practical applications such as target detection in unmanned aerial vehicle aerial images, the flying height of the vehicle is large, so the acquired images are large, generally high-definition, and the targets in them are generally small. The present method therefore focuses on small-target detection in high-definition images.
In target detection, there are two main ways to process high-definition images: one is down-sampling (size scaling), the other is image cropping. They are as follows:
Joseph Redmon et al., in the non-patent document "YOLO9000: Better, Faster, Stronger" presented at the IEEE Conference on Computer Vision and Pattern Recognition, proposed an improvement to the YOLO network that lets the network detect input images of different sizes by removing the fully connected layers. In their experiments on the VOC2007+VOC2012 data set, scaling the input image down to 288x288 reaches 91 FPS but only 69.0 mAP, whereas scaling the input to 544x544 reduces the speed to 40 FPS and raises the precision to 78.6 mAP. The experiment shows that detecting targets in large input images inevitably increases the computation and therefore reduces detection speed, while down-sampling also loses target spatial information and therefore reduces detection precision. For small-target detection in high-definition images, sending the full image directly into the network slows detection even more severely, while detecting after size scaling reduces the feature information of small targets and therefore the precision.
The second common mode is image cropping: the original high-definition image is cut into small sub-images, each sub-image is sent into the network for detection, and the results are merged afterwards. Cropping guarantees that the spatial information of the image is not lost and gives good detection precision, but because one image is cut into many images, the detection time grows by the same factor and the detection speed drops accordingly.
In summary, how to perform fast and accurate target detection on a high-definition image in practical application becomes a problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the existing methods by providing a high-definition image small-target detection method based on an autoencoder and the YOLO algorithm, so as to improve the detection precision of small targets in high-definition images without reducing the detection speed.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
(1) collecting high-definition image data to form a data set, labeling the data set to obtain correct label data, and dividing the data set and the label data into a training set and a test set according to a ratio of 8: 2;
(2) carrying out data expansion on the marked training set;
(3) for each piece of high-definition image data, generating target Mask data of a corresponding image according to the size of the image and the labeling information;
(4) building a full convolution self-encoder model comprising an encoding network and a decoding network, wherein the encoding network is used for carrying out feature extraction and data compression on a high-definition image, and the decoding network is used for restoring a compressed feature map to an original size;
(5) sending high-definition image training set data into a full convolution self-encoder model for training to obtain a trained full convolution self-encoder model:
(5a) initializing the offsets of the network to 0, initializing the weight parameters of the network with the Kaiming Gaussian initialization method, and setting the number of self-encoder iterations T1 according to the size of the high-definition image training set;
(5b) The partition-based mean square error loss function is defined as follows:
Mask-MSE-Loss(y, y_) = (1/(W×H)) · Σ_{i=1}^{W} Σ_{j=1}^{H} [α·Mask(i,j) + β·(1 − Mask(i,j))] · (y(i,j) − y_(i,j))²
wherein Mask-MSE-Loss(y, y_) is the loss function to be calculated, y is the output image of the decoder, y_ is the input original high-definition image, α is the loss penalty weight of the target region and is set to 0.9, β is the penalty weight of the background region and is set to 0.1, W is the width of the self-encoder input image, H is the height of the self-encoder input image, and Mask(i,j) is the value of the Mask at position (i,j);
(5c) inputting high-definition image training set data into a full convolution self-coding network, carrying out forward propagation to obtain a coded feature map, and recovering the feature map through a decoder;
(5d) calculating loss values of the input image and the output image by using the partition area-based mean square error loss function defined in the step (5 b);
(5e) updating the weight and the offset of the full convolution self-encoder by using a back propagation algorithm to finish one iteration of training the full convolution self-encoder;
(5f) repeating (5c)-(5e) until all T1 iterations of the self-encoder are completed, obtaining a trained full convolution self-encoder;
(6) splicing the coding network of the trained full-convolution self-encoder with a YOLO-V3 detection network, and training the spliced network:
(6a) splicing the coding network of the trained full-convolution self-encoder to the front of a YOLO-V3 detection network to form a spliced mixed network;
(6b) training the spliced hybrid network:
(6b1) reading parameters of the trained full-convolution self-encoder, initializing the coding network by using the read parameter values, and setting the parameters of the coding network in a non-trainable state;
(6b2) setting the input image size of the YOLO-V3 network to be the same as the input size of the full-convolution self-encoder network;
(6b3) downloading the parameters pre-trained on the ImageNet data set from the official YOLO website, initializing the parameters of the YOLO-V3 network with them, and setting the number of YOLO-V3 iterations T2 according to the size of the data set acquired in step (1);
(6b4) Sending the high-definition image training set data into the spliced hybrid network for forward propagation to obtain an output detection result;
(6b5) calculating a loss value between the output detection result and the correct label data marked in (1) by using a loss function in a YOLO-V3 algorithm;
(6b6) updating the weight and the offset of the hybrid network by using a back propagation algorithm according to the loss value, and completing one iteration of training the hybrid network;
(6b7) repeating (6b4)-(6b6) until all T2 iterations of YOLO-V3 are completed, obtaining a trained hybrid network;
(7) inputting the test set data from step (1) into the trained hybrid network to obtain the final detection result.
Compared with the prior art, the invention has the following advantages:
the invention combines the coding network of the self-encoder with the YOLO-V3 detection network, compresses the high-definition image on the premise of little loss of the target area characteristics through the coding network, and detects the small target of the compressed image through the YOLO-V3 detection network, and the coding network only compresses the background characteristic information and retains the target characteristic information, thereby improving the detection precision of the small target in the high-definition image under the condition of ensuring the detection speed.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a labeling diagram of the high-definition image acquired in the invention;
FIG. 3 is a Mask data diagram generated by labeling information in the present invention;
FIG. 4 is a network architecture of the convolutional auto-encoder of the present invention;
FIG. 5 is a block diagram of an encoder in combination with a YOLO-V3 network in accordance with the present invention;
FIG. 6 is a graph of simulated test results on a test specimen using the present invention;
FIG. 7 is a diagram of the simulation detection results, on the same test sample, of the prior method that down-samples the high-definition image and detects it with YOLO-V3.
Detailed Description
The following describes embodiments and effects of the present invention in further detail with reference to the accompanying drawings; the embodiment detects small sewage-outfall targets in high-definition images captured by an unmanned aerial vehicle.
Referring to fig. 1, the implementation steps of this example include the following:
step 1, collecting high-definition images to obtain a training set and a test set.
Acquiring high-definition image data aerial photographed by an unmanned aerial vehicle, wherein the image width is 1920 pixels, and the image height is 1080 pixels;
performing target annotation on the acquired image data by using a common image annotation tool LabelImg to obtain correct label data, as shown in FIG. 2;
the data set and label data were divided into training and test sets in an 8:2 ratio.
And 2, performing data expansion on the marked data set.
2.1) carrying out left-right turning, rotation, translation, noise addition, brightness adjustment, contrast adjustment and saturation adjustment on each high-definition image collected in the unmanned aerial vehicle aerial photography training set;
2.2) adding the processed image data into the original training data set to obtain an expanded training data set.
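As a concrete illustration of step 2, the following is a minimal sketch of these expansion operations using OpenCV and NumPy. The specific rotation angle, shift, noise level and adjustment factors are assumptions, since the patent only names the operation types; note also that the geometric transforms (flip, rotation, translation) require transforming the bounding-box labels in the same way, which is omitted here.

```python
# Illustrative data-expansion sketch for step 2 (parameters are assumed, not from the patent).
import cv2
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return flipped, rotated, shifted, noisy and brightness/contrast/saturation-adjusted copies."""
    h, w = image.shape[:2]
    flipped = cv2.flip(image, 1)                                                   # left-right flip
    rot = cv2.warpAffine(image, cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0), (w, h))  # rotation
    shift = cv2.warpAffine(image, np.float32([[1, 0, 30], [0, 1, 20]]), (w, h))    # translation
    noisy = np.clip(image + np.random.normal(0, 8, image.shape), 0, 255).astype(np.uint8)  # noise
    bright = cv2.convertScaleAbs(image, alpha=1.0, beta=30)                        # brightness
    contrast = cv2.convertScaleAbs(image, alpha=1.3, beta=0)                       # contrast
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0, 255)                               # saturation
    sat = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    return [flipped, rot, shift, noisy, bright, contrast, sat]
```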
And 3, generating a target Mask data image of the corresponding image.
3.1) setting a Mask data image as binary image data according to the size and the labeling information of the high-definition image acquired by the unmanned aerial vehicle aerial photography, wherein the width and the height of the Mask data image are the same as those of the high-definition image acquired by the unmanned aerial vehicle aerial photography, namely the image width of the Mask data is 1920 pixels, the height of the Mask data is 1080 pixels, and the number of channels is 1;
3.2) reading the position information of the pixel points in the original image, and setting the values of the pixel points corresponding to the Mask data through the position information:
if the pixel point is in the background area, the value of the Mask data corresponding to the pixel position is set as 0,
if the pixel point is in the target area, the value of the corresponding pixel position of the Mask data is set as 1,
the formula is expressed as follows:
Mask(i,j) = 1, if pixel (i,j) lies in a target region; Mask(i,j) = 0, if pixel (i,j) lies in the background region,
wherein (i,j) denotes the pixel at the i-th row and j-th column of the unmanned aerial vehicle aerial image data, and Mask(i,j) is the value of the Mask image data at position (i,j).
The Mask map generated from FIG. 2 according to 3.2) is shown in FIG. 3.
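A minimal sketch of how such a Mask can be produced from the bounding-box annotations is given below; the box tuple format (x_min, y_min, x_max, y_max) in pixels is an assumption about how the LabelImg annotations are parsed, not something specified by the patent.

```python
# Rasterize labelled boxes into a binary target Mask the same size as the aerial image (step 3).
import numpy as np

def make_target_mask(height: int, width: int, boxes) -> np.ndarray:
    """Return an H x W uint8 mask: 1 inside every labelled target box, 0 in the background."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for x_min, y_min, x_max, y_max in boxes:
        mask[int(y_min):int(y_max), int(x_min):int(x_max)] = 1
    return mask

# e.g. a 1080x1920 frame with one (hypothetical) small target box:
# mask = make_target_mask(1080, 1920, [(850, 600, 910, 640)])
```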
And 4, building a full convolution self-encoder model.
The full convolution self-encoder model comprises an encoding network and a decoding network, wherein the encoding network is used for carrying out feature extraction and data compression on a high-definition image, the decoding network is used for restoring a compressed feature map to an original size, and the building process comprises the following steps:
4.1) building a coding network:
the coding network comprises 5 convolutional layers, wherein each convolutional layer is connected in series, and the parameters of each convolutional layer are set as follows:
First layer: convolution kernel size 3×3, number 16, stride 1, ReLU activation, output feature map size 1664×1664×16;
Second layer: convolution kernel size 3×3, number 32, stride 2, ReLU activation, output feature map size 832×832×32;
Third layer: convolution kernel size 3×3, number 64, stride 1, ReLU activation, output feature map size 832×832×64;
Fourth layer: convolution kernel size 3×3, number 128, stride 2, ReLU activation, output feature map size 416×416×128;
Fifth layer: convolution kernel size 1×1, number 3, stride 1, Sigmoid activation, output feature map size 416×416×3;
4.2) building a decoding network:
the decoding network comprises 5 deconvolution layers, wherein each deconvolution layer is connected in series, and the parameters of each deconvolution layer are set as follows:
Layer 1: convolution kernel size 1×1, number 128, stride 1, ReLU activation, output feature map size 416×416×128;
Layer 2: convolution kernel size 3×3, number 64, stride 2, ReLU activation, output feature map size 832×832×64;
Layer 3: convolution kernel size 3×3, number 32, stride 1, ReLU activation, output feature map size 832×832×32;
Layer 4: convolution kernel size 3×3, number 16, stride 2, ReLU activation, output feature map size 1664×1664×16;
Layer 5: convolution kernel size 3×3, number 3, stride 1, Sigmoid activation, output feature map size 1664×1664×3;
The convolution kernel size is described in the form w×h, meaning the kernel is w wide and h high;
the feature map size is described in the form w×h×c, meaning the feature map is w pixels wide, h pixels high, and has c channels;
the constructed full convolutional network is shown in fig. 4.
And 5, training the built full convolution self-encoder model.
5.1) initializing network parameters:
initializing the offsets of the network to 0, and initializing the weight parameters of the network with the Kaiming Gaussian initialization method so that the weights obey the following distribution:
W_l ~ N(0, 2 / ((1 + a²) · n_l))
wherein W_l is the weight of the l-th layer; N is the Gaussian, i.e. normal, distribution; a is the negative-half-axis slope of the ReLU or Leaky ReLU activation function; n_l is the data dimension of each layer, n_l = (convolution kernel edge length)² × channel, where channel is the number of input channels of that convolution layer;
The number of self-encoder iterations T1 is set to 8000 according to the size of the high-definition image training set;
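The Kaiming initialization of 5.1) could be applied, for instance, as in the following sketch (assuming PyTorch; kaiming_normal_ with fan-in mode draws weights from N(0, 2/((1+a²)·fan_in)), which matches the distribution above when fan_in = n_l):

```python
# Assumed PyTorch sketch of the Kaiming Gaussian initialization and zero biases of 5.1).
import torch.nn as nn

def init_autoencoder(module: nn.Module, a: float = 0.0) -> None:
    """Zero all biases and draw conv weights from N(0, 2 / ((1 + a^2) * n_l))."""
    for m in module.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
            nn.init.kaiming_normal_(m.weight, a=a, nonlinearity="leaky_relu")
            if m.bias is not None:
                nn.init.zeros_(m.bias)
```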
5.2) up-sampling the image data of the training set, and enabling the size of the image data of the up-sampled training set to be the same as the input size of the full convolution network, namely 1664 pixels in width, 1664 pixels in height and 3 in channel number;
5.3) performing up-sampling on the Mask data, wherein the size of the up-sampled Mask data is the same as the data width and height of the full convolution network, namely the width is 1664 pixels, the height is 1664 pixels, and the number of channels is 1;
5.4) inputting the up-sampled image into a full convolution self-coding network, carrying out forward propagation to obtain a coded feature map, and then restoring the feature map through a decoder;
5.5) constructing the partition-based mean square error loss function according to the following formula:
Mask-MSE-Loss(y, y_) = (1/(W×H)) · Σ_{i=1}^{W} Σ_{j=1}^{H} [α·Mask(i,j) + β·(1 − Mask(i,j))] · (y(i,j) − y_(i,j))²
wherein Mask-MSE-Loss(y, y_) is the loss function to be calculated, y is the output image of the decoder, y_ is the input original high-definition image, α is the loss penalty weight of the target region and is set to 0.9, β is the penalty weight of the background region and is set to 0.1, W is the width of the encoder input data, namely 1664, H is the height of the encoder input data, namely 1664, and Mask(i,j) is the value of the up-sampled Mask image data at position (i,j);
5.6) calculating the loss value of the input image and the output image by using the loss function of 5.5):
5.7) updating the weight and the offset of the full convolution self-encoder by using a back propagation algorithm to finish one iteration of training the full convolution self-encoder:
5.7.1) updating the weights with the back-propagation algorithm according to the formula:
W_{t+1} = W_t − μ · ∂L/∂W_t
wherein W_{t+1} is the updated weight; W_t is the weight before the update; μ is the learning rate of the back-propagation algorithm, set here to 0.001; and ∂L/∂W_t is the partial derivative of the loss function of 5.5) with respect to the weight W;
5.7.2) updating the offsets with the back-propagation algorithm according to the formula:
b_{t+1} = b_t − μ · ∂L/∂b_t
wherein b_{t+1} is the updated offset; b_t is the offset before the update; μ is the learning rate of the back-propagation algorithm, likewise 0.001; and ∂L/∂b_t is the partial derivative of the loss function of 5.5) with respect to the offset b;
5.8) repeating the steps from 5.2) to 5.7) until the iteration times of the full convolution self-encoder are completed, and obtaining the trained full convolution self-encoder.
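Steps 5.2) to 5.8) amount to an ordinary reconstruction-training loop; a compact sketch is given below, using the ConvAutoEncoder and mask_mse_loss sketches above. The use of plain SGD with learning rate 0.001 matches 5.7), while the DataLoader contents and device handling are assumptions.

```python
# Assumed training-loop sketch for step 5 (forward pass, partition-based MSE loss, back-propagation).
import torch

def train_autoencoder(model, loader, iterations=8000, lr=0.001, device="cuda"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)     # W <- W - mu * dL/dW, and likewise for b
    step = 0
    while step < iterations:
        for image, mask in loader:                        # up-sampled 1664x1664 image and Mask
            image, mask = image.to(device), mask.to(device)
            recon = model(image)                          # 5.4): encode, then decode
            loss = mask_mse_loss(recon, image, mask)      # 5.5)-5.6): partition-based MSE loss
            opt.zero_grad()
            loss.backward()                               # back-propagation
            opt.step()                                    # 5.7): one parameter update
            step += 1
            if step >= iterations:
                break
    return model
```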
Step 6, splicing the coding network of the full convolution self-encoder and a YOLO-V3 detection network, training the spliced mixed network:
6.1) splicing the coding network of the trained full-convolution self-encoder to the front of a YOLO-V3 detection network to form a spliced hybrid network, as shown in FIG. 5 (a code sketch of this splicing and of the parameter freezing of 6.2.1) is given at the end of step 6);
6.2) training the spliced hybrid network:
6.2.1) reading the parameters of the trained full-convolution self-encoder, initializing the encoding network by using the read parameter values, and setting the parameters of the encoding network in a non-trainable state;
6.2.2) set the input image size of the YOLO-V3 network to be the same as the input size of the full-convolution self-encoder network;
6.2.3) downloading the parameters pre-trained on the ImageNet data set from the official YOLO website, initializing the parameters of the YOLO-V3 network with them, and setting the number of YOLO-V3 iterations T2 to 5000 according to the size of the data set acquired in step (1);
6.2.4) sending the high-definition image training set data of the unmanned aerial vehicle aerial photography into the spliced hybrid network for forward propagation to obtain an output detection result;
6.2.5) calculating a loss value between the output detection result and the correct tag data labeled in (1) using a loss function in the YOLO-V3 algorithm,
the loss function in the YOLO-V3 algorithm is expressed as follows:
Loss = λ_coord · Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²]
     + λ_coord · Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{obj} [(√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²]
     + Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{obj} (C_i − Ĉ_i)²
     + λ_noobj · Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{noobj} (C_i − Ĉ_i)²
     + Σ_{i=0}^{K×K} 1_{i}^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²
wherein: λ_coord is the penalty weight of the predicted coordinate loss, set to 5;
λ_noobj is the penalty weight of the confidence loss when no target is detected, set to 0.5;
K is the scale size of the output feature map;
M is the number of bounding boxes;
1_{ij}^{obj} indicates whether the j-th bounding box of the i-th cell in the output feature map contains a target: if so its value is 1, otherwise 0;
1_{ij}^{noobj} is the converse: if a target is contained its value is 0, otherwise 1;
x_i is the abscissa of the predicted bounding-box center in the i-th cell of the feature map output by the YOLO-V3 network, and x̂_i is the abscissa of the actual bounding-box center in the i-th cell;
y_i is the ordinate of the predicted bounding-box center in the i-th cell, and ŷ_i is the ordinate of the actual bounding-box center in the i-th cell;
w_i is the width of the predicted bounding box in the i-th cell, and ŵ_i is the width of the actual bounding box in the i-th cell;
h_i is the height of the predicted bounding box in the i-th cell, and ĥ_i is the height of the actual bounding box in the i-th cell;
C_i is the confidence predicted by the YOLO-V3 network for the i-th cell, and Ĉ_i is the true confidence of the i-th cell;
p_i(c) is the probability, predicted by the YOLO-V3 network, that the class of the i-th cell is c, and p̂_i(c) is the true probability that the class of the i-th cell is c.
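As a rough illustration of how the above loss can be evaluated, the following heavily simplified sketch assumes the predictions have already been decoded and matched to ground truth, so obj_mask and noobj_mask (each of shape K·K × M) mark responsible and non-responsible boxes; real YOLO-V3 implementations additionally handle anchor assignment and the three detection scales. The dictionary-of-tensors interface is purely illustrative, not the patent's code.

```python
# Assumed, simplified sketch of the YOLO-V3-style loss defined above.
import torch

def yolo_loss(pred, target, obj_mask, noobj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    # pred/target fields x, y, w, h, conf have shape (K*K, M); cls has shape (K*K, n_classes)
    obj, noobj = obj_mask.float(), noobj_mask.float()
    loc = lambda_coord * (obj * ((pred["x"] - target["x"]) ** 2 +
                                 (pred["y"] - target["y"]) ** 2)).sum()
    size = lambda_coord * (obj * ((pred["w"].sqrt() - target["w"].sqrt()) ** 2 +
                                  (pred["h"].sqrt() - target["h"].sqrt()) ** 2)).sum()
    conf_obj = (obj * (pred["conf"] - target["conf"]) ** 2).sum()
    conf_noobj = lambda_noobj * (noobj * (pred["conf"] - target["conf"]) ** 2).sum()
    cls = (obj.max(dim=1).values.unsqueeze(1) *
           (pred["cls"] - target["cls"]) ** 2).sum()       # class term only for cells containing an object
    return loc + size + conf_obj + conf_noobj + cls
```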
6.2.6) updating the weight and the offset of the hybrid network by using a back propagation algorithm according to the loss value calculated by 6.2.5), wherein the updating method of the weight and the offset is the same as the updating formula of 5.7), and one iteration of training the hybrid network is completed;
6.2.7) repeating 6.2.4) to 6.2.6) until all T2 iterations of YOLO-V3 are completed, obtaining a trained hybrid network.
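The splicing of 6.1) and the parameter freezing of 6.2.1)-6.2.2) can be sketched, for example, as the following wrapper module. Here `YoloV3` stands in for any PyTorch YOLO-V3 implementation configured for a 416x416 input; its name and constructor are assumptions, not code from the patent.

```python
# Assumed sketch of step 6: a frozen coding network in front of a YOLO-V3 detector.
import torch
import torch.nn as nn

class EncoderYolo(nn.Module):
    def __init__(self, trained_autoencoder, yolo_v3: nn.Module):
        super().__init__()
        self.encoder = trained_autoencoder.encoder        # coding network trained in step 5
        for p in self.encoder.parameters():
            p.requires_grad = False                       # 6.2.1): non-trainable state
        self.detector = yolo_v3                           # initialized from ImageNet pre-training, 6.2.3)

    def forward(self, x):
        with torch.no_grad():
            z = self.encoder(x)                           # 1664x1664x3 -> compressed 416x416x3
        return self.detector(z)                           # YOLO-V3 detection on the compressed map
```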
and 7, using the trained network to detect the target.
Inputting the test set data in the step 1 into the trained hybrid model to obtain a final detection result, and detecting a small target in the image, wherein the result is shown in fig. 6.
In FIG. 6 and FIG. 7, a region with a drawn box and a text label indicates that a target was successfully detected there. The results of the conventional method in FIG. 7 show that two obvious small concealed-pipe targets in the lower left corner and one fairly obvious small concealed-pipe target in the lower right corner are missed. In contrast, in the detection result of FIG. 6 the invention successfully detects the targets in the lower left and lower right corners, because the spatial features of the targets are preserved during image compression. Compared with the prior art, the invention therefore has an obvious advantage in small-target detection for high-definition images.

Claims (5)

1. A high-definition image small target detection method based on an auto-encoder and a YOLO algorithm is characterized by comprising the following steps:
(1) collecting high-definition image data to form a data set, labeling the data set to obtain correct label data, and dividing the data set and the label data into a training set and a test set according to a ratio of 8: 2;
(2) carrying out data expansion on the marked training set;
(3) for each piece of high-definition image data, generating target Mask data of a corresponding image according to the size of the image and the labeling information;
(4) building a full convolution self-encoder model comprising an encoding network and a decoding network, wherein the encoding network is used for carrying out feature extraction and data compression on a high-definition image, and the decoding network is used for restoring a compressed feature map to an original size;
(5) sending high-definition image training set data into a full convolution self-encoder model for training to obtain a trained full convolution self-encoder model:
(5a) initializing the offsets of the network to 0, initializing the weight parameters of the network with the Kaiming Gaussian initialization method, and setting the number of self-encoder iterations T1 according to the size of the high-definition image training set;
(5b) The partition-based mean square error loss function is defined as follows:
Mask-MSE-Loss(y, y_) = (1/(W×H)) · Σ_{i=1}^{W} Σ_{j=1}^{H} [α·Mask(i,j) + β·(1 − Mask(i,j))] · (y(i,j) − y_(i,j))²
wherein Mask-MSE-Loss(y, y_) is the loss function to be calculated, y is the output image of the decoder, y_ is the input original high-definition image, α is the loss penalty weight of the target region and is set to 0.9, β is the penalty weight of the background region and is set to 0.1, W is the width of the self-encoder input image, H is the height of the self-encoder input image, and Mask(i,j) is the value of the Mask at position (i,j);
(5c) inputting high-definition image training set data into a full convolution self-coding network, carrying out forward propagation to obtain a coded feature map, and recovering the feature map through a decoder;
(5d) calculating loss values of the input image and the output image by using the partition area-based mean square error loss function defined in the step (5 b);
(5e) updating the weight and the offset of the full convolution self-encoder by using a back propagation algorithm to finish one iteration of training the full convolution self-encoder;
(5f) repeating (5c)-(5e) until all T1 iterations of the self-encoder are completed, obtaining a trained full convolution self-encoder;
(6) splicing the coding network of the trained full-convolution self-encoder with a YOLO-V3 detection network, and training the spliced network:
(6a) splicing the coding network of the trained full-convolution self-encoder to the front of a YOLO-V3 detection network to form a spliced mixed network;
(6b) training the spliced hybrid network:
(6b1) reading parameters of the trained full-convolution self-encoder, initializing the coding network by using the read parameter values, and setting the parameters of the coding network in a non-trainable state;
(6b2) setting the input image size of the YOLO-V3 network to be the same as the input size of the full-convolution self-encoder network;
(6b3) downloading the parameters pre-trained on the ImageNet data set from the official YOLO website, initializing the parameters of the YOLO-V3 network with them, and setting the number of YOLO-V3 iterations T2 according to the size of the data set acquired in step (1);
(6b4) Sending the high-definition image training set data into the spliced hybrid network for forward propagation to obtain an output detection result;
(6b5) calculating a loss value between the output detection result and the correct label data marked in (1) by using a loss function in a YOLO-V3 algorithm;
(6b6) updating the weight and the offset of the hybrid network by using a back propagation algorithm according to the loss value, and completing one iteration of training the hybrid network;
(6b7) repeating (6b4)-(6b6) until all T2 iterations of YOLO-V3 are completed, obtaining a trained hybrid network;
(7) inputting the test set data from step (1) into the trained hybrid network to obtain the final detection result.
2. The method according to claim 1, wherein the step (2) of performing data expansion on the labeled training set comprises performing left-right flipping, rotation, translation, noise adding, brightness adjustment, contrast adjustment and saturation adjustment on each high-definition image in the original data set, and adding the processed image data into the original data set to obtain expanded data.
3. The method according to claim 1, wherein in the step (3), for each piece of high-definition image data, target Mask data of a corresponding image is generated according to the image size and the annotation information, and the target Mask data is implemented as follows:
(3a) setting Mask data as binary image data, wherein the width and the height of the Mask data are the same as those of the acquired high-definition image;
(3b) reading position information of pixel points in an original image according to the marked data, and setting values of the pixel points corresponding to Mask data:
if the pixel point is in the target area, the value of the pixel point corresponding to the Mask data is set to be 1,
if the pixel point is in the background area, the value of the pixel point corresponding to the Mask data is set to be 0,
the formula is expressed as follows:
Mask(i,j) = 1, if pixel (i,j) lies in a target region; Mask(i,j) = 0, if pixel (i,j) lies in the background region.
4. The method of claim 1, wherein the initialization of the weight parameters of the network with the Kaiming Gaussian initialization method in step (5a) randomly initializes the weights of the network to obey the following distribution:
W_l ~ N(0, 2 / ((1 + a²) · n_l))
wherein W_l is the weight of the l-th layer; N is the Gaussian, i.e. normal, distribution; a is the negative-half-axis slope of the ReLU or Leaky ReLU activation function; n_l is the data dimension of each layer, n_l = (convolution kernel edge length)² × channel, where channel is the number of input channels of that convolution layer.
5. The method of claim 1, wherein the loss function in the YOLO-V3 algorithm used in step (6b5) is expressed as follows:
Loss = λ_coord · Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²]
     + λ_coord · Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{obj} [(√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²]
     + Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{obj} (C_i − Ĉ_i)²
     + λ_noobj · Σ_{i=0}^{K×K} Σ_{j=0}^{M} 1_{ij}^{noobj} (C_i − Ĉ_i)²
     + Σ_{i=0}^{K×K} 1_{i}^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²
wherein: λ_coord is the penalty weight of the predicted coordinate loss, set to 5;
λ_noobj is the penalty weight of the confidence loss when no target is detected, set to 0.5;
K is the scale size of the output feature map;
M is the number of bounding boxes;
1_{ij}^{obj} indicates whether the j-th bounding box of the i-th cell in the output feature map contains a target: if so its value is 1, otherwise 0;
1_{ij}^{noobj} is the converse: if a target is contained its value is 0, otherwise 1;
x_i is the abscissa of the predicted bounding-box center in the i-th cell of the feature map output by the YOLO-V3 network, and x̂_i is the abscissa of the actual bounding-box center in the i-th cell;
y_i is the ordinate of the predicted bounding-box center in the i-th cell, and ŷ_i is the ordinate of the actual bounding-box center in the i-th cell;
w_i is the width of the predicted bounding box in the i-th cell, and ŵ_i is the width of the actual bounding box in the i-th cell;
h_i is the height of the predicted bounding box in the i-th cell, and ĥ_i is the height of the actual bounding box in the i-th cell;
C_i is the confidence predicted by the YOLO-V3 network for the i-th cell, and Ĉ_i is the true confidence of the i-th cell;
p_i(c) is the probability, predicted by the YOLO-V3 network, that the class of the i-th cell is c, and p̂_i(c) is the true probability that the class of the i-th cell is c.
CN202010143805.7A 2019-11-15 2020-03-04 High-definition image small target detection method based on self-encoder and YOLO algorithm Active CN111126359B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019111176908 2019-11-15
CN201911117690 2019-11-15

Publications (2)

Publication Number Publication Date
CN111126359A true CN111126359A (en) 2020-05-08
CN111126359B CN111126359B (en) 2023-03-28

Family

ID=70493460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010143805.7A Active CN111126359B (en) 2019-11-15 2020-03-04 High-definition image small target detection method based on self-encoder and YOLO algorithm

Country Status (1)

Country Link
CN (1) CN111126359B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832513A (en) * 2020-07-21 2020-10-27 西安电子科技大学 Real-time football target detection method based on neural network
CN111881982A (en) * 2020-07-30 2020-11-03 北京环境特性研究所 Unmanned aerial vehicle target identification method
CN111986160A (en) * 2020-07-24 2020-11-24 成都恒创新星科技有限公司 Method for improving small target detection effect based on fast-RCNN
CN112287998A (en) * 2020-10-27 2021-01-29 佛山市南海区广工大数控装备协同创新研究院 Method for detecting target under low-light condition
CN112396582A (en) * 2020-11-16 2021-02-23 南京工程学院 Mask RCNN-based equalizing ring skew detection method
CN112766223A (en) * 2021-01-29 2021-05-07 西安电子科技大学 Hyperspectral image target detection method based on sample mining and background reconstruction
CN112926637A (en) * 2021-02-08 2021-06-08 天津职业技术师范大学(中国职业培训指导教师进修中心) Method for generating text detection training set
CN113255830A (en) * 2021-06-21 2021-08-13 上海交通大学 Unsupervised target detection method and system based on variational self-encoder and Gaussian mixture model
CN114419395A (en) * 2022-01-20 2022-04-29 江苏大学 Online target detection model training method based on intermediate position coding
CN114743116A (en) * 2022-04-18 2022-07-12 蜂巢航宇科技(北京)有限公司 Barracks patrol scene-based unattended special load system and method
CN114818838A (en) * 2022-06-30 2022-07-29 中国科学院国家空间科学中心 Low signal-to-noise ratio moving point target detection method based on pixel time domain distribution learning
CN115542282A (en) * 2022-11-28 2022-12-30 南京航空航天大学 Radar echo detection method, system, device and medium based on deep learning
WO2023040744A1 (en) * 2021-09-18 2023-03-23 华为技术有限公司 Method and apparatus for determining image loss value, storage medium, and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447033A (en) * 2018-11-14 2019-03-08 北京信息科技大学 Vehicle front obstacle detection method based on YOLO
CN109785333A (en) * 2018-12-11 2019-05-21 华北水利水电大学 Object detection method and device for parallel manipulator human visual system
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110087092A (en) * 2019-03-11 2019-08-02 西安电子科技大学 Low bit-rate video decoding method based on image reconstruction convolutional neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN109447033A (en) * 2018-11-14 2019-03-08 北京信息科技大学 Vehicle front obstacle detection method based on YOLO
CN109785333A (en) * 2018-12-11 2019-05-21 华北水利水电大学 Object detection method and device for parallel manipulator human visual system
CN110087092A (en) * 2019-03-11 2019-08-02 西安电子科技大学 Low bit-rate video decoding method based on image reconstruction convolutional neural networks
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liang Hua et al.: "Deep-learning-based detection of small air-to-ground targets", Chinese Journal of Liquid Crystals and Displays *
Wang Xuchu et al.: "Left-ventricle detection in cardiac MR images fusing candidate region extraction and SSAE deep feature learning", Journal of Computer-Aided Design & Computer Graphics *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832513A (en) * 2020-07-21 2020-10-27 西安电子科技大学 Real-time football target detection method based on neural network
CN111832513B (en) * 2020-07-21 2024-02-09 西安电子科技大学 Real-time football target detection method based on neural network
CN111986160A (en) * 2020-07-24 2020-11-24 成都恒创新星科技有限公司 Method for improving small target detection effect based on fast-RCNN
CN111881982A (en) * 2020-07-30 2020-11-03 北京环境特性研究所 Unmanned aerial vehicle target identification method
CN112287998A (en) * 2020-10-27 2021-01-29 佛山市南海区广工大数控装备协同创新研究院 Method for detecting target under low-light condition
CN112396582A (en) * 2020-11-16 2021-02-23 南京工程学院 Mask RCNN-based equalizing ring skew detection method
CN112396582B (en) * 2020-11-16 2024-04-26 南京工程学院 Mask RCNN-based equalizing ring skew detection method
CN112766223B (en) * 2021-01-29 2023-01-06 西安电子科技大学 Hyperspectral image target detection method based on sample mining and background reconstruction
CN112766223A (en) * 2021-01-29 2021-05-07 西安电子科技大学 Hyperspectral image target detection method based on sample mining and background reconstruction
CN112926637A (en) * 2021-02-08 2021-06-08 天津职业技术师范大学(中国职业培训指导教师进修中心) Method for generating text detection training set
CN113255830A (en) * 2021-06-21 2021-08-13 上海交通大学 Unsupervised target detection method and system based on variational self-encoder and Gaussian mixture model
WO2023040744A1 (en) * 2021-09-18 2023-03-23 华为技术有限公司 Method and apparatus for determining image loss value, storage medium, and program product
CN114419395A (en) * 2022-01-20 2022-04-29 江苏大学 Online target detection model training method based on intermediate position coding
CN114743116A (en) * 2022-04-18 2022-07-12 蜂巢航宇科技(北京)有限公司 Barracks patrol scene-based unattended special load system and method
CN114818838B (en) * 2022-06-30 2022-09-13 中国科学院国家空间科学中心 Low signal-to-noise ratio moving point target detection method based on pixel time domain distribution learning
CN114818838A (en) * 2022-06-30 2022-07-29 中国科学院国家空间科学中心 Low signal-to-noise ratio moving point target detection method based on pixel time domain distribution learning
CN115542282A (en) * 2022-11-28 2022-12-30 南京航空航天大学 Radar echo detection method, system, device and medium based on deep learning

Also Published As

Publication number Publication date
CN111126359B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN111126359B (en) High-definition image small target detection method based on self-encoder and YOLO algorithm
CN111598030B (en) Method and system for detecting and segmenting vehicle in aerial image
CN111191566B (en) Optical remote sensing image multi-target detection method based on pixel classification
CN109886066B (en) Rapid target detection method based on multi-scale and multi-layer feature fusion
CN111612008B (en) Image segmentation method based on convolution network
CN111709416B (en) License plate positioning method, device, system and storage medium
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN112308860A (en) Earth observation image semantic segmentation method based on self-supervision learning
CN115147598B (en) Target detection segmentation method and device, intelligent terminal and storage medium
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN115035295B (en) Remote sensing image semantic segmentation method based on shared convolution kernel and boundary loss function
CN112464912B (en) Robot end face detection method based on YOLO-RGGNet
CN110310305B (en) Target tracking method and device based on BSSD detection and Kalman filtering
CN116645592B (en) Crack detection method based on image processing and storage medium
CN113066089B (en) Real-time image semantic segmentation method based on attention guide mechanism
CN114332070A (en) Meteor crater detection method based on intelligent learning network model compression
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN116503709A (en) Vehicle detection method based on improved YOLOv5 in haze weather
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN112101113B (en) Lightweight unmanned aerial vehicle image small target detection method
CN116363610A (en) Improved YOLOv 5-based aerial vehicle rotating target detection method
CN115690770A (en) License plate recognition method based on space attention characteristics in non-limited scene
CN115984568A (en) Target detection method in haze environment based on YOLOv3 network
CN115359091A (en) Armor plate detection tracking method for mobile robot
CN114241470A (en) Natural scene character detection method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211123

Address after: 710071 Taibai South Road, Yanta District, Xi'an, Shaanxi Province, No. 2

Applicant after: XIDIAN University

Applicant after: Nanjing Yixin Yiyi Information Technology Co.,Ltd.

Address before: 710071 No. 2 Taibai South Road, Shaanxi, Xi'an

Applicant before: XIDIAN University

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant