CN113159183A - Micro-pest image identification method based on local dense area density feature detection


Info

Publication number
CN113159183A
CN113159183A
Authority
CN
China
Prior art keywords
pest
area
dense
network
density
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110440782.0A
Other languages
Chinese (zh)
Other versions
CN113159183B (en)
Inventor
王儒敬
杜健铭
陈天娇
谢成军
张洁
李�瑞
陈红波
胡海瀛
刘海云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN202110440782.0A priority Critical patent/CN113159183B/en
Publication of CN113159183A publication Critical patent/CN113159183A/en
Application granted granted Critical
Publication of CN113159183B publication Critical patent/CN113159183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F 18/25 Fusion techniques
                            • G06F 18/253 Fusion techniques of extracted features
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
                            • G06N 3/048 Activation functions
                        • G06N 3/08 Learning methods
                            • G06N 3/084 Backpropagation, e.g. using gradient descent
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
                    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a tiny pest image identification method based on local dense region density feature detection, which overcomes the low identification rate for tiny pests in the prior art. The invention comprises the following steps: acquiring training images; constructing a pest dense area detection network; training the pest dense area detection network; standardizing pest dense areas; constructing and training a local area pest target detection network group; constructing and training a global pest target detection network; fusing pest detection results; acquiring the image of the pest to be detected; and obtaining the pest image detection result. By using the density feature information of tiny pest aggregation areas, the invention accurately delimits dense areas and performs individual pest target detection within them, which solves problems such as missed detections and low precision of global pest target detection in these areas and improves the overall precision of tiny pest image detection.

Description

Micro-pest image identification method based on local dense area density feature detection
Technical Field
The invention relates to the technical field of image processing, in particular to a tiny pest image identification method based on local dense region density feature detection.
Background
Most traditional agricultural pest forecasting methods rely on manual field investigation to identify pests and estimate their numbers. Identification accuracy depends on the investigators' professional knowledge, and quantity estimates are affected by their subjective judgment, so forecasting results vary widely. In recent years, pest identification and detection algorithms based on machine vision and image processing technologies have been widely applied to agricultural pest identification and detection, greatly reducing the labor cost of field investigation and improving the accuracy of identification and counting.
In practical application, although existing target detection algorithms perform well on large, easily distinguished pests, they suffer from many missed detections and poor precision on small, densely aggregated pests such as wheat aphids. This is because a global target detection algorithm that operates on the entire image has a low detection resolution and struggles to resolve tiny targets. Directly raising the detection resolution of a global algorithm greatly increases its computational burden, occupies a large amount of computing resources, and cannot meet practical application requirements.
Therefore, improving the detection of tiny pests while maintaining operational efficiency has become a technical problem that the tiny pest detection task urgently needs to solve.
Disclosure of Invention
The invention aims to overcome the low tiny pest identification rate of the prior art, and provides a tiny pest image identification method based on local dense area density feature detection to solve this problem.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a tiny pest image identification method based on local dense region density feature detection comprises the following steps:
11) acquisition of training images: acquiring a pest image data set with an artificial mark;
12) constructing a pest dense area detection network: constructing a pest dense area detection network, wherein the pest dense area detection network comprises an overall feature extraction network and a dense area suggestion network;
13) training of a pest dense area detection network: training a pest dense area detection network by using a pest image data set;
14) standardization of pest dense areas: performing standardized operation on a local pest region output by the pest dense region detection network; the standardized operation of the pest dense region comprises pest dense region merging operation and pest dense region segmentation operation, pest local regions output by a pest dense region detection network are input, and standardized image local regions which are grouped according to density scores and are similar in size are output;
15) constructing and training a local area pest target detection network group: constructing and training a local area pest target detection network group; inputting image local areas which are obtained by standardizing pest dense areas and grouped according to density scores, and outputting pest identification and positioning results in the image local areas which are grouped according to the density scores;
16) constructing and training a global pest target detection network;
17) fusing of pest detection results: fusing the pest identification and positioning results output by the global pest target detection network and the local region pest target detection network group to obtain a global pest identification and positioning result;
18) obtaining an image of the pest to be detected: acquiring a tiny pest image to be detected;
19) obtaining the pest image detection result.
The construction of the pest dense area detection network comprises the following steps:
21) setting an overall feature extraction network: the overall feature extraction network comprises a backbone network and a feature fusion network; the backbone network consists of a plurality of layers of convolutional neural network layers, a pooling layer and an activation function layer which are superposed and is used for extracting basic features in the picture and outputting a plurality of layers of feature maps; the feature fusion network fuses the feature maps of all layers by laterally connecting the multi-layer feature maps output by the backbone network, and outputs an overall feature map considering different levels of semantic features; wherein the backbone network is a ResNet50 network, and the feature fusion network is an FPN feature pyramid network;
22) setting a dense area proposal network: setting the input of the dense area suggestion network as an overall characteristic graph output by an overall characteristic extraction network, and outputting the overall characteristic graph as a density score of a selected area which takes each anchor point as the center;
the dense area proposal network first uses a convolution layer with 512 channels and a kernel size of 3×3, then the linear rectification function ReLU as the convolution layer activation function, and then a convolution layer with a kernel size of 1×1 whose channel number S×R is determined by the product of the number of region shapes S and the number of region magnification ratios R.
The training of the pest dense area detection network comprises the following steps:
31) inputting a pest image data set with artificial labels into an overall feature extraction network, extracting an image basic feature map through a backbone network, and outputting an overall feature map after multi-layer semantics are mutually fused through a feature fusion network by the basic feature map;
32) the overall feature map is input into the dense area proposal network;
an anchor point A is set to slide over the feature map with a single sliding step of k; centered on the anchor there are S shapes of region selection boxes, and each shape has R magnification ratios; when the anchor slides to the i-th position, the number of manually labeled targets contained in the selection box of the s-th shape at the r-th magnification ratio is $p_i^{s,r}$, and the area of the current region selection box is $a_i^{s,r}$; the target density score in the current selection box is expressed by the following formula:

$$d_i^{s,r} = \log\frac{p_i^{s,r}}{a_i^{s,r}} + O$$

where O is a deviation compensation coefficient that ensures the target density score is positive; in application O = 10 is used, and the maximum target density score is set to $d_{\max} = 4$ and the minimum to $d_{\min} = 1$;
the target density score $d_i^{s,r}$ of the current selection box is set as the ground-truth density score, and the score $\hat{d}_i^{s,r}$ output by the network through the convolution layers from the overall feature map is set as the predicted target density score of the current selection box; the loss function generated by the current image for back-propagation training of the dense area detection network is expressed as:

$$L = \frac{1}{I}\sum_{i=1}^{I}\sum_{s=1}^{S}\sum_{r=1}^{R} L_i^{s,r}$$

where I is the number of anchor positions in the image and $L_i^{s,r}$ is the loss function of each selection box, computed from the smooth L1 norm $\mathrm{Smooth}_{L1}$:

$$L_i^{s,r} = \mathrm{Smooth}_{L1}\big(\hat{d}_i^{s,r} - d_i^{s,r}\big), \qquad \mathrm{Smooth}_{L1}(x) = \begin{cases} 0.5\,x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$

finally, for each image the trained pest dense area detection network outputs a series of candidate regions $\{B\} = \{b_1, b_2, \ldots, b_n\}$ with corresponding predicted density scores $\{D\} = \{d_1, d_2, \ldots, d_n\}$; the candidate regions with high density scores are the dense regions.
The pest dense area standardization comprises the following steps:
41) merging of candidate regions with similar density scores and highly overlapped extents: the input is the dense region set with corresponding density scores output by the dense area detection network, and the output is the merged dense region set with corresponding density scores; the dense area merging steps are as follows:
411) the density scores are divided into 5 groups by value, as follows:
a score of 1-2 is not dense, 2-2.5 is sparse, 2.5-3 is moderately dense, 3-3.5 is generally dense, and 3.5-4 is extremely dense;
the artificially labeled agricultural tiny pest images are input into the trained pest dense area detection network to obtain the local pest regions $\{B\} = \{b_1, b_2, \ldots, b_n\}$, where each dense region is represented by its top-left and bottom-right coordinates, $b_i = (x_i^{tl}, y_i^{tl}, x_i^{br}, y_i^{br})$;
412) an overlap calculation formula is set to compute the overlap degree OL(a, b) of two dense regions a and b:

$$\mathrm{OL}(a, b) = \frac{\operatorname{area}(a \cap b)}{\min\big(\operatorname{area}(a), \operatorname{area}(b)\big)}$$

and a synthesis threshold $N_t$ is set to judge from the overlap degree whether two dense regions a and b need to be merged: if OL(a, b) is greater than the threshold $N_t$, the dense regions a and b are merged;
413) a merge operation $M(\mathcal{B}^m, \mathcal{D}^m) \rightarrow \{b', d'\}$ is set, whose input $\mathcal{B}^m$ is the set of regions to be merged and $\mathcal{D}^m$ is the set of their corresponding density scores, and whose output $\{b', d'\}$ is the newly merged region with its corresponding density score; the smallest $x^{tl}$ and $y^{tl}$ and the largest $x^{br}$ and $y^{br}$ in $\mathcal{B}^m$ serve as the top-left and bottom-right coordinates of the merged region b', and the density score corresponding to the merged region is the minimum value in $\mathcal{D}^m$, denoted d';
414) the region $b_k$ with the largest corresponding density score $d_k$ is taken from the dense region set $\{B\}$, and the overlap $\mathrm{OL}(b_k, b_i)$ is computed against every other region $b_i$ in $\{B\}$; if the overlap is greater than the synthesis threshold $N_t$ and the corresponding density scores $d_k$ and $d_i$ belong to the same density group, then $b_i$ and $d_i$ are taken out and placed into the merge candidate sets $\mathcal{B}^m$ and $\mathcal{D}^m$; if the candidate sets are non-empty when the traversal ends, the currently selected $b_k$ and $d_k$ are also placed into the candidate sets, the merge operation $M(\mathcal{B}^m, \mathcal{D}^m)$ is performed, and the output b', d' is put back into $\{B\}$ and $\{D\}$; otherwise, $b_k$ and $d_k$ are placed into the output sets $\mathcal{B}^o$ and $\mathcal{D}^o$; the above operations are repeated until all regions in $\{B\}$ have been taken out;
415) the sets $\{\mathcal{B}^o, \mathcal{D}^o\}$ contain the output merged dense regions with their corresponding density scores;
42) splitting of oversized regions among the merged dense regions: the input is the merged dense region set with corresponding density scores output by the merge operation, and the output is the split dense region set with corresponding density scores; the splitting steps are as follows:
421) a splitting threshold $L_t$ is set to judge whether the current dense region a needs to be split;
422) a region $b_i$ is taken from $\mathcal{B}^o$; if the region does not need splitting, it is placed with its corresponding density score into the output sets $\{\mathcal{B}^c, \mathcal{D}^c\}$; otherwise it is bisected, with an overlap of $L_t/4$ kept at the bisection boundary; the density scores of the split regions remain the same as the original density score, and the regions are placed into the output sets; the above operations are repeated until all regions in $\mathcal{B}^o$ have been taken out;
423) the sets $\{\mathcal{B}^c, \mathcal{D}^c\}$ contain the output split dense regions with their corresponding density scores.
The construction and training of the local area pest target detection network group comprises the following steps:
51) constructing a plurality of groups of parallel local area pest target detection networks to form a local area pest target detection network group; the input of each group of local area pest target detection network is a local area which is obtained by standardizing a pest dense area and is grouped in a corresponding density mode, and the output is an identification and positioning result of the pest target in the grouped local area; the construction of each group of local area pest target detection network comprises the following steps:
511) setting a pest feature extraction network; the pest feature extraction network is used for extracting an input pest feature map of a local region, inputting a dense region obtained by standardizing a pest region, and outputting the pest feature map based on the dense region;
512) setting a pest identification and positioning network; the pest identification and positioning network is used for automatically learning pest characteristic diagrams and identifying and positioning pests, inputting the obtained pest characteristic diagrams of the dense area, and outputting pest type identification and positioning results of the dense area;
52) training a plurality of groups of parallel local area pest target detection networks by using the obtained standardized pest dense areas; the training comprises the following steps:
521) three groups are selected according to the density score of the input region: sparse, moderately dense, and generally to extremely dense; the standardized dense regions are grouped by their corresponding density scores;
522) and inputting each grouped local dense region into the corresponding grouped local region pest target detection network for training to obtain the trained local region pest target detection network.
The construction and training of the global pest target detection network comprises the following steps:
61) constructing a global pest target detection network comprising an overall feature extraction network and a pest target identification and positioning network;
62) setting an overall feature extraction network for extracting the feature map of the whole input picture; the input is a tiny pest picture and the output is the overall feature map obtained from the whole pest picture;
63) setting a pest target identification and positioning network for automatically learning the overall feature map and detecting pest targets; the input is the overall feature map and the output is the pest identification results and positioning results;
64) and training the global pest target detection network.
The pest image detection result obtaining method comprises the following steps:
71) inputting an agricultural tiny pest image to be detected into a trained pest dense area detection network to obtain a pest local area;
72) standardizing pest dense areas in pest local areas to generate standardized pest dense areas;
73) dividing pest dense areas into groups according to corresponding density scores, and respectively inputting corresponding target detection networks in the trained local area pest target detection network groups to generate local area pest target detection results of the corresponding density groups;
74) inputting an agricultural tiny pest image to be detected into the trained global pest target detection network to obtain a global pest target detection result;
75) fusion of pest target detection results: fusing the global pest target detection result and the local region pest target detection results to obtain the final target detection result.
Advantageous effects
Compared with the prior art, the tiny pest image identification method based on local dense area density feature detection uses the density feature information of tiny pest aggregation areas to accurately delimit dense areas and perform individual pest target detection within them, which solves problems such as missed detections and low precision of global pest target detection in these areas and improves the overall precision of tiny pest image detection.
Drawings
FIG. 1 is a sequence diagram of the method of the present invention;
fig. 2-5 are graphs showing the image recognition result of the pest according to the method of the present invention.
Detailed Description
So that the above features of the present invention can be clearly understood, the invention is described in more detail below with reference to embodiments, some of which are illustrated in the accompanying drawings, wherein:
as shown in fig. 1, the method for identifying a tiny pest image based on the density feature detection of a local dense region according to the present invention includes the following steps:
firstly, acquiring a training image: a pest image dataset with artificial markers is acquired.
Secondly, constructing a pest dense area detection network: a pest dense area detection network is constructed, comprising an overall feature extraction network and a dense area proposal network. The overall feature extraction network is used to extract the feature map of pests in the whole image; its input is the agricultural tiny pest image and its output is the overall feature map extracted from the pest image. The dense area proposal network predicts the pest dense regions and their density degree from the overall depth feature map; its input is the overall feature map and its output is the dense regions with the density score corresponding to each region.
The pest dense region detection network locates the densely populated pest regions in the pest picture and passes these dense regions to the subsequent local pest target identification and positioning networks for individual detection. In this process, the resolution of the tiny pest targets within each local region is increased, which reduces the difficulty of identifying and positioning tiny pests and ultimately improves the identification, positioning, and detection performance for tiny pest targets. The difficulty lies in accurately resolving dense-region targets in the overall feature map and correctly predicting the region density scores; when training is insufficient, the network selects dense regions inaccurately and predicts region density scores that deviate severely from reality. The specific steps are as follows:
(1) setting an overall feature extraction network: the overall feature extraction network comprises a backbone network and a feature fusion network; the backbone network consists of a plurality of layers of convolutional neural network layers, a pooling layer and an activation function layer which are superposed and is used for extracting basic features in the picture and outputting a plurality of layers of feature maps; the feature fusion network fuses the feature maps of all layers by laterally connecting the multi-layer feature maps output by the backbone network, and outputs an overall feature map considering different levels of semantic features; the backbone network is a ResNet50 network, and the feature fusion network is an FPN feature pyramid network.
(2) Setting a dense area proposal network: setting the input of the dense area suggestion network as an overall characteristic graph output by an overall characteristic extraction network, and outputting the overall characteristic graph as a density score of a selected area which takes each anchor point as the center;
the dense area proposal network first uses a convolution layer with 512 channels and a kernel size of 3×3, then the linear rectification function ReLU as the convolution layer activation function, and then a convolution layer with a kernel size of 1×1 whose channel number S×R is determined by the product of the number of region shapes S and the number of region magnification ratios R.
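As an illustration of this head, the following PyTorch-style sketch wires the two convolution layers together; the class name, the input channel width of 256, and the example values of S and R are assumptions chosen for a runnable example, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class DenseRegionProposalHead(nn.Module):
    """Sketch of the dense area proposal network head described above:
    a 3x3 convolution with 512 channels, ReLU, then a 1x1 convolution
    with S x R output channels, one density score per region shape
    and magnification ratio at every anchor position."""

    def __init__(self, in_channels=256, num_shapes=3, num_ratios=3):
        super().__init__()
        self.conv3x3 = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.conv1x1 = nn.Conv2d(512, num_shapes * num_ratios, kernel_size=1)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        # feature_map: (N, in_channels, H, W) from the ResNet50+FPN extractor
        # returns:     (N, S*R, H, W) density scores
        return self.conv1x1(self.relu(self.conv3x3(feature_map)))
```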
Thirdly, training a pest dense area detection network: the pest dense region detection network is trained with the pest image data set. The dense region proposal network within it uses the target density score in the selection box as the basis for network training. Other prior art mainly obtains an initial target detection result for the whole image through preliminary detection, and then selects the dense areas in the image from that result by methods such as clustering or thermodynamic diagrams. Compared with such prior art, the present method judges the density degree of a region more directly and accurately, the density score takes both the number of targets in the region and the region size into account, and the computational burden is lower. The technical difficulty is that the target density score contains complex information, and a large amount of dense-region information is needed as training samples to obtain accurate density score predictions. The specific steps are as follows:
(1) inputting a pest image data set with artificial labels into an overall feature extraction network, extracting an image basic feature map through a backbone network, and outputting an overall feature map after multi-layer semantics are mutually fused through a feature fusion network by the basic feature map;
(2) the overall feature map is input into the dense area proposal network;
an anchor point A is set to slide over the feature map with a single sliding step of k; centered on the anchor there are S shapes of region selection boxes, and each shape has R magnification ratios; when the anchor slides to the i-th position, the number of manually labeled targets contained in the selection box of the s-th shape at the r-th magnification ratio is $p_i^{s,r}$, and the area of the current region selection box is $a_i^{s,r}$; the target density score in the current selection box is expressed by the following formula:

$$d_i^{s,r} = \log\frac{p_i^{s,r}}{a_i^{s,r}} + O$$

where O is a deviation compensation coefficient that ensures the target density score is positive; in application O = 10 is used, and the maximum target density score is set to $d_{\max} = 4$ and the minimum to $d_{\min} = 1$;
the target density score $d_i^{s,r}$ of the current selection box is set as the ground-truth density score, and the score $\hat{d}_i^{s,r}$ output by the network through the convolution layers from the overall feature map is set as the predicted target density score of the current selection box; the loss function generated by the current image for back-propagation training of the dense area detection network is expressed as:

$$L = \frac{1}{I}\sum_{i=1}^{I}\sum_{s=1}^{S}\sum_{r=1}^{R} L_i^{s,r}$$

where I is the number of anchor positions in the image and $L_i^{s,r}$ is the loss function of each selection box, computed from the smooth L1 norm $\mathrm{Smooth}_{L1}$:

$$L_i^{s,r} = \mathrm{Smooth}_{L1}\big(\hat{d}_i^{s,r} - d_i^{s,r}\big), \qquad \mathrm{Smooth}_{L1}(x) = \begin{cases} 0.5\,x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$

finally, for each image the trained pest dense area detection network outputs a series of candidate regions $\{B\} = \{b_1, b_2, \ldots, b_n\}$ with corresponding predicted density scores $\{D\} = \{d_1, d_2, \ldots, d_n\}$; the candidate regions with high density scores are the dense regions.
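The density score and loss above can be sketched in a few lines of Python for concreteness; the log base and the handling of empty boxes are assumptions consistent with the stated constants O = 10, d_min = 1, and d_max = 4, not details fixed by the patent.

```python
import math

O = 10.0                 # deviation compensation coefficient
D_MIN, D_MAX = 1.0, 4.0  # score bounds from the description

def density_score(num_targets, box_area):
    """Ground-truth density score d_i^{s,r} of one selection box.

    num_targets: manually labeled targets inside the box (p_i^{s,r})
    box_area:    area of the selection box in pixels (a_i^{s,r})
    A natural log and clipping of empty boxes to D_MIN are assumptions.
    """
    if num_targets == 0:
        return D_MIN
    return min(D_MAX, max(D_MIN, math.log(num_targets / box_area) + O))

def smooth_l1(x):
    """Smooth L1 norm applied to each selection box."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def dense_region_loss(pred_scores, true_scores):
    """Average smooth-L1 loss over all anchor/shape/ratio boxes."""
    pairs = list(zip(pred_scores, true_scores))
    return sum(smooth_l1(p - t) for p, t in pairs) / len(pairs)
```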
Fourthly, standardizing the pest dense area: performing standardized operation on a local pest region output by the pest dense region detection network; the standardized operation of the pest dense region comprises a merging operation of the pest dense region and a splitting operation of the pest dense region, a pest local region output by a pest dense region detection network is input, and standardized image local regions which are grouped according to density scores and have similar sizes are output.
The pest dense areas are standardized: highly overlapped areas are merged, which reduces the computational burden of subsequent target detection; meanwhile, oversized regions are grouped by density and split. Standardized regions of similar density and size are finally obtained, which alleviates the difficult training and insufficient precision of the subsequent detection networks caused by uneven density and large size spans.
The design difficulty lies in effectively combining the density scores predicted by the dense region network, using a reasonable score grouping basis, and setting merging and splitting thresholds of appropriate size; these parameters can only be confirmed through a large number of experiments to achieve the best effect. The specific steps are as follows:
(1) candidate regions with similar density scores and highly overlapped extents are merged; the input is the dense region set with corresponding density scores output by the dense area detection network, and the output is the merged dense region set with corresponding density scores; the dense area merging steps are as follows:
A1) the density scores are divided into 5 groups by value, as follows:
a score of 1-2 is not dense, 2-2.5 is sparse, 2.5-3 is moderately dense, 3-3.5 is generally dense, and 3.5-4 is extremely dense;
the artificially labeled agricultural tiny pest images are input into the trained pest dense area detection network to obtain the local pest regions $\{B\} = \{b_1, b_2, \ldots, b_n\}$, where each dense region is represented by its top-left and bottom-right coordinates, $b_i = (x_i^{tl}, y_i^{tl}, x_i^{br}, y_i^{br})$;
A2) an overlap calculation formula is set to compute the overlap degree OL(a, b) of two dense regions a and b:

$$\mathrm{OL}(a, b) = \frac{\operatorname{area}(a \cap b)}{\min\big(\operatorname{area}(a), \operatorname{area}(b)\big)}$$

and a synthesis threshold $N_t$ is set to judge from the overlap degree whether two dense regions a and b need to be merged: if OL(a, b) is greater than the threshold $N_t$, the dense regions a and b are merged;
A3) a merge operation $M(\mathcal{B}^m, \mathcal{D}^m) \rightarrow \{b', d'\}$ is set, whose input $\mathcal{B}^m$ is the set of regions to be merged and $\mathcal{D}^m$ is the set of their corresponding density scores, and whose output $\{b', d'\}$ is the newly merged region with its corresponding density score; the smallest $x^{tl}$ and $y^{tl}$ and the largest $x^{br}$ and $y^{br}$ in $\mathcal{B}^m$ serve as the top-left and bottom-right coordinates of the merged region b', and the density score corresponding to the merged region is the minimum value in $\mathcal{D}^m$, denoted d';
A4) the region $b_k$ with the largest corresponding density score $d_k$ is taken from the dense region set $\{B\}$, and the overlap $\mathrm{OL}(b_k, b_i)$ is computed against every other region $b_i$ in $\{B\}$; if the overlap is greater than the synthesis threshold $N_t$ and the corresponding density scores $d_k$ and $d_i$ belong to the same density group, then $b_i$ and $d_i$ are taken out and placed into the merge candidate sets $\mathcal{B}^m$ and $\mathcal{D}^m$; if the candidate sets are non-empty when the traversal ends, the currently selected $b_k$ and $d_k$ are also placed into the candidate sets, the merge operation $M(\mathcal{B}^m, \mathcal{D}^m)$ is performed, and the output b', d' is put back into $\{B\}$ and $\{D\}$; otherwise, $b_k$ and $d_k$ are placed into the output sets $\mathcal{B}^o$ and $\mathcal{D}^o$; the above operations are repeated until all regions in $\{B\}$ have been taken out;
A5) the sets $\{\mathcal{B}^o, \mathcal{D}^o\}$ contain the output merged dense regions with their corresponding density scores (a code sketch of this merging procedure is given after the splitting steps below);
(2) oversized regions among the merged dense regions are split; the input is the merged dense region set with corresponding density scores output by the merge operation, and the output is the split dense region set with corresponding density scores; the splitting steps are as follows:
B1) a splitting threshold $L_t$ is set to judge whether the current dense region a needs to be split;
B2) a region $b_i$ is taken from $\mathcal{B}^o$; if the region does not need splitting, it is placed with its corresponding density score into the output sets $\{\mathcal{B}^c, \mathcal{D}^c\}$; otherwise it is bisected, with an overlap of $L_t/4$ kept at the bisection boundary; the density scores of the split regions remain the same as the original density score, and the regions are placed into the output sets; the above operations are repeated until all regions in $\mathcal{B}^o$ have been taken out;
B3) the sets $\{\mathcal{B}^c, \mathcal{D}^c\}$ contain the output split dense regions with their corresponding density scores.
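The merging procedure A1)-A5) can be summarized in the following Python sketch; the threshold value 0.5 is illustrative, and the overlap function mirrors the formula given in step A2).

```python
def overlap(a, b):
    """Overlap degree OL(a, b) of two boxes (x1, y1, x2, y2), measured
    as intersection area over the smaller box's area, matching the
    overlap formula in step A2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return (ix * iy) / min(area(a), area(b))

def merge_regions(boxes, scores, group_of, n_t=0.5):
    """Greedy dense-region merging following steps A3)-A5); n_t is the
    synthesis threshold (0.5 is illustrative) and group_of maps a
    density score to its density group from step A1)."""
    pool = list(zip(boxes, scores))
    merged_out = []
    while pool:
        # take the region with the highest density score
        k = max(range(len(pool)), key=lambda i: pool[i][1])
        b_k, d_k = pool.pop(k)
        # collect highly overlapping regions from the same density group
        cand = [(b, d) for b, d in pool
                if overlap(b_k, b) > n_t and group_of(d) == group_of(d_k)]
        if cand:
            pool = [bd for bd in pool if bd not in cand]
            cand.append((b_k, d_k))
            # merged box spans the extremes; its score is the group minimum
            new_box = (min(b[0] for b, _ in cand), min(b[1] for b, _ in cand),
                       max(b[2] for b, _ in cand), max(b[3] for b, _ in cand))
            pool.append((new_box, min(d for _, d in cand)))
        else:
            merged_out.append((b_k, d_k))
    return merged_out
```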
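A corresponding sketch of the splitting operation B1)-B3) follows; the rule used here for deciding that a region is oversized (its longer side exceeding L_t) and the example value of L_t are assumptions, while the L_t/4 boundary overlap and the inherited density score follow the text above.

```python
def split_regions(regions, l_t=512):
    """Dense-region splitting following steps B1)-B3). Each region
    judged oversized is bisected once along its longer side, keeping
    an overlap of l_t / 4 at the bisection boundary; both halves
    inherit the original density score."""
    out = []
    margin = l_t / 4.0
    for (x1, y1, x2, y2), d in regions:
        w, h = x2 - x1, y2 - y1
        if max(w, h) <= l_t:                 # no split needed
            out.append(((x1, y1, x2, y2), d))
        elif w >= h:                         # bisect along the width
            mid = x1 + w / 2.0
            out.append(((x1, y1, mid + margin, y2), d))
            out.append(((mid - margin, y1, x2, y2), d))
        else:                                # bisect along the height
            mid = y1 + h / 2.0
            out.append(((x1, y1, x2, mid + margin), d))
            out.append(((x1, mid - margin, x2, y2), d))
    return out
```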
Fifthly, constructing and training a local area pest target detection network group: constructing and training a local area pest target detection network group; the image local areas grouped according to the density scores and obtained through pest dense area standardization are input, and pest identification and positioning results in the image local areas grouped according to the density scores are output.
In the prior art, a single group of local area detection networks generally performs target identification and positioning in local areas, so that one detection network must handle regions with widely varying density characteristics and the detection precision is therefore limited. The local area pest target detection network group instead groups local areas by their target density, so that each group of detection networks identifies and positions targets in regions of similar density, which reduces the loss of detection precision caused by the density span. The technical difficulty lies in effectively combining the standardization operation while manually setting the optimal number of groups and the density grouping basis; the optimal parameters must be summarized through a large number of experiments.
The construction and training of the local area pest target detection network group comprises the following steps:
(1) constructing a plurality of groups of parallel local area pest target detection networks to form a local area pest target detection network group; the input of each group of local area pest target detection network is a local area which is obtained by standardizing a pest dense area and is grouped in a corresponding density mode, and the output is an identification and positioning result of the pest target in the grouped local area; the construction of each group of local area pest target detection network comprises the following steps:
C1) setting a pest feature extraction network; the pest feature extraction network is used for extracting an input pest feature map of a local region, inputting a dense region obtained by standardizing a pest region, and outputting the pest feature map based on the dense region;
C2) setting a pest identification and positioning network; the pest identification and positioning network is used for automatically learning pest characteristic diagrams and identifying and positioning pests, inputting the obtained pest characteristic diagrams of the dense area, and outputting pest type identification and positioning results of the dense area;
(2) training a plurality of groups of parallel local area pest target detection networks by using the obtained standardized pest dense areas; the training comprises the following steps:
D1) three groups are selected according to the density score of the input region: sparse, moderately dense, and generally to extremely dense; the standardized dense regions are grouped by their corresponding density scores;
D2) and inputting each grouped local dense region into the corresponding grouped local region pest target detection network for training to obtain the trained local region pest target detection network.
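The grouping and dispatch logic of D1)-D2) can be sketched as follows for inference time; the mapping of score bands to the three detector groups follows step D1), while the detector interface is an illustrative assumption.

```python
def density_group(score):
    """Map a density score to its detector group per step D1):
    sparse (2-2.5), moderately dense (2.5-3), and generally to
    extremely dense (3-4) form the three groups."""
    if score < 2.5:
        return "sparse"
    if score < 3.0:
        return "moderate"
    return "dense"

def detect_local_regions(regions, detectors):
    """Send each standardized dense region to the detector of its
    density group; `detectors` maps a group name to a callable that
    returns detections for one image crop (an assumed interface)."""
    results = []
    for crop, score in regions:
        results.extend(detectors[density_group(score)](crop))
    return results
```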
And sixthly, constructing and training a global pest target detection network. The global pest target detection network is used to detect the sparse pest targets in the whole picture and serves as a supplement to the detection results of the local dense areas, finally yielding complete recognition and positioning results for all pests in the picture; it is constructed and trained according to the prior art.
Firstly, constructing a global pest target detection network, including an overall feature extraction network and a pest target identification and positioning network;
secondly, an overall feature extraction network is set for extracting the feature map of the whole input picture; its input is a tiny pest picture and its output is the overall feature map obtained from the whole pest picture;
thirdly, a pest target identification and positioning network is set for automatically learning the overall feature map and detecting pest targets; its input is the overall feature map and its output is the pest identification results and positioning results;
and finally, training the global pest target detection network.
Seventhly, fusing pest detection results: and fusing the pest identification and positioning results output by the global pest target detection network and the local region pest target detection network group to obtain a global pest identification and positioning result.
Eighthly, acquiring an image of the pest to be detected: and acquiring a micro pest image to be detected.
And ninthly, obtaining a pest image detection result.
(1) Inputting an agricultural tiny pest image to be detected into a trained pest dense area detection network to obtain a pest local area;
(2) standardizing pest dense areas in pest local areas to generate standardized pest dense areas;
(3) dividing pest dense areas into groups according to corresponding density scores, and respectively inputting corresponding target detection networks in the trained local area pest target detection network groups to generate local area pest target detection results of the corresponding density groups;
(4) inputting an agricultural tiny pest image to be detected into the trained global pest target detection network to obtain a global pest target detection result;
(5) fusion of pest target detection results: fusing the global pest target detection result and the local region pest target detection results to obtain the final target detection result.
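To make step (5) concrete, the following sketch fuses the two detection sets with class-wise non-maximum suppression after the local boxes have been mapped back to global image coordinates; NMS as the fusion rule and the 0.5 IoU threshold are assumptions, since the patent only states that the two result sets are fused.

```python
def fuse_detections(global_dets, local_dets, iou_thresh=0.5):
    """Fuse global and local detections into one result set.

    Each detection: (x1, y1, x2, y2, class_id, confidence) in global
    image coordinates (local boxes already offset by their region's
    position). Keeps the highest-confidence box among same-class
    boxes that overlap more than iou_thresh.
    """
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    dets = sorted(global_dets + local_dets, key=lambda d: d[5], reverse=True)
    kept = []
    for d in dets:
        if all(d[4] != k[4] or iou(d, k) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```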
As shown in fig. 2 to 5, the recognition results of the technical method of the present invention on tiny pest pictures show that the method can handle both the detection of tiny pests in dense regions and the detection of sparse pests, and compared with the prior art it misses fewer detections and achieves higher precision in dense regions.
Table 1: Comparison of detection results on the tiny pest data set
Method AP AP50 AP75
FCOS 22.0 61.9 8.7
RetinaNet 17.5 51.3 6.5
FasterRCNN 23.6 63.2 10.8
DMNet 24.5 64.6 12.0
The method of the invention 30.5 71.8 16.3
As shown in Table 1, measured with the detection precision metrics AP, AP50, and AP75 that are well known in the industry, the detection precision of the method of the present invention on the tiny pest data set is superior to that of the other prior art methods.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1.一种基于局部密集区域密度特征检测的微小害虫图像识别方法,其特征在于,包括以下步骤:1. a tiny pest image recognition method based on local dense area density feature detection, is characterized in that, comprises the following steps: 11)训练图像的获取:获取带有人工标记的害虫图像数据集;11) Acquisition of training images: Acquire a dataset of pest images with artificial labels; 12)害虫密集区域检测网络的构建:构建害虫密集区域检测网络,其中害虫密集区域检测网络包括整体特征提取网络与密集区域建议网络;12) Construction of the detection network of the dense area of pests: construct the detection network of dense area of pests, in which the detection network of dense area of pests includes the overall feature extraction network and the dense area suggestion network; 13)害虫密集区域检测网络的训练:利用害虫图像数据集对害虫密集区域检测网络进行训练;13) Training of pest-intensive area detection network: use the pest image dataset to train the pest-intensive area detection network; 14)害虫密集区域标准化:对由害虫密集区域检测网络输出的害虫局部区域进行标准化操作;害虫密集区域的标准化操作包括害虫密集区域合并操作与害虫密集区域切分操作,输入为害虫密集区域检测网络输出的害虫局部区域,输出为按照密度得分分组且尺寸相似的标准化的图像局部区域;14) Standardization of pest-intensive areas: standardize the local areas of pests output by the pest-intensive area detection network; the normalization operations of pest-intensive areas include pest-intensive area merging operations and pest-intensive area segmentation operations, and the input is the pest-intensive area detection network The output pest local area, the output is a standardized image local area grouped according to the density score and similar in size; 15)局部区域害虫目标检测网络组的构建与训练:对局部区域害虫目标检测网络组进行构建与训练;输入为经过害虫密集区域标准化获得的按照密度得分分组的图像局部区域,输出为按照密度得分分组的图像局部区域中的害虫识别定位结果;15) Construction and training of the local area pest target detection network group: construct and train the local area pest target detection network group; the input is the image local area grouped according to the density score obtained by the normalization of the pest dense area, and the output is according to the density score. Pest identification and localization results in the local area of the grouped image; 16)全局害虫目标检测网络的构建与训练;16) Construction and training of global pest target detection network; 17)害虫检测结果融合:将全局害虫目标检测网络与局部区域害虫目标检测网络组输出的害虫识别定位结果进行融合,获得全局害虫识别定位结果;17) Fusion of pest detection results: Integrate the pest identification and positioning results output by the global pest target detection network and the local pest target detection network group to obtain the global pest identification and positioning results; 18)待检测害虫图像的获取:获取待检测的微小害虫图像;18) Acquisition of images of pests to be detected: acquiring images of tiny pests to be detected; 19)害虫图像检测结果的获得。19) Acquisition of pest image detection results. 2.根据权利要求1所述的基于局部密集区域密度特征检测的微小害虫图像识别方法,其特征在于,所述害虫密集区域检测网络的构建包括以下步骤:2. The method for recognizing tiny pest images based on local dense area density feature detection according to claim 1, wherein the construction of the pest dense area detection network comprises the following steps: 21)设定整体特征提取网络:整体特征提取网络包括骨干网络和特征融合网络;骨干网络由叠加的多层卷积神经网络层、池化层与激活函数层构成,用于提取图片中的基础特征,输出多层特征图;特征融合网络通过侧向连接骨干网络输出的多层特征图,将各层特征图进行融合,输出兼顾不同层次语义特征的整体特征图;其中,骨干网络为ResNet50网络,特征融合网络为FPN特征金字塔网络;21) Set the overall feature extraction network: the overall feature extraction network includes a backbone network and a feature fusion network; the backbone network is composed of superimposed multi-layer convolutional neural network layers, pooling layers and activation function layers, which are used to extract the basis of the image. 
feature, and output multi-layer feature maps; the feature fusion network fuses the feature maps of each layer by laterally connecting the multi-layer feature maps output by the backbone network, and outputs an overall feature map that takes into account the semantic features of different levels; among them, the backbone network is the ResNet50 network , the feature fusion network is FPN feature pyramid network; 22)设定密集区域建议网络:设定密集区域建议网络的输入为整体特征提取网络输出的整体特征图,输出为整体特征图上以每个锚点为中心的选取区域的密度得分;22) Setting the dense region proposal network: the input of the dense region proposal network is set as the overall feature map output by the overall feature extraction network, and the output is the density score of the selected region centered on each anchor point on the overall feature map; 密集区域建议网络首先使用一个拥有512个通道的卷积核大小为3×3的卷积层,之后使用线性整流函数ReLu作为卷积层激活函数,再使用一个卷积核大小为1×1的卷积层,由区域形状数量S与区域放大比例数量R的乘积决定该卷积层的通道数S×R。The dense region proposal network first uses a convolutional layer with a convolutional kernel size of 3×3 with 512 channels, then uses the linear rectification function ReLu as the convolutional layer activation function, and then uses a convolutional kernel size of 1×1. For the convolutional layer, the number of channels S×R of the convolutional layer is determined by the product of the number of area shapes S and the number of area enlargement ratios R. 3.根据权利要求1所述的基于局部密集区域密度特征检测的微小害虫图像识别方法,其特征在于,所述害虫密集区域检测网络的训练包括以下步骤:3. The tiny pest image recognition method based on local dense area density feature detection according to claim 1, is characterized in that, the training of described pest dense area detection network comprises the following steps: 31)将带有人工标注的害虫图像数据集输入整体特征提取网络,通过骨干网络提取图像基础特征图,之后基础特征图通过特征融合网络,输出多层语义互相融合之后的整体特征图;31) Input the pest image data set with manual annotation into the overall feature extraction network, extract the basic feature map of the image through the backbone network, and then output the overall feature map after the multi-layer semantic fusion of the basic feature map through the feature fusion network; 32)将整体特征图输入密集区域建议网络,32) Input the overall feature map into the dense region proposal network, 设定一个在特征图上滑动的锚点A,滑动单次步长设定为k,以该锚点为中心,共有S种形状的区域选择框,每种选择框拥有R种放大比例,当锚点滑动至第i个位置,当前第s种形状在第r个放大比例下的选取框包含的人工标记的目标数量为
Figure FDA0003034976490000021
当前区域选取框面积为
Figure FDA0003034976490000022
使用如下公式表示当前选取框中的目标密度得分:
Set an anchor point A that slides on the feature map, set the single sliding step to k, and take the anchor point as the center, there are S types of area selection boxes, each selection box has R types of magnification ratios, when The anchor point slides to the i-th position, and the number of artificially marked targets contained in the marquee of the current s-th shape at the r-th magnification ratio is:
Figure FDA0003034976490000021
The current area marquee area is
Figure FDA0003034976490000022
Use the following formula to express the target density score in the current marquee:
Figure FDA0003034976490000023
Figure FDA0003034976490000023
其中O为偏差补偿系数,以保证目标密度得分为正数,在应用中取O=10,且设定目标密度得分的最大值dmax=4,最小值dmin=1;Wherein O is the deviation compensation coefficient, to ensure that the target density score is a positive number, take O=10 in the application, and set the maximum value of the target density score d max =4, the minimum value d min =1; 设定当前选取框的目标密度得分
Figure FDA0003034976490000024
为真实密度得分,设定网络根据整体特征图通过卷积层输出的得分
Figure FDA0003034976490000025
为当前选取框的目标密度预测得分;使用如下公式表示当前图像产生的用于密集区域检测网络反向传播训练的损失函数:
Sets the target density score for the current marquee
Figure FDA0003034976490000024
For the true density score, set the score output by the network through the convolutional layer according to the overall feature map
Figure FDA0003034976490000025
Predict the score for the target density of the current marquee; use the following formula to represent the loss function generated by the current image for back-propagation training of the dense region detection network:
Figure FDA0003034976490000026
Figure FDA0003034976490000026
其中I为图像中锚点位置个数,
Figure FDA0003034976490000027
为每个选取框的损失函数,由光滑L1范数SmoothL1计算而得:
where I is the number of anchor positions in the image,
Figure FDA0003034976490000027
The loss function for each marquee, calculated by the smooth L1 norm SmoothL1:
Figure FDA0003034976490000031
Figure FDA0003034976490000031
最终经过训练得害虫密集区域检测网络将对每一张图像输出一系列候选区域
Figure FDA0003034976490000032
与其对应的预测密度得分
Figure FDA0003034976490000033
具有高密度得分的候选区域即为密集区域。
Finally, the trained pest dense region detection network will output a series of candidate regions for each image
Figure FDA0003034976490000032
its corresponding predicted density score
Figure FDA0003034976490000033
A candidate region with a high density score is a dense region.
4.根据权利要求1所述的基于局部密集区域密度特征检测的微小害虫图像识别方法,其特征在于,所述害虫密集区域标准化包括以下步骤:4. The tiny pest image recognition method based on local dense area density feature detection according to claim 1, wherein the standardization of the pest dense area comprises the following steps: 41)对密度得分相近且区域高度重合的候选区域进行合并操作,输入为密集区域检测网络输出的密集区域集合与对应的密度得分,输出为合并后的密集区域集合与对应的密度得分;合并密集区域步骤如下:41) Merge the candidate regions with similar density scores and highly overlapping regions, the input is the dense region set output by the dense region detection network and the corresponding density score, and the output is the merged dense region set and the corresponding density score; The regional steps are as follows: 411)将密度得分按照得分高低分为5组,分别如下:411) Divide the density scores into 5 groups according to the scores, as follows: 得分1-2为无密集、得分2-2.5为稀疏、得分2.5-3为中等密集、得分3-3.5为一般密集、得分3.5-4为极度密集;A score of 1-2 is no dense, a score of 2-2.5 is sparse, a score of 2.5-3 is moderately dense, a score of 3-3.5 is moderately dense, and a score of 3.5-4 is extremely dense; 将带有人工标记的农业微小害虫图像输入训练后的害虫密集区域检测网络,得到害虫局部区域
Figure FDA0003034976490000034
其中使用左上与右下坐标表示每个密集区域
Figure FDA0003034976490000035
The artificially labeled agricultural tiny pest images are input into the trained pest dense area detection network to obtain the pest local area
Figure FDA0003034976490000034
where upper left and lower right coordinates are used to represent each dense area
Figure FDA0003034976490000035
412)设定重叠程度计算公式计算两个密集区域a与b的重叠程度OL(a,b),其计算公式如下:412) Set the overlapping degree calculation formula to calculate the overlapping degree OL(a,b) of the two dense areas a and b, and the calculation formula is as follows:
Figure FDA0003034976490000036
Figure FDA0003034976490000036
并设定合成阈值Nt判断两个密集区域a与b的重叠程度是否需要合成;若OL(a,b)大于阈值Nt,则进行密集区域a与b的合并操作;And set the synthesis threshold N t to judge whether the overlapping degree of the two dense areas a and b needs to be synthesized; if OL(a, b) is greater than the threshold N t , then perform the merging operation of the dense areas a and b; 413)设定合并操作
Figure FDA0003034976490000037
其输入
Figure FDA0003034976490000038
为需合并的区域集合,
Figure FDA0003034976490000039
为需要合并区域对应密度得分集合,输出{b′,d′}为合并成的新区域以及对应的密度得分,
Figure FDA00030349764900000310
中最小的
Figure FDA00030349764900000311
Figure FDA00030349764900000312
以及最大的
Figure FDA00030349764900000313
Figure FDA00030349764900000314
作为合成区域b′的左上和右下坐标;合成区域对应的密度得分为
Figure FDA0003034976490000041
中的最小值,记为
Figure FDA0003034976490000042
413) Set merge operation
Figure FDA0003034976490000037
its input
Figure FDA0003034976490000038
is the set of regions to be merged,
Figure FDA0003034976490000039
For the set of density scores corresponding to the regions to be merged, the output {b', d'} is the new merged region and the corresponding density scores,
Figure FDA00030349764900000310
the smallest
Figure FDA00030349764900000311
and
Figure FDA00030349764900000312
and the largest
Figure FDA00030349764900000313
and
Figure FDA00030349764900000314
As the upper left and lower right coordinates of the synthesized area b'; the corresponding density score of the synthesized area is
Figure FDA0003034976490000041
The minimum value in , denoted as
Figure FDA0003034976490000042
414)从密集区域
Figure FDA0003034976490000043
中取出对应密度得分
Figure FDA0003034976490000044
最大的区域bk,并遍历
Figure FDA0003034976490000045
中其他所有区域bi进行重叠度计算OL(bk,bi),如果重叠程度大于合成阈值Nt,且对应密度得分dk与di属于同一密度分组,则将bi与di取出放入至合并候选集合
Figure FDA0003034976490000046
中;如果当遍历结束后候选集和非空,则将当前选取的区域bk与dk也放入候选集合进行合并操作
Figure FDA0003034976490000047
并将输出的b′,d′放回
Figure FDA0003034976490000048
Figure FDA0003034976490000049
否则,就将bk,dk放入输出集合
Figure FDA00030349764900000410
中;循环上述操作直至
Figure FDA00030349764900000411
中所有区域被取出;
414) From dense areas
Figure FDA0003034976490000043
Take out the corresponding density score
Figure FDA0003034976490000044
the largest area b k , and traverse
Figure FDA0003034976490000045
OL(b k ,b i ) is used to calculate the overlap degree of all other regions b i in the put into merge candidate set
Figure FDA0003034976490000046
in; if the candidate set sum is not empty after the traversal, the currently selected regions b k and d k are also put into the candidate set for the merging operation
Figure FDA0003034976490000047
and put the output b', d' back
Figure FDA0003034976490000048
and
Figure FDA0003034976490000049
Otherwise, put b k , d k into the output set
Figure FDA00030349764900000410
; loop the above operations until
Figure FDA00030349764900000411
All areas are taken out;
415)集合
Figure FDA00030349764900000412
中为输出的合并之后的密集区域与对应的密集得分;
415) Collection
Figure FDA00030349764900000412
In is the output merged dense region and the corresponding dense score;
42) Perform a splitting operation on the oversized areas among the merged dense areas. The input is the merged dense-area set and the corresponding density scores output by the merge operation; the output is the dense-area set after splitting and the corresponding density scores. The splitting proceeds as follows:
421) Set a split threshold L_t for judging whether the current dense area a needs to be split;
422) Take an area b_i out of the input set. If the area does not need to be split, put it and its corresponding density score into the output set; otherwise bisect it, keeping an overlap band of width L_t/4 at the bisection boundary. The split areas keep the original density score and are put into the output set. Repeat the above until all areas have been taken out;
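The claim leaves the "needs splitting" test to the threshold L_t without fixing the criterion; a natural reading is that an area is oversized when its longer side exceeds L_t, and that the L_t/4 overlap is a shared band of that width straddling the bisection line. A sketch under those assumptions:

```python
def split_oversized(areas, scores, l_t):
    """Splitting of steps 421-422 (sketch). An area whose longer side
    exceeds l_t is bisected along that side; each half extends l_t/8
    past the midline, so the halves share an overlap band of width
    l_t/4. Both halves keep the original density score. The
    'longer side > l_t' test is an assumed reading of the claim."""
    out_areas, out_scores = [], []
    for (x1, y1, x2, y2), d in zip(areas, scores):
        w, h = x2 - x1, y2 - y1
        if max(w, h) <= l_t:  # small enough: pass through unchanged
            out_areas.append((x1, y1, x2, y2))
            out_scores.append(d)
            continue
        pad = l_t / 8
        if w >= h:  # bisect the wider axis
            mid = (x1 + x2) / 2
            halves = [(x1, y1, mid + pad, y2), (mid - pad, y1, x2, y2)]
        else:
            mid = (y1 + y2) / 2
            halves = [(x1, y1, x2, mid + pad), (x1, mid - pad, x2, y2)]
        out_areas += halves
        out_scores += [d, d]
    return out_areas, out_scores
```

The overlap band keeps pests sitting on the bisection line visible in both halves, so neither downstream detector sees them cropped in two.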
423) The output set holds the dense areas after splitting and their corresponding density scores.
5. The tiny pest image recognition method based on local dense area density feature detection according to claim 1, wherein the construction and training of the local area pest target detection network group comprises the following steps:
51) Construct multiple groups of parallel local area pest target detection networks to form the local area pest target detection network group. The input of each group's network is the local areas of the corresponding density group obtained from pest dense-area standardization, and the output is the identification and localization results for the pest targets in those local areas. Each group's network is built as follows:
511) Set up a pest feature extraction network, which extracts the pest feature map of an input local area; the input is a dense area obtained from pest-area standardization, and the output is the pest feature map of that dense area;
512) Set up a pest identification and localization network, which automatically learns from the pest feature map and performs pest identification and localization; the input is the pest feature map of the dense area, and the output is the pest species identification and localization results for that dense area;
52) Train the multiple groups of parallel networks with the standardized pest dense areas:
521) According to the density score of each input area, form three density groups (sparse, moderately dense, and extremely dense) and assign the standardized dense areas to groups by their density scores (one possible grouping is sketched below);
522) Feed the local dense areas of each group into the corresponding group's detection network for training, obtaining the trained local area pest target detection network group.
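The claim names the density groups but not their numeric boundaries. A sketch of the grouping of step 521, with illustrative cut points that are assumptions rather than values from the filing:

```python
def group_by_density(areas, scores, cuts=(0.3, 0.7)):
    """Step 521 (sketch): route standardized dense areas into the
    sparse / moderately dense / extremely dense groups by density
    score. The two cut points are illustrative assumptions; the claim
    gives group names but no numeric boundaries."""
    groups = {"sparse": [], "moderately_dense": [], "extremely_dense": []}
    lo, hi = cuts
    for area, score in zip(areas, scores):
        if score < lo:
            groups["sparse"].append(area)
        elif score < hi:
            groups["moderately_dense"].append(area)
        else:
            groups["extremely_dense"].append(area)
    return groups
```

Each group is then fed to the detection network trained for that density band (step 522), so every network sees a narrower, more homogeneous distribution of pest densities.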
6. The tiny pest image recognition method based on local dense area density feature detection according to claim 1, wherein the construction and training of the global pest target detection network comprises the following steps:
61) Construct the global pest target detection network from an overall feature extraction network and a pest target identification and localization network;
62) Set the overall feature extraction network to extract the feature map of the entire input image; the input is a tiny-pest image, and the output is the overall feature map obtained from the whole image;
63) Set the pest target identification and localization network to automatically learn from the overall feature map and perform pest target detection; the input is the overall feature map, and the output is the pest identification and localization results;
64) Train the global pest target detection network.
7. The tiny pest image recognition method based on local dense area density feature detection according to claim 1, wherein obtaining the pest image detection results comprises the following steps:
71) Input the agricultural tiny-pest image to be detected into the trained pest dense-area detection network to obtain the local pest areas;
72) Standardize the local pest areas to generate standardized pest dense areas;
73) Group the pest dense areas by their density scores and input each group into the corresponding detection network in the trained local area pest target detection network group, producing local-area pest target detection results for each density group;
74) Input the image to be detected into the trained global pest target detection network to obtain the global pest target detection results;
75) Fuse the detection results: merge the global pest target detection results with the local-area pest target detection results to obtain the final target detection results (one possible fusion is sketched below).
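Step 75 says the global and local results are fused but does not name the rule. One common realization, sketched here as an assumption rather than as the patented method, is to shift each local detection back into full-image coordinates, pool all boxes, and de-duplicate with non-maximum suppression:

```python
import torch
from torchvision.ops import nms

def fuse_detections(global_boxes, global_scores, local_results, offsets,
                    iou_thr=0.5):
    """Step 75 (sketch): fuse global and local-area detections.

    local_results: list of (boxes[N,4], scores[N]) pairs per local
    area, in local-area coordinates. offsets: (dx, dy) origin of each
    local area in the full image. NMS as the fusion rule is an
    assumption; the claim only states that the two result sets are
    fused.
    """
    boxes, scores = [global_boxes], [global_scores]
    for (b, s), (dx, dy) in zip(local_results, offsets):
        shift = torch.tensor([dx, dy, dx, dy], dtype=b.dtype)
        boxes.append(b + shift)  # back to full-image coordinates
        scores.append(s)
    boxes, scores = torch.cat(boxes), torch.cat(scores)
    keep = nms(boxes, scores, iou_thr)  # class-agnostic de-duplication
    return boxes[keep], scores[keep]
```

In practice the suppression would be run per pest class; the class-agnostic form above keeps the sketch short.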
CN202110440782.0A 2021-04-23 2021-04-23 Tiny pest image identification method based on local dense area density feature detection Active CN113159183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110440782.0A CN113159183B (en) 2021-04-23 2021-04-23 Tiny pest image identification method based on local dense area density feature detection

Publications (2)

Publication Number Publication Date
CN113159183A 2021-07-23
CN113159183B CN113159183B (en) 2022-08-30

Family

ID=76870091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110440782.0A Active CN113159183B (en) 2021-04-23 2021-04-23 Tiny pest image identification method based on local dense area density feature detection

Country Status (1)

Country Link
CN (1) CN113159183B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200250461A1 (en) * 2018-01-30 2020-08-06 Huawei Technologies Co., Ltd. Target detection method, apparatus, and system
WO2020102988A1 (en) * 2018-11-20 2020-05-28 西安电子科技大学 Feature fusion and dense connection based infrared plane target detection method
JP2020091543A (en) * 2018-12-03 2020-06-11 キヤノン株式会社 Learning device, processing device, neural network, learning method, and program
CN111178121A (en) * 2018-12-25 2020-05-19 中国科学院合肥物质科学研究院 Pest image localization and recognition method based on spatial feature and depth feature enhancement technology
CN111460315A (en) * 2020-03-10 2020-07-28 平安科技(深圳)有限公司 Social portrait construction method, device and equipment and storage medium
CN111476317A (en) * 2020-04-29 2020-07-31 中国科学院合肥物质科学研究院 Plant protection image non-dense pest detection method based on reinforcement learning technology
CN111476238A (en) * 2020-04-29 2020-07-31 中国科学院合肥物质科学研究院 Pest image detection method based on regional scale perception technology
CN112488244A (en) * 2020-12-22 2021-03-12 中国科学院合肥物质科学研究院 Method for automatically counting densely distributed small target pests in point labeling mode by utilizing thermodynamic diagram

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
C. LI et al.: "Density map guided object detection in aerial images", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (2020) *
LI R et al.: "A coarse-to-fine network for aphid recognition and detection in the field", Biosystems Engineering *
LIU, LIU et al.: "Deep Learning based Automatic Approach using Hybrid Global and Local Activated Features towards Large-scale Multi-class Pest Monitoring", 2019 IEEE 17th International Conference on Industrial Informatics (INDIN) *
LIU Liu: "Research and application of crop pest detection methods based on deep learning", CNKI Doctoral Dissertations Electronic Journal *
LI Guorui et al.: "Fine-grained bird recognition based on cross-layer feature fusion of semantic information", Computer Applications and Software *
LIANG Yanyu et al.: "Small object detection algorithm with multi-scale non-local attention network", Journal of Frontiers of Computer Science and Technology *

Also Published As

Publication number Publication date
CN113159183B (en) 2022-08-30

Similar Documents

Publication Title
CN111027547B (en) Automatic detection method for multi-scale polymorphic target in two-dimensional image
CN110599448B (en) Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN107977671B (en) Tongue picture classification method based on multitask convolutional neural network
CN108830326B (en) Automatic segmentation method and device for MRI (magnetic resonance imaging) image
CN101315631B (en) A news video story unit association method
CN109410238B (en) Wolfberry identification and counting method based on PointNet + + network
CN112489081B (en) Visual target tracking method and device
CN111079602A (en) Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN108288271A (en) Image detecting system and method based on three-dimensional residual error network
Pan et al. Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks
CN108171700A (en) Medical image pulmonary nodule detection method based on confrontation network
CN112651969B (en) Trachea tree hierarchical extraction method combining multi-information fusion network and regional growth
JP2016018538A (en) Image recognition device and method and program
CN109558902A (en) A kind of fast target detection method
CN114140665A (en) A Dense Small Object Detection Method Based on Improved YOLOv5
CN109472226A (en) A sleep behavior detection method based on deep learning
CN107256017A (en) route planning method and system
CN113313149A (en) Dish identification method based on attention mechanism and metric learning
CN107424161A (en) A kind of indoor scene image layout method of estimation by thick extremely essence
CN110929746A (en) A deep neural network-based method for location, extraction and classification of electronic file titles
CN106407943A (en) Pyramid layer positioning based quick DPM pedestrian detection method
CN116563205A (en) Wheat spike counting detection method based on small target detection and improved YOLOv5
CN106023159A (en) Disease spot image segmentation method and system for greenhouse vegetable leaf
CN112215285A (en) Cross-media-characteristic-based automatic fundus image labeling method
CN110378882A (en) A kind of Chinese medicine tongue nature method for sorting colors of multi-layer depth characteristic fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant