CN114782682A - Agricultural pest image intelligent identification method based on neural network - Google Patents

Agricultural pest image intelligent identification method based on neural network Download PDF

Info

Publication number
CN114782682A
CN114782682A CN202210694051.3A CN202210694051A CN114782682A CN 114782682 A CN114782682 A CN 114782682A CN 202210694051 A CN202210694051 A CN 202210694051A CN 114782682 A CN114782682 A CN 114782682A
Authority
CN
China
Prior art keywords
image
scale
region
pest
significant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210694051.3A
Other languages
Chinese (zh)
Other versions
CN114782682B (en
Inventor
常新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Daofa Digital Information Technology Co ltd
Original Assignee
Xi'an Daofa Digital Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Daofa Digital Information Technology Co ltd filed Critical Xi'an Daofa Digital Information Technology Co ltd
Priority to CN202210694051.3A priority Critical patent/CN114782682B/en
Publication of CN114782682A publication Critical patent/CN114782682A/en
Application granted granted Critical
Publication of CN114782682B publication Critical patent/CN114782682B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing, in particular to an agricultural pest image intelligent identification method based on a neural network, which comprises the steps of obtaining a pest visible light image containing a pest body, carrying out corresponding data processing based on the pest visible light image, and determining the significant value of each area image of the pest visible light image in a scale area image under different scales so as to determine each significant object area image in each area image; determining a comprehensive significant value of each significant object area image, thereby determining each pest area image in each significant object area image; and identifying pests in each pest region image so as to obtain corresponding pest types. The invention can accurately determine each pest region image by adopting the mode of carrying out data identification and data processing on the image, thereby avoiding the pest identification result from being influenced by background and other interferents and effectively improving the identification accuracy of the pest category.

Description

Agricultural pest image intelligent identification method based on neural network
Technical Field
The invention relates to the technical field of data processing, in particular to an agricultural pest image intelligent identification method based on a neural network.
Background
Agriculture, as a basic industry, dominates the national economy. With climate change, change of farming and cultivation modes and improvement of crop multiple cropping indexes, crop diseases and insect pests are in a multi-occurrence and frequent trend, and major crop diseases and insect pests occur frequently, so that the insect pest control work needs to be developed urgently. Therefore, accurately identifying the agricultural pest species in real time becomes an important premise for developing pest situation control work. However, the pests in the crop field have small volume, various varieties, diversified morphological characteristics, similar and intraspecific varieties and are easy to be confused.
Rice is one of the main food crops in China, and about half of the population takes rice as staple food. At present, the diagnosis method of crop pests in China mainly depends on manual identification, has large subjective factors and poor real-time performance, and is easy to cause misjudgment. Physical prevention and control methods such as insect pest situation measuring and reporting lamps are time-consuming, labor-consuming and poor in accuracy. With the increasingly mature image processing and mode recognition technology, the method for automatically recognizing pests by extracting image features of agricultural pests improves the accuracy and timeliness of agricultural pest recognition, but recognition results are greatly influenced by image feature extraction, and the method is narrow in applicable range and weak in generalization capability. Because the pests are similar to other interference objects, the detection precision of the existing target detection algorithm is not high. The image processing method based on the convolutional neural network greatly surpasses the traditional machine vision method in model precision and generalization capability, and shows stronger robustness in the pest image recognition problem under the natural environment of the field, but the result of the existing target detection algorithm is greatly influenced by the image background and the extracted image characteristics, so that the detection result is not accurate enough.
Disclosure of Invention
The invention aims to provide an agricultural pest image intelligent identification method based on a neural network, which is used for solving the problem that the existing pest identification result is not accurate enough.
In order to solve the technical problem, the invention provides an agricultural pest image intelligent identification method based on a neural network, which comprises the following steps:
acquiring a pest visible light image containing a pest body, and performing data preprocessing on the pest visible light image so as to acquire a preprocessed pest visible light image;
carrying out region segmentation on the preprocessed pest visible light image to obtain each region image, and further obtaining scale region images of each region image under different scales;
acquiring a scale region gray image and a scale region Lab image corresponding to all scale region images, and performing data processing on the scale region gray image and the scale region Lab image so as to obtain significant values corresponding to all scale region images;
screening each area image according to the corresponding significant values of all the scale area images so as to obtain each significant object area image;
performing data processing on the images of the various salient object areas so as to obtain a pest limb space position salient enhancement coefficient and a pest limb space distribution salient enhancement coefficient corresponding to the images of the various salient object areas;
calculating a comprehensive significant value corresponding to each significant object region image according to the significant enhancement coefficient of the spatial position of the pest limbs, the significant enhancement coefficient of the spatial distribution of the pest limbs and the significant value corresponding to each significant object region image;
screening each salient object area image according to the comprehensive salient value corresponding to each salient object area image, thereby obtaining each pest area image;
and inputting each pest region image into a pest type identification network respectively so as to obtain a corresponding pest type.
Further, the performing data processing on the gray scale image in the scale area and the Lab image in the scale area to obtain significant values corresponding to all the images in the scale area includes:
determining the corresponding regional contrast between any one scale region image and other scale region images under the same scale according to the scale region Lab image of each scale region image under the same scale;
determining the central position of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale, thereby determining the corresponding spatial Euclidean distance between any one scale region image and other scale region images under the same scale;
determining the mean square error of the gray level of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale;
determining corresponding difference values between any one scale region image and other scale region images under the same scale according to the corresponding region contrast and the spatial Euclidean distance between any one scale region image and other scale region images under the same scale and the gray mean square error of the gray scale region gray scale images of any one scale region image and other scale region images under the same scale;
and determining the corresponding significant values of all the scale region images according to the corresponding difference values between any one scale region image and other scale region images under the same scale.
Further, the determining the corresponding regional contrast between any one scale region image and each other scale region image under the same scale includes:
determining an a-channel color mean value and a b-channel color mean value of the scale region Lab image of each scale region image under the same scale according to the scale region Lab image of each scale region image under the same scale;
calculating the absolute value of the difference value of the average value of the colors of the a channel and the absolute value of the difference value of the average value of the colors of the b channel of the scale area Lab image of any scale area image and other scale area images under the same scale, thereby obtaining the color difference of any scale area image and other scale area images under the same scale;
and determining the corresponding region contrast between any one scale region image and each other scale region image under the same scale according to the color difference between any one scale region image and each other scale region image under the same scale.
Further, the calculation formula for obtaining the color difference between any one scale region image and each other scale region image under the same scale is as follows:
Figure 674307DEST_PATH_IMAGE002
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE003
for any ith scale region image under the same scale
Figure 400430DEST_PATH_IMAGE004
And other k-th scale region images
Figure DEST_PATH_IMAGE005
The difference in the color of (a) is,
Figure 403152DEST_PATH_IMAGE006
for any ith scale area image under the same scale
Figure 240658DEST_PATH_IMAGE004
And other k-th scale region images
Figure 10031DEST_PATH_IMAGE005
The absolute value of the difference of the average values of the colors of the a-channel of the Lab image,
Figure DEST_PATH_IMAGE007
for any ith scale area image under the same scale
Figure 856240DEST_PATH_IMAGE004
And other k-th scale region images
Figure 685656DEST_PATH_IMAGE005
The absolute value of the difference of the mean values of the b-channel colors of the Lab image of the scale region,
Figure 10458DEST_PATH_IMAGE008
and
Figure DEST_PATH_IMAGE009
are color adjustment parameters.
Further, the calculation formula for determining the corresponding region contrast between any one scale region image and each other scale region image under the same scale is as follows:
Figure DEST_PATH_IMAGE011
wherein, the first and the second end of the pipe are connected with each other,
Figure 865412DEST_PATH_IMAGE012
for any ith scale area image under the same scale
Figure 362865DEST_PATH_IMAGE004
And other k-th scale region images
Figure 363182DEST_PATH_IMAGE005
The contrast of the corresponding region is set to be,
Figure 175281DEST_PATH_IMAGE003
for any ith scale region image under the same scale
Figure 286456DEST_PATH_IMAGE004
And other k-th scale region images
Figure 375766DEST_PATH_IMAGE005
The difference in the color of (a) is,
Figure DEST_PATH_IMAGE013
in order to be a threshold value for the difference in color,
Figure 747317DEST_PATH_IMAGE014
and
Figure DEST_PATH_IMAGE015
are contrast adjustment parameters.
Further, the calculation formula for determining the correspondence between the corresponding difference values between any one scale region image and each of the other scale region images at the same scale is as follows:
Figure 984394DEST_PATH_IMAGE016
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE017
for any ith scale area image under the same scale
Figure 243469DEST_PATH_IMAGE004
And other k-th scale region images
Figure 311919DEST_PATH_IMAGE005
The corresponding difference value is set to be equal to the corresponding difference value,
Figure 388459DEST_PATH_IMAGE012
for any ith scale area image under the same scale
Figure 641061DEST_PATH_IMAGE004
And other k-th scale region images
Figure 359619DEST_PATH_IMAGE005
The contrast of the corresponding area is compared with that of the area,
Figure 423521DEST_PATH_IMAGE018
for any ith scale area image under the same scale
Figure 670962DEST_PATH_IMAGE004
And other k-th scale region images
Figure 679370DEST_PATH_IMAGE005
The corresponding spatial euclidean distance is calculated,
Figure DEST_PATH_IMAGE019
for any ith scale area image under the same scale
Figure 873722DEST_PATH_IMAGE004
The mean square error of the gray levels of the gray level image in the scale region of (2),
Figure 913835DEST_PATH_IMAGE020
is the minimum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,
Figure DEST_PATH_IMAGE021
is the maximum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,
Figure 410806DEST_PATH_IMAGE022
parameters are adjusted for the mean square error of the gray scale.
Further, the calculation formula for determining the saliency values corresponding to the images of all the scale regions is as follows:
Figure 172089DEST_PATH_IMAGE024
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE025
for any ith scale region image under the same scale
Figure 170132DEST_PATH_IMAGE004
The corresponding significance value is set to be,
Figure 67681DEST_PATH_IMAGE017
for any ith scale area image under the same scale
Figure 794941DEST_PATH_IMAGE004
And other k-th scale region images
Figure 512361DEST_PATH_IMAGE005
The corresponding difference value is compared with the corresponding difference value,Kis the total number of other images in the area of each scale under the same scale.
Further, the data processing is performed on the images of the respective salient object regions to obtain a significant pest limb space position enhancement coefficient and a significant pest limb space distribution enhancement coefficient corresponding to the images of the respective salient object regions, and the method includes:
acquiring a gray image of each salient object region corresponding to each salient object region image, and performing edge detection on the gray image of each salient object region to obtain each edge pixel point of each gray image of each salient object region;
uniformly sampling each edge pixel point of each gray level image of the salient object regions, thereby obtaining each edge sampling pixel point of each gray level image of the salient object regions;
determining a central pixel point between any two edge sampling pixel points of the gray image of the same salient object region according to each edge sampling pixel point of the gray image of each salient object region, so as to obtain each central pixel point of the gray image of each salient object region;
determining each central pixel point in each central pixel point of each gray image of the significant object region in the corresponding significant object region according to the position of each central pixel point of each gray image of the significant object region, and further determining the proportion value of each central pixel point in each gray image of the significant object region in the corresponding significant object region, so as to obtain the pest body spatial position significant enhancement coefficient corresponding to each gray image of the significant object region;
according to the position of each central pixel point of each gray image of the significant object region, each central pixel point which is positioned outside the corresponding significant object region in each central pixel point of each gray image of the significant object region is determined, and ellipse equation fitting and linear equation fitting are respectively carried out on each central pixel point which is positioned outside the corresponding significant object region, so that ellipse fitting goodness and linear fitting goodness are obtained, and further pest body spatial distribution significant enhancement coefficients corresponding to each gray image of the significant object region are determined.
Further, the formula for further determining the pest limb space distribution significant enhancement coefficient corresponding to each significant object region image is as follows:
Figure DEST_PATH_IMAGE027
wherein the content of the first and second substances,
Figure 314095DEST_PATH_IMAGE028
a pest limb space distribution obvious enhancement coefficient corresponding to the image of the obvious object area,
Figure DEST_PATH_IMAGE029
and
Figure 675937DEST_PATH_IMAGE030
respectively the ellipse fitting goodness and the straight line fitting goodness corresponding to the image of the salient object region,
Figure DEST_PATH_IMAGE031
and
Figure 659853DEST_PATH_IMAGE032
respectively an ellipse goodness of fit amplification coefficient and a straight line goodness of fit amplification coefficient,
Figure DEST_PATH_IMAGE033
and
Figure 208777DEST_PATH_IMAGE034
a first ellipse goodness-of-fit threshold and a second ellipse goodness-of-fit threshold,
Figure 610940DEST_PATH_IMAGE013
and
Figure DEST_PATH_IMAGE035
respectively a first straight line fitting goodness threshold value and a second straight line fitting goodness threshold value,
Figure 824359DEST_PATH_IMAGE036
is a fixed value of goodness of fit.
Further, screening the images of the regions of the respective salient objects to obtain images of the regions of the respective pests includes:
and judging whether the comprehensive significant value is greater than a set comprehensive significant value threshold value or not according to the comprehensive significant value corresponding to each significant object area image, and if so, judging the corresponding significant object area image as a pest area image.
The invention has the following beneficial effects: by acquiring a pest visible light image containing a pest body, performing data processing and data identification on the pest visible light image by using visible light identification equipment, and determining a significant value of a scale area image of each area image of the pest visible light image under different scales, thereby determining each significant object area image in each area image; determining a comprehensive significant value of each significant object area image, thereby determining each pest area image in each significant object area image; and identifying the pests in each pest region image to obtain the corresponding pest types. According to the pest type identification method and device, the corresponding data processing mode is adopted for the pest visible light images, each pest region image can be accurately determined, only the pest region images are identified when pest type identification is carried out, the pest identification result is prevented from being influenced by backgrounds and other interferents, and the pest type identification accuracy is effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of an agricultural pest image intelligent identification method based on a neural network according to the present invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, the different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment provides an agricultural pest image intelligent identification method based on a neural network. Then according to the characteristic that the color difference between the insect body and the background in the image is large, the remarkable objects in the image can be detected and separated, and then the remarkable objects are accurately identified according to the body texture, the limb space and the shape distribution condition of the pests, so as to determine the pest region image. And finally, identifying and classifying the pest region image by using a pest type identification network. Because the method segments the pest region image in the image before identifying the pests and only identifies the pest region image determined as the pests, the identification process and the result are not influenced by the background, and the pest identification precision is effectively improved.
Specifically, a flow chart corresponding to the intelligent agricultural pest image identification method based on the neural network is shown in fig. 1, and includes the following steps:
(1) the method comprises the steps of obtaining a pest visible light image containing a pest body, and carrying out data preprocessing on the pest visible light image so as to obtain a preprocessed pest visible light image.
The method comprises the steps of shooting rice attached with polypide by using a CMOS camera, and obtaining a pest visible light image containing the polypide, wherein the pest visible light image is an image in an RGB space. The pest visible light image is preprocessed to eliminate the influence caused by noise and part of external interference and enhance the accuracy of subsequent analysis. Since the image is subsequently converted from the RGB space to the Lab space, the present embodiment employs gaussian filtering to reduce noise of the pest visible light image, and performs convolution with the obtained pest visible light image by using a gaussian function to eliminate random noise. Of course, the implementer may also adopt other suitable denoising methods to preprocess the pest visible light image.
Since the color and shape of the polypide in the image are generally different from those of the background, the part with strong significance can be segmented according to the differences. However, dark patches, soil stains and the like often appear on the blades in the image after being damaged by worms, the shape and the color of the blades are similar to those of the worms, and the blades cannot be directly separated. Therefore, the segmented parts with strong significance are distinguished according to the surface texture and the limb characteristics of the worm body, and the image of the worm body part is selected. Firstly, in order to extract a significant part of a preprocessed pest visible light image, which has a large difference with a background, an object needs to be distinguished from the background according to differences of different positions in the image, and the method comprises the following specific implementation steps:
(2) and carrying out region segmentation on the preprocessed pest visible light image to obtain each region image, and further obtaining scale region images of each region image under different scales.
And (3) performing region segmentation on the preprocessed pest visible light image by adopting an SLIC (simple linear iterative clustering) super-pixel segmentation method to obtain each region image. In order to improve the accuracy of subsequently screening the image of the salient object region from the region image, interpolation is utilizedThe value algorithm, such as a nearest neighbor method, a bilinear interpolation method, and the like, reduces each region image into M different scales, thereby obtaining scale region images of each region image under different scales. Is provided with
Figure DEST_PATH_IMAGE037
The set of scales for the region image is, for example,
Figure 489826DEST_PATH_IMAGE038
=100%, meaning that the area image is the original size,
Figure DEST_PATH_IMAGE039
meaning that the original region image is reduced to its original size
Figure 526047DEST_PATH_IMAGE040
. The specific dimension can be selected according to the requirement, and in the embodiment, the setting is carried out
Figure DEST_PATH_IMAGE041
(3) And acquiring the gray scale images of the scale areas and the Lab images of the scale areas corresponding to the images of all the scale areas, and performing data processing on the gray scale images of the scale areas and the Lab images of the scale areas so as to obtain the significant values corresponding to the images of all the scale areas.
For the scale area images of each area image under any same scale, carrying out gray processing on the scale area images so as to obtain corresponding scale area gray images; and simultaneously, converting the scale area images into Lab space so as to obtain the corresponding scale area Lab images. Carrying out data identification and processing on a scale region gray image and a scale region Lab image corresponding to a scale region image of each region image under any same scale so as to obtain a significant value corresponding to the scale region image of each region image under any same scale, wherein the specific implementation process comprises the following steps:
(3.1) according to the scale region Lab image of each scale region image under the same scale, determining the corresponding region contrast between any one scale region image and other scale region images under the same scale, and the specific implementation steps comprise:
(3.1.1) determining the average value of the color of the channel a and the average value of the color of the channel b of the scale region Lab image of each scale region image under the same scale according to the scale region Lab image of each scale region image under the same scale.
For the scale region Lab images of all the scale region images under any scale, as the scale region Lab images are Lab space images, the international standard of color measurement of Lab space is adopted, and the color values comprise red to green color values and yellow to blue color values. Firstly, determining an average value of red to green color values of a Lab image in a scale area, namely determining an average value of a-channel colors of all pixel points of the Lab image in the scale area; meanwhile, determining the average value of the color values from yellow to blue of the Lab image in the scale area, namely determining the average value of the colors of the b channels of all the pixel points of the Lab image in the scale area.
(3.1.2) calculating the absolute value of the difference value of the average value of the colors of the a channel and the absolute value of the difference value of the average value of the colors of the b channel of the scale area Lab images of any scale area image and other scale area images under the same scale so as to obtain the color difference between any scale area image and other scale area images under the same scale, wherein the corresponding calculation formula is as follows:
Figure 935162DEST_PATH_IMAGE002
wherein, the first and the second end of the pipe are connected with each other,
Figure 393301DEST_PATH_IMAGE003
for any ith scale area image under the same scale
Figure 636195DEST_PATH_IMAGE004
And other k-th scale region images
Figure 81082DEST_PATH_IMAGE005
The difference in the color of (a) is,
Figure 825047DEST_PATH_IMAGE042
for any ith scale region image under the same scale
Figure 406202DEST_PATH_IMAGE004
And other k-th scale region images
Figure 85576DEST_PATH_IMAGE005
The absolute value of the difference of the a-channel color mean values of the Lab image of the scale region,
Figure 749251DEST_PATH_IMAGE007
for any ith scale area image under the same scale
Figure 31327DEST_PATH_IMAGE004
And other k-th scale region images
Figure 466988DEST_PATH_IMAGE005
The absolute value of the difference of the mean values of the b-channel colors of the Lab image of the scale region,
Figure 176318DEST_PATH_IMAGE008
and
Figure 330219DEST_PATH_IMAGE009
are all color adjustment parameters for the purpose of limiting
Figure 291353DEST_PATH_IMAGE003
And highlighting pest color and weakening background color, the present embodiment is set
Figure DEST_PATH_IMAGE043
Figure 47432DEST_PATH_IMAGE003
Has a value range of
Figure 193242DEST_PATH_IMAGE044
(3.1.3) according to the color difference between any one scale area image and other scale area images under the same scale, determining the corresponding area contrast between any one scale area image and other scale area images under the same scale, wherein the corresponding calculation formula is as follows:
Figure 709805DEST_PATH_IMAGE011
wherein, the first and the second end of the pipe are connected with each other,
Figure 333685DEST_PATH_IMAGE012
for any ith scale region image under the same scale
Figure 9517DEST_PATH_IMAGE004
And other k-th scale region images
Figure 60649DEST_PATH_IMAGE005
The contrast of the corresponding area is compared with that of the area,
Figure 189142DEST_PATH_IMAGE003
for any ith scale region image under the same scale
Figure 489149DEST_PATH_IMAGE004
And other k-th scale region images
Figure 19488DEST_PATH_IMAGE005
The difference in the color of (a) is,
Figure 241521DEST_PATH_IMAGE013
for the color difference threshold, the present embodiment sets
Figure DEST_PATH_IMAGE045
Figure 794994DEST_PATH_IMAGE014
And
Figure 901621DEST_PATH_IMAGE015
are contrast adjustment parameters, the purpose is to adjust
Figure 283536DEST_PATH_IMAGE012
Value range of (2), this embodiment setting
Figure 207630DEST_PATH_IMAGE046
Figure DEST_PATH_IMAGE047
Figure 123765DEST_PATH_IMAGE012
Value range of
Figure 627558DEST_PATH_IMAGE048
According to any ith scale region image under the same scale
Figure 132489DEST_PATH_IMAGE004
And other k-th scale region images
Figure 961905DEST_PATH_IMAGE005
Corresponding regional contrast
Figure 159143DEST_PATH_IMAGE012
As can be seen from the calculation formula of (c),
Figure 466628DEST_PATH_IMAGE012
from any ith scale region image under the same scale
Figure 826065DEST_PATH_IMAGE004
And other k-th scale region images
Figure 826382DEST_PATH_IMAGE005
The difference between the average values of the color values is obtained, and the larger the difference between the color values of the two-scale area image is, the larger the difference is, the
Figure 638480DEST_PATH_IMAGE003
The larger the area contrast
Figure 484076DEST_PATH_IMAGE012
The larger.
And (3.2) determining the central position of the gray level image of the scale area of each scale area image under the same scale according to the gray level image of the scale area of each scale area image under the same scale, thereby determining the corresponding spatial Euclidean distance between any one scale area image and other scale area images under the same scale.
For the scale region gray level images of all scale region images under any same scale, calculating the position mean value of all pixel points in the scale region gray level images, thereby determining the central position of the scale region gray level images, and calculating the spatial Euclidean distance between any two central positions, thereby obtaining the corresponding spatial Euclidean distance between any one scale region image and other scale region images under the same scale, wherein the corresponding calculation formula is as follows:
Figure 859473DEST_PATH_IMAGE050
wherein the content of the first and second substances,
Figure 30692DEST_PATH_IMAGE018
for any ith scale area image under the same scale
Figure 64507DEST_PATH_IMAGE004
And other k-th scale region images
Figure 120319DEST_PATH_IMAGE005
The corresponding spatial euclidean distance is calculated,
Figure DEST_PATH_IMAGE051
for any ith scale region image
Figure 657610DEST_PATH_IMAGE004
The abscissa of the central position of the gray scale image of the scale area, that is, the image of the arbitrary ith scale area
Figure 872167DEST_PATH_IMAGE004
The average value of the abscissa of all pixel points of the gray scale image in the scale area,
Figure 658857DEST_PATH_IMAGE052
for any ith scale region image
Figure 377414DEST_PATH_IMAGE004
The ordinate of the central position of the gray scale image of the scale area, that is, the arbitrary ith scale area image
Figure 300371DEST_PATH_IMAGE004
The average value of the vertical coordinates of all pixel points of the gray scale image in the scale area,
Figure DEST_PATH_IMAGE053
for other k-th scale region image
Figure 157600DEST_PATH_IMAGE005
The abscissa of the central position of the gray scale image of the k-th scale area, that is, the other images of the k-th scale area
Figure 166007DEST_PATH_IMAGE005
The average value of the abscissa of all pixel points of the gray scale image in the scale area,
Figure 419746DEST_PATH_IMAGE054
for other k-th scale region image
Figure 603734DEST_PATH_IMAGE005
The ordinate of the central position of the gray scale image of the scale area, that is, the other k-th scale area image
Figure 287656DEST_PATH_IMAGE005
The average value of the vertical coordinates of all pixel points of the gray scale image in the scale area.
And (3.3) determining the mean square error of the gray scale image of the scale area of each scale area image under the same scale according to the gray scale image of the scale area image under the same scale.
And calculating the mean square error of the gray value of each pixel point in the gray image of the scale area for the gray image of the scale area of each scale area under the same scale, thereby obtaining the mean square error of the gray value of the gray image of the scale area.
(3.4) determining corresponding difference values between any scale region image and other scale region images under the same scale according to the corresponding region contrast and the spatial Euclidean distance between any scale region image and other scale region images under the same scale and the gray mean square error of the gray scale region gray scale image of any scale region image and other scale region images under the same scale, wherein the corresponding calculation formula is as follows:
Figure 48939DEST_PATH_IMAGE016
wherein, the first and the second end of the pipe are connected with each other,
Figure 719086DEST_PATH_IMAGE017
for any ith scale area image under the same scale
Figure 616635DEST_PATH_IMAGE004
And other k-th scale region images
Figure 734108DEST_PATH_IMAGE005
The corresponding difference value is set to be equal to the corresponding difference value,
Figure 717107DEST_PATH_IMAGE012
for any ith scale region image under the same scale
Figure 190945DEST_PATH_IMAGE004
And other k-th scale region images
Figure 474159DEST_PATH_IMAGE005
The contrast of the corresponding area is compared with that of the area,
Figure 499884DEST_PATH_IMAGE018
for any ith scale region image under the same scale
Figure 845545DEST_PATH_IMAGE004
And other k-th scale region images
Figure 510357DEST_PATH_IMAGE005
The corresponding spatial euclidean distance is calculated,
Figure 382498DEST_PATH_IMAGE019
for any ith scale region image under the same scale
Figure 454491DEST_PATH_IMAGE004
The mean square error of the gray levels of the gray level image in the scale region of (2),
Figure 412082DEST_PATH_IMAGE020
is the minimum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,
Figure 352357DEST_PATH_IMAGE021
is the maximum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,
Figure 79004DEST_PATH_IMAGE022
adjusting parameters for the mean square error of the gray scale for the purpose of adjusting the mean square error of the gray scale
Figure 584547DEST_PATH_IMAGE019
Value range of (1), this embodiment setting
Figure DEST_PATH_IMAGE055
Any ith scale region image under the same scale
Figure 232698DEST_PATH_IMAGE004
And other k-th scale region images
Figure 711083DEST_PATH_IMAGE005
Corresponding difference value
Figure 292237DEST_PATH_IMAGE017
Due to the arbitrary ith scale region image
Figure 971612DEST_PATH_IMAGE004
Gray mean square error of gray scale image of scale region
Figure 903796DEST_PATH_IMAGE019
Over a large range of values, are less easily controlled and estimated, and are therefore formulated
Figure 182943DEST_PATH_IMAGE056
Mean square error of gray scale
Figure 25128DEST_PATH_IMAGE019
Normalization processing is carried out, and parameters are adjusted through mean square error of gray scale
Figure 37DEST_PATH_IMAGE022
Is adjusted so that its value range is
Figure DEST_PATH_IMAGE057
According to the above-mentioned arbitrary ith scale region image under the same scale
Figure 91621DEST_PATH_IMAGE004
And other k-th scale region images
Figure 52755DEST_PATH_IMAGE005
Corresponding difference value
Figure 871151DEST_PATH_IMAGE017
The difference value can be obtained by the calculation formula
Figure 16961DEST_PATH_IMAGE017
From any ith scale region image under the same scale
Figure 392579DEST_PATH_IMAGE004
Corresponding mean square error of gray scale
Figure 891824DEST_PATH_IMAGE019
Any ith scale region image under the same scale
Figure 833236DEST_PATH_IMAGE004
And other k-th scale region images
Figure 149948DEST_PATH_IMAGE005
Corresponding regional contrast
Figure 705473DEST_PATH_IMAGE012
And any ith scale region image under the same scale
Figure 133043DEST_PATH_IMAGE004
And other k-th scale region images
Figure 538748DEST_PATH_IMAGE005
Corresponding spatial Euclidean distance
Figure 26361DEST_PATH_IMAGE018
It is determined that the user is to be,
Figure 783096DEST_PATH_IMAGE017
and with
Figure 14357DEST_PATH_IMAGE019
And
Figure 396272DEST_PATH_IMAGE012
in a positive correlation with
Figure 789207DEST_PATH_IMAGE018
In an anti-correlation relationship, i.e. when
Figure 33238DEST_PATH_IMAGE019
And
Figure 802611DEST_PATH_IMAGE012
the larger the size of the tube is,
Figure 307542DEST_PATH_IMAGE018
the smaller the size of the product is,
Figure 12323DEST_PATH_IMAGE017
the larger the correspondence, the image of any ith scale region
Figure 865354DEST_PATH_IMAGE004
The more likely it is to be an area of pest damage.
(3.5) determining corresponding significant values of all the scale region images according to corresponding difference values between any one scale region image and other scale region images under the same scale, wherein the corresponding calculation formula is as follows:
Figure 313784DEST_PATH_IMAGE024
wherein the content of the first and second substances,
Figure 673222DEST_PATH_IMAGE025
for any ith scale area image under the same scale
Figure 673539DEST_PATH_IMAGE004
The corresponding significance value is given to the corresponding one,
Figure 626582DEST_PATH_IMAGE017
is an arbitrary ith scale under the same scaleRegion image
Figure 3337DEST_PATH_IMAGE004
And other k-th scale region images
Figure 355296DEST_PATH_IMAGE005
The corresponding difference value is set to be equal to the corresponding difference value,Kis the total number of the images of other scale regions under the same scale.
According to the image of the arbitrary ith scale region under the same scale
Figure 526515DEST_PATH_IMAGE004
Corresponding significance value
Figure 560330DEST_PATH_IMAGE025
The calculation formula shows that when the image is in any ith scale area under the same scale
Figure 944038DEST_PATH_IMAGE004
When the difference value corresponding to each other scale area image is larger, the corresponding arbitrary ith scale area image
Figure 746909DEST_PATH_IMAGE004
Corresponding significance value
Figure 682504DEST_PATH_IMAGE025
The larger the size, the more likely the i-th scale region image is to be the pest region image.
(4) And screening each area image according to the corresponding significant values of all the scale area images, thereby obtaining each significant object area image.
In this embodiment, in order to enhance the contrast between the salient region and the non-salient region, the calculation is expanded to multiple scales, that is, the scale region images of the region images under different scales are obtained, and then the salient values corresponding to the scale region images of the region images under different scales are determined. When the scale region images of a region image under different scales have larger significant values, the region image is regarded as a significant region which needs to be searched. Specifically, for any one region image, whether the significant values of the scale region images of the region image under different scales are all greater than a significant value threshold is judged, and if all the significant values are greater than the significant value threshold, the region image is determined as a significant object region image. In this embodiment, the significance threshold is set to 0.75, but of course, the significance threshold can be set selectively according to the actual effect. In this way, the salient object images in the respective region images are extracted based on the salient value threshold, and the respective salient object region images are obtained.
Secondly, after the object is distinguished from the background according to the difference of different positions in the image in the steps, the parts with strong significance are distinguished according to the surface texture and the limb characteristics of the polypide, and the image of the polypide part is selected, wherein the specific implementation steps are as follows:
(5) the method comprises the following steps of carrying out data processing on each image of the salient object region so as to obtain a pest limb space position salient enhancement coefficient and a pest limb space distribution salient enhancement coefficient corresponding to each image of the salient object region, wherein the specific steps comprise:
and (5.1) acquiring a gray image of the salient object region corresponding to each gray image of the salient object region, and performing edge detection on the gray image of the salient object region, thereby obtaining each edge pixel point of each gray image of the salient object region.
After obtaining each salient object area image, performing graying processing on each salient object area image respectively to obtain a salient object area grayscale image corresponding to each salient object area image. And then, performing edge detection on the gray level image of each significant object region by using a canny edge detection algorithm to obtain each edge binary image, wherein in each edge binary image, a pixel point with a pixel value of 1 is an edge pixel point of the significant object region, and a pixel point with a pixel value of 0 is an internal pixel point or a background pixel point of the significant object region. Because the edge pixel points detected by the edge detection may have a discontinuous phenomenon, the closed operation processing is performed on each edge binary image respectively to ensure the continuity of the edge pixel points, so that each final edge binary image is obtained. And determining each edge pixel point corresponding to each gray image of the salient object region according to the position of each pixel point with the pixel value of 1 in each final edge binary image.
And (5.2) uniformly sampling each edge pixel point of each gray level image of the salient object region, thereby obtaining each edge sampling pixel point of each gray level image of the salient object region.
For any final edge binary image, uniformly sampling the pixel point with the pixel value of 1 in the edge binary image, so as to obtain each sampling point, and in this embodiment, c sampling points can be obtained, where c = 200. According to the positions of the sampling points, each edge sampling pixel point corresponding to each gray level image of the salient object region can be determined.
And (5.3) determining a central pixel point between any two edge sampling pixel points of the gray image of the same salient object region according to each edge sampling pixel point of the gray image of each salient object region, thereby obtaining each central pixel point of the gray image of each salient object region.
For each sampling point of any final edge binary image, determining the position of the central point of a line segment formed by any two sampling points, wherein each central point corresponds to a pixel point, and as the number of the sampling points c =200 is set in the embodiment, the position of the central point of the line segment can be obtained
Figure 938036DEST_PATH_IMAGE058
A center point according to
Figure 125434DEST_PATH_IMAGE058
And the position of each central point can determine each central pixel point in the corresponding gray image of the area of the salient object.
And (5.4) determining each central pixel point in each central pixel point of each gray image of the salient object region, which is located in the corresponding salient object region, according to the position of each central pixel point of each gray image of the salient object region, and further determining the proportion value of each central pixel point in each gray image of the salient object region, which is located in the corresponding salient object region, so as to obtain the spatial position significant enhancement coefficient of the pest body corresponding to each image of the salient object region.
And for any final edge binary image, assigning the pixel values of the internal pixel points of the significant object region to be 1, and keeping the pixel values of the background pixel points to be 0, at the moment, in any final edge binary image, the pixel values of the pixel points inside and at the edge of the significant object region are 1, and the pixel value of the background pixel point outside the significant object region is 0. On the basis of the above, according to
Figure 45461DEST_PATH_IMAGE058
The position of a central point, count this
Figure 292903DEST_PATH_IMAGE058
And the proportion value of the central point with the pixel value of 0 in each central point is the proportion value of each central pixel point in the corresponding significant object region of the gray level image of the corresponding significant object region. According to the proportion value of each central pixel point of each gray level image of each salient object region in the corresponding salient object region, the salient enhancement coefficient of the pest limb space position corresponding to each salient object region image can be determined, and the corresponding calculation formula is as follows:
Figure 35731DEST_PATH_IMAGE060
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE061
the space position of the pest limbs corresponding to the ith salient object area image is obviously enhanced by a coefficient,
Figure 557979DEST_PATH_IMAGE062
the proportion value of each central pixel point of the gray image of the ith salient object region in the corresponding salient object region is calculated,
Figure DEST_PATH_IMAGE063
the ratio value of each central pixel point of the j-th significant object area gray level image in the corresponding significant object area is shown, n is the total number of the significant object area images,
Figure 273126DEST_PATH_IMAGE064
to adjust the coefficient by duty ratio for the purpose of adjustment
Figure 691469DEST_PATH_IMAGE061
Range of values, the present embodiment sets
Figure DEST_PATH_IMAGE065
According to the pest limb space position significant enhancement coefficient corresponding to the ith significant object region image
Figure 590767DEST_PATH_IMAGE061
The calculation formula shows that when the gray image of the ith salient object area is positioned in each central pixel point in the corresponding salient object area, the proportion value of each central pixel point is
Figure 385548DEST_PATH_IMAGE062
The bigger the pest is, the obvious enhancement coefficient of the spatial position of the pest limbs
Figure 17517DEST_PATH_IMAGE061
The larger the corresponding salient object region image is, the more likely the corresponding salient object region image is to be an image of a pest.
For pests, the biggest difference from other interfering objects is the unique limb characteristics of the pests, the midpoint obtained after connecting any two points selected from the image edge is positioned outside the pest body with higher probability, other interfering objects are usually arc-shaped or nearly arc-shaped edges, and the midpoint obtained after connecting any two points selected from the image edge is positioned in the interfering object with higher probability. Therefore, by using the characteristic, the proportion value of each central pixel point of the gray image of the salient object region, which is located in the corresponding salient object region, is determined, and the proportion value is used as an important basis for judging whether the salient object is a worm or not.
(5.5) according to the position of each central pixel point of each gray image of each significant object area, determining each central pixel point, which is positioned outside the corresponding significant object area, in each central pixel point of each gray image of each significant object area, and respectively performing ellipse equation fitting and linear equation fitting on each central pixel point positioned outside the corresponding significant object area, so as to obtain ellipse fitting goodness and linear fitting goodness, and further determining the pest body space distribution significant enhancement coefficient corresponding to each significant object area image.
And selecting each central pixel point of the gray-scale image of each remarkable object area, wherein the central pixel points are positioned outside the corresponding remarkable object area. According to the unique limb characteristics of the polypide, the polypide is generally symmetrically distributed in the image, and each central pixel point which is selected at the moment and is positioned outside the corresponding remarkable object area is distributed on two sides of the image in an arc shape. In addition, according to different postures of the insect body, when the insect body is distributed on one side in the picture, each central pixel point which is selected and positioned outside the corresponding salient object area is distributed on one side of the image in an arc shape. While for the rest of the interfering objects the morphology has no such feature. Based on the unique body characteristics of the worm body, the position conditions of all central pixel points which are positioned outside the corresponding remarkable object area in the selected central pixel points can be judged, and the selected central pixel points are analyzed so as to distinguish the worm body from other interferents.
Specifically, for each central pixel point of each gray scale image of the significant object region, which is located outside the corresponding significant object region, ellipse equation fitting and linear equation fitting are respectively performed on each central pixel point located outside the corresponding significant object region, so that ellipse fitting goodness and linear fitting goodness are obtained, and further, a pest body spatial distribution significant enhancement coefficient corresponding to the gray scale image of the significant object region is obtained, wherein the corresponding calculation formula is as follows:
Figure DEST_PATH_IMAGE067
wherein, the first and the second end of the pipe are connected with each other,
Figure 278865DEST_PATH_IMAGE068
a pest limb space distribution significant enhancement coefficient corresponding to the ith significant object region image,
Figure 261865DEST_PATH_IMAGE029
and
Figure 594757DEST_PATH_IMAGE030
respectively corresponding ellipse goodness of fit and straight line goodness of fit of the image of the salient object region,
Figure 81233DEST_PATH_IMAGE031
and
Figure 369608DEST_PATH_IMAGE032
respectively, an ellipse goodness of fit amplification factor and a straight line goodness of fit amplification factor, which function is to amplify the ellipse goodness of fit and the straight line goodness of fit, the present embodiment is arranged
Figure DEST_PATH_IMAGE069
Figure 246428DEST_PATH_IMAGE033
And
Figure 117432DEST_PATH_IMAGE034
a first ellipse goodness-of-fit threshold and a second ellipse goodness-of-fit threshold,
Figure 317469DEST_PATH_IMAGE013
and
Figure 514095DEST_PATH_IMAGE035
the first and second linear goodness of fit thresholds, respectively, are set by this embodiment
Figure 940529DEST_PATH_IMAGE070
Figure DEST_PATH_IMAGE071
Figure 552907DEST_PATH_IMAGE036
For a fixed value of goodness of fit, this embodiment sets
Figure 745466DEST_PATH_IMAGE072
The pest limb space distribution obvious enhancement coefficient corresponding to the image of the obvious object area
Figure 112994DEST_PATH_IMAGE068
In the calculation formula, when the salient objects in the image of the salient object region are the polyps, if the polyps are symmetrically distributed in the image, the fitting effect of fitting the ellipse equation to each central pixel point outside the salient object region is better, so that the obtained ellipse fitting goodness is obtained
Figure 761144DEST_PATH_IMAGE029
Will be greater than a certain value, i.e., greater than the second threshold value of goodness-of-fit of ellipse
Figure 239530DEST_PATH_IMAGE034
However, considering that the number of central pixel points located outside the significant object region is relatively large, the fitting effect of fitting the elliptic equation has certain limitation, and the obtained elliptic fitting goodness
Figure 679738DEST_PATH_IMAGE029
Nor too high, i.e., less than the first threshold of goodness-of-fit of ellipse
Figure 952588DEST_PATH_IMAGE033
(ii) a If the worm body is distributed on one side in the picture, linear equation fitting is carried out on each central pixel point outside the obvious object areaThe fitting effect is better, and the obtained straight line fitting goodness
Figure 619192DEST_PATH_IMAGE030
Will be greater than a certain value, i.e. greater than the threshold value of goodness-of-fit of the second straight line
Figure 370111DEST_PATH_IMAGE035
However, considering that the number of central pixel points located outside the significant object region is relatively large, the fitting effect of fitting the linear equation has certain limitations, and the obtained linear fitting goodness is
Figure 540192DEST_PATH_IMAGE030
Nor too high, i.e. less than the first line goodness-of-fit threshold
Figure 4451DEST_PATH_IMAGE013
At this time, the goodness of fit to the ellipse
Figure 892772DEST_PATH_IMAGE029
Goodness of fit to a straight line
Figure 572015DEST_PATH_IMAGE030
Amplifying to obtain a pest limb space distribution significant enhancement coefficient larger than 1
Figure 862182DEST_PATH_IMAGE068
So as to amplify the salient value of the salient object region image subsequently. When the salient objects in the image of the salient object area are other interferents, carrying out ellipse equation fitting and linear equation fitting on each central pixel point positioned outside the salient object area to obtain ellipse fitting goodness
Figure 742414DEST_PATH_IMAGE029
Goodness of fit to a straight line
Figure 852452DEST_PATH_IMAGE030
Other conditions can be met, and the fitting is directly optimizedFixed value of degree
Figure 617277DEST_PATH_IMAGE036
=1 as pest limb space distribution significant enhancement coefficient
Figure 27530DEST_PATH_IMAGE068
So as to keep the saliency value of the salient object region image unchanged subsequently.
(6) And calculating a comprehensive significant value corresponding to each significant object area image according to the significant enhancement coefficient of the spatial position of the pest limbs, the significant enhancement coefficient of the spatial distribution of the pest limbs and the significant value corresponding to each significant object area image.
For each significant object region image, setting a significant pest limb enhancement coefficient according to the significant pest limb space position enhancement coefficient and the significant pest limb space distribution enhancement coefficient corresponding to the significant object region image, wherein the corresponding calculation formula is as follows:
Figure 672138DEST_PATH_IMAGE074
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE075
the pest limb significant enhancement coefficient corresponding to the ith significant object region image,
Figure 204226DEST_PATH_IMAGE061
the spatial position of the pest limb corresponding to the ith salient object region image is obviously enhanced by a coefficient,
Figure 366217DEST_PATH_IMAGE068
and significantly enhancing the spatial distribution of the pest limbs corresponding to the ith significant object area image.
The pest limb significant enhancement coefficient corresponding to the ith significant object area image
Figure 365397DEST_PATH_IMAGE075
In the calculation formula (2), take into accountConsidering the spatial position of the limbs of the pests to obviously enhance the coefficient
Figure 587431DEST_PATH_IMAGE061
And the spatial distribution of the limbs of the pests is obviously enhanced
Figure 531116DEST_PATH_IMAGE068
All can generate amplification effect on pest identification, therefore
Figure 965639DEST_PATH_IMAGE061
And
Figure 819326DEST_PATH_IMAGE068
the larger the corresponding pest limb is, the significant enhancement factor
Figure 212261DEST_PATH_IMAGE075
The larger the more likely the salient object within the ith salient object region image is to be a pest.
For each significant object area image, determining a corresponding comprehensive significant value according to the corresponding pest limb significant enhancement coefficient and the significant value, wherein the corresponding calculation formula is as follows:
Figure DEST_PATH_IMAGE077
wherein, the first and the second end of the pipe are connected with each other,
Figure 718941DEST_PATH_IMAGE078
for the integrated saliency value corresponding to the ith salient object region image,
Figure 81789DEST_PATH_IMAGE075
the pest limb significant enhancement coefficient corresponding to the ith significant object region image,
Figure 55562DEST_PATH_IMAGE025
the corresponding significant value of the ith significant object area image.
Healds corresponding to the i-th salient object area imageSum of significant value
Figure 619398DEST_PATH_IMAGE078
In the calculation formula (2), the more the salient object in the salient object area image conforms to the characteristics of the pest body, the more the pest body salient enhancement coefficient
Figure 678621DEST_PATH_IMAGE075
And a significant value
Figure 720526DEST_PATH_IMAGE025
The larger the corresponding integrated saliency value
Figure 814384DEST_PATH_IMAGE078
The larger the likelihood that the salient object in the salient object region image is an insect is.
(7) And screening the images of the salient object areas according to the comprehensive salient values corresponding to the images of the salient object areas so as to obtain the images of the pest areas.
The comprehensive significance threshold is set in advance, and the comprehensive significance threshold is set to be 2 in the embodiment. And judging whether the comprehensive significant value is greater than a set comprehensive significant value threshold value or not according to the comprehensive significant value corresponding to each significant object area image, and if so, judging the corresponding significant object area image as a pest area image. In this way, screening of images of each salient object region can be achieved, and therefore images of each pest region can be obtained.
(8) And inputting each pest region image into a pest type identification network respectively so as to obtain a corresponding pest type.
And inputting each pest region image into a pre-trained pest type recognition network formed by a neural network, and recognizing corresponding pest types such as rice planthoppers, locusts, tryporyza incertulas or leafhoppers by the pest type recognition network.
In the present embodiment, the neural network constituting the pest category identification network employs a convolutional neural network such as ResNet34 or SENet, the loss function of the neural network employs a cross entropy loss function, and the optimization algorithm employs an Adaptive motion estimation algorithm (Adam). The specific implementation process of adopting the neural network to form the pest category identification network and correspondingly training the pest category identification network belongs to the prior art, and is not described herein again.
According to the method, the characteristics that the difference between pests and the background is large are utilized, the significant value of the scale area image of each area image under different scales is determined, the area image containing the significant objects is extracted, then the comprehensive significant value is constructed according to the body space and the shape distribution condition of the pests, the comprehensive significant value corresponding to the pest image is obviously larger than other interferents, and the pest area image is further segmented. When the pest category is identified, only the pest region image is identified, so that the detection process and the result are prevented from being influenced by the background and other interferents, and the identification accuracy of the pest category is effectively improved.
It should be noted that: the above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An agricultural pest image intelligent identification method based on a neural network is characterized by comprising the following steps:
acquiring a pest visible light image containing a pest body, and performing data preprocessing on the pest visible light image so as to acquire a preprocessed pest visible light image;
carrying out region segmentation on the preprocessed pest visible light image to obtain each region image, and further obtaining scale region images of each region image under different scales;
acquiring a scale region gray image and a scale region Lab image corresponding to all scale region images, and performing data processing on the scale region gray image and the scale region Lab image so as to obtain significant values corresponding to all scale region images;
screening each area image according to the corresponding significant values of all the scale area images, thereby obtaining each significant object area image;
performing data processing on the images of the salient object areas to obtain a pest limb space position salient enhancement coefficient and a pest limb space distribution salient enhancement coefficient corresponding to the images of the salient object areas;
calculating a comprehensive significant value corresponding to each significant object region image according to the significant enhancement coefficient of the spatial position of the pest limbs, the significant enhancement coefficient of the spatial distribution of the pest limbs and the significant value corresponding to each significant object region image;
screening each salient object region image according to the comprehensive salient value corresponding to each salient object region image so as to obtain each pest region image;
and inputting each pest region image into a pest type identification network respectively so as to obtain a corresponding pest type.
2. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 1, wherein the step of performing data processing on the gray scale image of the scale area and the Lab image of the scale area so as to obtain the corresponding significant values of all the images of the scale area comprises the steps of:
determining the corresponding regional contrast between any one scale region image and other scale region images under the same scale according to the scale region Lab image of each scale region image under the same scale;
determining the central position of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale, thereby determining the corresponding space Euclidean distance between any one scale region image and other scale region images under the same scale;
determining the mean square error of the gray level of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale;
determining corresponding difference values between any scale region image and other scale region images at the same scale according to the corresponding region contrast and the corresponding spatial Euclidean distance between any scale region image and other scale region images at the same scale and the mean square error of the gray scale of the scale region gray scale image of any scale region image and other scale region images at the same scale;
and determining the corresponding significant values of all the scale region images according to the corresponding difference values between any one scale region image and other scale region images under the same scale.
3. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 2, wherein the determining of the corresponding regional contrast between the regional image of any scale and the regional images of other scales under the same scale comprises:
determining an a-channel color mean value and a b-channel color mean value of the scale region Lab image of each scale region image under the same scale according to the scale region Lab image of each scale region image under the same scale;
calculating the absolute value of the difference of the color mean values of the a channels and the b channels of the Lab images of the scale areas of any one scale area image and other scale area images under the same scale, thereby obtaining the color difference of any one scale area image and other scale area images under the same scale;
and determining the corresponding region contrast between any one scale region image and each other scale region image under the same scale according to the color difference between any one scale region image and each other scale region image under the same scale.
4. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 3, wherein the calculation formula for obtaining the color difference between the image of any one scale region and the images of other scale regions under the same scale is as follows:
Figure DEST_PATH_IMAGE002
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE004
for any ith scale region image under the same scale
Figure DEST_PATH_IMAGE006
And other k-th scale region images
Figure DEST_PATH_IMAGE008
The difference in the color of (a) is,
Figure DEST_PATH_IMAGE010
for any ith scale region image under the same scale
Figure 797428DEST_PATH_IMAGE006
And other k-th scale region images
Figure 547210DEST_PATH_IMAGE008
The absolute value of the difference of the a-channel color mean values of the Lab image of the scale region,
Figure DEST_PATH_IMAGE012
for any ith scale area image under the same scale
Figure 858718DEST_PATH_IMAGE006
And other k-th scale region images
Figure 807082DEST_PATH_IMAGE008
The absolute value of the difference of the b-channel color mean values of the Lab image of the scale region,
Figure DEST_PATH_IMAGE014
and
Figure DEST_PATH_IMAGE016
are color adjustment parameters.
5. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 3, wherein the calculation formula for determining the corresponding regional contrast ratio between any one scale regional image and each other scale regional image under the same scale is as follows:
Figure DEST_PATH_IMAGE018
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE020
for any ith scale region image under the same scale
Figure 398207DEST_PATH_IMAGE006
And other k-th scale region images
Figure 759919DEST_PATH_IMAGE008
The contrast of the corresponding region is set to be,
Figure 409206DEST_PATH_IMAGE004
for any ith scale region image under the same scale
Figure 477656DEST_PATH_IMAGE006
And other k-th scale region images
Figure 757459DEST_PATH_IMAGE008
The difference in the color of (a) is,
Figure DEST_PATH_IMAGE022
in order to be the threshold value for the color difference,
Figure DEST_PATH_IMAGE024
and
Figure DEST_PATH_IMAGE026
are contrast adjustment parameters.
6. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 2, wherein the calculation formula for determining the correspondence of the corresponding difference values between any one scale region image and each other scale region image under the same scale is as follows:
Figure DEST_PATH_IMAGE028
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE030
for any ith scale region image under the same scale
Figure 167318DEST_PATH_IMAGE006
And other k-th scale region images
Figure 354717DEST_PATH_IMAGE008
The corresponding difference value is set to be equal to the corresponding difference value,
Figure 605570DEST_PATH_IMAGE020
is an arbitrary ith scale under the same scaleRegion image
Figure 118590DEST_PATH_IMAGE006
And other k-th scale region images
Figure 126998DEST_PATH_IMAGE008
The contrast of the corresponding region is set to be,
Figure DEST_PATH_IMAGE032
for any ith scale area image under the same scale
Figure 259033DEST_PATH_IMAGE006
And other k-th scale region images
Figure 33566DEST_PATH_IMAGE008
The corresponding spatial euclidean distance,
Figure DEST_PATH_IMAGE034
for any ith scale region image under the same scale
Figure 983068DEST_PATH_IMAGE006
The mean square error of the gray levels of the gray level image in the scale region of (2),
Figure DEST_PATH_IMAGE036
is the minimum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,
Figure DEST_PATH_IMAGE038
is the maximum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,
Figure DEST_PATH_IMAGE040
parameters are adjusted for the mean square error of the gray scale.
7. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 6, wherein the calculation formula for determining the corresponding significant values of the images of all the scale areas is as follows:
Figure DEST_PATH_IMAGE042
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE044
for any ith scale region image under the same scale
Figure 859798DEST_PATH_IMAGE006
The corresponding significance value is given to the corresponding one,
Figure 920158DEST_PATH_IMAGE030
for any ith scale area image under the same scale
Figure 755390DEST_PATH_IMAGE006
And other k-th scale region images
Figure 344635DEST_PATH_IMAGE008
The corresponding difference value is compared with the corresponding difference value,Kis the total number of other images in the area of each scale under the same scale.
8. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 1, wherein the data processing is performed on each significant object area image so as to obtain the significant pest limb space position enhancement coefficient and the significant pest limb space distribution enhancement coefficient corresponding to each significant object area image, and the method comprises the following steps:
acquiring a gray image of each salient object region corresponding to each salient object region image, and performing edge detection on the gray image of each salient object region to obtain each edge pixel point of each gray image of each salient object region;
uniformly sampling each edge pixel point of each gray image of the area of the salient object, thereby obtaining each edge sampling pixel point of each gray image of the area of the salient object;
determining a central pixel point between any two edge sampling pixel points of the gray image of the same salient object region according to each edge sampling pixel point of the gray image of each salient object region, so as to obtain each central pixel point of the gray image of each salient object region;
determining each central pixel point in each central pixel point of each gray image of the significant object region in the corresponding significant object region according to the position of each central pixel point of each gray image of the significant object region, and further determining the proportion value of each central pixel point in each gray image of the significant object region in the corresponding significant object region, so as to obtain the pest body spatial position significant enhancement coefficient corresponding to each gray image of the significant object region;
according to the position of each central pixel point of each gray image of the significant object region, each central pixel point which is positioned outside the corresponding significant object region in each central pixel point of each gray image of the significant object region is determined, and ellipse equation fitting and linear equation fitting are respectively carried out on each central pixel point which is positioned outside the corresponding significant object region, so that ellipse fitting goodness and linear fitting goodness are obtained, and further pest body spatial distribution significant enhancement coefficients corresponding to each gray image of the significant object region are determined.
9. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 8, wherein the calculation formula for further determining the pest limb spatial distribution significant enhancement coefficient corresponding to each significant object area image is as follows:
Figure DEST_PATH_IMAGE046
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE048
the pest limb space distribution corresponding to the image of the salient object area is obviously enhanced by the coefficient,
Figure DEST_PATH_IMAGE050
and
Figure DEST_PATH_IMAGE052
respectively corresponding ellipse goodness of fit and straight line goodness of fit of the image of the salient object region,
Figure DEST_PATH_IMAGE054
and
Figure DEST_PATH_IMAGE056
respectively an ellipse goodness of fit amplification coefficient and a straight line goodness of fit amplification coefficient,
Figure DEST_PATH_IMAGE058
and
Figure DEST_PATH_IMAGE060
a first ellipse goodness-of-fit threshold and a second ellipse goodness-of-fit threshold,
Figure 967115DEST_PATH_IMAGE022
and
Figure DEST_PATH_IMAGE062
respectively a first linear goodness-of-fit threshold and a second linear goodness-of-fit threshold,
Figure DEST_PATH_IMAGE064
is a fixed value of goodness of fit.
10. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 1, wherein the screening of each salient object region image to obtain each pest region image comprises:
and judging whether the comprehensive significant value is greater than a set comprehensive significant value threshold value or not according to the comprehensive significant value corresponding to each significant object area image, and if so, judging the corresponding significant object area image as a pest area image.
CN202210694051.3A 2022-06-20 2022-06-20 Agricultural pest image intelligent identification method based on neural network Active CN114782682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210694051.3A CN114782682B (en) 2022-06-20 2022-06-20 Agricultural pest image intelligent identification method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210694051.3A CN114782682B (en) 2022-06-20 2022-06-20 Agricultural pest image intelligent identification method based on neural network

Publications (2)

Publication Number Publication Date
CN114782682A true CN114782682A (en) 2022-07-22
CN114782682B CN114782682B (en) 2022-09-06

Family

ID=82420691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210694051.3A Active CN114782682B (en) 2022-06-20 2022-06-20 Agricultural pest image intelligent identification method based on neural network

Country Status (1)

Country Link
CN (1) CN114782682B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140205206A1 (en) * 2013-01-24 2014-07-24 Mayur Datar Systems and methods for resizing an image
CN104598908A (en) * 2014-09-26 2015-05-06 浙江理工大学 Method for recognizing diseases of crop leaves
US20150339589A1 (en) * 2014-05-21 2015-11-26 Brain Corporation Apparatus and methods for training robots utilizing gaze-based saliency maps
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN109872301A (en) * 2018-12-26 2019-06-11 浙江清华长三角研究院 A kind of color image preprocess method counted for rice pest identification
CN110428374A (en) * 2019-07-22 2019-11-08 北京农业信息技术研究中心 A kind of small size pest automatic testing method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140205206A1 (en) * 2013-01-24 2014-07-24 Mayur Datar Systems and methods for resizing an image
US20150339589A1 (en) * 2014-05-21 2015-11-26 Brain Corporation Apparatus and methods for training robots utilizing gaze-based saliency maps
CN104598908A (en) * 2014-09-26 2015-05-06 浙江理工大学 Method for recognizing diseases of crop leaves
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN109872301A (en) * 2018-12-26 2019-06-11 浙江清华长三角研究院 A kind of color image preprocess method counted for rice pest identification
CN110428374A (en) * 2019-07-22 2019-11-08 北京农业信息技术研究中心 A kind of small size pest automatic testing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TENGFEI SONG: "Multi-scale self-searching saliency detection combined with rectangular diffusion", 《2017 12TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA)》 *
李文凤: "基于图像显著区域检测的SIFT特征匹配方法研究", 《微型机与应用》 *

Also Published As

Publication number Publication date
CN114782682B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN108875747A (en) A kind of wheat unsound grain recognition methods based on machine vision
CN112257702A (en) Crop disease identification method based on incremental learning
AU2020103260A4 (en) Rice blast grading system and method
Liao et al. Automatic segmentation of crop/background based on luminance partition correction and adaptive threshold
CN115578660B (en) Land block segmentation method based on remote sensing image
CN111882555B (en) Deep learning-based netting detection method, device, equipment and storage medium
CN111738931B (en) Shadow removal algorithm for aerial image of photovoltaic array unmanned aerial vehicle
CN112258545A (en) Tobacco leaf image online background processing system and online background processing method
CN112070717A (en) Power transmission line icing thickness detection method based on image processing
CN109903275B (en) Fermented grain mildewing area detection method based on self-adaptive multi-scale filtering and histogram comparison
CN111667509B (en) Automatic tracking method and system for moving target under condition that target and background colors are similar
CN111612797B (en) Rice image information processing system
CN114782682B (en) Agricultural pest image intelligent identification method based on neural network
CN115601690B (en) Edible fungus environment detection method based on intelligent agriculture
CN110223253B (en) Defogging method based on image enhancement
CN111611940A (en) Rapid video face recognition method based on big data processing
CN114820707A (en) Calculation method for camera target automatic tracking
Di et al. The research on the feature extraction of sunflower leaf rust characteristics based on color and texture feature
CN110348530B (en) Method for identifying lipstick number
Dai et al. Research of segmentation method on image of Lingwu Long Jujubes based on a new extraction model of Hue
Biswas et al. A novel inspection of paddy leaf disease classification using advance image processing techniques
CN116523910B (en) Intelligent walnut maturity detection method based on image data
CN115601358B (en) Tongue picture image segmentation method under natural light environment
CN116258968B (en) Method and system for managing fruit diseases and insects
CN112116580B (en) Detection method, system and equipment for camera support

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant