CN113989509A - Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition

Info

Publication number
CN113989509A
Authority
CN
China
Prior art keywords: image, crop, training, pixel, crop pest
Legal status: Granted
Application number
CN202111608973.XA
Other languages: Chinese (zh)
Other versions: CN113989509B (en)
Inventor
李玲 (Li Ling)
Current Assignee
Hengshui University
Original Assignee
Hengshui University
Application filed by Hengshui University
Priority to CN202111608973.XA
Publication of CN113989509A
Application granted
Publication of CN113989509B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

The invention belongs to the technical field of image recognition, and particularly relates to a crop pest detection method, a crop pest detection system and crop pest detection equipment based on image recognition, aiming to solve the problem that existing image-recognition-based crop pest detection results have low accuracy and precision. The invention comprises the following steps: collecting a crop pest training image set; performing lossless compression and uploading to a cloud server; carrying out the preprocessing operations of image filtering and denoising, foreground and background segmentation and feature extraction on each image; carrying out soft label marking of the preprocessed images in combination with expert prior knowledge; constructing a crop pest detection model based on a convolutional neural network, and training the model on the preprocessed training image set with soft labels; and carrying out crop pest detection through the trained crop pest detection model, and issuing the detection results in real time. The invention has high detection efficiency and high accuracy and precision of detection results, and can identify the crop pest category and the pest period in detail.

Description

Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a crop pest detection method, a crop pest detection system and crop pest detection equipment based on image recognition.
Background
In crop production, pests and diseases have always been fundamental problems plaguing crop growth. Because crop pests are of many types and occur at high density, crop yield is greatly reduced; meanwhile, owing to deficiencies in crop pest identification, pesticides are often over-sprayed during pest treatment, causing serious ecological imbalance.
Traditionally, diagnosis of crop pest abnormalities has relied on human experts. However, human expert diagnosis is time-consuming and labor-intensive, and the number of experts is far from sufficient for vast crop planting areas. In addition, human expert diagnosis suffers from low recognition speed, low recognition accuracy and strong subjectivity, since each expert's ability differs.
With the development of image recognition technology, crop pest and disease identification based on deep-learning target detection has become a research hotspot owing to its advantages of being non-invasive, fast and low-cost. At present, crop pest and disease identification methods fall into the following categories: first, image classification methods classify the pests and diseases in an image but cannot locate the specific position of a target; they are easily influenced by the environment in practical application, and the recognition rate is low, especially when the target is small. Second, general target detection methods locate the target position with a rectangular frame and identify the target category, but the recognition effect is easily influenced by the training set, and data of various sizes must be supplemented. Third, instance segmentation methods locate the target with a polygonal region and identify the target category, but their labeling cost is high, their speed is low, and data of various sizes must also be supplemented.
In general, in the prior art, the number of data sets is small, and the model training effect is poor, so that the accuracy and precision of the detection result cannot meet the requirements.
Disclosure of Invention
In order to solve the above problems in the prior art, namely the problem that the accuracy and precision of the existing crop pest detection result based on image recognition are low, the invention provides a crop pest detection method based on image recognition, which comprises the following steps:
step S10, collecting a crop pest training image set through an image collecting device at a set position of a user terminal; the crop pest training image set comprises images of different types of crops without pests, images of the same type of pests and different pest periods of different types of crops, and images of the same type of crops and different types of pests and different pest periods of the same type of crops;
step S20, each image acquisition device performs lossless compression on the acquired crop pest training images and uploads the compressed images to a cloud server;
step S30, the cloud server side carries out preprocessing operations of image filtering and denoising, foreground background segmentation and feature extraction on each crop pest training image;
step S40, combining expert prior knowledge to carry out soft label marking of each preprocessed image to obtain a preprocessed training image set with soft labels;
s50, constructing a crop pest detection model by the cloud server side based on the convolutional neural network, and performing model training through a preprocessing training image set with a soft label;
and step S60, based on the crop images uploaded by the image acquisition device in real time, carrying out crop pest detection by the cloud server through the trained crop pest detection model, and issuing the detection result to the corresponding user side in real time.
Further, the image filtering and denoising method comprises the following steps:
step S311, denoting any pixel point of the primary filtered image as $P$, and computing the primary filtered pixel value $f'(P)$ of $P$ from the pixel values of the 4×4 pixel area nearest to $P$, obtaining the primary filtered image;
step S312, setting the height and width of a secondary filtering window as odd numbers, and skipping to step S314 if the maximum value and the minimum value of each pixel value in the secondary filtering window are equal or the central pixel value and the maximum value or the minimum value are equal; otherwise, calculating a difference value between the maximum value and the minimum value, presetting the weighting weight of the central pixel based on the difference value, and carrying out weighted summation on the central pixel value to obtain the weighted central pixel value;
step 313, calculating an average value of all pixel values in the secondary filtering window based on the weighted central pixel value, and taking the average value as a secondary filtering pixel value of the central pixel;
and S314, performing traversal secondary filtering on all pixels of the primary filtered image by adopting the method corresponding to the steps S312 to S313 through the secondary filtering window to finish filtering and denoising of the image.
Further, the primary filtered pixel value $f'(P)$ of the pixel point $P$ is expressed as:

$$f'(P)=\begin{bmatrix}S(1+u)&S(u)&S(1-u)&S(2-u)\end{bmatrix}\begin{bmatrix}f(P_{11})&f(P_{12})&f(P_{13})&f(P_{14})\\f(P_{21})&f(P_{22})&f(P_{23})&f(P_{24})\\f(P_{31})&f(P_{32})&f(P_{33})&f(P_{34})\\f(P_{41})&f(P_{42})&f(P_{43})&f(P_{44})\end{bmatrix}\begin{bmatrix}S(1+v)&S(v)&S(1-v)&S(2-v)\end{bmatrix}^{T}$$

wherein $u$ and $v$ are respectively predefined primary filtering hyperparameters, $P_{11},P_{12},\ldots,P_{44}$ are the 16 nearest neighbor pixel points of the 4×4 pixel region nearest to $P$, with $P_{11}$ located at the upper left of $P$, $f(P_{11}),f(P_{12}),\ldots,f(P_{44})$ are the pixel values of these 16 nearest neighbor pixel points, $S(\cdot)$ is the first-order filter function, and $T$ represents matrix transposition.
Further, the first-order filter function $S(\cdot)$ is expressed as:

$$S(x)=\begin{cases}a_{1}|x|^{3}+a_{2}|x|^{2}+a_{3},&0\le|x|<1\\a_{4}|x|^{3}+a_{5}|x|^{2}+a_{6}|x|+a_{7},&1\le|x|<2\\0,&|x|\ge 2\end{cases}$$

wherein $a_{1},a_{2},\ldots,a_{7}$ are respectively predefined first-order filter function hyperparameters.
Further, the foreground and background segmentation method comprises the following steps:
step S321, extracting a crop region image in the filtered and denoised first preprocessed image as a second preprocessed image;
step S322, acquiring an RGB channel value of each pixel of the second preprocessed image, and respectively judging the category of each pixel according to a set threshold value based on the RGB channel value; the categories comprise green series pixels, yellow series pixels, blue series pixels, purple series pixels and red series pixels;
step S323, dividing the second preprocessed image into a green series image, a yellow series image, a blue series image, a purple series image and a red series image according to the ratio of each type of pixel in the total pixels of the second preprocessed image;
step S324, extracting corresponding color features of the divided images according to the color series to which the images belong, and performing image conversion based on the color features to obtain a third preprocessed image;
step S325, constructing a target selection function of the optimal segmentation threshold of the third preprocessed image through a maximum inter-class error method, and solving the function to obtain the optimal segmentation threshold;
step S326, performing foreground and background segmentation on the third preprocessed image based on the optimal segmentation threshold to obtain a foreground image.
Further, the method for marking the soft label of the crop pest training image by combining the expert priori knowledge comprises the following steps:
step S41, pre-training an SVM classification model based on expert prior knowledge, and acquiring the probability that each crop pest training image belongs to each hard label through the pre-trained SVM classification model; the hard tags comprise pest-free tags and different pest period categories of different categories of pests;
and step S42, dividing the crop pest training images belonging to the same hard label into a subset, and normalizing the probability of each image in the subset belonging to the hard label category to obtain the soft label of each image corresponding to the hard label category.
Further, a training image set expansion step is provided between step S40 and step S50, and the method includes:
s4a, adding a plurality of Gaussian white noises with set standard deviations to each image in the training image set respectively to obtain a first extended training image set;
step S4b, performing image turning, random cutting, random angle rotation and random local deformation processing on each image in the first extended training image set to obtain a second extended training image set;
and S4c, generating an extended image through a loop generation countermeasure network based on the second extended training image set, and completing the extension of the training image set.
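For illustration, steps S4a and S4b might be sketched in Python as follows. This is a minimal sketch: the noise standard deviations, crop sizes and rotation range are assumed values, and the random local deformation of step S4b and the cyclic generative adversarial network expansion of step S4c are not shown.

```python
import numpy as np
import cv2

def expand_image(img: np.ndarray, rng: np.random.Generator) -> list:
    """Return augmented copies of one 8-bit RGB training image."""
    out = []
    for sigma in (5.0, 10.0, 15.0):  # S4a: Gaussian white noise, assumed sigmas
        noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
        out.append(np.clip(noisy, 0, 255).astype(np.uint8))
    out.append(cv2.flip(img, 1))     # S4b: horizontal flip
    h, w = img.shape[:2]
    y, x = int(rng.integers(0, h // 4)), int(rng.integers(0, w // 4))
    out.append(img[y:y + 3 * h // 4, x:x + 3 * w // 4])  # random crop
    angle = float(rng.uniform(-30, 30))                  # random-angle rotation
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out.append(cv2.warpAffine(img, M, (w, h)))
    return out
```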
In another aspect of the present invention, an image recognition-based crop pest detection system is provided, the detection system comprising:
the image acquisition module is configured to acquire a crop pest training image set through an image acquisition device at a set position; the crop pest training image set comprises images of different types of crops without pests, images of the same type of pests and different pest periods of different types of crops, and images of the same type of crops and different types of pests and different pest periods of the same type of crops;
the image compression and transmission module is configured to perform lossless compression on the collected crop pest training images by each image collection device and upload the compressed images to the cloud server;
the image preprocessing module is configured to carry out preprocessing operations of image filtering denoising, foreground background segmentation and feature extraction on each crop pest training image;
the soft label marking module is configured to combine expert priori knowledge to carry out soft label marking on each preprocessed image to obtain a preprocessed training image set with soft labels;
the model building and training module is configured to build a crop pest detection model based on a convolutional neural network and perform model training through a preprocessing training image set with a soft label;
and the pest detection and detection result issuing module is configured to carry out crop pest detection through the trained crop pest detection model based on the crop images uploaded in real time and issue the detection result in real time.
In a third aspect of the present invention, an electronic device is provided, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement the image recognition-based crop pest detection method described above.
In a fourth aspect of the present invention, a computer-readable storage medium is provided, which stores computer instructions for being executed by the computer to implement the above-mentioned crop pest detection method based on image recognition.
The invention has the beneficial effects that:
(1) According to the crop pest detection method based on image recognition, the image is filtered twice: the first filtering retains more image detail, while the second filtering reduces the time complexity of algorithm processing through weighted averaging and better accounts for pixel changes at the edges of the target image, improving the accuracy, precision and processing efficiency of image denoising.
(2) According to the crop pest detection method based on image recognition, before foreground and background segmentation, the region where the crops are located is first extracted, reducing the influence of other clutter (such as the road surface and the sky) on subsequent recognition accuracy. The color series of each pixel is then judged from its channel values, and the image is divided into images of the various color series according to the proportion of each series' pixels in the image, so that the corresponding color features can subsequently be extracted. The target part of crops of each color series is thus effectively distinguished from the background, improving the accuracy and precision of foreground and background segmentation and thereby effectively improving the accuracy and precision of subsequent pest recognition.
(3) According to the crop pest detection method based on image recognition, the correlation and difference between different pests are described through soft labels, so that different types of pests on different crops can be effectively recognized, and the stage of a pest can be accurately identified.
(4) According to the crop pest detection method based on image recognition, after the image set is divided by the hard labels acquired from the SVM classification model, the soft labels are acquired by normalizing the probability of the hard label to which each image belongs, so that soft label marking of images can be realized quickly and effectively without spending a large amount of manpower and material resources.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a crop pest detection method based on image recognition according to the present invention;
FIG. 2 is a block diagram of a computer system of a server for implementing embodiments of the method, system, and apparatus of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention discloses a crop pest detection method based on image recognition, which comprises the following steps:
step S10, collecting a crop pest training image set through an image collecting device at a set position of a user terminal; the crop pest training image set comprises images of different types of crops without pests, images of the same type of pests and different pest periods of different types of crops, and images of the same type of crops and different types of pests and different pest periods of the same type of crops;
step S20, each image acquisition device performs lossless compression on the acquired crop pest training images and uploads the compressed images to a cloud server;
step S30, the cloud server side carries out preprocessing operations of image filtering and denoising, foreground background segmentation and feature extraction on each crop pest training image;
step S40, combining expert prior knowledge to carry out soft label marking of each preprocessed image to obtain a preprocessed training image set with soft labels;
s50, constructing a crop pest detection model by the cloud server side based on the convolutional neural network, and performing model training through a preprocessing training image set with a soft label;
and step S60, based on the crop images uploaded by the image acquisition device in real time, carrying out crop pest detection by the cloud server through the trained crop pest detection model, and issuing the detection result to the corresponding user side in real time.
In order to more clearly describe the method for detecting pest damage to crops based on image recognition, the following describes in detail the steps in the embodiment of the present invention with reference to fig. 1.
The crop pest detection method based on image recognition of the first embodiment of the invention comprises the steps of S10-S60, and the steps are described in detail as follows:
step S10, collecting a crop pest training image set through an image collecting device at a set position of a user terminal; the crop pest training image set comprises images of different types of crops without pests, images of the same type of pests and different pest periods of different types of crops and images of the same type of crops and different types of pests and different pest periods of the same type of crops.
The setting position of the image capturing device is closely related to the type of crops, the growth period of the crops, and the position of the possible insect pest of each crop, so that the image capturing device needs to be installed in the place where the insect pest is possible and the type of the crops can be clearly displayed.
The image acquisition device can adaptively adjust the position and angle of its camera according to the growth cycle of the crops. Taking corn as an example: when the corn has just sprouted, the whole plant is short, so the camera needs to be set at a low position and shoot downward from above; when the corn is about 1 meter tall, the camera height is adjusted accordingly and the shooting angle is preferably close to horizontal; when the corn is about 2 meters tall, the camera height is further adjusted and the shooting angle is preferably upward from below. The above camera positions and shooting angles are merely preferred; in other embodiments they can be adjusted according to the corresponding application scenario, which the invention does not detail one by one here.
And step S20, each image acquisition device performs lossless compression on the acquired crop pest training images and uploads the compressed images to a cloud server.
Before image preprocessing, the type and growth cycle of the crops can be identified, and the pest categories and pest periods likely to appear in the current growth cycle can be preliminarily screened through an expert prior knowledge base. The advantage of this preliminary screening is that, during subsequent pest recognition, an image can first be matched against the preliminary screening results; if the match succeeds, no further processing is needed. This effectively reduces resource consumption, making the method better suited to embedded and mobile devices with limited computing power, and also further improves recognition efficiency, which is an advantage in scenarios with higher real-time requirements.
In one embodiment of the invention, the crop pest training images are collected through a WeChat official account: users collect crop images with their mobile phones, compress them with image compression software on the phone (preferably lossless image compression software), and upload the compressed images to the official account.
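As an illustration, the client-side lossless compression and upload of step S20 might be sketched as follows; the server URL and form field names are hypothetical placeholders.

```python
import io
import requests
from PIL import Image

def compress_and_upload(path: str, url: str = "https://example.com/upload"):
    """Losslessly re-encode one crop image as PNG and POST it to the
    cloud server (URL and field names are placeholders)."""
    buf = io.BytesIO()
    Image.open(path).save(buf, format="PNG", optimize=True)  # lossless PNG
    buf.seek(0)
    return requests.post(url, files={"image": ("crop.png", buf, "image/png")})
```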
And step S30, the cloud server side carries out preprocessing operations of image filtering and denoising, foreground background segmentation and feature extraction on each crop pest training image.
The method for filtering and denoising the image comprises the following steps:
Step S311, denoting any pixel point of the primary filtered image as $P$, and computing the primary filtered pixel value $f'(P)$ of $P$ from the pixel values of the 4×4 pixel area nearest to $P$, thereby obtaining the primary filtered image.
The primary filtered pixel value $f'(P)$ of the pixel point $P$ is shown in formula (1):

$$f'(P)=\begin{bmatrix}S(1+u)&S(u)&S(1-u)&S(2-u)\end{bmatrix}\begin{bmatrix}f(P_{11})&f(P_{12})&f(P_{13})&f(P_{14})\\f(P_{21})&f(P_{22})&f(P_{23})&f(P_{24})\\f(P_{31})&f(P_{32})&f(P_{33})&f(P_{34})\\f(P_{41})&f(P_{42})&f(P_{43})&f(P_{44})\end{bmatrix}\begin{bmatrix}S(1+v)&S(v)&S(1-v)&S(2-v)\end{bmatrix}^{T}\qquad(1)$$

wherein $u$ and $v$ are respectively predefined primary filtering hyperparameters, $P_{11},P_{12},\ldots,P_{44}$ are the 16 nearest neighbor pixel points of the 4×4 pixel region nearest to $P$, with $P_{11}$ located at the upper left of $P$, $f(P_{11}),f(P_{12}),\ldots,f(P_{44})$ are the pixel values of these 16 nearest neighbor pixel points, $S(\cdot)$ is the first-order filter function, and $T$ represents matrix transposition.
The first-order filter function $S(\cdot)$ is shown in formula (2):

$$S(x)=\begin{cases}a_{1}|x|^{3}+a_{2}|x|^{2}+a_{3},&0\le|x|<1\\a_{4}|x|^{3}+a_{5}|x|^{2}+a_{6}|x|+a_{7},&1\le|x|<2\\0,&|x|\ge 2\end{cases}\qquad(2)$$

wherein $a_{1},a_{2},\ldots,a_{7}$ are respectively predefined first-order filter function hyperparameters.
Step S312, setting the height and width of a secondary filtering window as odd numbers, and skipping to step S314 if the maximum value and the minimum value of each pixel value in the secondary filtering window are equal or the central pixel value and the maximum value or the minimum value are equal; otherwise, calculating a difference value between the maximum value and the minimum value, presetting the weighting weight of the central pixel based on the difference value, and carrying out weighted summation on the central pixel value to obtain the weighted central pixel value.
Step S313, based on the weighted central pixel value, calculates an average value of all pixel values in the secondary filtering window, and uses the average value as the secondary filtering pixel value of the central pixel.
And S314, performing traversal secondary filtering on all pixels of the primary filtered image by adopting the method corresponding to the steps S312 to S313 through the secondary filtering window to finish filtering and denoising of the image.
The image is filtered twice: the first filtering retains more image detail, while the second filtering reduces the time complexity of algorithm processing through weighted averaging and better accounts for pixel changes at the edges of the target image, improving the accuracy, precision and processing efficiency of image denoising.
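As a minimal sketch, the secondary filtering of steps S312 to S314 could be implemented as follows. Grayscale input is assumed, and the centre-weighting rule derived from the max-min difference is an assumption, since the patent only states that the weight is preset from that difference; the primary filtering of step S311 corresponds to formula (1).

```python
import numpy as np

def secondary_filter(img: np.ndarray, k: int = 3, alpha: float = 2.0) -> np.ndarray:
    """Steps S312-S314: adaptive weighted window-mean filtering of the
    primary filtered image. k is the (odd) window size; alpha scales the
    centre weight (both assumed hyperparameters)."""
    assert k % 2 == 1
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = img.astype(np.float32).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            lo, hi = win.min(), win.max()
            c = win[pad, pad]
            if hi == lo or c == hi or c == lo:  # S312: leave pixel unchanged
                continue
            weight = 1.0 + alpha * (hi - lo) / 255.0   # assumed weighting rule
            weighted_sum = win.sum() - c + weight * c  # weighted centre value
            out[i, j] = weighted_sum / (k * k)         # S313: window mean
    return out
```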
The foreground and background segmentation method comprises the following steps:
step S321, extracting a crop region map in the filtered and denoised first preprocessed image as a second preprocessed image.
Step S322, acquiring an RGB channel value of each pixel of the second preprocessed image, and respectively judging the category of each pixel according to a set threshold value based on the RGB channel value; the categories include green series pixels, yellow series pixels, blue series pixels, purple series pixels, and red series pixels.
In step S323, the second preprocessed image is divided into a green-series image, a yellow-series image, a blue-series image, a violet-series image, and a red-series image according to the ratio of each type of pixel in the total pixels of the second preprocessed image.
Step S324, extracting corresponding color features of the divided images according to the color series to which the images belong, and performing image conversion based on the color features to obtain a third preprocessed image.
Taking a green plant as an example (correspondingly, the super-green feature of the image is extracted subsequently): according to the characteristic that the G color component of an RGB crop image is larger than the R and B color components, the G component is enhanced and the RGB color components are normalized. The super-green feature is calculated as shown in formula (3):

$$E_{g}=2g-r-b\qquad(3)$$

wherein, for the current pixel, $E_{g}$ represents the super-green feature, and $r$, $g$, $b$ respectively represent the normalized R, G, B chromaticity values of the current pixel.
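As a brief illustration, the super-green map of formula (3) can be computed as follows; this is a minimal sketch assuming an H × W × 3 8-bit RGB array.

```python
import numpy as np

def super_green(img: np.ndarray) -> np.ndarray:
    """Super-green feature map per formula (3): E_g = 2g - r - b,
    computed on chromaticity values normalized by R + G + B."""
    rgb = img.astype(np.float32)
    chrom = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-6)  # avoid zero division
    r, g, b = chrom[..., 0], chrom[..., 1], chrom[..., 2]
    return 2.0 * g - r - b
```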
Step S325, construct the target selection function of the optimal segmentation threshold of the third preprocessed image by the maximum inter-class error method, and solve the function to obtain the optimal segmentation threshold.
The target selection function of the optimal segmentation threshold is shown in formula (4):

$$g(t)=\frac{\left[\mu\,\omega(t)-\mu(t)\right]^{2}}{\omega(t)\left[1-\omega(t)\right]}\qquad(4)$$

wherein $g(t)$ is the target selection function, $t$ is the candidate segmentation threshold and the optimal segmentation threshold is $t^{*}=\arg\max_{t}g(t)$, $\mu(t)$ is the cumulative average gray of the image pixels whose gray level is less than or equal to $t$ (the pixel gray factor of the image), $\mu$ is the total average gray of the image, and $\omega(t)$ is the proportion of the pixels whose gray level is less than or equal to $t$ in the total number of image pixels.
In an embodiment of the present invention, the optimal segmentation threshold may also be obtained by comparing the first preprocessed image with the second preprocessed image:
comparing the first pre-processed image with the second pre-processed image to obtain a histogram of a double-peak shape;
and taking the pixel value corresponding to the lowest position between the two peaks in the histogram of the double-peak shape as the optimal segmentation threshold value of the image.
Step S326, performing foreground and background segmentation on the third preprocessed image based on the optimal segmentation threshold to obtain a foreground image.
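For concreteness, the threshold of formula (4) and the segmentation of step S326 might be sketched as follows, assuming the third preprocessed image has been scaled to an 8-bit single-channel map (e.g. a rescaled super-green map):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Optimal threshold by the maximum inter-class error criterion of
    formula (4). gray is an 8-bit single-channel image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # omega(t): share of pixels <= t
    mu_t = np.cumsum(p * np.arange(256))  # mu(t): cumulative gray mean up to t
    mu = mu_t[-1]                         # total average gray of the image
    with np.errstate(divide="ignore", invalid="ignore"):
        g = (mu * omega - mu_t) ** 2 / (omega * (1.0 - omega))
    g[~np.isfinite(g)] = 0.0
    return int(np.argmax(g))              # t* = argmax g(t)

# Step S326: pixels above the threshold form the foreground.
# foreground_mask = gray > otsu_threshold(gray)
```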
Feature extraction is an important link in the image recognition process: it converts the visual features of an image into quantitative features, and the selection and values of the feature parameters directly influence the subsequent construction of the image recognition classifier. In one embodiment of the invention, a multi-feature-parameter extraction method is adopted to extract features from the foreground image obtained after image segmentation, and the selected features mainly cover three aspects: color, texture and form.
Color features are extracted by an image color feature extraction method based on the HIS model: the RGB color information of the image is collected first and then converted using the conversion relation between the RGB and HIS models, and the image color information is quantized before the color features are calculated, specifically as follows:
the hue, saturation and brightness of the image are processed with a non-uniform quantization method; in one embodiment of the invention, the hue H is quantized into 8 levels ([0-7]) and the saturation S and brightness V are each quantized into 3 levels ([0-2]), so that the final HIS model divides the color into 8 × 3 × 3 = 72 subspaces;
the quantized HIS color space is unified into a single index

$$L=H\,Q_{S}\,Q_{V}+S\,Q_{V}+V=9H+3S+V$$

wherein $Q_{S}$ and $Q_{V}$ respectively represent the quantization levels of S and V (here $Q_{S}=Q_{V}=3$);
after the HIS model completes the image color quantization, four parameters, namely the gray mean, variance, energy and entropy, are used to represent the image color information.
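A minimal sketch of the 72-subspace quantization follows, assuming hue is given in degrees and saturation/brightness are normalized to [0, 1]; uniform bin boundaries are used here for brevity, whereas the patent applies non-uniform quantization.

```python
import numpy as np

def his_quantize(h: np.ndarray, s: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Quantize H into 8 levels and S, V into 3 levels each, then unify
    into the single index L = 9H + 3S + V in [0, 71]."""
    Hq = np.minimum((h / 360.0 * 8).astype(int), 7)  # hue -> [0, 7]
    Sq = np.minimum((s * 3).astype(int), 2)          # saturation -> [0, 2]
    Vq = np.minimum((v * 3).astype(int), 2)          # brightness -> [0, 2]
    return 9 * Hq + 3 * Sq + Vq
```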
Image texture is an image characteristic that expresses the distribution of pixels within an image; it complements the image color information well and helps reflect the real content of the image.
In one embodiment of the invention, the image texture features are extracted with a difference-based method. Gray-level variation in an image manifests visually as image texture. The gray-difference method takes the differences of the image gray values and, through statistical analysis of these differences, generates an image difference histogram that quantitatively represents the texture features of the image, as sketched below.
Morphological characteristics are one of the factors that can describe crop pests; combining the morphological characteristics of an image with its color and texture features reflects the image characteristics better. Morphological characteristics that are independent of size and orientation and have stable geometric properties are selected, such as rectangularity, compactness, elongation, roundness, the major and minor axes of the equivalent ellipse, and foliation, which can better represent the morphology of the same type of pest.
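For the texture part, the gray-difference statistics might be computed as follows; a horizontal unit offset is assumed, and the particular summary statistics chosen here are a common convention for difference histograms rather than values taken from the patent.

```python
import numpy as np

def gray_difference_features(gray: np.ndarray) -> dict:
    """Texture features from the gray-difference histogram of an 8-bit
    single-channel image, using a horizontal offset of one pixel."""
    diff = np.abs(gray[:, 1:].astype(int) - gray[:, :-1].astype(int))
    hist = np.bincount(diff.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                 # normalized difference histogram
    levels = np.arange(256)
    nz = p > 0
    return {
        "mean": float((levels * p).sum()),
        "contrast": float((levels ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "entropy": float(-(p[nz] * np.log2(p[nz])).sum()),
    }
```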
Step S40, combining expert prior knowledge to carry out soft label marking of each preprocessed image, and obtaining a preprocessed training image set with soft labels:
step S41, pre-training an SVM classification model based on expert prior knowledge, and acquiring the probability that each crop pest training image belongs to each hard label through the pre-trained SVM classification model; the hard tags comprise pest-free tags and different pest period categories of different categories of pests;
and step S42, dividing the crop pest training images belonging to the same hard label into a subset, and normalizing the probability of each image in the subset belonging to the hard label category to obtain the soft label of each image corresponding to the hard label category.
And S50, constructing a crop pest detection model by the cloud server side based on the convolutional neural network, and performing model training through the preprocessing training image set with the soft label.
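A minimal PyTorch sketch of soft-label training for step S50 follows; the architecture, layer sizes and training loop are illustrative assumptions, not the patent's network.

```python
import torch
import torch.nn as nn

class PestNet(nn.Module):
    """Toy convolutional classifier: image -> class scores."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_step(model, optimizer, images, soft_targets):
    """Cross-entropy against soft labels (one probability vector per image)."""
    log_p = torch.log_softmax(model(images), dim=1)
    loss = -(soft_targets * log_p).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```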
And step S60, based on the crop images uploaded by the image acquisition device in real time, carrying out crop pest detection by the cloud server through the trained crop pest detection model, and issuing the detection result to the corresponding user side in real time.
The detection result finally obtained by the invention includes whether the crop suffers from a pest, which pest is present, and which stage the pest is in (early, middle, late, etc.). Different pest control methods can then be selected for different pests at different stages, providing an effective diagnostic basis for better protecting crops and improving crop yield.
Although the foregoing embodiments describe the steps in the above sequential order, those skilled in the art will understand that, in order to achieve the effect of the present embodiments, the steps may not be executed in such an order, and may be executed simultaneously (in parallel) or in an inverse order, and these simple variations are within the scope of the present invention.
The pest blocking alarm method based on crop pest identification of the second embodiment of the invention is based on the crop pest detection method based on image identification, and comprises the following steps:
dividing an area to be identified into a plurality of blocks according to the crop category, and dividing each block into sub-blocks with set sizes; the area of each sub-block is between the set upper threshold and the lower threshold, and the shape of each sub-block can be non-uniform, such as square, rectangle, circle, ellipse or irregular shape.
A plurality of timing diagrams of the crops in each sub-block are obtained (image acquisition can be performed by an image acquisition device set at a fixed position with adjustable height and angle, by manual inspection, by unmanned aerial vehicle, or by mobile equipment such as a mobile phone). A timing diagram is an image sequence continuously acquired at set time intervals during a period in which pests may occur during crop growth; for example, during the rice heading period, 10 images can be acquired at 1-day intervals (images of different points of one sub-block, and images of the same point shot from different angles).
And carrying out crop pest identification on each obtained image, taking a sub-block where a pest-damaged crop is located as a pest path starting point, and carrying out preliminary pest path prediction through a path prediction model.
The prediction result (the next sub-block where the pest may appear) is compared with the sub-block where the pest actually appears, so as to obtain the influence of the crop types, block distribution and weather conditions between the two pest occurrences on the direction in which the pest develops.
And adjusting parameters of the path prediction model according to the influence, and iteratively performing prediction and model adjustment until a model training end condition is reached (the error between the prediction result and the real result is less than a set threshold value, or the number of model training times reaches the set number of times).
Pest path prediction is performed with the trained model, the prediction result is fed back, and a crop pest alarm is issued; the alarm content includes the sub-blocks where pests may occur, the pest categories that may occur, and the time points at which pests may occur.
And insect pest prevention is carried out on the sub-block with the insect pest alarm, insect pest blocking is carried out, and the influence of further extension of insect pests on crop harvest is reduced.
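The predict-and-adjust loop of this embodiment might look as follows; the path model object, its predict/adjust methods and the data layout are hypothetical placeholders.

```python
def fit_path_model(model, observed_spread, max_rounds=100, tol=0.05):
    """observed_spread: iterable of (start_block, context, true_next_block),
    where context carries crop categories, block distribution and weather.
    Iterates prediction and parameter adjustment until the error rate drops
    below tol or max_rounds is reached (the end conditions of the method)."""
    for _ in range(max_rounds):
        errors = []
        for start, context, true_next in observed_spread:
            predicted = model.predict(start, context)  # next sub-block at risk
            errors.append(predicted != true_next)
            model.adjust(start, context, true_next)    # hypothetical update
        if sum(errors) / max(len(errors), 1) < tol:
            break
    return model
```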
The invention provides a crop pest detection system based on image recognition, which comprises:
the image acquisition module is configured to acquire a crop pest training image set through an image acquisition device at a set position; the crop pest training image set comprises images of different types of crops without pests, images of the same type of pests and different pest periods of different types of crops, and images of the same type of crops and different types of pests and different pest periods of the same type of crops;
the image compression and transmission module is configured to perform lossless compression on the collected crop pest training images by each image collection device and upload the compressed images to the cloud server;
the image preprocessing module is configured to carry out preprocessing operations of image filtering denoising, foreground background segmentation and feature extraction on each crop pest training image;
the soft label marking module is configured to combine expert priori knowledge to carry out soft label marking on each preprocessed image to obtain a preprocessed training image set with soft labels;
the model building and training module is configured to build a crop pest detection model based on a convolutional neural network and perform model training through a preprocessing training image set with a soft label;
and the pest detection and detection result issuing module is configured to carry out crop pest detection through the trained crop pest detection model based on the crop images uploaded in real time and issue the detection result in real time.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the crop pest detection system based on image recognition provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An electronic apparatus according to a fourth embodiment of the present invention includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement the image recognition-based crop pest detection method described above.
A computer-readable storage medium of a fifth embodiment of the present invention stores computer instructions for execution by the computer to implement the method for detecting pest damage to crops based on image recognition.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may be located in Random Access Memory (RAM), Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Referring now to FIG. 2, therein is shown a schematic block diagram of a computer system of a server for implementing embodiments of the method, system, and apparatus of the present application. The server shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 2, the computer system includes a Central Processing Unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An Input/Output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 208 including a hard disk and the like; and a communication section 209 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 210 as necessary, so that a computer program read out therefrom is mounted into the storage section 208 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The above-described functions defined in the method of the present application are performed when the computer program is executed by the Central Processing Unit (CPU) 201. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (8)

1. A crop pest detection method based on image recognition is characterized by comprising the following steps:
step S10, collecting a crop pest training image set through an image collecting device at a set position of a user terminal; the crop pest training image set comprises images of different types of crops without pests, images of the same type of pests and different pest periods of different types of crops, and images of the same type of crops and different types of pests and different pest periods of the same type of crops;
step S20, each image acquisition device performs lossless compression on the acquired crop pest training images and uploads the compressed images to a cloud server;
step S30, the cloud server side carries out preprocessing operations of image filtering and denoising, foreground background segmentation and feature extraction on each crop pest training image;
step S40, combining expert prior knowledge to carry out soft label marking of each preprocessed image to obtain a preprocessed training image set with soft labels;
s50, constructing a crop pest detection model by the cloud server side based on the convolutional neural network, and performing model training through a preprocessing training image set with a soft label;
step S60, based on the crop images uploaded by the image acquisition device in real time, the cloud server side carries out crop pest detection through the trained crop pest detection model, and issues the detection result to the corresponding user side in real time;
the image filtering and denoising method comprises the following steps:
step S311, denoting any pixel point of the primary filtered image as $P$, and computing the primary filtered pixel value $f'(P)$ of $P$ from the pixel values of the 4×4 pixel area nearest to $P$, obtaining the primary filtered image;
step S312, setting the height and width of a secondary filtering window as odd numbers, and skipping to step S314 if the maximum value and the minimum value of each pixel value in the secondary filtering window are equal or the central pixel value and the maximum value or the minimum value are equal; otherwise, calculating a difference value between the maximum value and the minimum value, presetting the weighting weight of the central pixel based on the difference value, and carrying out weighted summation on the central pixel value to obtain the weighted central pixel value;
step 313, calculating an average value of all pixel values in the secondary filtering window based on the weighted central pixel value, and taking the average value as a secondary filtering pixel value of the central pixel;
step S314, traversing all pixels of the primary filtered image with the secondary filtering window and applying the method of steps S312 to S313, to complete the filtering and denoising of the image.
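For concreteness, the following Python sketch illustrates the secondary filtering of steps S312 to S314 on a grayscale image. The claims state only that the central pixel's weight is preset from the max-min difference of the window; the specific rule used here (weight = 1 + (max - min)/255) and the 3 x 3 window size are illustrative assumptions, not taken from the patent.

    import numpy as np

    def secondary_filter(img, win=3):
        # Sketch of steps S312-S314; win must be odd (claim 1, step S312).
        pad = win // 2
        padded = np.pad(img.astype(np.float64), pad, mode='edge')
        out = img.astype(np.float64).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                window = padded[y:y + win, x:x + win]
                lo, hi = window.min(), window.max()
                centre = window[pad, pad]
                # Step S312: skip when the window is flat or the centre is extremal.
                if hi == lo or centre == hi or centre == lo:
                    continue
                weight = 1.0 + (hi - lo) / 255.0      # assumed weighting rule
                weighted = window.copy()
                weighted[pad, pad] = centre * weight  # weighted central pixel value
                # Step S313: average over the window using the weighted centre.
                out[y, x] = weighted.mean()
        return np.clip(out, 0, 255).astype(np.uint8)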
2. The image-recognition-based crop pest detection method according to claim 1, wherein the once-filtered pixel value g(p) of pixel point p is expressed as:
[formula present only as an image in the source; not recoverable]
wherein the formula involves two predefined primary filtering hyperparameters; p11, p12, ..., p44, the 16 nearest-neighbor pixel points of p in the 4 x 4 pixel region, with p22 located at the upper left corner of p; f(pij), the pixel values of the 16 nearest-neighbor pixel points; w(.), the first-order filter function; and a matrix transposition.
3. The image-recognition-based crop pest detection method according to claim 2, wherein the first-order filter function w(.) is expressed as:
[formula present only as an image in the source; not recoverable]
wherein the seven constants appearing in the formula are respectively predefined first-order filter function hyperparameters.
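What survives of the formulas in claims 2 and 3 — a 4 x 4 neighborhood, per-axis weights produced by a first-order filter function with predefined hyperparameters, and a matrix transposition — matches the structure of standard cubic-convolution (bicubic) filtering. The Python sketch below follows that standard form only as a plausible reading; the kernel shape and the single hyperparameter a = -0.5 stand in for the patent's own, unrecoverable, hyperparameterized definitions.

    import numpy as np

    def w(x, a=-0.5):
        # Assumed first-order filter kernel (standard cubic-convolution form).
        x = abs(x)
        if x <= 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        if x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    def primary_filter_value(F, u, v, a=-0.5):
        # g(p) = wu . F . wv^T over the 4 x 4 neighborhood F, where (u, v) is
        # the fractional offset of p inside the neighborhood; the row/column
        # weight vectors and the transposition mirror the structure of claim 2.
        wu = np.array([w(1 + u, a), w(u, a), w(1 - u, a), w(2 - u, a)])
        wv = np.array([w(1 + v, a), w(v, a), w(1 - v, a), w(2 - v, a)])
        return float(wu @ F @ wv)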
4. The image-recognition-based crop pest detection method according to claim 1, wherein the foreground and background segmentation comprises the following steps:
step S321, extracting a crop region image in the filtered and denoised first preprocessed image as a second preprocessed image;
step S322, acquiring an RGB channel value of each pixel of the second preprocessed image, and respectively judging the category of each pixel according to a set threshold value based on the RGB channel value; the categories comprise green series pixels, yellow series pixels, blue series pixels, purple series pixels and red series pixels;
step S323, dividing the second preprocessed image into a green series image, a yellow series image, a blue series image, a purple series image and a red series image according to the ratio of each type of pixel in the total pixels of the second preprocessed image;
step S324, extracting corresponding color features of the divided images according to the color series to which the images belong, and performing image conversion based on the color features to obtain a third preprocessed image;
step S325, constructing a target selection function for the optimal segmentation threshold of the third preprocessed image through the maximum between-class variance method, and solving the function to obtain the optimal segmentation threshold;
step S326, performing foreground and background segmentation on the third preprocessed image based on the optimal segmentation threshold to obtain a foreground image.
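Reading step S325's criterion as Otsu's between-class variance maximization (the usual rendering of this mistranslated term), a minimal threshold-selection sketch looks as follows; the 0-255 histogram range assumes 8-bit grayscale input.

    import numpy as np

    def otsu_threshold(gray):
        # Step S325 sketch: pick the threshold maximizing between-class variance.
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        total = gray.size
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0 = hist[:t].sum() / total              # background weight
            w1 = 1.0 - w0                            # foreground weight
            if w0 == 0.0 or w1 == 0.0:
                continue
            mu0 = (hist[:t] * np.arange(t)).sum() / (w0 * total)
            mu1 = (hist[t:] * np.arange(t, 256)).sum() / (w1 * total)
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # Step S326 sketch: foreground mask from the optimal threshold.
    # foreground = third_preprocessed_image > otsu_threshold(third_preprocessed_image)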
5. The image-recognition-based crop pest detection method according to claim 1, wherein the soft label marking of the crop pest training images is performed in combination with expert prior knowledge, by the following steps:
step S41, pre-training an SVM classification model based on expert prior knowledge, and acquiring, through the pre-trained SVM classification model, the probability that each crop pest training image belongs to each hard label; the hard labels comprise a pest-free label and categories for different pest periods of different pest types;
step S42, dividing the crop pest training images belonging to the same hard label into a subset, and normalizing the probability of each image in the subset belonging to that hard label category to obtain the soft label of each image for that hard label category.
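Steps S41 and S42 can be sketched with scikit-learn's probabilistic SVM; the claims do not fix the normalization rule, so dividing by the subset maximum below is an illustrative assumption, as are the feature and label array names.

    import numpy as np
    from sklearn.svm import SVC

    def soft_labels(features, hard_labels):
        # Step S41: pre-train an SVM and get per-hard-label probabilities.
        svm = SVC(probability=True).fit(features, hard_labels)
        proba = svm.predict_proba(features)
        # Step S42: within each same-hard-label subset, normalize the
        # probability of belonging to that hard label (rule assumed here).
        soft = np.zeros(len(hard_labels))
        for label in np.unique(hard_labels):
            idx = np.where(hard_labels == label)[0]
            col = list(svm.classes_).index(label)
            p = proba[idx, col]
            soft[idx] = p / p.max()
        return soft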
6. The image-recognition-based crop pest detection method according to claim 1, wherein a training image set expansion step is further provided between step S40 and step S50, comprising:
s4a, adding a plurality of Gaussian white noises with set standard deviations to each image in the training image set respectively to obtain a first extended training image set;
step S4b, performing image flipping, random cropping, random-angle rotation and random local deformation processing on each image in the first extended training image set to obtain a second extended training image set;
step S4c, generating extended images through a cycle-consistent generative adversarial network (CycleGAN) based on the second extended training image set, to complete the extension of the training image set.
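Steps S4a and S4b admit a short NumPy sketch; the noise standard deviations and crop fractions below are assumed values, and step S4c's CycleGAN stage is omitted since it requires a trained generator.

    import numpy as np

    def expand_with_noise(images, sigmas=(2.0, 5.0, 10.0)):
        # Step S4a: add white Gaussian noise at several set standard deviations.
        out = []
        for img in images:
            for s in sigmas:
                noisy = img.astype(np.float64) + np.random.normal(0.0, s, img.shape)
                out.append(np.clip(noisy, 0, 255).astype(np.uint8))
        return out

    def expand_geometric(images, rng=np.random.default_rng()):
        # Step S4b: flips, random crops and random rotations
        # (random local deformation omitted for brevity).
        out = []
        for img in images:
            out.append(np.fliplr(img))                               # flip
            h, w = img.shape[:2]
            y0 = int(rng.integers(0, h // 4 + 1))
            x0 = int(rng.integers(0, w // 4 + 1))
            out.append(img[y0:y0 + 3 * h // 4, x0:x0 + 3 * w // 4])  # random crop
            out.append(np.rot90(img, k=int(rng.integers(1, 4))))     # random rotation
        return out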
7. A crop pest detection system based on image recognition is characterized in that the detection system comprises:
the image acquisition module is configured to acquire a crop pest training image set through an image acquisition device at a set position; the crop pest training image set comprises images of different types of crops without pests, images of different types of crops with the same type of pest at different pest periods, and images of the same type of crop with different types of pests at different pest periods;
the image compression and transmission module is configured to perform lossless compression on the collected crop pest training images by each image collection device and upload the compressed images to the cloud server;
the image preprocessing module is configured to carry out preprocessing operations of image filtering denoising, foreground background segmentation and feature extraction on each crop pest training image;
the soft label marking module is configured to perform soft label marking on each preprocessed image in combination with expert prior knowledge to obtain a preprocessed training image set with soft labels;
the model building and training module is configured to build a crop pest detection model based on a convolutional neural network and perform model training through the preprocessed training image set with soft labels;
the pest detection and detection result issuing module is configured to perform crop pest detection through a trained crop pest detection model based on the crop images uploaded in real time and issue a detection result in real time;
the image filtering and denoising method comprises the following steps:
step S311, denoting as p any pixel point of the once-filtered image, and obtaining its once-filtered pixel value g(p) from the influence of the pixel values of the nearest 4 x 4 pixel region on the pixel value at p, thereby obtaining a primary filtered image;
step S312, setting the height and width of a secondary filtering window to odd numbers; if the maximum and minimum pixel values in the secondary filtering window are equal, or the central pixel value equals the maximum or the minimum, skipping to step S314; otherwise, calculating the difference between the maximum and the minimum, presetting the weighting weight of the central pixel based on the difference, and weighting the central pixel value to obtain a weighted central pixel value;
step S313, calculating the average of all pixel values in the secondary filtering window using the weighted central pixel value, and taking the average as the secondary filtered pixel value of the central pixel;
step S314, traversing all pixels of the primary filtered image with the secondary filtering window and applying the method of steps S312 to S313, to complete the filtering and denoising of the image.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the image-recognition-based crop pest detection method of any one of claims 1 to 6.
CN202111608973.XA 2021-12-27 2021-12-27 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition Active CN113989509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111608973.XA CN113989509B (en) 2021-12-27 2021-12-27 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition

Publications (2)

Publication Number Publication Date
CN113989509A true CN113989509A (en) 2022-01-28
CN113989509B CN113989509B (en) 2022-03-04

Family

ID=79734492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111608973.XA Active CN113989509B (en) 2021-12-27 2021-12-27 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition

Country Status (1)

Country Link
CN (1) CN113989509B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510529A (en) * 2018-03-14 2018-09-07 昆明理工大学 Graph cut stereo matching method based on adaptive weights
CN113228047A (en) * 2018-10-24 2021-08-06 克莱米特公司 Plant disease detection using multi-stage, multi-scale deep learning
CN109711471A (en) * 2018-12-28 2019-05-03 井冈山大学 Rice disease image recognition method based on deep convolutional neural networks
CN110310291A (en) * 2019-06-25 2019-10-08 四川省农业科学院农业信息与农村经济研究所 Rice blast grading system and method
CN110874835A (en) * 2019-10-25 2020-03-10 北京农业信息技术研究中心 Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN111179216A (en) * 2019-12-03 2020-05-19 中国地质大学(武汉) Crop disease identification method based on image processing and convolutional neural network
CN111460903A (en) * 2020-03-05 2020-07-28 浙江省农业科学院 System and method for monitoring growth of field broccoli based on deep learning
CN111860330A (en) * 2020-07-21 2020-10-30 陕西工业职业技术学院 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network
CN112257702A (en) * 2020-11-12 2021-01-22 武荣盛 Crop disease identification method based on incremental learning
CN112700406A (en) * 2020-12-09 2021-04-23 深圳市芯汇群微电子技术有限公司 Wafer defect detection method based on convolutional neural network
CN113379671A (en) * 2021-02-23 2021-09-10 华北电力大学 Partial discharge diagnosis system and diagnosis method for switch equipment
CN113468984A (en) * 2021-06-16 2021-10-01 哈尔滨理工大学 Crop pest and disease leaf identification system, identification method and pest and disease prevention method
CN113780027A (en) * 2021-08-16 2021-12-10 苏州科技大学 Multi-label object identification method, device and equipment based on augmented graph convolution
CN113705419A (en) * 2021-08-24 2021-11-26 上海劲牛信息技术有限公司 Crop disease, insect and weed identification processing method and device, electronic equipment and storage medium
CN113705472A (en) * 2021-08-30 2021-11-26 平安国际智慧城市科技股份有限公司 Abnormal camera checking method, device, equipment and medium based on image recognition
CN114004809A (en) * 2021-10-29 2022-02-01 北京百度网讯科技有限公司 Skin image processing method, device, electronic equipment and medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GUOXIONG ZHOU et al.: "Rapid Detection of Rice Disease Based on FCM-KM and Faster R-CNN Fusion", IEEE Access *
PRIYANKA SAHU et al.: "Deep Learning Models for Beans Crop Diseases: Classification and Visualization Techniques", ResearchGate *
WU JIANYU: "Crop pest and disease recognition and implementation based on deep convolutional neural networks", China Masters' Theses Full-text Database (Agricultural Science and Technology) *
LI BO et al.: "Identification and application of horticultural crop leaf diseases based on transfer learning", Baidu Wenku *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132883A (en) * 2023-05-08 2023-11-28 江苏商贸职业学院 GIS-based intelligent agricultural disaster discrimination method and system
CN117132883B (en) * 2023-05-08 2024-03-19 江苏商贸职业学院 GIS-based intelligent agricultural disaster discrimination method and system

Also Published As

Publication number Publication date
CN113989509B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
Aquino et al. Automated early yield prediction in vineyards from on-the-go image acquisition
US10977494B2 (en) Recognition of weed in a natural environment
CN107346434A (en) A kind of plant pest detection method based on multiple features and SVMs
CN110751019B (en) High-resolution image crop automatic extraction method and device based on deep learning
CN108875821A (en) The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing
US11455794B2 (en) System and method for orchard recognition on geographic area
EP3279831A1 (en) Recognition of weed in a natural environment using a digital image
CN113160150B (en) AI (Artificial intelligence) detection method and device for invasion of foreign matters in wire mesh
He et al. A robust method for wheatear detection using UAV in natural scenes
Der Yang et al. Real-time crop classification using edge computing and deep learning
Ji et al. In-field automatic detection of maize tassels using computer vision
CN104063686A (en) System and method for performing interactive diagnosis on crop leaf segment disease images
CN111199195A (en) Pond state full-automatic monitoring method and device based on remote sensing image
Liang et al. Low-cost weed identification system using drones
CN113989509B (en) Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
Rahim et al. Deep learning-based accurate grapevine inflorescence and flower quantification in unstructured vineyard images acquired using a mobile sensing platform
Sukhanov et al. Fusion of LiDAR, hyperspectral and RGB data for urban land use and land cover classification
AHM et al. A deep convolutional neural network based image processing framework for monitoring the growth of soybean crops
CN117456358A (en) Method for detecting plant diseases and insect pests based on YOLOv5 neural network
CN116543325A (en) Unmanned aerial vehicle image-based crop artificial intelligent automatic identification method and system
CN114612794A (en) Remote sensing identification method for land covering and planting structure in finely-divided agricultural area
CN114596562A (en) Rice field weed identification method
CN112418112A (en) Orchard disease and pest monitoring and early warning method and system
CN116052141B (en) Crop growth period identification method, device, equipment and medium
CN116503741B (en) Intelligent prediction system for crop maturity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant