CN114782682A - Agricultural pest image intelligent identification method based on neural network - Google Patents
Agricultural pest image intelligent identification method based on neural network Download PDFInfo
- Publication number
- CN114782682A CN114782682A CN202210694051.3A CN202210694051A CN114782682A CN 114782682 A CN114782682 A CN 114782682A CN 202210694051 A CN202210694051 A CN 202210694051A CN 114782682 A CN114782682 A CN 114782682A
- Authority
- CN
- China
- Prior art keywords
- image
- scale
- region
- pest
- significant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of data processing, in particular to an agricultural pest image intelligent identification method based on a neural network, which comprises the steps of obtaining a pest visible light image containing a pest body, carrying out corresponding data processing based on the pest visible light image, and determining the significant value of each area image of the pest visible light image in a scale area image under different scales so as to determine each significant object area image in each area image; determining a comprehensive significant value of each significant object area image, thereby determining each pest area image in each significant object area image; and identifying pests in each pest region image so as to obtain corresponding pest types. The invention can accurately determine each pest region image by adopting the mode of carrying out data identification and data processing on the image, thereby avoiding the pest identification result from being influenced by background and other interferents and effectively improving the identification accuracy of the pest category.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to an agricultural pest image intelligent identification method based on a neural network.
Background
Agriculture, as a basic industry, dominates the national economy. With climate change, change of farming and cultivation modes and improvement of crop multiple cropping indexes, crop diseases and insect pests are in a multi-occurrence and frequent trend, and major crop diseases and insect pests occur frequently, so that the insect pest control work needs to be developed urgently. Therefore, accurately identifying the agricultural pest species in real time becomes an important premise for developing pest situation control work. However, the pests in the crop field have small volume, various varieties, diversified morphological characteristics, similar and intraspecific varieties and are easy to be confused.
Rice is one of the main food crops in China, and about half of the population takes rice as staple food. At present, the diagnosis method of crop pests in China mainly depends on manual identification, has large subjective factors and poor real-time performance, and is easy to cause misjudgment. Physical prevention and control methods such as insect pest situation measuring and reporting lamps are time-consuming, labor-consuming and poor in accuracy. With the increasingly mature image processing and mode recognition technology, the method for automatically recognizing pests by extracting image features of agricultural pests improves the accuracy and timeliness of agricultural pest recognition, but recognition results are greatly influenced by image feature extraction, and the method is narrow in applicable range and weak in generalization capability. Because the pests are similar to other interference objects, the detection precision of the existing target detection algorithm is not high. The image processing method based on the convolutional neural network greatly surpasses the traditional machine vision method in model precision and generalization capability, and shows stronger robustness in the pest image recognition problem under the natural environment of the field, but the result of the existing target detection algorithm is greatly influenced by the image background and the extracted image characteristics, so that the detection result is not accurate enough.
Disclosure of Invention
The invention aims to provide an agricultural pest image intelligent identification method based on a neural network, which is used for solving the problem that the existing pest identification result is not accurate enough.
In order to solve the technical problem, the invention provides an agricultural pest image intelligent identification method based on a neural network, which comprises the following steps:
acquiring a pest visible light image containing a pest body, and performing data preprocessing on the pest visible light image so as to acquire a preprocessed pest visible light image;
carrying out region segmentation on the preprocessed pest visible light image to obtain each region image, and further obtaining scale region images of each region image under different scales;
acquiring a scale region gray image and a scale region Lab image corresponding to all scale region images, and performing data processing on the scale region gray image and the scale region Lab image so as to obtain significant values corresponding to all scale region images;
screening each area image according to the corresponding significant values of all the scale area images so as to obtain each significant object area image;
performing data processing on the images of the various salient object areas so as to obtain a pest limb space position salient enhancement coefficient and a pest limb space distribution salient enhancement coefficient corresponding to the images of the various salient object areas;
calculating a comprehensive significant value corresponding to each significant object region image according to the significant enhancement coefficient of the spatial position of the pest limbs, the significant enhancement coefficient of the spatial distribution of the pest limbs and the significant value corresponding to each significant object region image;
screening each salient object area image according to the comprehensive salient value corresponding to each salient object area image, thereby obtaining each pest area image;
and inputting each pest region image into a pest type identification network respectively so as to obtain a corresponding pest type.
Further, the performing data processing on the gray scale image in the scale area and the Lab image in the scale area to obtain significant values corresponding to all the images in the scale area includes:
determining the corresponding regional contrast between any one scale region image and other scale region images under the same scale according to the scale region Lab image of each scale region image under the same scale;
determining the central position of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale, thereby determining the corresponding spatial Euclidean distance between any one scale region image and other scale region images under the same scale;
determining the mean square error of the gray level of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale;
determining corresponding difference values between any one scale region image and other scale region images under the same scale according to the corresponding region contrast and the spatial Euclidean distance between any one scale region image and other scale region images under the same scale and the gray mean square error of the gray scale region gray scale images of any one scale region image and other scale region images under the same scale;
and determining the corresponding significant values of all the scale region images according to the corresponding difference values between any one scale region image and other scale region images under the same scale.
Further, the determining the corresponding regional contrast between any one scale region image and each other scale region image under the same scale includes:
determining an a-channel color mean value and a b-channel color mean value of the scale region Lab image of each scale region image under the same scale according to the scale region Lab image of each scale region image under the same scale;
calculating the absolute value of the difference value of the average value of the colors of the a channel and the absolute value of the difference value of the average value of the colors of the b channel of the scale area Lab image of any scale area image and other scale area images under the same scale, thereby obtaining the color difference of any scale area image and other scale area images under the same scale;
and determining the corresponding region contrast between any one scale region image and each other scale region image under the same scale according to the color difference between any one scale region image and each other scale region image under the same scale.
Further, the calculation formula for obtaining the color difference between any one scale region image and each other scale region image under the same scale is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe difference in the color of (a) is,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe absolute value of the difference of the average values of the colors of the a-channel of the Lab image,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe absolute value of the difference of the mean values of the b-channel colors of the Lab image of the scale region,andare color adjustment parameters.
Further, the calculation formula for determining the corresponding region contrast between any one scale region image and each other scale region image under the same scale is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe contrast of the corresponding region is set to be,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe difference in the color of (a) is,in order to be a threshold value for the difference in color,andare contrast adjustment parameters.
Further, the calculation formula for determining the correspondence between the corresponding difference values between any one scale region image and each of the other scale region images at the same scale is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding difference value is set to be equal to the corresponding difference value,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe contrast of the corresponding area is compared with that of the area,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding spatial euclidean distance is calculated,for any ith scale area image under the same scaleThe mean square error of the gray levels of the gray level image in the scale region of (2),is the minimum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,is the maximum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,parameters are adjusted for the mean square error of the gray scale.
Further, the calculation formula for determining the saliency values corresponding to the images of all the scale regions is as follows:
wherein the content of the first and second substances,for any ith scale region image under the same scaleThe corresponding significance value is set to be,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding difference value is compared with the corresponding difference value,Kis the total number of other images in the area of each scale under the same scale.
Further, the data processing is performed on the images of the respective salient object regions to obtain a significant pest limb space position enhancement coefficient and a significant pest limb space distribution enhancement coefficient corresponding to the images of the respective salient object regions, and the method includes:
acquiring a gray image of each salient object region corresponding to each salient object region image, and performing edge detection on the gray image of each salient object region to obtain each edge pixel point of each gray image of each salient object region;
uniformly sampling each edge pixel point of each gray level image of the salient object regions, thereby obtaining each edge sampling pixel point of each gray level image of the salient object regions;
determining a central pixel point between any two edge sampling pixel points of the gray image of the same salient object region according to each edge sampling pixel point of the gray image of each salient object region, so as to obtain each central pixel point of the gray image of each salient object region;
determining each central pixel point in each central pixel point of each gray image of the significant object region in the corresponding significant object region according to the position of each central pixel point of each gray image of the significant object region, and further determining the proportion value of each central pixel point in each gray image of the significant object region in the corresponding significant object region, so as to obtain the pest body spatial position significant enhancement coefficient corresponding to each gray image of the significant object region;
according to the position of each central pixel point of each gray image of the significant object region, each central pixel point which is positioned outside the corresponding significant object region in each central pixel point of each gray image of the significant object region is determined, and ellipse equation fitting and linear equation fitting are respectively carried out on each central pixel point which is positioned outside the corresponding significant object region, so that ellipse fitting goodness and linear fitting goodness are obtained, and further pest body spatial distribution significant enhancement coefficients corresponding to each gray image of the significant object region are determined.
Further, the formula for further determining the pest limb space distribution significant enhancement coefficient corresponding to each significant object region image is as follows:
wherein the content of the first and second substances,a pest limb space distribution obvious enhancement coefficient corresponding to the image of the obvious object area,andrespectively the ellipse fitting goodness and the straight line fitting goodness corresponding to the image of the salient object region,andrespectively an ellipse goodness of fit amplification coefficient and a straight line goodness of fit amplification coefficient,anda first ellipse goodness-of-fit threshold and a second ellipse goodness-of-fit threshold,andrespectively a first straight line fitting goodness threshold value and a second straight line fitting goodness threshold value,is a fixed value of goodness of fit.
Further, screening the images of the regions of the respective salient objects to obtain images of the regions of the respective pests includes:
and judging whether the comprehensive significant value is greater than a set comprehensive significant value threshold value or not according to the comprehensive significant value corresponding to each significant object area image, and if so, judging the corresponding significant object area image as a pest area image.
The invention has the following beneficial effects: by acquiring a pest visible light image containing a pest body, performing data processing and data identification on the pest visible light image by using visible light identification equipment, and determining a significant value of a scale area image of each area image of the pest visible light image under different scales, thereby determining each significant object area image in each area image; determining a comprehensive significant value of each significant object area image, thereby determining each pest area image in each significant object area image; and identifying the pests in each pest region image to obtain the corresponding pest types. According to the pest type identification method and device, the corresponding data processing mode is adopted for the pest visible light images, each pest region image can be accurately determined, only the pest region images are identified when pest type identification is carried out, the pest identification result is prevented from being influenced by backgrounds and other interferents, and the pest type identification accuracy is effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of an agricultural pest image intelligent identification method based on a neural network according to the present invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, the different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment provides an agricultural pest image intelligent identification method based on a neural network. Then according to the characteristic that the color difference between the insect body and the background in the image is large, the remarkable objects in the image can be detected and separated, and then the remarkable objects are accurately identified according to the body texture, the limb space and the shape distribution condition of the pests, so as to determine the pest region image. And finally, identifying and classifying the pest region image by using a pest type identification network. Because the method segments the pest region image in the image before identifying the pests and only identifies the pest region image determined as the pests, the identification process and the result are not influenced by the background, and the pest identification precision is effectively improved.
Specifically, a flow chart corresponding to the intelligent agricultural pest image identification method based on the neural network is shown in fig. 1, and includes the following steps:
(1) the method comprises the steps of obtaining a pest visible light image containing a pest body, and carrying out data preprocessing on the pest visible light image so as to obtain a preprocessed pest visible light image.
The method comprises the steps of shooting rice attached with polypide by using a CMOS camera, and obtaining a pest visible light image containing the polypide, wherein the pest visible light image is an image in an RGB space. The pest visible light image is preprocessed to eliminate the influence caused by noise and part of external interference and enhance the accuracy of subsequent analysis. Since the image is subsequently converted from the RGB space to the Lab space, the present embodiment employs gaussian filtering to reduce noise of the pest visible light image, and performs convolution with the obtained pest visible light image by using a gaussian function to eliminate random noise. Of course, the implementer may also adopt other suitable denoising methods to preprocess the pest visible light image.
Since the color and shape of the polypide in the image are generally different from those of the background, the part with strong significance can be segmented according to the differences. However, dark patches, soil stains and the like often appear on the blades in the image after being damaged by worms, the shape and the color of the blades are similar to those of the worms, and the blades cannot be directly separated. Therefore, the segmented parts with strong significance are distinguished according to the surface texture and the limb characteristics of the worm body, and the image of the worm body part is selected. Firstly, in order to extract a significant part of a preprocessed pest visible light image, which has a large difference with a background, an object needs to be distinguished from the background according to differences of different positions in the image, and the method comprises the following specific implementation steps:
(2) and carrying out region segmentation on the preprocessed pest visible light image to obtain each region image, and further obtaining scale region images of each region image under different scales.
And (3) performing region segmentation on the preprocessed pest visible light image by adopting an SLIC (simple linear iterative clustering) super-pixel segmentation method to obtain each region image. In order to improve the accuracy of subsequently screening the image of the salient object region from the region image, interpolation is utilizedThe value algorithm, such as a nearest neighbor method, a bilinear interpolation method, and the like, reduces each region image into M different scales, thereby obtaining scale region images of each region image under different scales. Is provided withThe set of scales for the region image is, for example,=100%, meaning that the area image is the original size,meaning that the original region image is reduced to its original size. The specific dimension can be selected according to the requirement, and in the embodiment, the setting is carried out。
(3) And acquiring the gray scale images of the scale areas and the Lab images of the scale areas corresponding to the images of all the scale areas, and performing data processing on the gray scale images of the scale areas and the Lab images of the scale areas so as to obtain the significant values corresponding to the images of all the scale areas.
For the scale area images of each area image under any same scale, carrying out gray processing on the scale area images so as to obtain corresponding scale area gray images; and simultaneously, converting the scale area images into Lab space so as to obtain the corresponding scale area Lab images. Carrying out data identification and processing on a scale region gray image and a scale region Lab image corresponding to a scale region image of each region image under any same scale so as to obtain a significant value corresponding to the scale region image of each region image under any same scale, wherein the specific implementation process comprises the following steps:
(3.1) according to the scale region Lab image of each scale region image under the same scale, determining the corresponding region contrast between any one scale region image and other scale region images under the same scale, and the specific implementation steps comprise:
(3.1.1) determining the average value of the color of the channel a and the average value of the color of the channel b of the scale region Lab image of each scale region image under the same scale according to the scale region Lab image of each scale region image under the same scale.
For the scale region Lab images of all the scale region images under any scale, as the scale region Lab images are Lab space images, the international standard of color measurement of Lab space is adopted, and the color values comprise red to green color values and yellow to blue color values. Firstly, determining an average value of red to green color values of a Lab image in a scale area, namely determining an average value of a-channel colors of all pixel points of the Lab image in the scale area; meanwhile, determining the average value of the color values from yellow to blue of the Lab image in the scale area, namely determining the average value of the colors of the b channels of all the pixel points of the Lab image in the scale area.
(3.1.2) calculating the absolute value of the difference value of the average value of the colors of the a channel and the absolute value of the difference value of the average value of the colors of the b channel of the scale area Lab images of any scale area image and other scale area images under the same scale so as to obtain the color difference between any scale area image and other scale area images under the same scale, wherein the corresponding calculation formula is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe difference in the color of (a) is,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe absolute value of the difference of the a-channel color mean values of the Lab image of the scale region,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe absolute value of the difference of the mean values of the b-channel colors of the Lab image of the scale region,andare all color adjustment parameters for the purpose of limitingAnd highlighting pest color and weakening background color, the present embodiment is set,Has a value range of。
(3.1.3) according to the color difference between any one scale area image and other scale area images under the same scale, determining the corresponding area contrast between any one scale area image and other scale area images under the same scale, wherein the corresponding calculation formula is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe contrast of the corresponding area is compared with that of the area,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe difference in the color of (a) is,for the color difference threshold, the present embodiment sets,Andare contrast adjustment parameters, the purpose is to adjustValue range of (2), this embodiment setting,,Value range of。
According to any ith scale region image under the same scaleAnd other k-th scale region imagesCorresponding regional contrastAs can be seen from the calculation formula of (c),from any ith scale region image under the same scaleAnd other k-th scale region imagesThe difference between the average values of the color values is obtained, and the larger the difference between the color values of the two-scale area image is, the larger the difference is, theThe larger the area contrastThe larger.
And (3.2) determining the central position of the gray level image of the scale area of each scale area image under the same scale according to the gray level image of the scale area of each scale area image under the same scale, thereby determining the corresponding spatial Euclidean distance between any one scale area image and other scale area images under the same scale.
For the scale region gray level images of all scale region images under any same scale, calculating the position mean value of all pixel points in the scale region gray level images, thereby determining the central position of the scale region gray level images, and calculating the spatial Euclidean distance between any two central positions, thereby obtaining the corresponding spatial Euclidean distance between any one scale region image and other scale region images under the same scale, wherein the corresponding calculation formula is as follows:
wherein the content of the first and second substances,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding spatial euclidean distance is calculated,for any ith scale region imageThe abscissa of the central position of the gray scale image of the scale area, that is, the image of the arbitrary ith scale areaThe average value of the abscissa of all pixel points of the gray scale image in the scale area,for any ith scale region imageThe ordinate of the central position of the gray scale image of the scale area, that is, the arbitrary ith scale area imageThe average value of the vertical coordinates of all pixel points of the gray scale image in the scale area,for other k-th scale region imageThe abscissa of the central position of the gray scale image of the k-th scale area, that is, the other images of the k-th scale areaThe average value of the abscissa of all pixel points of the gray scale image in the scale area,for other k-th scale region imageThe ordinate of the central position of the gray scale image of the scale area, that is, the other k-th scale area imageThe average value of the vertical coordinates of all pixel points of the gray scale image in the scale area.
And (3.3) determining the mean square error of the gray scale image of the scale area of each scale area image under the same scale according to the gray scale image of the scale area image under the same scale.
And calculating the mean square error of the gray value of each pixel point in the gray image of the scale area for the gray image of the scale area of each scale area under the same scale, thereby obtaining the mean square error of the gray value of the gray image of the scale area.
(3.4) determining corresponding difference values between any scale region image and other scale region images under the same scale according to the corresponding region contrast and the spatial Euclidean distance between any scale region image and other scale region images under the same scale and the gray mean square error of the gray scale region gray scale image of any scale region image and other scale region images under the same scale, wherein the corresponding calculation formula is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding difference value is set to be equal to the corresponding difference value,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe contrast of the corresponding area is compared with that of the area,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe corresponding spatial euclidean distance is calculated,for any ith scale region image under the same scaleThe mean square error of the gray levels of the gray level image in the scale region of (2),is the minimum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,is the maximum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,adjusting parameters for the mean square error of the gray scale for the purpose of adjusting the mean square error of the gray scaleValue range of (1), this embodiment setting。
Any ith scale region image under the same scaleAnd other k-th scale region imagesCorresponding difference valueDue to the arbitrary ith scale region imageGray mean square error of gray scale image of scale regionOver a large range of values, are less easily controlled and estimated, and are therefore formulatedMean square error of gray scaleNormalization processing is carried out, and parameters are adjusted through mean square error of gray scaleIs adjusted so that its value range is。
According to the above-mentioned arbitrary ith scale region image under the same scaleAnd other k-th scale region imagesCorresponding difference valueThe difference value can be obtained by the calculation formulaFrom any ith scale region image under the same scaleCorresponding mean square error of gray scaleAny ith scale region image under the same scaleAnd other k-th scale region imagesCorresponding regional contrastAnd any ith scale region image under the same scaleAnd other k-th scale region imagesCorresponding spatial Euclidean distanceIt is determined that the user is to be,and withAndin a positive correlation withIn an anti-correlation relationship, i.e. whenAndthe larger the size of the tube is,the smaller the size of the product is,the larger the correspondence, the image of any ith scale regionThe more likely it is to be an area of pest damage.
(3.5) determining corresponding significant values of all the scale region images according to corresponding difference values between any one scale region image and other scale region images under the same scale, wherein the corresponding calculation formula is as follows:
wherein the content of the first and second substances,for any ith scale area image under the same scaleThe corresponding significance value is given to the corresponding one,is an arbitrary ith scale under the same scaleRegion imageAnd other k-th scale region imagesThe corresponding difference value is set to be equal to the corresponding difference value,Kis the total number of the images of other scale regions under the same scale.
According to the image of the arbitrary ith scale region under the same scaleCorresponding significance valueThe calculation formula shows that when the image is in any ith scale area under the same scaleWhen the difference value corresponding to each other scale area image is larger, the corresponding arbitrary ith scale area imageCorresponding significance valueThe larger the size, the more likely the i-th scale region image is to be the pest region image.
(4) And screening each area image according to the corresponding significant values of all the scale area images, thereby obtaining each significant object area image.
In this embodiment, in order to enhance the contrast between the salient region and the non-salient region, the calculation is expanded to multiple scales, that is, the scale region images of the region images under different scales are obtained, and then the salient values corresponding to the scale region images of the region images under different scales are determined. When the scale region images of a region image under different scales have larger significant values, the region image is regarded as a significant region which needs to be searched. Specifically, for any one region image, whether the significant values of the scale region images of the region image under different scales are all greater than a significant value threshold is judged, and if all the significant values are greater than the significant value threshold, the region image is determined as a significant object region image. In this embodiment, the significance threshold is set to 0.75, but of course, the significance threshold can be set selectively according to the actual effect. In this way, the salient object images in the respective region images are extracted based on the salient value threshold, and the respective salient object region images are obtained.
Secondly, after the object is distinguished from the background according to the difference of different positions in the image in the steps, the parts with strong significance are distinguished according to the surface texture and the limb characteristics of the polypide, and the image of the polypide part is selected, wherein the specific implementation steps are as follows:
(5) the method comprises the following steps of carrying out data processing on each image of the salient object region so as to obtain a pest limb space position salient enhancement coefficient and a pest limb space distribution salient enhancement coefficient corresponding to each image of the salient object region, wherein the specific steps comprise:
and (5.1) acquiring a gray image of the salient object region corresponding to each gray image of the salient object region, and performing edge detection on the gray image of the salient object region, thereby obtaining each edge pixel point of each gray image of the salient object region.
After obtaining each salient object area image, performing graying processing on each salient object area image respectively to obtain a salient object area grayscale image corresponding to each salient object area image. And then, performing edge detection on the gray level image of each significant object region by using a canny edge detection algorithm to obtain each edge binary image, wherein in each edge binary image, a pixel point with a pixel value of 1 is an edge pixel point of the significant object region, and a pixel point with a pixel value of 0 is an internal pixel point or a background pixel point of the significant object region. Because the edge pixel points detected by the edge detection may have a discontinuous phenomenon, the closed operation processing is performed on each edge binary image respectively to ensure the continuity of the edge pixel points, so that each final edge binary image is obtained. And determining each edge pixel point corresponding to each gray image of the salient object region according to the position of each pixel point with the pixel value of 1 in each final edge binary image.
And (5.2) uniformly sampling each edge pixel point of each gray level image of the salient object region, thereby obtaining each edge sampling pixel point of each gray level image of the salient object region.
For any final edge binary image, uniformly sampling the pixel point with the pixel value of 1 in the edge binary image, so as to obtain each sampling point, and in this embodiment, c sampling points can be obtained, where c = 200. According to the positions of the sampling points, each edge sampling pixel point corresponding to each gray level image of the salient object region can be determined.
And (5.3) determining a central pixel point between any two edge sampling pixel points of the gray image of the same salient object region according to each edge sampling pixel point of the gray image of each salient object region, thereby obtaining each central pixel point of the gray image of each salient object region.
For each sampling point of any final edge binary image, determining the position of the central point of a line segment formed by any two sampling points, wherein each central point corresponds to a pixel point, and as the number of the sampling points c =200 is set in the embodiment, the position of the central point of the line segment can be obtainedA center point according toAnd the position of each central point can determine each central pixel point in the corresponding gray image of the area of the salient object.
And (5.4) determining each central pixel point in each central pixel point of each gray image of the salient object region, which is located in the corresponding salient object region, according to the position of each central pixel point of each gray image of the salient object region, and further determining the proportion value of each central pixel point in each gray image of the salient object region, which is located in the corresponding salient object region, so as to obtain the spatial position significant enhancement coefficient of the pest body corresponding to each image of the salient object region.
And for any final edge binary image, assigning the pixel values of the internal pixel points of the significant object region to be 1, and keeping the pixel values of the background pixel points to be 0, at the moment, in any final edge binary image, the pixel values of the pixel points inside and at the edge of the significant object region are 1, and the pixel value of the background pixel point outside the significant object region is 0. On the basis of the above, according toThe position of a central point, count thisAnd the proportion value of the central point with the pixel value of 0 in each central point is the proportion value of each central pixel point in the corresponding significant object region of the gray level image of the corresponding significant object region. According to the proportion value of each central pixel point of each gray level image of each salient object region in the corresponding salient object region, the salient enhancement coefficient of the pest limb space position corresponding to each salient object region image can be determined, and the corresponding calculation formula is as follows:
wherein the content of the first and second substances,the space position of the pest limbs corresponding to the ith salient object area image is obviously enhanced by a coefficient,the proportion value of each central pixel point of the gray image of the ith salient object region in the corresponding salient object region is calculated,the ratio value of each central pixel point of the j-th significant object area gray level image in the corresponding significant object area is shown, n is the total number of the significant object area images,to adjust the coefficient by duty ratio for the purpose of adjustmentRange of values, the present embodiment sets。
According to the pest limb space position significant enhancement coefficient corresponding to the ith significant object region imageThe calculation formula shows that when the gray image of the ith salient object area is positioned in each central pixel point in the corresponding salient object area, the proportion value of each central pixel point isThe bigger the pest is, the obvious enhancement coefficient of the spatial position of the pest limbsThe larger the corresponding salient object region image is, the more likely the corresponding salient object region image is to be an image of a pest.
For pests, the biggest difference from other interfering objects is the unique limb characteristics of the pests, the midpoint obtained after connecting any two points selected from the image edge is positioned outside the pest body with higher probability, other interfering objects are usually arc-shaped or nearly arc-shaped edges, and the midpoint obtained after connecting any two points selected from the image edge is positioned in the interfering object with higher probability. Therefore, by using the characteristic, the proportion value of each central pixel point of the gray image of the salient object region, which is located in the corresponding salient object region, is determined, and the proportion value is used as an important basis for judging whether the salient object is a worm or not.
(5.5) according to the position of each central pixel point of each gray image of each significant object area, determining each central pixel point, which is positioned outside the corresponding significant object area, in each central pixel point of each gray image of each significant object area, and respectively performing ellipse equation fitting and linear equation fitting on each central pixel point positioned outside the corresponding significant object area, so as to obtain ellipse fitting goodness and linear fitting goodness, and further determining the pest body space distribution significant enhancement coefficient corresponding to each significant object area image.
And selecting each central pixel point of the gray-scale image of each remarkable object area, wherein the central pixel points are positioned outside the corresponding remarkable object area. According to the unique limb characteristics of the polypide, the polypide is generally symmetrically distributed in the image, and each central pixel point which is selected at the moment and is positioned outside the corresponding remarkable object area is distributed on two sides of the image in an arc shape. In addition, according to different postures of the insect body, when the insect body is distributed on one side in the picture, each central pixel point which is selected and positioned outside the corresponding salient object area is distributed on one side of the image in an arc shape. While for the rest of the interfering objects the morphology has no such feature. Based on the unique body characteristics of the worm body, the position conditions of all central pixel points which are positioned outside the corresponding remarkable object area in the selected central pixel points can be judged, and the selected central pixel points are analyzed so as to distinguish the worm body from other interferents.
Specifically, for each central pixel point of each gray scale image of the significant object region, which is located outside the corresponding significant object region, ellipse equation fitting and linear equation fitting are respectively performed on each central pixel point located outside the corresponding significant object region, so that ellipse fitting goodness and linear fitting goodness are obtained, and further, a pest body spatial distribution significant enhancement coefficient corresponding to the gray scale image of the significant object region is obtained, wherein the corresponding calculation formula is as follows:
wherein, the first and the second end of the pipe are connected with each other,a pest limb space distribution significant enhancement coefficient corresponding to the ith significant object region image,andrespectively corresponding ellipse goodness of fit and straight line goodness of fit of the image of the salient object region,andrespectively, an ellipse goodness of fit amplification factor and a straight line goodness of fit amplification factor, which function is to amplify the ellipse goodness of fit and the straight line goodness of fit, the present embodiment is arranged,Anda first ellipse goodness-of-fit threshold and a second ellipse goodness-of-fit threshold,andthe first and second linear goodness of fit thresholds, respectively, are set by this embodiment,,For a fixed value of goodness of fit, this embodiment sets。
The pest limb space distribution obvious enhancement coefficient corresponding to the image of the obvious object areaIn the calculation formula, when the salient objects in the image of the salient object region are the polyps, if the polyps are symmetrically distributed in the image, the fitting effect of fitting the ellipse equation to each central pixel point outside the salient object region is better, so that the obtained ellipse fitting goodness is obtainedWill be greater than a certain value, i.e., greater than the second threshold value of goodness-of-fit of ellipseHowever, considering that the number of central pixel points located outside the significant object region is relatively large, the fitting effect of fitting the elliptic equation has certain limitation, and the obtained elliptic fitting goodnessNor too high, i.e., less than the first threshold of goodness-of-fit of ellipse(ii) a If the worm body is distributed on one side in the picture, linear equation fitting is carried out on each central pixel point outside the obvious object areaThe fitting effect is better, and the obtained straight line fitting goodnessWill be greater than a certain value, i.e. greater than the threshold value of goodness-of-fit of the second straight lineHowever, considering that the number of central pixel points located outside the significant object region is relatively large, the fitting effect of fitting the linear equation has certain limitations, and the obtained linear fitting goodness isNor too high, i.e. less than the first line goodness-of-fit thresholdAt this time, the goodness of fit to the ellipseGoodness of fit to a straight lineAmplifying to obtain a pest limb space distribution significant enhancement coefficient larger than 1So as to amplify the salient value of the salient object region image subsequently. When the salient objects in the image of the salient object area are other interferents, carrying out ellipse equation fitting and linear equation fitting on each central pixel point positioned outside the salient object area to obtain ellipse fitting goodnessGoodness of fit to a straight lineOther conditions can be met, and the fitting is directly optimizedFixed value of degree=1 as pest limb space distribution significant enhancement coefficientSo as to keep the saliency value of the salient object region image unchanged subsequently.
(6) And calculating a comprehensive significant value corresponding to each significant object area image according to the significant enhancement coefficient of the spatial position of the pest limbs, the significant enhancement coefficient of the spatial distribution of the pest limbs and the significant value corresponding to each significant object area image.
For each significant object region image, setting a significant pest limb enhancement coefficient according to the significant pest limb space position enhancement coefficient and the significant pest limb space distribution enhancement coefficient corresponding to the significant object region image, wherein the corresponding calculation formula is as follows:
wherein the content of the first and second substances,the pest limb significant enhancement coefficient corresponding to the ith significant object region image,the spatial position of the pest limb corresponding to the ith salient object region image is obviously enhanced by a coefficient,and significantly enhancing the spatial distribution of the pest limbs corresponding to the ith significant object area image.
The pest limb significant enhancement coefficient corresponding to the ith significant object area imageIn the calculation formula (2), take into accountConsidering the spatial position of the limbs of the pests to obviously enhance the coefficientAnd the spatial distribution of the limbs of the pests is obviously enhancedAll can generate amplification effect on pest identification, thereforeAndthe larger the corresponding pest limb is, the significant enhancement factorThe larger the more likely the salient object within the ith salient object region image is to be a pest.
For each significant object area image, determining a corresponding comprehensive significant value according to the corresponding pest limb significant enhancement coefficient and the significant value, wherein the corresponding calculation formula is as follows:
wherein, the first and the second end of the pipe are connected with each other,for the integrated saliency value corresponding to the ith salient object region image,the pest limb significant enhancement coefficient corresponding to the ith significant object region image,the corresponding significant value of the ith significant object area image.
Healds corresponding to the i-th salient object area imageSum of significant valueIn the calculation formula (2), the more the salient object in the salient object area image conforms to the characteristics of the pest body, the more the pest body salient enhancement coefficientAnd a significant valueThe larger the corresponding integrated saliency valueThe larger the likelihood that the salient object in the salient object region image is an insect is.
(7) And screening the images of the salient object areas according to the comprehensive salient values corresponding to the images of the salient object areas so as to obtain the images of the pest areas.
The comprehensive significance threshold is set in advance, and the comprehensive significance threshold is set to be 2 in the embodiment. And judging whether the comprehensive significant value is greater than a set comprehensive significant value threshold value or not according to the comprehensive significant value corresponding to each significant object area image, and if so, judging the corresponding significant object area image as a pest area image. In this way, screening of images of each salient object region can be achieved, and therefore images of each pest region can be obtained.
(8) And inputting each pest region image into a pest type identification network respectively so as to obtain a corresponding pest type.
And inputting each pest region image into a pre-trained pest type recognition network formed by a neural network, and recognizing corresponding pest types such as rice planthoppers, locusts, tryporyza incertulas or leafhoppers by the pest type recognition network.
In the present embodiment, the neural network constituting the pest category identification network employs a convolutional neural network such as ResNet34 or SENet, the loss function of the neural network employs a cross entropy loss function, and the optimization algorithm employs an Adaptive motion estimation algorithm (Adam). The specific implementation process of adopting the neural network to form the pest category identification network and correspondingly training the pest category identification network belongs to the prior art, and is not described herein again.
According to the method, the characteristics that the difference between pests and the background is large are utilized, the significant value of the scale area image of each area image under different scales is determined, the area image containing the significant objects is extracted, then the comprehensive significant value is constructed according to the body space and the shape distribution condition of the pests, the comprehensive significant value corresponding to the pest image is obviously larger than other interferents, and the pest area image is further segmented. When the pest category is identified, only the pest region image is identified, so that the detection process and the result are prevented from being influenced by the background and other interferents, and the identification accuracy of the pest category is effectively improved.
It should be noted that: the above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (10)
1. An agricultural pest image intelligent identification method based on a neural network is characterized by comprising the following steps:
acquiring a pest visible light image containing a pest body, and performing data preprocessing on the pest visible light image so as to acquire a preprocessed pest visible light image;
carrying out region segmentation on the preprocessed pest visible light image to obtain each region image, and further obtaining scale region images of each region image under different scales;
acquiring a scale region gray image and a scale region Lab image corresponding to all scale region images, and performing data processing on the scale region gray image and the scale region Lab image so as to obtain significant values corresponding to all scale region images;
screening each area image according to the corresponding significant values of all the scale area images, thereby obtaining each significant object area image;
performing data processing on the images of the salient object areas to obtain a pest limb space position salient enhancement coefficient and a pest limb space distribution salient enhancement coefficient corresponding to the images of the salient object areas;
calculating a comprehensive significant value corresponding to each significant object region image according to the significant enhancement coefficient of the spatial position of the pest limbs, the significant enhancement coefficient of the spatial distribution of the pest limbs and the significant value corresponding to each significant object region image;
screening each salient object region image according to the comprehensive salient value corresponding to each salient object region image so as to obtain each pest region image;
and inputting each pest region image into a pest type identification network respectively so as to obtain a corresponding pest type.
2. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 1, wherein the step of performing data processing on the gray scale image of the scale area and the Lab image of the scale area so as to obtain the corresponding significant values of all the images of the scale area comprises the steps of:
determining the corresponding regional contrast between any one scale region image and other scale region images under the same scale according to the scale region Lab image of each scale region image under the same scale;
determining the central position of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale, thereby determining the corresponding space Euclidean distance between any one scale region image and other scale region images under the same scale;
determining the mean square error of the gray level of the scale region gray level image of each scale region image under the same scale according to the scale region gray level image of each scale region image under the same scale;
determining corresponding difference values between any scale region image and other scale region images at the same scale according to the corresponding region contrast and the corresponding spatial Euclidean distance between any scale region image and other scale region images at the same scale and the mean square error of the gray scale of the scale region gray scale image of any scale region image and other scale region images at the same scale;
and determining the corresponding significant values of all the scale region images according to the corresponding difference values between any one scale region image and other scale region images under the same scale.
3. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 2, wherein the determining of the corresponding regional contrast between the regional image of any scale and the regional images of other scales under the same scale comprises:
determining an a-channel color mean value and a b-channel color mean value of the scale region Lab image of each scale region image under the same scale according to the scale region Lab image of each scale region image under the same scale;
calculating the absolute value of the difference of the color mean values of the a channels and the b channels of the Lab images of the scale areas of any one scale area image and other scale area images under the same scale, thereby obtaining the color difference of any one scale area image and other scale area images under the same scale;
and determining the corresponding region contrast between any one scale region image and each other scale region image under the same scale according to the color difference between any one scale region image and each other scale region image under the same scale.
4. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 3, wherein the calculation formula for obtaining the color difference between the image of any one scale region and the images of other scale regions under the same scale is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe difference in the color of (a) is,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe absolute value of the difference of the a-channel color mean values of the Lab image of the scale region,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe absolute value of the difference of the b-channel color mean values of the Lab image of the scale region,andare color adjustment parameters.
5. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 3, wherein the calculation formula for determining the corresponding regional contrast ratio between any one scale regional image and each other scale regional image under the same scale is as follows:
wherein the content of the first and second substances,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe contrast of the corresponding region is set to be,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe difference in the color of (a) is,in order to be the threshold value for the color difference,andare contrast adjustment parameters.
6. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 2, wherein the calculation formula for determining the correspondence of the corresponding difference values between any one scale region image and each other scale region image under the same scale is as follows:
wherein, the first and the second end of the pipe are connected with each other,for any ith scale region image under the same scaleAnd other k-th scale region imagesThe corresponding difference value is set to be equal to the corresponding difference value,is an arbitrary ith scale under the same scaleRegion imageAnd other k-th scale region imagesThe contrast of the corresponding region is set to be,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding spatial euclidean distance,for any ith scale region image under the same scaleThe mean square error of the gray levels of the gray level image in the scale region of (2),is the minimum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,is the maximum value of the mean square error of the gray scale image of the scale region of each scale region image under the same scale,parameters are adjusted for the mean square error of the gray scale.
7. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 6, wherein the calculation formula for determining the corresponding significant values of the images of all the scale areas is as follows:
wherein the content of the first and second substances,for any ith scale region image under the same scaleThe corresponding significance value is given to the corresponding one,for any ith scale area image under the same scaleAnd other k-th scale region imagesThe corresponding difference value is compared with the corresponding difference value,Kis the total number of other images in the area of each scale under the same scale.
8. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 1, wherein the data processing is performed on each significant object area image so as to obtain the significant pest limb space position enhancement coefficient and the significant pest limb space distribution enhancement coefficient corresponding to each significant object area image, and the method comprises the following steps:
acquiring a gray image of each salient object region corresponding to each salient object region image, and performing edge detection on the gray image of each salient object region to obtain each edge pixel point of each gray image of each salient object region;
uniformly sampling each edge pixel point of each gray image of the area of the salient object, thereby obtaining each edge sampling pixel point of each gray image of the area of the salient object;
determining a central pixel point between any two edge sampling pixel points of the gray image of the same salient object region according to each edge sampling pixel point of the gray image of each salient object region, so as to obtain each central pixel point of the gray image of each salient object region;
determining each central pixel point in each central pixel point of each gray image of the significant object region in the corresponding significant object region according to the position of each central pixel point of each gray image of the significant object region, and further determining the proportion value of each central pixel point in each gray image of the significant object region in the corresponding significant object region, so as to obtain the pest body spatial position significant enhancement coefficient corresponding to each gray image of the significant object region;
according to the position of each central pixel point of each gray image of the significant object region, each central pixel point which is positioned outside the corresponding significant object region in each central pixel point of each gray image of the significant object region is determined, and ellipse equation fitting and linear equation fitting are respectively carried out on each central pixel point which is positioned outside the corresponding significant object region, so that ellipse fitting goodness and linear fitting goodness are obtained, and further pest body spatial distribution significant enhancement coefficients corresponding to each gray image of the significant object region are determined.
9. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 8, wherein the calculation formula for further determining the pest limb spatial distribution significant enhancement coefficient corresponding to each significant object area image is as follows:
wherein the content of the first and second substances,the pest limb space distribution corresponding to the image of the salient object area is obviously enhanced by the coefficient,andrespectively corresponding ellipse goodness of fit and straight line goodness of fit of the image of the salient object region,andrespectively an ellipse goodness of fit amplification coefficient and a straight line goodness of fit amplification coefficient,anda first ellipse goodness-of-fit threshold and a second ellipse goodness-of-fit threshold,andrespectively a first linear goodness-of-fit threshold and a second linear goodness-of-fit threshold,is a fixed value of goodness of fit.
10. The intelligent agricultural pest image identification method based on the neural network as claimed in claim 1, wherein the screening of each salient object region image to obtain each pest region image comprises:
and judging whether the comprehensive significant value is greater than a set comprehensive significant value threshold value or not according to the comprehensive significant value corresponding to each significant object area image, and if so, judging the corresponding significant object area image as a pest area image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210694051.3A CN114782682B (en) | 2022-06-20 | 2022-06-20 | Agricultural pest image intelligent identification method based on neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210694051.3A CN114782682B (en) | 2022-06-20 | 2022-06-20 | Agricultural pest image intelligent identification method based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114782682A true CN114782682A (en) | 2022-07-22 |
CN114782682B CN114782682B (en) | 2022-09-06 |
Family
ID=82420691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210694051.3A Active CN114782682B (en) | 2022-06-20 | 2022-06-20 | Agricultural pest image intelligent identification method based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782682B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140205206A1 (en) * | 2013-01-24 | 2014-07-24 | Mayur Datar | Systems and methods for resizing an image |
CN104598908A (en) * | 2014-09-26 | 2015-05-06 | 浙江理工大学 | Method for recognizing diseases of crop leaves |
US20150339589A1 (en) * | 2014-05-21 | 2015-11-26 | Brain Corporation | Apparatus and methods for training robots utilizing gaze-based saliency maps |
CN108549891A (en) * | 2018-03-23 | 2018-09-18 | 河海大学 | Multi-scale diffusion well-marked target detection method based on background Yu target priori |
CN109872301A (en) * | 2018-12-26 | 2019-06-11 | 浙江清华长三角研究院 | A kind of color image preprocess method counted for rice pest identification |
CN110428374A (en) * | 2019-07-22 | 2019-11-08 | 北京农业信息技术研究中心 | A kind of small size pest automatic testing method and system |
-
2022
- 2022-06-20 CN CN202210694051.3A patent/CN114782682B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140205206A1 (en) * | 2013-01-24 | 2014-07-24 | Mayur Datar | Systems and methods for resizing an image |
US20150339589A1 (en) * | 2014-05-21 | 2015-11-26 | Brain Corporation | Apparatus and methods for training robots utilizing gaze-based saliency maps |
CN104598908A (en) * | 2014-09-26 | 2015-05-06 | 浙江理工大学 | Method for recognizing diseases of crop leaves |
CN108549891A (en) * | 2018-03-23 | 2018-09-18 | 河海大学 | Multi-scale diffusion well-marked target detection method based on background Yu target priori |
CN109872301A (en) * | 2018-12-26 | 2019-06-11 | 浙江清华长三角研究院 | A kind of color image preprocess method counted for rice pest identification |
CN110428374A (en) * | 2019-07-22 | 2019-11-08 | 北京农业信息技术研究中心 | A kind of small size pest automatic testing method and system |
Non-Patent Citations (2)
Title |
---|
TENGFEI SONG: "Multi-scale self-searching saliency detection combined with rectangular diffusion", 《2017 12TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA)》 * |
李文凤: "基于图像显著区域检测的SIFT特征匹配方法研究", 《微型机与应用》 * |
Also Published As
Publication number | Publication date |
---|---|
CN114782682B (en) | 2022-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875747A (en) | A kind of wheat unsound grain recognition methods based on machine vision | |
CN112257702A (en) | Crop disease identification method based on incremental learning | |
AU2020103260A4 (en) | Rice blast grading system and method | |
Liao et al. | Automatic segmentation of crop/background based on luminance partition correction and adaptive threshold | |
CN115578660B (en) | Land block segmentation method based on remote sensing image | |
CN111882555B (en) | Deep learning-based netting detection method, device, equipment and storage medium | |
CN111738931B (en) | Shadow removal algorithm for aerial image of photovoltaic array unmanned aerial vehicle | |
CN112258545A (en) | Tobacco leaf image online background processing system and online background processing method | |
CN112070717A (en) | Power transmission line icing thickness detection method based on image processing | |
CN109903275B (en) | Fermented grain mildewing area detection method based on self-adaptive multi-scale filtering and histogram comparison | |
CN111667509B (en) | Automatic tracking method and system for moving target under condition that target and background colors are similar | |
CN111612797B (en) | Rice image information processing system | |
CN114782682B (en) | Agricultural pest image intelligent identification method based on neural network | |
CN115601690B (en) | Edible fungus environment detection method based on intelligent agriculture | |
CN110223253B (en) | Defogging method based on image enhancement | |
CN111611940A (en) | Rapid video face recognition method based on big data processing | |
CN114820707A (en) | Calculation method for camera target automatic tracking | |
Di et al. | The research on the feature extraction of sunflower leaf rust characteristics based on color and texture feature | |
CN110348530B (en) | Method for identifying lipstick number | |
Dai et al. | Research of segmentation method on image of Lingwu Long Jujubes based on a new extraction model of Hue | |
Biswas et al. | A novel inspection of paddy leaf disease classification using advance image processing techniques | |
CN116523910B (en) | Intelligent walnut maturity detection method based on image data | |
CN115601358B (en) | Tongue picture image segmentation method under natural light environment | |
CN116258968B (en) | Method and system for managing fruit diseases and insects | |
CN112116580B (en) | Detection method, system and equipment for camera support |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |