CN115205194A - Method, system and device for detecting coverage rate of sticky trap based on image processing - Google Patents


Info

Publication number
CN115205194A
CN115205194A (application number CN202210413289.4A)
Authority
CN
China
Prior art keywords: image, insect, background, segmentation, enhanced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210413289.4A
Other languages
Chinese (zh)
Inventor
朱旭华
陈渝阳
袁娜朵
王闯
赵飞
杨红利
梁周瑞
李政
龚武龙
Current Assignee
Zhejiang Top Cloud Agri Technology Co ltd
Original Assignee
Zhejiang Top Cloud Agri Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Top Cloud Agri Technology Co ltd filed Critical Zhejiang Top Cloud Agri Technology Co ltd
Priority: CN202210413289.4A
Publication: CN115205194A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T5/77
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Abstract

The invention discloses a method, a system and a device for detecting the coverage rate of a sticky trap based on image processing. The method comprises the following steps: performing image enhancement on an original insect image to obtain an enhanced image; identifying the background color of the enhanced image to obtain background color category information; selecting a target segmentation algorithm according to the enhanced image and the background color category information to detect insect regions and obtain a target segmentation binary image; performing feature selection on the target segmentation binary image to obtain a feature selection binary image; and calculating the insect coverage rate from the feature selection binary image. The method addresses the poor segmentation results that arise under varying illumination and complex backgrounds during image segmentation: it restores images captured under different illumination to counter the target distortion caused by illumination changes, and applies different segmentation algorithms to backgrounds of different complexity, improving both the segmentation efficiency of sticky-trap images and the accuracy of the coverage calculation.

Description

Method, system and device for detecting coverage rate of sticky trap based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a system and a device for detecting the coverage rate of a sticky trap based on image processing.
Background
Published literature reveals a gap in the art regarding insect coverage detection based on computer vision. Based on an investigation and analysis of practical application scenarios, the techniques and algorithms related to target coverage detection are summarized as follows:
Coverage detection can be regarded as an image target segmentation problem: the target regions are first segmented from the image and the coverage is then calculated. Conventional image segmentation methods are mainly classified into threshold-based, region-based, edge-based, and specific-theory-based methods. Target segmentation is strongly affected by image quality, so the image must first be preprocessed to improve its quality and make it better suited to human observation or machine processing. Among threshold-selection algorithms, the maximum-entropy method chooses the threshold from the standpoint of information theory and segments objects of different scales well, but its drawback is that part of the information is lost when the image has a complex background. For images with complex backgrounds, the spectral-residual saliency detection method considers the influence of the frequency domain on the image: since the low-frequency band mostly corresponds to target regions while 'noise' is mostly concentrated in the high-frequency band, sampling the target's response signal provides a basis for image segmentation.
The invention mainly solves the problems of image segmentation of insect images with different light intensities and different backgrounds and insect coverage rate calculation.
Disclosure of Invention
The invention provides a method, a system and a device for detecting insect coverage rate based on image processing, aiming at the defects of the existing image target segmentation technology.
In order to solve the technical problems, the invention is solved by the following technical scheme:
a method for detecting coverage rate of a sticky trap based on image processing comprises the following steps:
based on an original insect image, carrying out image enhancement processing on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
identifying a background based on the enhanced image to obtain background color category information;
selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
performing feature selection on the target segmentation binary image to obtain a feature selection binary image;
and calculating the insect coverage rate from the feature selection binary image.
As an implementation manner, performing image enhancement processing on the original insect image to obtain an enhanced image includes the following steps:
carrying out fuzzy processing on an original insect image according to a specified scale to obtain a brightness image;
calculating the value of Log[R(x, y)] based on the original insect image and the luminance image, using the formula Log[R(x, y)] = Log[I(x, y)] - Log[L(x, y)],
Wherein R (x, y) represents a reflection image, I (x, y) represents an original insect image, and L (x, y) represents a luminance image;
the Log [ R (x, y) ] is quantized to pixel values of 0 to 255, resulting in an enhanced image.
As an implementation manner, the enhanced image is subjected to background color identification, so as to obtain background color category information, where the background color category information is divided into a white background and a non-white background.
As an implementation manner, the performing background color identification on the enhanced image to obtain background color category information includes the following steps:
carrying out color clustering on the enhanced image by adopting a K-Means algorithm to obtain a clustering region identification map;
analyzing the clustering region identification graph and the enhanced image in a combined manner, and counting an RGB distribution histogram of each clustering region corresponding to the enhanced image;
counting, from the RGB distribution histogram, the RGB channel gray value with the highest proportion in each region and its proportion within that region, then selecting the region containing the overall maximum proportion to obtain its RGB channel gray value and count. Assuming the pixel counts of the clustering regions are S = {S1, S2, …, Sk}, and the gray value with the largest count in each clustering region together with that count is N = {(RGB1, num1), (RGB2, num2), …, (RGBk, numk)}, each count in N is compared against the corresponding region size in S, and the term with the highest proportion, (RGBx, numx), is taken;
based on the RGB channel gray value and the number of the area where the proportion maximum value is located, judging the RGB channel mean value as a white background or a non-white background, wherein the formula is as follows:
rg = abs(r - g)
rb = abs(r - b)
gb = abs(g - b)
when the results simultaneously satisfy (R > 120), (G > 120), (B > 120), (rg <= 20), (gb <= 20) and (rb <= 20), the background is judged white; otherwise it is non-white, wherein R is the R-channel mean in the clustering region, G is the G-channel mean in the clustering region, and B is the B-channel mean in the clustering region.
As an implementation manner, when the background is white, selecting the corresponding target segmentation algorithm according to the enhanced image and the background color category information to detect insect regions and obtain a target segmentation binary image includes the following steps:
graying the enhanced image to obtain a grayscale image, wherein the size of the grayscale image is W × H, and the grayscale level is L, W represents the width of the grayscale image, and H represents the height of the grayscale image;
based on the gray image, forming each pixel point in the image and a neighborhood in the range of n × n around the pixel point into a region, and calculating the neighborhood average gray value of the region, wherein the formula is as follows:
j(x, y) = (1/n²) Σ_{a=-(n-1)/2}^{(n-1)/2} Σ_{b=-(n-1)/2}^{(n-1)/2} f(x+a, y+b)
wherein (x, y) is the current pixel coordinate, n is the neighborhood side length, f(x, y) is the gray value at (x, y), i denotes the point gray of a pixel in the gray image, and j denotes the regional gray mean of the pixel's neighborhood;
establishing a mapping image F (x, y) according to each pixel point in the gray image and the neighborhood average gray value;
setting the probability that the point gray and regional gray mean pair equals (i, j) to P(i, j), and obtaining from P(i, j) the two-dimensional histogram of point gray versus regional gray mean of the gray image, where P(i, j) is expressed as:
P(i, j) = c(i, j) / (W × H), where c(i, j) is the number of pixels whose point gray is i and whose regional gray mean is j
wherein i represents the point gray scale of a pixel point in the gray scale image, j represents the regional gray scale mean value of the neighborhood of the pixel point, W represents the width of the gray scale image, H represents the height of the gray scale image, and W x H represents the size of the gray scale image;
dividing the two-dimensional histogram into different regions, comprising a target region and a background region, by means of a segmentation threshold vector. Under the homogeneity assumption, the target and background regions together account for the largest proportion of the histogram: within each region the pixel gray levels are uniform and each pixel's gray value is close to its neighborhood mean, so the target and background points concentrate near the diagonal and form two peaks corresponding to the target and the background. In the boundary neighborhood between target and background, by contrast, the difference between a pixel's gray value and its neighborhood mean is large. The two-dimensional histogram therefore exhibits two peaks and one valley, with its extreme values near the diagonal. Assuming the target and background regions follow different probability distributions, and normalizing each with its posterior probability so that the target entropy and the background entropy are additive, the discrete two-dimensional entropy is calculated as:
H = -Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} P(i, j) · log P(i, j)
wherein i represents the point gray of a pixel in the gray image, j represents the regional gray mean of its neighborhood, and P(i, j) is the probability that the point gray and regional gray mean pair equals (i, j);
the discriminant function of the two-dimensional entropy of the target and the background is defined as:
φ(s, t) = log[P_A (1 - P_A)] + H_A / P_A + (H_L - H_A) / (1 - P_A)
wherein s is the segmentation threshold of the current iteration, t is the image gray level, and P_A is the density probability,
P_A = Σ_{i=0}^{s} Σ_{j=0}^{t} P(i, j)
H_A = -Σ_{i=0}^{s} Σ_{j=0}^{t} P(i, j) · log P(i, j)
and H_L is the information entropy when (s, t) takes the maximum gray level, calculated in the same way as H_A.
And according to the two-dimensional entropy set of the target area and the background area, selecting the maximum entropy in the two-dimensional entropy set as a segmentation threshold value to segment the gray level image to obtain a target segmentation binary image.
As an implementation manner, the selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information, and obtaining a target segmentation binary image when the insect region is a non-white background, includes the following steps:
acquiring a brightness channel of the enhanced image according to the enhanced image;
calculating a spectral residual based on the luminance channel, wherein a calculation formula of the spectral residual is as follows:
R(f) = L(f) - H ∗ L(f), with L(f) = log(A(f)) and A(f) the amplitude spectrum of the luminance image
wherein f represents a luminance image, R (f) represents a spectral residual, H is a smoothing filter of k × k, and L () represents a logarithmic operation;
carrying out inverse Fourier transform and Gaussian filtering on the spectrum residual error to obtain a background segmentation chart, wherein the formula is as follows:
S(x) = G(x) ∗ F⁻¹[exp(R(f) + i·p(x))]²
wherein p (x) is an original phase spectrum, G (x) is Gaussian convolution, f is a brightness image, and R (f) represents a spectrum residual error;
and negating the background segmentation image to obtain a target segmentation binary image.
As an implementation manner, the performing feature selection on the target segmentation binary image to obtain a feature selection binary image includes the following steps:
carrying out contour extraction on the target segmentation binary image to obtain an inner contour set and an outer contour set;
traversing the inner contour set, calculating the area of each inner contour, and filling inner contours whose area falls within the specified interval with the foreground pixel value to obtain a filled result image;
traversing the outer contour set, calculating the area of each outer contour and the aspect ratio of its minimum enclosing rectangle, and, based on the filled result image, filling outer contours that simultaneously fall within the area and aspect-ratio screening intervals with the background pixel value to obtain the feature screening image.
As an implementation, the insect coverage rate is calculated from the feature selection binary map, and the formula for the insect coverage rate is:
coverage = fontPixelSum / (M × N)
wherein fontPixelSum is the foreground number of the feature screening image, M is the width of the feature screening image, N is the height of the feature screening image, and M × N is the size of the feature screening image.
An insect coverage rate detection system based on image processing comprises an image enhancement module, an image background color identification module, an insect segmentation module, a feature selection module and an insect coverage rate calculation module;
the image enhancement module is used for carrying out image enhancement processing on the original insect image based on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
the image background color identification module is used for carrying out background color identification on the enhanced image to obtain background color category information;
the insect segmentation module selects a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
the characteristic selection module is used for carrying out characteristic selection on the target segmentation binary image to obtain a characteristic selection binary image;
and the insect coverage rate calculation module calculates the insect coverage rate from the feature selection binary image.
A computer-readable storage medium in which a computer program is stored, the computer program, when executed by a processor, implementing the following method steps:
based on an original insect image, carrying out image enhancement processing on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
carrying out background color identification on the enhanced image to obtain background color category information;
selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
performing feature selection on the target segmentation binary image to obtain a feature selection binary image;
and calculating the insect coverage rate from the feature selection binary image.
An insect coverage detection apparatus based on image processing, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor implementing the following method steps when executing the computer program:
based on an original insect image, carrying out image enhancement processing on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
carrying out background color identification on the enhanced image to obtain background color category information;
selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
performing feature selection on the target segmentation binary image to obtain a feature selection binary image;
and calculating the insect coverage rate from the feature selection binary image.
Due to the adoption of the technical scheme, the invention has the remarkable technical effects that:
the method solves the problem of unsatisfactory segmentation effect caused by different illumination intensities and different complex backgrounds in the image segmentation process, can recover the image under different illumination intensities to solve the image target distortion caused by illumination intensity change, and simultaneously adopts different image segmentation algorithms for the backgrounds with different complexities to improve the segmentation efficiency of the insect image and the accuracy of coverage rate calculation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for detecting insect coverage according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an insect coverage detection system according to an embodiment of the present invention;
FIG. 3 is a view showing the construction of an apparatus used in the embodiment of the present invention;
fig. 4-6 are diagrams of detection effects in the embodiment of the invention.
Detailed Description
The present invention will be further described in detail with reference to the following examples, which are illustrative of the present invention and are not intended to limit the present invention thereto.
Example 1:
a method of insect coverage detection based on image processing, as shown in fig. 1, comprising the steps of:
s100, based on an original insect image, carrying out image enhancement processing on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
s200, identifying the color of the background based on the enhanced image to obtain background color category information;
s300, selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
s400, performing feature selection on the target segmentation binary image to obtain a feature selection binary image;
and S500, calculating the insect coverage rate from the feature selection binary image.
In the present invention, the original insect image may be a general image containing one or more insects, or a sticky-trap image, for example one captured by the sticky-trap apparatus of fig. 3. The original insect image is not limited, as long as the method of the present invention can be applied to it.
The method solves the problem of unsatisfactory segmentation effect caused by different illumination intensities and different complex backgrounds in the image segmentation process, can recover the images under different illumination intensities to solve the image target distortion caused by illumination intensity change, and simultaneously adopts different image segmentation algorithms for the backgrounds with different complexities to improve the segmentation efficiency of the insect images and the accuracy of coverage rate calculation.
In one embodiment, because the original insect image may exhibit uneven brightness, the image enhancement exploits the fact that an object's perceived color is determined by its reflectance of long-wave (red), medium-wave (green) and short-wave (blue) light rather than by the absolute intensity of the reflected light. This color constancy makes the statistical analysis insensitive to uneven illumination and allows a balance among dynamic-range compression, edge enhancement and color constancy, so that various types of images can be adaptively enhanced. Specifically, in step S100, performing image enhancement on the original insect image to obtain an enhanced image includes the following steps:
blurring the original insect image according to a specified scale to obtain a brightness image;
calculating the value of Log[R(x, y)] based on the original insect image and the luminance image, using the formula Log[R(x, y)] = Log[I(x, y)] - Log[L(x, y)],
Wherein R (x, y) represents a reflected image, I (x, y) represents an original insect image, and L (x, y) represents a luminance image;
and quantizing Log [ R (x, y) ] into pixel values ranging from 0 to 255 as final output to obtain an enhanced image.
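The enhancement steps above can be sketched as a single-scale-Retinex-style routine. This is a minimal illustration rather than the patented implementation: the patent does not specify the blur kernel for the "specified scale", so a separable box blur stands in for it, and the function names (`box_blur`, `retinex_enhance`) are placeholders.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur, standing in for the unspecified 'specified scale' blur."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def retinex_enhance(img, scale=15):
    """Log[R] = Log[I] - Log[L]; L is the blurred (luminance) image,
    and the result is quantized to pixel values 0..255."""
    img = img.astype(float) + 1.0            # offset to avoid log(0)
    luminance = box_blur(img, scale)         # estimate of L(x, y)
    log_r = np.log(img) - np.log(luminance)  # Log[R(x, y)]
    log_r -= log_r.min()
    span = log_r.max() or 1.0                # guard against a constant image
    return np.rint(log_r / span * 255).astype(np.uint8)
```

The quantization here is a min-max rescale of Log[R] to 0..255; other quantization rules would also satisfy the text.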
After obtaining the enhanced image, the background color in the enhanced image needs to be identified, and in step S200, the identifying the color of the background based on the enhanced image to obtain the background color category information includes the following steps:
carrying out color clustering on the enhanced image by adopting a K-Means algorithm to obtain a clustering region identification map;
analyzing the clustering region identifier graph and the enhanced image in a combined manner, and counting an RGB distribution histogram of each clustering region corresponding to the enhanced image;
counting, from the RGB distribution histogram, the RGB channel gray value with the highest proportion in each region and its proportion within that region, then selecting the region containing the overall maximum proportion to obtain its RGB channel gray value and count. Assuming the pixel counts of the clustering regions are S = {S1, S2, …, Sk}, and the gray value with the largest count in each clustering region together with that count is N = {(RGB1, num1), (RGB2, num2), …, (RGBk, numk)}, each count in N is compared against the corresponding region size in S, and the term with the highest proportion, (RGBx, numx), is taken;
based on the gray value and the number of the RGB channels of the area where the maximum proportion value is located, the average value of the RGB channels is judged to be a white background or a non-white background, and the formula is as follows:
rg = abs(r - g)
rb = abs(r - b)
gb = abs(g - b)
when the results simultaneously satisfy (R > 120), (G > 120), (B > 120), (rg <= 20), (gb <= 20) and (rb <= 20), the background is judged white; otherwise it is non-white, wherein R is the R-channel mean in the clustering region, G is the G-channel mean in the clustering region, and B is the B-channel mean in the clustering region.
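The background-color identification of step S200 can be sketched roughly as follows. The K-Means here is a plain NumPy implementation rather than the patent's (unspecified) variant, and treating the largest cluster as the background region is a simplification of the histogram-proportion selection in the text; all names and the `k`/`iters` parameters are assumptions. The white/non-white rule follows the thresholds stated above.

```python
import numpy as np

def kmeans_colors(pixels, k=3, iters=10, seed=0):
    """Plain K-Means over (N, 3) RGB rows: returns per-pixel labels and cluster centres."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres

def background_mean_rgb(pixels, k=3):
    """Mean RGB of the largest cluster, taken here as the background region
    (a simplification of the per-region histogram analysis in the text)."""
    labels, centres = kmeans_colors(pixels, k=k)
    dominant = np.bincount(labels, minlength=k).argmax()
    return centres[dominant]

def is_white_background(r, g, b):
    """Decision rule from the text: all channel means bright and nearly neutral."""
    rg, rb, gb = abs(r - g), abs(r - b), abs(g - b)
    return r > 120 and g > 120 and b > 120 and rg <= 20 and gb <= 20 and rb <= 20
```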
In step S300, a corresponding target segmentation algorithm is selected to detect insect regions according to the enhanced image and the background color category information. When the background color category is white, HSV channel separation is performed on the enhanced image and two-dimensional maximum-entropy threshold segmentation is applied to the luminance channel to obtain the insect regions and thereby the target segmentation binary image, comprising the following steps:
graying the enhanced image to obtain a grayscale image, wherein the size of the grayscale image is W x H, the grayscale level is L, W represents the width of the grayscale image, and H represents the height of the grayscale image;
based on the gray image, forming each pixel point in the image and a neighborhood in the range of n × n around the pixel point into a region, and calculating the neighborhood average gray value of the region, wherein the formula is as follows:
j(x, y) = (1/n²) Σ_{a=-(n-1)/2}^{(n-1)/2} Σ_{b=-(n-1)/2}^{(n-1)/2} f(x+a, y+b)
wherein (x, y) is the current pixel coordinate, n is the neighborhood side length, f(x, y) is the gray value at (x, y), i denotes the point gray of a pixel in the gray image, and j denotes the regional gray mean of the pixel's neighborhood;
establishing a mapping image F (x, y) according to each pixel point in the gray image and the neighborhood average gray value;
setting the probability that the point gray and regional gray mean pair equals (i, j) to P(i, j), and obtaining from P(i, j) the two-dimensional histogram of point gray versus regional gray mean of the gray image, where P(i, j) is expressed as:
P(i, j) = c(i, j) / (W × H), where c(i, j) is the number of pixels whose point gray is i and whose regional gray mean is j
wherein i represents the point gray scale of a pixel point in the gray scale image, j represents the regional gray scale mean value of the neighborhood of the pixel point, W represents the width of the gray scale image, H represents the height of the gray scale image, and W x H represents the size of the gray scale image;
dividing the two-dimensional histogram into different regions, comprising a target region and a background region, by means of a segmentation threshold vector. Under the homogeneity assumption, the target and background regions together account for the largest proportion of the histogram: within each region the pixel gray levels are uniform and each pixel's gray value is close to its neighborhood mean, so the target and background points concentrate near the diagonal and form two peaks corresponding to the target and the background. In the boundary neighborhood between target and background, by contrast, the difference between a pixel's gray value and its neighborhood mean is large. The two-dimensional histogram therefore exhibits two peaks and one valley, with its extreme values near the diagonal. Assuming the target and background regions follow different probability distributions, and normalizing each with its posterior probability so that the target entropy and the background entropy are additive, the discrete two-dimensional entropy is calculated as:
H = -Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} P(i, j) · log P(i, j)
wherein i represents the point gray of a pixel in the gray image, j represents the regional gray mean of its neighborhood, and P(i, j) is the probability that the point gray and regional gray mean pair equals (i, j);
the discriminant function of the two-dimensional entropy of the target and the background is defined as:
φ(s, t) = log[P_A (1 - P_A)] + H_A / P_A + (H_L - H_A) / (1 - P_A)
wherein s is the segmentation threshold of the current iteration, t is the image gray level, and P_A is the density probability,
P_A = Σ_{i=0}^{s} Σ_{j=0}^{t} P(i, j)
H_A = -Σ_{i=0}^{s} Σ_{j=0}^{t} P(i, j) · log P(i, j)
and H_L is the information entropy when (s, t) takes the maximum gray level, calculated in the same way as H_A.
And according to the two-dimensional entropy set of the target area and the background area, selecting the maximum entropy in the two-dimensional entropy set as a segmentation threshold value to segment the gray level image to obtain a target segmentation binary image.
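A rough NumPy sketch of the two-dimensional maximum-entropy segmentation above, under assumptions the patent does not state: gray levels are quantized to 64 bins to keep the exhaustive (s, t) search cheap, borders are edge-replicated, and pixels falling in the low-gray corner of the histogram are taken as the segmented foreground. Names and parameters are hypothetical.

```python
import numpy as np

def neighborhood_mean(gray, n=3):
    """Mean gray value of the n x n neighbourhood around each pixel (edge-replicated)."""
    pad = n // 2
    p = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w))
    for dy in range(n):
        for dx in range(n):
            out += p[dy:dy + h, dx:dx + w]
    return out / (n * n)

def segment_max_entropy_2d(gray, levels=64, n=3):
    """Exhaustive 2-D maximum-entropy search over the (point gray, neighbourhood mean)
    histogram; returns a binary mask and the (s, t) threshold pair."""
    h, w = gray.shape
    gi = (gray.astype(float) * levels / 256).astype(int).clip(0, levels - 1)
    mi = (neighborhood_mean(gray, n) * levels / 256).astype(int).clip(0, levels - 1)
    hist = np.zeros((levels, levels))
    np.add.at(hist, (gi.ravel(), mi.ravel()), 1)
    P = hist / (h * w)
    eps = 1e-12
    cp = P.cumsum(0).cumsum(1).clip(eps, 1 - eps)   # P_A for every corner (s, t)
    ent = -(P * np.log(P + eps))
    ce = ent.cumsum(0).cumsum(1)                    # H_A for every corner (s, t)
    HL = ce[-1, -1]                                 # H_L: entropy at the maximum gray level
    phi = np.log(cp * (1 - cp)) + ce / cp + (HL - ce) / (1 - cp)
    s, t = np.unravel_index(int(phi.argmax()), phi.shape)
    mask = (gi <= s) & (mi <= t)                    # low-gray corner taken as foreground
    return mask, (int(s), int(t))
```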
In step S300, selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information, and when the background color category is non-white, including the following steps:
acquiring a brightness channel of the enhanced image according to the enhanced image;
calculating a spectral residual based on the luminance channel, wherein a calculation formula of the spectral residual is as follows:
R(f) = L(f) − H * L(f)
wherein f represents the luminance image, R(f) represents the spectral residual, H is a k × k smoothing filter, * denotes convolution, and L(·) represents the logarithmic operation on the amplitude spectrum;
performing an inverse Fourier transform and Gaussian filtering on the spectral residual to obtain a background segmentation map, wherein the formula is as follows:
S(x) = G(x) * |F^(-1)[exp(R(f) + i·p(f))]|^2
wherein p(f) is the original phase spectrum, G(x) is a Gaussian kernel, F^(-1) denotes the inverse Fourier transform, f represents the luminance image, and R(f) represents the spectral residual;
and negating the background segmentation image to obtain a target segmentation binary image.
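A minimal NumPy sketch of this non-white-background branch, following the spectral residual saliency idea the formulas above describe. Assumptions: a box filter stands in for both the k × k smoother H and, applied twice, the Gaussian G(x); the logarithm is stabilised with +1 at exact spectral zeros; and the final binarisation threshold of three times the mean saliency is a placeholder (the patent thresholds a background map and negates it, which is equivalent to keeping high-saliency pixels as the target).

```python
import numpy as np

def _box(img, k):
    # k x k box average with edge padding (stand-in for H and, repeated, for G(x)).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def spectral_residual_target(lum, k=3):
    """Saliency map and binary target mask from a luminance channel."""
    F = np.fft.fft2(lum.astype(np.float64))
    log_amp = np.log(np.abs(F) + 1.0)            # L(f); +1 stabilises log at spectral zeros
    phase = np.angle(F)                          # p(f), the original phase spectrum
    residual = log_amp - _box(log_amp, k)        # R(f) = L(f) - H * L(f)
    recon = np.fft.ifft2(np.exp(residual + 1j * phase))
    sal = _box(_box(np.abs(recon) ** 2, 5), 5)   # G(x) * |F^-1[exp(R + i p)]|^2, box approx.
    mask = (sal > 3.0 * sal.mean()).astype(np.uint8) * 255   # heuristic threshold (assumption)
    return mask, sal
```

The spectral residual suppresses the smooth, "expected" part of the log-amplitude spectrum, so the reconstruction concentrates energy on anomalies such as insects against a uniform board.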
In step S400, the performing feature selection on the target segmentation binary image to obtain a feature selection binary image includes the following steps:
carrying out contour extraction on the target segmentation binary image to obtain an inner contour set and an outer contour set;
traversing the inner contour set, calculating the area of each inner contour, and filling the inner contour of a specific area region into a foreground pixel value to obtain a filling result image;
and traversing the outer contour set, calculating the area of each outer contour and the length-width ratio of its minimum circumscribed rectangle, and, based on the filling result image, filling the outer contours that simultaneously meet the area and length-width ratio threshold screening intervals with the background pixel value to obtain a feature screening image.
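The two screening passes above can be sketched without a contour library by working on connected components directly; in an OpenCV implementation the same steps would use cv2.findContours, cv2.contourArea and cv2.minAreaRect. The area and aspect-ratio thresholds below are placeholder assumptions, as is the rule that only small interior holes are filled.

```python
import numpy as np
from collections import deque

def screen_features(binary, min_area=9, max_area=5000, max_aspect=8.0):
    """Fill small holes inside blobs (inner-contour pass), then drop blobs
    whose area or bounding-box aspect ratio falls outside the kept ranges
    (outer-contour pass). Thresholds are illustrative assumptions."""
    def components(mask):
        # 4-connected component labelling via BFS flood fill.
        lab = np.zeros(mask.shape, dtype=np.int32)
        comps = []
        for y, x in zip(*np.nonzero(mask)):
            if lab[y, x]:
                continue
            comps.append([])
            lab_id = len(comps)
            lab[y, x] = lab_id
            q = deque([(y, x)])
            while q:
                cy, cx = q.popleft()
                comps[-1].append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                       and mask[ny, nx] and not lab[ny, nx]:
                        lab[ny, nx] = lab_id
                        q.append((ny, nx))
        return comps

    out = (binary > 0)
    # Inner-contour pass: fill small background holes not touching the border.
    for comp in components(~out):
        ys, xs = zip(*comp)
        touches_border = (min(ys) == 0 or min(xs) == 0 or
                          max(ys) == out.shape[0] - 1 or max(xs) == out.shape[1] - 1)
        if not touches_border and len(comp) < min_area * 4:
            out[ys, xs] = True
    # Outer-contour pass: drop blobs failing area / aspect-ratio screening.
    for comp in components(out):
        ys, xs = zip(*comp)
        h, w = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
        aspect = max(h, w) / min(h, w)
        if not (min_area <= len(comp) <= max_area) or aspect > max_aspect:
            out[ys, xs] = False
    return out.astype(np.uint8) * 255
```

Filling interior holes first mirrors the inner-contour fill step; removing out-of-range blobs then mirrors the outer-contour screening step.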
In step S500, the insect coverage rate is calculated according to the feature selection binary image, wherein the formula of the insect coverage rate is as follows:
coverage = fontPixelSum / (M × N) × 100%
wherein fontPixelSum is the number of foreground pixels in the feature screening image, M is the width of the feature screening image, N is the height of the feature screening image, and M × N is the size of the feature screening image.
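The coverage formula is a direct pixel count over the feature screening image; a minimal sketch:

```python
import numpy as np

def insect_coverage(feature_mask):
    """Insect coverage (%) = foreground pixel count / (M x N) x 100."""
    N, M = feature_mask.shape              # N: height, M: width, as in the text
    font_pixel_sum = int(np.count_nonzero(feature_mask))
    return font_pixel_sum / (M * N) * 100.0
```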
In a specific embodiment, the insect coverage detection based on image processing can be applied to coverage detection of a sticky trap. The instrument configuration diagram for sticky trap coverage detection shown in fig. 3 includes an inlet 1 for allowing insects to enter, a light-transmitting plate 2 for allowing light into the instrument, a camera 3 for shooting an image of the sticky trap, a storage box 5 for storing sticky traps to be used, and a conveyor belt 4 for replacing the sticky trap. After the camera 3 collects an image of the sticky trap in the instrument, the image is processed through steps S100 to S500 to obtain the coverage of the sticky trap. Fig. 4 to 6 are detection effect diagrams of sticky trap coverage: fig. 4 has an insect coverage of 12.13%, fig. 5 has an insect coverage of 5.3%, and fig. 6 has an insect coverage of 42.06%. When the coverage reaches a certain value, the conveyor belt 4 replaces the sticky trap with a new one.
Example 2:
an insect coverage rate detection system based on image processing, as shown in fig. 2, includes an image enhancement module 100, an image background color identification module 200, an insect segmentation module 300, a feature selection module 400, and an insect coverage rate calculation module 500;
the image enhancement module 100 performs image enhancement processing on an original insect image based on the original insect image to obtain an enhanced image, wherein the original insect image includes a background;
the image background color identification module 200 is configured to perform background color identification on the enhanced image to obtain background color category information;
the insect segmentation module 300 selects a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
the feature selection module 400 is configured to perform feature selection on the target segmentation binary image to obtain a feature selection binary image;
the insect coverage calculation module 500 calculates the insect coverage rate according to the feature selection binary image.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In addition, it should be noted that the specific embodiments described in this specification may differ in component arrangement, component shape, component naming, and the like. All equivalent or simple changes made according to the structure, features, and principles described in the patent conception of the present invention are included in the protection scope of the present invention. Those skilled in the art may make various modifications, additions, and substitutions to the specific embodiments described without departing from the scope of the invention as defined in the accompanying claims.

Claims (11)

1. A method for detecting insect coverage based on image processing, comprising the steps of:
based on an original insect image, carrying out image enhancement processing on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
identifying the background color based on the enhanced image to obtain background color category information;
selecting a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
performing feature selection on the target segmentation binary image to obtain a feature selection binary image;
and calculating the insect coverage rate according to the feature selection binary image.
2. The method of detecting insect coverage based on image processing as claimed in claim 1, wherein said image enhancement processing is performed on said original insect image based on said original insect image to obtain an enhanced image, comprising the steps of:
decomposing an original insect image into a reflection image and a brightness image;
carrying out fuzzy processing on the brightness image according to a specified scale to obtain a fuzzy brightness image;
calculating the value of Log[R(x, y)] based on the original insect image and the blurred brightness image, wherein the calculation formula adopted is Log[R(x, y)] = Log[I(x, y)] − Log[L(x, y)],
wherein R(x, y) represents the reflection image, I(x, y) represents the original insect image, and L(x, y) represents the blurred brightness image;
the Log [ R (x, y) ] is quantized to pixel values from 0 to 255, resulting in an enhanced image.
3. The method of insect coverage detection based on image processing as claimed in claim 1, wherein said performing background color identification on said enhanced image results in background color category information, said background color category information being divided into a white background and a non-white background.
4. The method for detecting insect coverage based on image processing as claimed in claim 1 or 3, wherein the background color identification of the enhanced image to obtain the background color category information comprises the following steps:
carrying out color clustering on the enhanced images by adopting a K-Means algorithm to obtain a clustering region identification map;
analyzing the clustering region identifier graph and the enhanced image in a combined manner, and counting an RGB distribution histogram of each clustering region corresponding to the enhanced image;
counting, through the RGB distribution histograms, the RGB channel gray value with the highest proportion in each region and its proportion within the corresponding region, selecting the region where the maximum proportion is located, and acquiring the RGB channel gray values and the number of such regions;
and calculating the RGB channel mean value based on the RGB channel gray values and the number of the regions where the proportion maximum is located, and judging from the mean value whether the background is a white background or a non-white background.
5. The method of image processing-based insect coverage detection according to claim 4, wherein said obtaining a target segmentation binary image when it is a white background comprises the steps of:
graying the enhanced image to obtain a gray image;
based on the gray image, calculating neighborhood average gray values of all pixel points in the gray image, and establishing a mapping image according to all the pixel points in the gray image and the neighborhood average gray values;
according to the mapping image, based on the probability P_i,j that the point gray level and region gray mean pair (i, j) occurs, obtaining a two-dimensional histogram of the point gray level and region gray mean of the gray image, wherein the expression of the probability P_i,j is:
P_i,j = c(i, j) / (W × H)
wherein i represents the point gray level of a pixel in the gray image, j represents the region gray mean of the neighborhood of that pixel, c(i, j) is the number of pixels whose point gray level is i and whose region gray mean is j, W represents the width of the gray image, H represents the height of the gray image, and W × H represents the size of the gray image;
dividing the two-dimensional histogram into different regions, including a target region and a background region,
based on the target area and the background area, solving a two-dimensional entropy set of the target area and the background area;
and according to the two-dimensional entropy set of the target area and the background area, selecting the maximum entropy in the two-dimensional entropy set as a segmentation threshold value to segment the gray level image to obtain a target segmentation binary image.
6. The method of image processing-based insect coverage detection according to claim 4, wherein said obtaining a target segmentation binary image when it is a non-white background comprises the steps of:
acquiring a brightness channel of the enhanced image according to the enhanced image;
calculating a spectral residual based on the luminance channel;
carrying out an inverse Fourier transform and Gaussian filtering on the spectral residual to obtain a background segmentation map;
and negating the background segmentation image to obtain a target segmentation binary image.
7. The method of claim 1, wherein the step of performing feature selection on the target segmentation binary image to obtain a feature selection binary image comprises the steps of:
carrying out contour extraction on the target segmentation binary image to obtain an inner contour set and an outer contour set;
traversing the inner contour set, calculating the area of each inner contour, and filling the inner contour of a specific area region into a foreground pixel value to obtain a filling result image;
traversing the outer contour set, calculating the area of each outer contour and the length-width ratio of its minimum circumscribed rectangle, and, based on the filling result image, filling the outer contours that simultaneously meet the area and length-width ratio threshold screening intervals with the background pixel value to obtain a feature screening image.
8. The method of image processing-based insect coverage detection according to claim 1, wherein the insect coverage rate is calculated according to the feature selection binary image by the following formula:
coverage = fontPixelSum / (M × N) × 100%
wherein fontPixelSum is the number of foreground pixels in the feature screening image, M is the width of the feature screening image, N is the height of the feature screening image, and M × N is the size of the feature screening image.
9. An insect coverage rate detection system based on image processing is characterized by comprising an image enhancement module, an image background color identification module, an insect segmentation module, a feature selection module and an insect coverage rate calculation module;
the image enhancement module is used for carrying out image enhancement processing on the original insect image based on the original insect image to obtain an enhanced image, wherein the original insect image comprises a background;
the image background color identification module is used for carrying out background color identification on the enhanced image to obtain background color category information;
the insect segmentation module selects a corresponding target segmentation algorithm to detect an insect region according to the enhanced image and the background color category information to obtain a target segmentation binary image;
the characteristic selection module is used for carrying out characteristic selection on the target segmentation binary image to obtain a characteristic selection binary image;
and the insect coverage rate calculation module calculates the insect coverage rate according to the feature selection binary image.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 8.
11. An insect coverage detection apparatus based on image processing, comprising a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor when executing the computer program implements the method steps of any one of claims 1 to 8.
CN202210413289.4A 2022-04-20 2022-04-20 Method, system and device for detecting coverage rate of sticky trap based on image processing Pending CN115205194A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210413289.4A CN115205194A (en) 2022-04-20 2022-04-20 Method, system and device for detecting coverage rate of sticky trap based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210413289.4A CN115205194A (en) 2022-04-20 2022-04-20 Method, system and device for detecting coverage rate of sticky trap based on image processing

Publications (1)

Publication Number Publication Date
CN115205194A true CN115205194A (en) 2022-10-18

Family

ID=83574491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210413289.4A Pending CN115205194A (en) 2022-04-20 2022-04-20 Method, system and device for detecting coverage rate of sticky trap based on image processing

Country Status (1)

Country Link
CN (1) CN115205194A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116403094A (en) * 2023-06-08 2023-07-07 成都菁蓉联创科技有限公司 Embedded image recognition method and system
CN116403094B (en) * 2023-06-08 2023-08-22 成都菁蓉联创科技有限公司 Embedded image recognition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination