CN112669304B - Pollen detection method based on digital image processing technology - Google Patents


Info

Publication number
CN112669304B
Authority
CN
China
Prior art keywords: image, points, center, pollen, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110008965.5A
Other languages
Chinese (zh)
Other versions
CN112669304A (en)
Inventor
王萍 (Wang Ping)
林必艺 (Lin Biyi)
侯谨毅 (Hou Jinyi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202110008965.5A priority Critical patent/CN112669304B/en
Publication of CN112669304A publication Critical patent/CN112669304A/en
Application granted granted Critical
Publication of CN112669304B publication Critical patent/CN112669304B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a pollen detection method based on digital image processing technology, comprising the following steps: 1) Image preprocessing: the original image acquired by a microscope is grayed to obtain a grayscale image. 2) Connected-region acquisition: edge detection and Hough gradient circle detection are performed on the grayscale image to obtain a set of Hough-detected circle-center coordinates, and region growing is performed with each circle center as a seed point to obtain a set of connected regions. 3) Connected-region feature calculation and screening: features of each connected region in the set are calculated and used to screen the regions; the regions that pass the screening are taken as the detection result. The method realizes pollen detection with digital image processing technology and overcomes the low efficiency, high cost, and error-proneness of manual pollen detection. Screening connected regions with constructed features offers a clear principle and good interpretability.

Description

Pollen detection method based on digital image processing technology
Technical Field
The invention relates to the fields of digital image processing and pollen image detection, and in particular to a method for detecting pollen in microscope-acquired images using digital image processing technology.
Background
Pollen is one of the common allergens and poses a health risk to sensitive individuals. Detecting and broadcasting the airborne pollen concentration in real time therefore helps allergy sufferers take timely preventive measures, with notable social and economic benefits. The pollen-concentration detection process in common use today first collects airborne pollen onto a smear, then observes the smear through a microscope, manually detects and counts the pollen on it, and finally back-calculates the airborne pollen concentration from the counted number.
The inventors have found that the manual detection and counting of pollen on the smear in the existing pollen-concentration detection process has the following defects:
the smear is observed by manually moving the microscope, and the pollen on it is detected and counted by hand; the economic and time costs are high, the accuracy depends heavily on the professional skill and mental state of the examiner, and errors occur easily.
Therefore, an efficient and stable pollen detection method is needed to optimize the pollen detection step. With the continuous development of integrated-circuit technology, computing power keeps improving, and more and more fields use computer technologies to reduce labor and improve efficiency.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a pollen detection method based on digital image processing, addressing the high economic and time costs of manual pollen detection and counting, its dependence on the examiner's professional skill and mental state, and its susceptibility to error.
The technical scheme of the invention is as follows: a pollen detection method based on digital image processing comprises the following steps:
1) Image preprocessing: graying the original image acquired by a microscope to obtain a grayscale image; 2) Connected-region acquisition: performing edge detection and Hough gradient circle detection on the grayscale image to obtain a set of Hough-detected circle-center coordinates, and performing region growing with each circle center as a seed point to obtain a set of connected regions; 3) Connected-region feature calculation and screening: calculating the features of each connected region in the set and screening the regions by these features; the regions that pass the screening are taken as the detection result.
The method comprises the following steps: preprocessing an image;
1-1) Image data are read and stored using the Mat class of the OpenCV computer vision library. After reading the image file, the original image S is obtained; S contains three channels R_s, G_s, B_s.
1-2) The grayscale image A is obtained by a weighted average of the three channels R_s, G_s, B_s of the original image S. The weights for R_s, G_s, B_s are 0.114, 0.587, and 0.299, respectively.
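The weighted-average graying step can be sketched as follows (a minimal pure-Python sketch; the helper names are illustrative, not from the patent). The weights 0.114/0.587/0.299 listed against R_s/G_s/B_s appear to be the standard BT.601 luminance weights read in OpenCV's BGR channel order, i.e. gray = 0.299·R + 0.587·G + 0.114·B:

```python
def to_gray(r, g, b):
    """Weighted-average graying with BT.601 luminance weights.

    The patent lists 0.114/0.587/0.299 against R_s/G_s/B_s, which matches
    OpenCV's BGR channel order; applied to (R, G, B) the weights are
    0.299, 0.587, 0.114."""
    return 0.299 * r + 0.587 * g + 0.114 * b


def gray_image(rgb):
    """Gray a whole image given as a 2-D list of (R, G, B) tuples."""
    return [[to_gray(*px) for px in row] for row in rgb]
```

In practice the same result is obtained with OpenCV's `cvtColor(src, COLOR_BGR2GRAY)` on a BGR `Mat`.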
Step two: acquiring a connected region;
2-1) Canny edge detection is performed on the grayscale image A obtained in step 1). The horizontal gradient T_x and vertical gradient T_y of each point are computed with the Sobel operator, and from them the overall gradient T of the point is obtained. Edges are judged with a double-threshold algorithm: points whose gradient exceeds the high threshold are edges; points whose gradient is below the low threshold are non-edges; a point whose gradient lies between the two thresholds is treated as a link between high-threshold points, and if its neighborhood contains neither a high-threshold point nor another in-between point linked to one, it is judged a non-edge point. The high threshold is 100 and the low threshold is 50. Edge detection yields the edge image B.
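The double-threshold (hysteresis) decision described above can be sketched as follows. This is a minimal pure-Python sketch over a 2-D gradient-magnitude map; the patent does not state which neighborhood is used for the linking step, so 4-connectivity is assumed here:

```python
from collections import deque


def hysteresis(grad, high=100.0, low=50.0):
    """Double-threshold edge decision from the Canny step.

    Gradients above `high` are edges; below `low` are non-edges; an
    in-between point is kept only if it is connected (4-connected here,
    an assumption) to a strong point through other in-between points.
    `grad` is a 2-D list of gradient magnitudes; returns a boolean map."""
    h, w = len(grad), len(grad[0])
    edge = [[False] * w for _ in range(h)]
    q = deque()
    for y in range(h):                      # seed with strong points
        for x in range(w):
            if grad[y][x] > high:
                edge[y][x] = True
                q.append((y, x))
    while q:                                # grow through weak points
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not edge[ny][nx] \
                    and low <= grad[ny][nx] <= high:
                edge[ny][nx] = True
                q.append((ny, nx))
    return edge
```

With OpenCV the whole step collapses to `cv2.Canny(gray, 50, 100)`, which performs the Sobel gradients and hysteresis internally.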
2-2) Hough gradient circle detection is performed on the edge image B. Each edge point votes along its gradient direction (T_x, T_y) on the line segment between the minimum radius distance r_min and the maximum radius distance r_max; the two endpoints of the segment, (S_x, S_y) and (E_x, E_y), are computed by formulas (1) and (2):

(S_x, S_y) = (x + r_min * T_x / T, y + r_min * T_y / T)   (1)
(E_x, E_y) = (x + r_max * T_x / T, y + r_max * T_y / T)   (2)

where (x, y) is the edge point and T is its overall gradient magnitude. Here r_min and r_max are 10 and 150, respectively. For each coordinate whose vote count exceeds 20, it is checked whether that count is the maximum within the 31 × 31 window centered on the coordinate; if so, the coordinate is taken as a Hough-detected circle center. After all edge points have voted, n Hough-detected circle-center coordinates are obtained, forming the Hough-detected circle-center coordinate set
Figure BDA0002884554950000023
2-3) For each circle-center coordinate c_i in the Hough-detected circle-center coordinate set C, a pixel-point set d_i is constructed, initialized as d_i = {S(c_i)}, where S(c_i) is the pixel of the original image S located at c_i. Region growing is then performed based on pixel values in the original image S: the not-yet-visited points of the current set are traversed, and for each, the four points in its 4-connected neighborhood are examined. A neighbor is judged a growth-region point and added to the current set d_i if, for all three channels, the difference between its pixel values and those of the circle-center pixel S(c_i) is less than 70; growth then continues. This step is repeated until every point in the current set d_i has been examined and no new point is added. Growth is then complete, and each Hough-detected circle center c_i yields a connected region e_i. The set of all connected regions obtained is recorded as
E = {e_1, e_2, …, e_n}.
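The seeded region growing of step 2-3) can be sketched as follows (pure Python; the function name is illustrative). The seed is a Hough-detected center, the tolerance is the patent's per-channel threshold of 70, and the frontier is expanded in 4-connected fashion:

```python
from collections import deque


def region_grow(img, seed, tol=70):
    """Grow a connected region from a seed pixel.

    A 4-connected neighbor joins the region when each of its three
    channel values differs from the seed pixel's corresponding channel
    by less than `tol`. `img` is a 2-D list of (R, G, B) tuples,
    `seed` an (x, y) coordinate; returns the set of (x, y) points."""
    h, w = len(img), len(img[0])
    sx, sy = seed
    ref = img[sy][sx]                      # reference: the seed pixel
    region = {seed}
    q = deque([seed])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in region:
                px = img[ny][nx]
                if all(abs(px[c] - ref[c]) < tol for c in range(3)):
                    region.add((nx, ny))
                    q.append((nx, ny))
    return region
```

Note that growth is always compared against the seed pixel (not the most recently added pixel), matching the patent's "difference to the circle-center pixel S(c_i)" rule.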
Step three: calculating and screening the characteristics of the connected region;
3-1) The upper, lower, left, and right boundaries of the connected region e_i are computed, and from these boundaries the envelope-rectangle aspect ratio f_i is calculated.
3-2) The number of pixel points of the connected region e_i is computed as the connected-region area value s_i.
3-3) The area value s_ci of the enclosing circle of the connected region e_i is computed, and the ratio of the connected-region area value s_i to the enclosing-circle area value s_ci gives the circularity feature g_i of the connected region.
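Features 3-1) through 3-3) can be sketched together as follows. This is a sketch under one stated assumption: the patent does not specify how the enclosing circle is constructed, so it is approximated here by the centroid plus the farthest pixel distance (OpenCV's `cv2.minEnclosingCircle` would give the exact minimal circle):

```python
import math


def shape_features(region):
    """Envelope-rectangle aspect ratio f, area s, and circularity g
    for a connected region given as a set of (x, y) pixel points.

    The enclosing circle is approximated by centroid + farthest-pixel
    distance (+0.5 so a single pixel has nonzero radius); the patent's
    exact enclosing-circle construction is not specified."""
    xs = [p[0] for p in region]
    ys = [p[1] for p in region]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    f = max(w, h) / min(w, h)              # aspect ratio, always >= 1
    s = len(region)                        # area = pixel count
    cx = sum(xs) / s
    cy = sum(ys) / s
    r = max(math.hypot(x - cx, y - cy) for x, y in region) + 0.5
    g = s / (math.pi * r * r)              # circularity = s / s_c
    return f, s, g
```

For a filled square the circularity lands near 0.7, comfortably above the patent's 0.62 threshold; elongated or sparse regions score lower.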
3-4) The center coordinate c_wi and radius r_ci of the enclosing circle of the connected region are computed, and the distance from every contour point of the connected region to the enclosing-circle center c_wi is calculated. The distance upper limit r_maxi and lower limit r_mini are computed by formulas (3) and (4). The proportion of contour points whose distance to the enclosing-circle center c_wi lies between r_maxi and r_mini is the effective contour ratio h_i.
r_maxi = r_ci * 1.2   (3)
r_mini = r_ci * 0.8   (4)
3-5) The connected region e_i is radially expanded to obtain the radially expanded image z_i. In a pollen image, pixels at similar distances from the center differ little in color, whereas in an impurity image the differences are comparatively large. The connected region e_i is therefore radially expanded, and the variance of horizontal line segments in the expanded image z_i is used to evaluate the color-difference information of the expanded connected region e_i. The expanded image is computed by selecting an expansion center and stretching each circle, from the inside outward, into one horizontal line segment of z_i. The pixel value at coordinate (x_z, y_z) of z_i is obtained by substituting into formulas (5) and (6) to get the mapped coordinate (x_s, y_s) in the expanded connected region e_i, and taking the mapped pixel's value as the pixel value at (x_z, y_z) of the expanded image (when the coordinate (x_s, y_s) is non-integer, bilinear interpolation is used and the approximated value is taken as the mapped pixel value):

x_s = x_z * cos(π * y_z / r_ci)   (5)
y_s = x_z * sin(π * y_z / r_ci)   (6)

After the expanded image is obtained, the pixel-value variances of the two line segments with endpoints (r_ci/2, -r_ci), (r_ci/2, r_ci) and with endpoints (r_ci/3, -r_ci), (r_ci/3, r_ci) are computed. The mean of the two variances is the average transverse-line fluctuation rate j'_i.
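The radial expansion of formulas (5) and (6) and the per-segment variance can be sketched as follows. A continuous sampling function `sample(x, y)` stands in for the bilinearly interpolated image (an assumption made to keep the sketch self-contained); coordinates are taken relative to the expansion center:

```python
import math


def unwrap_value(sample, x_z, y_z, r_c):
    """Value of pixel (x_z, y_z) of the expanded image z_i, mapped back
    via (x_s, y_s) = (x_z*cos(pi*y_z/r_c), x_z*sin(pi*y_z/r_c)).
    `sample(x, y)` plays the role of the bilinearly interpolated image."""
    ang = math.pi * y_z / r_c
    return sample(x_z * math.cos(ang), x_z * math.sin(ang))


def row_variance(sample, x_z, r_c):
    """Variance of the horizontal segment of z_i at fixed radius x_z,
    with y_z running from -r_c to r_c (the quantity averaged over the
    two segments x_z = r_c/2 and x_z = r_c/3 in step 3-5)."""
    vals = [unwrap_value(sample, x_z, y, r_c)
            for y in range(-int(r_c), int(r_c) + 1)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)
```

A radially symmetric image (pollen-like) yields near-zero row variance, while an asymmetric one (impurity-like) does not, which is exactly the discrimination the feature relies on.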
3-6) The corrected average transverse fluctuation rate j_i is computed. Since the connected region e_i yields different average transverse fluctuation rates j'_i for different expansion centers, the expansion is corrected iteratively. Initially the enclosing-circle center c_wi is taken as the current expansion center c_zi. The average transverse fluctuation rates corresponding to the current expansion center c_zi and to each of its four 4-connected points as expansion center are computed, and the point with the smallest rate becomes the corrected current expansion center c_zi. This correction step is repeated until the rate of the corrected current expansion center c_zi is smaller than the rates of all four of its 4-connected points, at which point the correction of the expansion center is complete. The rate corresponding to the final expansion center c_zi is recorded as the average transverse fluctuation feature j_i of the connected region.
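The iterative center correction of step 3-6) is a coordinate descent over the pixel grid. A minimal sketch with a generic `cost(x, y)` standing in for the average transverse fluctuation rate (an assumption; the patent's objective is the quantity from step 3-5):

```python
def correct_center(cost, start):
    """Iteratively correct the expansion center: evaluate the objective
    at the current center and its four 4-connected neighbors, move to
    the smallest, and stop when the center beats all four neighbors."""
    cx, cy = start
    while True:
        best = min([(cx, cy), (cx + 1, cy), (cx - 1, cy),
                    (cx, cy + 1), (cx, cy - 1)],
                   key=lambda p: cost(*p))
        if best == (cx, cy):               # local minimum reached
            return cx, cy
        cx, cy = best
```

On a convex objective this descent reaches the grid minimum; on the real fluctuation-rate surface it finds a local minimum near the initial enclosing-circle center, which is the intended behavior.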
3-7) For each connected region e_i, whether it is a pollen image is judged comprehensively from the feature vector k_i = (f_i, s_i, g_i, h_i, j_i). If the envelope-rectangle aspect ratio f_i is less than 2, the connected-region area value satisfies 100 < s_i < 1300, the circularity feature g_i is greater than 0.62, the effective contour ratio h_i is greater than 0.6, and the corrected average transverse fluctuation rate j_i is less than 200, the connected region is judged to be a pollen image.
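The decision rule of step 3-7) is a simple conjunction of thresholds on the feature vector; a one-function sketch:

```python
def is_pollen(f, s, g, h, j):
    """Step 3-7 decision on the feature vector k = (f, s, g, h, j):
    aspect ratio f < 2, area 100 < s < 1300, circularity g > 0.62,
    effective contour ratio h > 0.6, corrected fluctuation rate j < 200."""
    return f < 2 and 100 < s < 1300 and g > 0.62 and h > 0.6 and j < 200
```

Each threshold rejects a distinct failure mode: elongated debris (f), dust or large clumps (s), non-round shapes (g, h), and texture-rich impurities (j).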
3-8) The envelope rectangles of all connected regions e_i judged to be pollen images are marked in the original image. The number of connected regions e_i judged to be pollen images is the number of pollen detected.
Compared with the prior art, the invention has the beneficial effects that:
the method comprehensively analyzes the characteristics of the pollen image under a microscope, constructs a pollen detection method based on a digital image processing technology, applies the method to detect and identify the pollen in the image, replaces manual pollen detection and statistics, and has the advantages of high processing speed, low economic cost, high accuracy, low possibility of errors and the like. The method for screening the communicated regions by constructing the characteristics has the advantages of clear principle, good interpretability and the like.
Drawings
FIG. 1 is a flow chart of a pollen detection method based on digital image processing technology proposed by the present invention;
FIG. 2 is an original image;
FIG. 3 is an edge image;
FIG. 4 is a set of connected regions, where (a), (b), (c), (d), (e), (f), (g) are the seven connected regions contained in the set of connected regions;
FIG. 5 is a set of connected component contours, where (a), (b), (c), (d), (e), (f), (g) are the seven connected component contours contained in the set of connected component contours;
FIG. 6 is an impurity image and its radial expansion, wherein (a) is the impurity image and (b) is the corresponding radial expansion of (a);
FIG. 7 is a pollen image and its radial development, wherein (a) is the pollen image and (b) is the corresponding radial development;
FIG. 8 is an illustration of the radial expansion step, wherein (a) is a schematic of the expansion and (b) is an example of the expanded image.
Detailed Description
The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments; these are illustrative of the invention only and do not limit it.
The method comprises the following steps: preprocessing an image;
1-1) Image data are read and stored using the Mat class of the OpenCV computer vision library. After reading the image file, the original image S is obtained; S contains three channels R_s, G_s, B_s. The image is shown in FIG. 2.
1-2) The grayscale image A is obtained by a weighted average of the three channels R_s, G_s, B_s of the original image S. The weights for R_s, G_s, B_s are 0.114, 0.587, and 0.299, respectively.
Step two: acquiring a connected region;
2-1) Canny edge detection is performed on the grayscale image A obtained in step one. The horizontal gradient T_x and vertical gradient T_y of each point are computed with the Sobel operator, and from them the overall gradient T of the point is obtained. Edges are judged with a double-threshold algorithm: points whose gradient exceeds the high threshold are edges; points whose gradient is below the low threshold are non-edges; a point whose gradient lies between the two thresholds is treated as a link between high-threshold points, and if its neighborhood contains neither a high-threshold point nor another in-between point linked to one, it is judged a non-edge point. The high threshold is 100 and the low threshold is 50. Edge detection yields the edge image B, shown in FIG. 3.
2-2) Hough gradient circle detection is performed on the edge image B. Each edge point votes along its gradient direction (T_x, T_y) on the line segment between the minimum radius distance r_min and the maximum radius distance r_max; the two endpoints of the segment, (S_x, S_y) and (E_x, E_y), are computed by formulas (1) and (2):

(S_x, S_y) = (x + r_min * T_x / T, y + r_min * T_y / T)   (1)
(E_x, E_y) = (x + r_max * T_x / T, y + r_max * T_y / T)   (2)

where (x, y) is the edge point and T is its overall gradient magnitude. Here r_min and r_max are 10 and 150, respectively. For each coordinate whose vote count exceeds 20, it is checked whether that count is the maximum within the 31 × 31 window centered on the coordinate; if so, the coordinate is taken as a Hough-detected circle center. After all edge points have voted, n Hough-detected circle-center coordinates are obtained, forming the Hough-detected circle-center coordinate set
C = {c_1, c_2, …, c_n}.
2-3) For each circle-center coordinate c_i in the Hough-detected circle-center coordinate set C, a pixel-point set d_i is constructed, initialized as d_i = {S(c_i)}, where S(c_i) is the pixel of the original image S located at c_i. Region growing is then performed based on pixel values in the original image S: the not-yet-visited points of the current set are traversed, and for each, the four points in its 4-connected neighborhood are examined. A neighbor is judged a growth-region point and added to the current set d_i if, for all three channels, the difference between its pixel values and those of the circle-center pixel S(c_i) is less than 70; growth then continues. This step is repeated until every point in the current set d_i has been examined and no new point is added. Growth is then complete, and each Hough-detected circle center c_i yields a connected region e_i. The set of all connected regions obtained is recorded as
E = {e_1, e_2, …, e_n}.
The set of connected regions is shown in FIG. 4 and contains 7 connected regions.
Step three: calculating and screening the characteristics of the connected region;
3-1) The upper, lower, left, and right boundaries of the connected region e_i are computed, and from these boundaries the envelope-rectangle aspect ratio f_i is calculated.
3-2) The number of pixel points of the connected region e_i is computed as the connected-region area value s_i.
3-3) The area value s_ci of the enclosing circle of the connected region e_i is computed, and the ratio of the connected-region area value s_i to the enclosing-circle area value s_ci gives the circularity feature g_i of the connected region.
3-4) The center coordinate c_wi and radius r_ci of the enclosing circle of the connected region are computed, and the distance from every contour point of the connected region to the enclosing-circle center c_wi is calculated; the connected-region contours are shown in FIG. 5. The distance upper limit r_maxi and lower limit r_mini are computed by formulas (3) and (4). The proportion of contour points whose distance to the enclosing-circle center c_wi lies between r_maxi and r_mini is the effective contour ratio h_i.
r_maxi = r_ci * 1.2   (3)
r_mini = r_ci * 0.8   (4)
3-5) The connected region e_i is radially expanded to obtain the radially expanded image z_i. In a pollen image, pixels at similar distances from the center differ little in color, whereas in an impurity image the differences are comparatively large. The connected region e_i is therefore radially expanded, and the variance of horizontal line segments in the expanded image z_i is used to evaluate the color-difference information of the expanded connected region e_i. The expanded image is computed by selecting an expansion center and stretching each circle, from the inside outward, into one horizontal line segment of z_i. The pixel value at coordinate (x_z, y_z) of z_i is obtained by substituting into formulas (5) and (6) to get the mapped coordinate (x_s, y_s) in the expanded connected region e_i, and taking the mapped pixel's value as the pixel value at (x_z, y_z) of the expanded image (when the coordinate (x_s, y_s) is non-integer, bilinear interpolation is used and the approximated value is taken as the mapped pixel value). An impurity image and its radial expansion are shown in FIG. 6, and a pollen image and its expansion in FIG. 7. The expansion schematic and an example expanded view are shown in FIG. 8, which illustrates the expansion of a circle of radius r.

x_s = x_z * cos(π * y_z / r_ci)   (5)
y_s = x_z * sin(π * y_z / r_ci)   (6)

After the expanded image is obtained, the pixel-value variances of the two line segments with endpoints (r_ci/2, -r_ci), (r_ci/2, r_ci) and with endpoints (r_ci/3, -r_ci), (r_ci/3, r_ci) are computed. The mean of the two variances is the average transverse-line fluctuation rate j'_i.
3-6) The corrected average transverse fluctuation rate j_i is computed. Since the connected region e_i yields different average transverse fluctuation rates j'_i for different expansion centers, the expansion is corrected iteratively. Initially the enclosing-circle center c_wi is taken as the current expansion center c_zi. The average transverse fluctuation rates corresponding to the current expansion center c_zi and to each of its four 4-connected points as expansion center are computed, and the point with the smallest rate becomes the corrected current expansion center c_zi. This correction step is repeated until the rate of the corrected current expansion center c_zi is smaller than the rates of all four of its 4-connected points, at which point the correction is complete. The rate corresponding to the final expansion center c_zi is recorded as the corrected average transverse fluctuation rate j_i of the connected region.
3-7) For each connected region e_i, whether it is a pollen image is judged comprehensively from the feature vector k_i = (f_i, s_i, g_i, h_i, j_i). If the envelope-rectangle aspect ratio f_i is less than 2, the connected-region area value satisfies 100 < s_i < 1300, the circularity feature g_i is greater than 0.62, the effective contour ratio h_i is greater than 0.6, and the corrected average transverse fluctuation rate j_i is less than 200, the connected region is judged to be a pollen image.
3-8) The envelope rectangles of all connected regions e_i judged to be pollen images are marked in the original image. The number of connected regions e_i judged to be pollen images is the number of pollen detected.
The feasibility of the proposed pollen detection method was verified by the following specific test:
the data set tested was 436 images taken with an electron-ocular microscope containing 812 pollen. After detection using the algorithm, the number of correctly detected pollen was 751, the number of incorrectly detected pollen was 34, and the number of missed detections was 61. The calculated achievable accuracy was 95.7% and the recall was 92.5%.
Although the present invention has been described with reference to the accompanying drawings, it is not limited to the above embodiments, which are illustrative rather than restrictive; those skilled in the art may make many modifications without departing from the spirit of the invention as set out in the appended claims.

Claims (3)

1. A pollen detection method based on a digital image processing technology is characterized by comprising the following steps:
1) Preprocessing of an image: graying an original image acquired by a microscope to obtain a gray image;
2) Obtaining a connected region: performing edge detection and Hough gradient circle detection on the obtained gray level image to obtain a Hough detection circle center coordinate set, and performing region growth by taking each circle center as a growth seed point to obtain a connected region set;
3) Connected-region feature calculation and screening: calculating the features of each connected region in the set, screening the connected regions by these features, and taking the regions that pass the screening as detected pollen;
the step 3) of calculating and screening the characteristics of the connected regions comprises the following steps:
(1) Calculating the upper, lower, left, and right boundaries of the connected region e_i, and from these boundaries the envelope-rectangle aspect ratio f_i;
(2) Calculating the number of pixel points of the connected region e_i as the connected-region area value s_i;
(3) Calculating the area value s_ci of the enclosing circle of the connected region e_i, the ratio of the connected-region area value s_i to the enclosing-circle area value s_ci giving the circularity feature g_i of the connected region;
(4) Calculating the center coordinate c_wi and radius r_ci of the enclosing circle of the connected region, and the distance from every contour point of the connected region to the enclosing-circle center c_wi; calculating the distance upper limit r_maxi and lower limit r_mini by formulas (3) and (4); the proportion of contour points whose distance to the enclosing-circle center c_wi lies between r_maxi and r_mini being the effective contour ratio h_i;
r_maxi = r_ci * 1.2   (3)
r_mini = r_ci * 0.8   (4)
(5) Radially expanding the connected region e_i to obtain the radially expanded image z_i; in a pollen image the pixels at similar distances from the center differing little in color, while in an impurity image the differences are comparatively large; radially expanding the connected region e_i and using the variance of horizontal line segments of the expanded image z_i to evaluate the color-difference information of the expanded connected region e_i; the expanded image being computed by selecting an expansion center and stretching each circle, from the inside outward, into one line segment of z_i; the pixel value at coordinate (x_z, y_z) of z_i being obtained by substituting into formulas (5) and (6) to get the mapped coordinate (x_s, y_s) in the expanded connected region e_i and taking the mapped pixel's value as the pixel value at (x_z, y_z) of the expanded image, bilinear interpolation being used when the coordinate (x_s, y_s) is non-integer and the approximated value being taken as the mapped pixel value;

x_s = x_z * cos(π * y_z / r_ci)   (5)
y_s = x_z * sin(π * y_z / r_ci)   (6)

after the expanded image is obtained, computing the pixel-value variances of the two line segments with endpoints (r_ci/2, -r_ci), (r_ci/2, r_ci) and with endpoints (r_ci/3, -r_ci), (r_ci/3, r_ci); the mean of the two variances being the average transverse-line fluctuation rate j'_i;
(6) Calculating the corrected average transverse fluctuation rate j_i; since the connected region e_i yields different average transverse fluctuation rates j'_i for different expansion centers, correcting the expansion iteratively; initially taking the enclosing-circle center c_wi as the current expansion center c_zi; computing the average transverse fluctuation rates corresponding to the current expansion center c_zi and to each of its four 4-connected points as expansion center; taking the point with the smallest rate as the corrected current expansion center c_zi; repeating the correction step until the rate of the corrected current expansion center c_zi is smaller than the rates of all four 4-connected points, completing the correction of the current expansion center; after correction, recording the rate corresponding to the current expansion center c_zi as the average transverse fluctuation feature j_i of the connected region;
(7) For each connected region e_i, judge comprehensively from the feature vector k_i = (f_i, s_i, g_i, h_i, j_i) whether it is a pollen image: if the envelope rectangle aspect ratio f_i < 2, the connected-region area 100 < s_i < 1300, the circularity feature g_i > 0.62, the effective contour ratio h_i > 0.6, and the corrected average transverse fluctuation rate j_i < 200, the connected region is judged to be a pollen image;
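The screening rule is a direct conjunction of the five thresholds above, e.g.:

```python
def is_pollen(f, s, g, h, j):
    """Apply the claim's thresholds to the feature vector k_i = (f, s, g, h, j):
    aspect ratio, area, circularity, effective contour ratio, corrected
    average transverse fluctuation rate."""
    return f < 2 and 100 < s < 1300 and g > 0.62 and h > 0.6 and j < 200
```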
(8) For every connected region e_i judged to be a pollen image, mark its envelope rectangle in the original image; the number of connected regions e_i judged to be pollen images is the number of pollen grains detected.
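Step (8) amounts to computing an axis-aligned envelope rectangle for each surviving region and counting them. A sketch, assuming each region is represented as a boolean mask (an interface chosen for this sketch):

```python
import numpy as np

def mark_and_count(region_masks):
    """Return (x_min, y_min, x_max, y_max) envelope rectangles for the regions
    judged to be pollen, plus the pollen count. Drawing the rectangles on the
    original image (e.g. with cv2.rectangle) is left to the caller."""
    rects = []
    for m in region_masks:
        ys, xs = np.nonzero(m)
        rects.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return rects, len(rects)
```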
2. The pollen detection method based on digital image processing technology as claimed in claim 1, wherein the step 1) of preprocessing the image comprises the steps of:
(1) Read and store the image data using the Mat class of the OpenCV computer vision software library; after the image file is read, the original image S comprises three channels R_s, G_s, B_s;
(2) Obtain a grayscale image A by taking a weighted average of the three channels R_s, G_s, B_s of the original image S; the weights corresponding to the R_s, G_s, B_s channels are 0.299, 0.587, and 0.114, respectively.
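A sketch of the weighted-average graying, assuming the H×W×3 B,G,R channel layout that OpenCV's `imread` produces (so the standard luminance weights 0.299/0.587/0.114 for R/G/B appear in B,G,R order):

```python
import numpy as np

def to_gray(bgr):
    """Gray = 0.299*R + 0.587*G + 0.114*B, applied to an HxWx3 uint8 array in
    OpenCV's B,G,R channel order; equivalent to cv2.cvtColor(..., COLOR_BGR2GRAY)."""
    w = np.array([0.114, 0.587, 0.299])   # weights for B, G, R channels
    return (bgr.astype(np.float64) @ w).round().astype(np.uint8)
```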
3. The pollen detection method based on the digital image processing technology as claimed in claim 1, wherein the step 2) of obtaining the connected region comprises the steps of:
(1) Perform Canny edge detection on the grayscale image obtained in step 1). Compute the horizontal gradient T_x and the vertical gradient T_y of each point with the Sobel operator, and from them the overall gradient T of the point. Judge edges with a dual-threshold algorithm, with the following rule: a point whose gradient is greater than the high threshold is judged an edge; a point whose gradient is less than the low threshold is judged a non-edge; a point whose gradient lies between the two thresholds is treated as a connecting point between high-threshold points, and if its neighborhood contains neither a high-threshold point nor another between-threshold point connected to a high-threshold point, it is judged a non-edge point. The high threshold is 100 and the low threshold is 50. After edge detection is finished, an edge image B is obtained;
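A self-contained sketch of this dual-threshold rule (Sobel gradients plus hysteresis growth from high-threshold points); unlike full Canny, this sketch omits non-maximum suppression, and the kernel application is written as explicit shifts to stay dependency-free:

```python
import numpy as np
from collections import deque

def canny_like(gray, low=50, high=100):
    """Gradient > high -> edge; gradient < low -> non-edge; in-between points
    become edges only if connected (directly or via other in-between points)
    to a high-threshold point."""
    g = gray.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel T_x
    ky = kx.T                                                          # Sobel T_y
    h, w = g.shape
    tx, ty = np.zeros_like(g), np.zeros_like(g)
    for dy in (-1, 0, 1):          # apply the 3x3 kernels on the interior
        for dx in (-1, 0, 1):
            sub = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            tx[1:-1, 1:-1] += kx[dy + 1, dx + 1] * sub
            ty[1:-1, 1:-1] += ky[dy + 1, dx + 1] * sub
    t = np.hypot(tx, ty)           # overall gradient T
    strong = t > high
    weak = (t >= low) & ~strong
    edge = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    while q:                       # hysteresis: grow edges into weak neighbors
        y, x = q.popleft()
        for ny in range(max(y - 1, 0), min(y + 2, h)):
            for nx in range(max(x - 1, 0), min(x + 2, w)):
                if weak[ny, nx] and not edge[ny, nx]:
                    edge[ny, nx] = True
                    q.append((ny, nx))
    return edge
```

In practice `cv2.Canny(gray, 50, 100)` performs the full algorithm, including the suppression step this sketch leaves out.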
(2) Perform Hough gradient circle detection on the edge image B. For each edge point on the edge image, vote along the line segment in its gradient vector direction (T_x, T_y) between the minimum radius distance r_min and the maximum radius distance r_max; the coordinates of the two endpoints of the segment are (S_x, S_y) and (E_x, E_y), computed by formula (1) and formula (2):
S_x = x + r_min * T_x / T,  S_y = y + r_min * T_y / T    (1)
E_x = x + r_max * T_x / T,  E_y = y + r_max * T_y / T    (2)

where (x, y) is the coordinate of the edge point and T = √(T_x² + T_y²) is its overall gradient;
Here r_min and r_max are 10 and 150, respectively. For each accumulator coordinate whose vote value is greater than 20, judge whether its vote value is the maximum within the 31 × 31 range centered on that coordinate; if so, the coordinate is judged to be the center coordinate of a Hough detection circle. After voting is finished for all edge points, the center coordinates of n Hough detection circles are obtained, forming the Hough detection circle-center coordinate set
C = {c_1, c_2, …, c_n};
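The voting of formulas (1)-(2) plus the 31 × 31 local-maximum test can be sketched as follows. Per the claim, this sketch votes only along the gradient direction (OpenCV's `HOUGH_GRADIENT` method, by contrast, votes both ways along the gradient line):

```python
import numpy as np

def hough_gradient_centers(edge, tx, ty, r_min=10, r_max=150,
                           vote_thresh=20, win=31):
    """For each edge point, add one vote per radius step along its unit
    gradient direction between r_min and r_max, then keep accumulator cells
    whose vote exceeds vote_thresh and is maximal in a win x win window."""
    h, w = edge.shape
    acc = np.zeros((h, w), dtype=np.int32)
    for y, x in zip(*np.nonzero(edge)):
        t = np.hypot(tx[y, x], ty[y, x])          # overall gradient T
        if t == 0:
            continue
        ux, uy = tx[y, x] / t, ty[y, x] / t       # unit gradient direction
        for r in range(r_min, r_max + 1):         # segment (S_x,S_y)..(E_x,E_y)
            vx = int(round(x + r * ux))
            vy = int(round(y + r * uy))
            if 0 <= vx < w and 0 <= vy < h:
                acc[vy, vx] += 1
    centers, half = [], win // 2
    for y, x in zip(*np.nonzero(acc > vote_thresh)):
        window = acc[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1]
        if acc[y, x] == window.max():             # local maximum -> circle center
            centers.append((int(x), int(y)))
    return centers
```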
(3) For each Hough detection circle-center coordinate c_i in the Hough detection circle-center coordinate set C, construct a pixel point set d_i with d_i = {S(c_i)}, where S(c_i) is the pixel point of the original image S at c_i, and perform region growing in the original image S based on pixel values. Continuously traverse the not-yet-traversed points in the current set and examine the four points in their four connected directions; each of these four points is judged a growth-region point and added to the current set d_i if, for all three channels, the difference between its pixel value and that of the center pixel point S(c_i) is less than 70, and growing then continues from it. Repeat this step until all points in the current set d_i have been judged and no new point is added; growing is then finished, and each Hough detection circle center c_i yields a connected region e_i after growing. The set of all obtained connected regions is recorded as
E = {e_1, e_2, …, e_n}.
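The seeded region growing of step (3) can be sketched with a breadth-first queue; the 70-per-channel tolerance is the claim's threshold, and the boolean-mask return type is a choice made for this sketch:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, diff=70):
    """4-connected region growing in an HxWx3 image from seed point (x, y):
    a neighbor joins the region when each of its three channel values differs
    from the seed pixel's by less than `diff`."""
    h, w = img.shape[:2]
    sx, sy = seed
    seed_px = img[sy, sx].astype(np.int32)
    region = np.zeros((h, w), dtype=bool)
    region[sy, sx] = True
    q = deque([(sx, sy)])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not region[ny, nx]:
                if np.all(np.abs(img[ny, nx].astype(np.int32) - seed_px) < diff):
                    region[ny, nx] = True     # growth-region point: keep growing
                    q.append((nx, ny))
    return region
```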
CN202110008965.5A 2021-01-05 2021-01-05 Pollen detection method based on digital image processing technology Active CN112669304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110008965.5A CN112669304B (en) 2021-01-05 2021-01-05 Pollen detection method based on digital image processing technology

Publications (2)

Publication Number Publication Date
CN112669304A CN112669304A (en) 2021-04-16
CN112669304B true CN112669304B (en) 2023-02-28

Family

ID=75413011

Country Status (1)

Country Link
CN (1) CN112669304B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114838664B (en) * 2022-07-04 2022-09-23 江西农业大学 In-situ pileus size measuring method based on black-skin termitomyces albuminosus
CN115202026A (en) * 2022-07-08 2022-10-18 天津大学 Collection system suitable for air-borne pollen slide microscopic image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295588A (en) * 2016-08-17 2017-01-04 电子科技大学 The automatic identifying method of leukocyte in a kind of leucorrhea micro-image
CN108229579A (en) * 2018-01-26 2018-06-29 南京信息工程大学 Pollen image classification recognition methods based on robust invariable rotary textural characteristics
CN111583175A (en) * 2020-03-30 2020-08-25 山东浪潮通软信息科技有限公司 Erythrocyte image detection method, erythrocyte image detection equipment and erythrocyte image detection medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8011133B2 (en) * 2007-06-27 2011-09-06 Pioneer Hi-Bred International, Inc. Method and apparatus of high-throughput pollen extraction, counting, and use of counted pollen for characterizing a plant


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"A Computational Approach for Recognition of Electronic Microscope Plant Pollen Images";Hao Tian et al.;《2008 Congress on Image and Signal Processing》;20080716;full text *
"Research on Segmentation Methods for Adhesive Circular Cell Images";Cai Jing;《China Masters' Theses Full-text Database, Information Science and Technology Series》;20110915;Chapters 4-5 *
"Development of a Transgenic Crop Pollen Image Acquisition System Based on ARM-Linux";Hu Hanfeng;《China Masters' Theses Full-text Database, Agricultural Science and Technology Series》;20111015;Sections 1.3, 3.2, 6.2 *


Similar Documents

Publication Publication Date Title
CN112669304B (en) Pollen detection method based on digital image processing technology
CN109741356B (en) Sub-pixel edge detection method and system
CN111968144B (en) Image edge point acquisition method and device
CN113362306B (en) Packaged chip defect detection method based on deep learning
CN108491786B (en) Face detection method based on hierarchical network and cluster merging
Garg et al. Unsupervised curvature-based retinal vessel segmentation
CN104077577A (en) Trademark detection method based on convolutional neural network
CN112862760A (en) Bearing outer ring surface defect area detection method
CN108186051B (en) Image processing method and system for automatically measuring double-apical-diameter length of fetus from ultrasonic image
CN110414308B (en) Target identification method for dynamic foreign matters on power transmission line
CN112907460B (en) Remote sensing image enhancement method
WO2022198898A1 (en) Picture classification method and apparatus, and device
Cheng et al. Superpixel classification based optic cup segmentation
CN106296763A (en) A kind of metal material Industry CT Image Quality method for quickly correcting
CN108961301A (en) It is a kind of based on the unsupervised Chaetoceros image partition method classified pixel-by-pixel
CN112017109B (en) Online ferrographic video image bubble elimination method
CN105787912A (en) Classification-based step type edge sub pixel localization method
CN115375629A (en) Method for detecting line defect and extracting defect information in LCD screen
CN114119437B (en) GMS-based image stitching method for improving distortion of moving object
CN108447038A (en) A kind of mesh denoising method based on non local full variation operator
CN112017221B (en) Multi-modal image registration method, device and equipment based on scale space
CN113610041A (en) Reading identification method and device for pointer instrument
CN107240093B (en) Automatic diagnosis method for cancer cells
CN113298725A (en) Correction method for superposition error of ship icon image
CN115994870A (en) Image processing method for enhancing denoising

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant