CN113298768A - Cotton detection, segmentation and counting method and system


Info

Publication number
CN113298768A
Authority
CN
China
Prior art keywords
cotton
area
region
splitting
point
Prior art date
Legal status
Granted
Application number
CN202110551755.0A
Other languages
Chinese (zh)
Other versions
CN113298768B (en)
Inventor
杨公平
张岩
孙启玉
李广阵
褚德峰
张同心
Current Assignee
Shandong Fengshi Information Technology Co ltd
Shandong University
Original Assignee
Shandong Fengshi Information Technology Co ltd
Shandong University
Priority date: 2021-05-20
Filing date: 2021-05-20
Publication date: 2021-08-24
Application filed by Shandong Fengshi Information Technology Co ltd and Shandong University
Priority to CN202110551755.0A
Publication of CN113298768A
Application granted
Publication of CN113298768B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a cotton detection, segmentation and counting method and system, belonging to the technical field of computer vision. The method obtains a mask image matrix of the image background and foreground, initializes a grabCut algorithm with it, and performs segmentation; applies morphological processing and extracts the relevant attributes of each connected domain; merges two connected domains that meet the merging condition into one region; splits any connected domain whose area is larger than a preset first area threshold and whose long-axis to short-axis ratio is larger than a preset ratio threshold into two independent regions; and performs cotton counting on the individual connected domains after splitting. The invention guarantees accuracy while keeping the model lightweight and fast and convenient to deploy; it improves the segmentation effect of the algorithm and thereby the efficiency of the subsequent merging and splitting processes; it avoids repeated operations, which speeds up merging; and, because the splitting process requires no width measurement at bottleneck positions and the detected brightest positions are guaranteed to belong to the two separate regions, it has good robustness.

Description

Cotton detection, segmentation and counting method and system
Technical Field
The invention relates to the technical field of computer vision recognition, in particular to a white cotton detection, segmentation and counting method and system based on color and morphological characteristics.
Background
Cotton is one of the most important economic fiber crops in the world, accounting for nearly 80% of global natural fiber production. White cotton in the mature period is the most important phenotypic trait for predicting cotton fiber yield, and the number of cotton bolls is an important index of fiber yield.
Segmenting, locating and detecting white cotton and counting the bolls helps to better understand the physiological and genetic mechanisms of crop growth and development, serves as an important index for predicting yield potential and evaluating crop condition, and supports timely crop management decisions, thereby maximizing profit by preventing pest-related yield loss and reducing costs.
Traditional yield prediction relies on manual sampling or visual inspection and the experience of the personnel involved; it is prone to error and impractical for evaluating thousands of plots in a plant breeding program. A computer-vision-based system combined with an agricultural robot or an unmanned aerial vehicle can automate the segmentation, localization and detection of cotton, the counting of cotton bolls and the prediction of yield, significantly improving efficiency and reducing human error.
In recent years, many researchers have developed various vision-based computer models covering a wide range of techniques, such as traditional machine vision, machine learning and deep learning. Although these methods have achieved good accuracy in past studies and solved some of the problems, there is still room for improvement in this field of research.
Current traditional methods for cotton segmentation, detection and localization suffer from a low recognition rate and poor robustness, while deep learning methods suffer from large model size and the difficulty of flexible deployment in practical application scenarios.
Disclosure of Invention
The invention aims to provide a cotton detection, segmentation and counting method and system that improve the efficiency of cotton segmentation, detection and localization and the accuracy of counting while keeping the model portable and convenient to deploy, so as to solve at least one of the technical problems in the background art.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a cotton detection, segmentation and counting method, comprising:
combining color features, calculating an h-channel histogram to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm, and segmenting;
performing primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region, and extracting the relevant attribute of each connected domain;
combining the areas, namely combining two connected areas meeting the conditions into one area;
splitting the region into two independent regions for a connected region with the area larger than a preset first area threshold value and the ratio of the long axis to the short axis larger than a preset ratio threshold value;
performing cotton counting on the individual connected domains after splitting.
Preferably, for each cotton image, calculating a color histogram of an h channel of the cotton image, respectively calculating a maximum value on two intervals of the histogram to obtain two peak values, and calculating a minimum value coordinate between the two peak values of the interval;
taking the point of which the pixel value is smaller than the small peak value in the h-channel image as a determined background point; taking the part larger than the large peak value as a determined foreground point; taking the part which is larger than the small peak value and smaller than the minimum value coordinate as a possible foreground point; taking the part which is larger than the minimum coordinate and smaller than the large peak value as a pending background point; marking is completed to obtain a mask matrix;
and then, segmenting by using a grabCut segmentation algorithm, initializing the grabCut algorithm by using the mask matrix obtained in the last step, segmenting, iterating the algorithm for a certain number of times, and obtaining a cotton foreground region after primary segmentation after segmentation is completed.
Preferably, the segmentation graph is searched for the connected domains, the area of each connected domain is calculated, the connected domains with the areas smaller than the second area threshold are deleted, and holes are filled in the connected domains;
and retrieving all connected domains again, respectively obtaining the edge outline and the minimum circumscribed rectangle of each connected domain, calculating the ratio of the length to the width of the minimum circumscribed rectangle of each region as the long-axis to short-axis ratio of the region, and calculating the inclination angle and the midpoint position of each edge of the minimum circumscribed rectangle, as the related attributes of each connected domain.
Preferably, for a circumscribed rectangle of a connected domain, calculating the angle difference between four sides of the circumscribed rectangle and each side of all other circumscribed rectangles;
if the angle difference of the two edges is less than 20 degrees, the two edges are considered to have a parallel relation, and then the distance between the two edges is judged;
if the distance between the two edges is less than 50, the two edges are considered to be close enough; the two edges are then parallel and their distance is very small, so the connected domains in the two minimum circumscribed rectangles are considered to be separated by an object and need to be merged into one region;
two areas needing to be combined are respectively and independently extracted, and the gap between the two areas is connected by adopting closed operation;
the above operations are repeated until no parallel and close edges are detected.
Preferably, when the edges of the two connected domains are compared and detected, the list is adopted to store the information of the two regions and the edges which meet the conditions; and for two connected domains, only one piece of information is maintained in the list, and if the closer edges of the two regions are detected subsequently, the information in the list is updated, so that the two regions are only subjected to one merging operation in one merging, and the repeated operation is avoided.
Preferably, if the area of one connected domain is larger than the first area threshold value and the ratio of the long axis to the short axis is larger than the ratio threshold value of the long axis to the short axis, the area is considered to have the condition of cotton overlapping or adhesion, and the division of the area is required;
wherein:
extracting connected domains needing to be split separately; calculating the distance from each point in the connected domain to the nearest background point by adopting a distance transformation algorithm, and taking the distance as the value of the pixel point;
obtaining a gray level image after distance conversion, performing cyclic threshold segmentation on the gray level image, and increasing the threshold value each time until two connected domains are detected in the segmented image;
calculating the coordinates of the central points of the two connected domains as the centers of the two separated areas after the splitting;
connecting the two central points, comparing the areas of the two connected domains, and shifting the position of the split point toward the region with the larger area;
splitting the connected domain to be split into two independent connected domains along the direction perpendicular to the connecting line by taking the splitting point as a reference point;
and repeating the process until no region is detected whose area is larger than the first area threshold and whose long-axis to short-axis ratio is larger than the ratio threshold.
Preferably, the distance transformation algorithm flow is as follows:
the input picture is a binary image F, the foreground is 1, and the background is 0;
and updating the value of the P point element of the pixel point in the matrix as follows by using a mask L traversing the matrix F from top left, from left to right and from top to bottom:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskL;
wherein D (P, q) is the distance between the point P and the pixel where q is located;
with the mask R traversing the matrix F from bottom right, from right to left, and from bottom to top, the values of the P point elements in the matrix are updated as follows:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskR;
and finally obtaining a matrix updated twice, namely the result of distance transformation.
Preferably, after the steps of region merging and splitting are completed, all connected regions in the image are detected again, the minimum circumscribed rectangle of each connected region is calculated, the minimum circumscribed rectangle frame is drawn by opencv, the detection result of white cotton is presented, and the number of the connected regions is counted and used as the output result of cotton counting.
In a second aspect, the present invention provides a cotton detection, segmentation and counting system, comprising:
the segmentation module is used for calculating an h-channel histogram by combining color features to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm and segmenting;
the extraction module is used for carrying out primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region and extracting the relevant attribute of each connected domain;
the merging module is used for merging the areas, and merging the two connected areas meeting the condition into one area;
the splitting module is used for splitting the region into two independent regions for a connected region, the area of which is larger than a preset first area threshold value and the ratio of the long axis to the short axis of which is larger than a preset ratio threshold value;
and the counting module is used for counting the cotton in the separated connected domains after the splitting.
In a third aspect, the present invention provides a non-transitory computer readable storage medium comprising instructions for performing the cotton detection segmentation counting method as described above.
In a fourth aspect, the invention provides an electronic device comprising a non-transitory computer readable storage medium as described above; and one or more processors capable of executing the instructions of the non-transitory computer-readable storage medium.
The invention has the beneficial effects that: accuracy is guaranteed while the model is kept lightweight and fast and convenient to deploy; the segmentation effect of the algorithm is improved, which further improves the efficiency of the subsequent merging and splitting processes; during region merging, at most one piece of information is maintained for every pair of regions, avoiding repeated operations and speeding up merging; and the splitting process requires no width measurement at bottleneck positions, making it simple and fast, while the subsequent cyclic threshold segmentation ensures that the detected brightest positions belong to the two separate regions, giving better robustness.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a cotton detection, segmentation and counting method according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by way of the drawings are illustrative only and are not to be construed as limiting the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an", "the" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
For the purpose of facilitating an understanding of the present invention, the present invention will be further explained by way of specific embodiments with reference to the accompanying drawings, which are not intended to limit the present invention.
It should be understood by those skilled in the art that the drawings are merely schematic representations of embodiments and that the elements shown in the drawings are not necessarily required to practice the invention.
Example 1
The embodiment 1 of the invention provides a cotton detection, segmentation and counting system, which comprises:
the segmentation module is used for calculating an h-channel histogram by combining color features to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm and segmenting;
the extraction module is used for carrying out primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region and extracting the relevant attribute of each connected domain;
the merging module is used for merging the areas, and merging the two connected areas meeting the condition into one area;
the splitting module is used for splitting the region into two independent regions for a connected region, the area of which is larger than a preset first area threshold value and the ratio of the long axis to the short axis of which is larger than a preset ratio threshold value;
and the counting module is used for counting the cotton in the separated connected domains after the splitting.
In this embodiment 1, the cotton detecting, dividing and counting method is implemented by using the cotton detecting, dividing and counting system, and the method includes:
combining color features, calculating an h-channel histogram to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm, and segmenting;
performing primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region, and extracting the relevant attribute of each connected domain;
combining the areas, namely combining two connected areas meeting the conditions into one area;
splitting the region into two independent regions for a connected region with the area larger than a preset first area threshold value and the ratio of the long axis to the short axis larger than a preset ratio threshold value;
performing cotton counting on the individual connected domains after splitting.
In this embodiment 1, for each cotton image, a color histogram of an h channel is calculated, a maximum value is calculated on two intervals of the histogram respectively to obtain two peak values, and a minimum value coordinate between the two peak values of the intervals is calculated;
taking the point of which the pixel value is smaller than the small peak value in the h-channel image as a determined background point; taking the part larger than the large peak value as a determined foreground point; taking the part which is larger than the small peak value and smaller than the minimum value coordinate as a possible foreground point; taking the part which is larger than the minimum coordinate and smaller than the large peak value as a pending background point; marking is completed to obtain a mask matrix;
and then, segmenting by using a grabCut segmentation algorithm, initializing the grabCut algorithm by using the mask matrix obtained in the last step, segmenting, iterating the algorithm for a certain number of times, and obtaining a cotton foreground region after primary segmentation after segmentation is completed.
In this embodiment 1, the segmentation map is searched for connected domains, the area of each connected domain is calculated, the connected domains with the areas smaller than the second area threshold are deleted, and holes are filled in each connected domain;
and retrieving all connected domains again, respectively obtaining the edge outline and the minimum circumscribed rectangle of each connected domain, calculating the ratio of the length to the width of the minimum circumscribed rectangle of each region as the long-axis to short-axis ratio of the region, and calculating the inclination angle and the midpoint position of each edge of the minimum circumscribed rectangle, as the related attributes of each connected domain.
In this embodiment 1, for a circumscribed rectangle of a connected domain, the angle difference between four sides of the circumscribed rectangle and each side of all other circumscribed rectangles is calculated;
if the angle difference of the two edges is less than 20 degrees, the two edges are considered to have a parallel relation, and then the distance between the two edges is judged;
if the distance between the two edges is less than 50, the two edges are considered to be close enough; the two edges are then parallel and their distance is very small, so the connected domains in the two minimum circumscribed rectangles are considered to be separated by an object and need to be merged into one region;
two areas needing to be combined are respectively and independently extracted, and the gap between the two areas is connected by adopting closed operation;
the above operations are repeated until no parallel and close edges are detected.
In this embodiment 1, when performing comparison detection on the edges of two connected domains, a list is used to store information of two regions and edges meeting conditions; and for two connected domains, only one piece of information is maintained in the list, and if the closer edges of the two regions are detected subsequently, the information in the list is updated, so that the two regions are only subjected to one merging operation in one merging, and the repeated operation is avoided.
In this embodiment 1, if the area of one connected domain is larger than the first area threshold and the ratio of the long axis to the short axis is larger than the ratio threshold of the long axis to the short axis, it is considered that the region has the condition of cotton overlapping or adhesion, and the region needs to be split;
wherein:
extracting connected domains needing to be split separately; calculating the distance from each point in the connected domain to the nearest background point by adopting a distance transformation algorithm, and taking the distance as the value of the pixel point;
obtaining a gray level image after distance conversion, performing cyclic threshold segmentation on the gray level image, and increasing the threshold value each time until two connected domains are detected in the segmented image;
calculating the coordinates of the central points of the two connected domains as the centers of the two separated areas after the splitting;
connecting the two central points, comparing the areas of the two connected domains, and shifting the position of the split point toward the region with the larger area;
splitting the connected domain to be split into two independent connected domains along the direction perpendicular to the connecting line by taking the splitting point as a reference point;
and repeating the process until no region is detected whose area is larger than the first area threshold and whose long-axis to short-axis ratio is larger than the ratio threshold.
In this embodiment 1, the distance transformation algorithm flow is as follows:
the input picture is a binary image F, the foreground is 1, and the background is 0;
and updating the value of the P point element of the pixel point in the matrix as follows by using a mask L traversing the matrix F from top left, from left to right and from top to bottom:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskL
wherein D (P, q) is the distance between the point P and the pixel where q is located;
with the mask R traversing the matrix F from bottom right, from right to left, and from bottom to top, the values of the P point elements in the matrix are updated as follows:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskR
and finally obtaining a matrix updated twice, namely the result of distance transformation.
In this embodiment 1, after the step of region merging and splitting is completed, all connected regions in the image are detected again, the minimum circumscribed rectangle of each connected region is calculated, the minimum circumscribed rectangle frame is drawn by opencv, the detection result of white cotton is presented, and the number of the connected regions is counted as the output result of cotton counting.
Example 2
In order to improve the segmentation and detection effect of white cotton in the mature period under the natural field conditions, this embodiment 2 provides a white cotton segmentation and detection counting method based on color and morphological characteristics, which can further improve the efficiency, speed and robustness of cotton segmentation and detection counting.
As shown in fig. 1, the method described in this embodiment 2 can be roughly divided into 4 steps:
step1, combining color characteristics, calculating an h-channel histogram to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm, and segmenting.
And 2, carrying out primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed small holes in the region, and then extracting the relevant attribute of each connected region.
And 3, combining the areas, namely combining the two connected areas meeting the conditions into one area.
And 4, splitting the region, namely splitting the region with larger area and larger length and minor axis into two separate regions. And finally, drawing and presenting the detection result, and counting the cotton.
In this embodiment 2, step 1 combines color features with grabCut segmentation.
First, several pictures are randomly selected from the cotton pictures, and the white cotton regions and the background regions are manually labeled. The histogram features of the r, g, b and h, s, v color channels of the two kinds of regions are then counted separately. It is found that on the h channel the color histograms of the two kinds of regions present two clearly separated peaks, with the most obvious separation and the smallest overlapping area, so the h-channel color feature is used for the preliminary cotton segmentation.
For each cotton image, the color histogram of its h channel is calculated. The maxima on the ranges 0 to 75 and 75 to 255 of the histogram, i.e. the peaks of the two lobes, are found and their coordinates recorded as m1 and m2 (m1 < m2), and the coordinate of the minimum value on the range m1 to m2 is recorded as v. Points whose pixel value in the h-channel image is smaller than m1 are taken as determined background points and labeled 0; the part larger than m2 is taken as determined foreground points and labeled 1; the part larger than m1 and smaller than v is taken as possible foreground points and labeled 2; and the part larger than v and smaller than m2 is taken as possible background points and labeled 3. After labeling, the mask matrix mask is obtained.
Then segmentation is performed with the grabCut segmentation algorithm: the grabCut algorithm is initialized with the mask matrix obtained in the previous step, the number of iterations is set to 10, and after segmentation is completed the preliminarily segmented cotton foreground region is obtained.
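For illustration only, a minimal Python/OpenCV sketch of this step is given below (it is not part of the original disclosure). The function name and the exact peak/valley search are assumptions based on the description above; the 0-75 / 75-255 split follows the embodiment, OpenCV's GC_* mask labels are used in place of the raw labels 0-3, and note that OpenCV's 8-bit h channel ranges over 0-179 rather than 0-255.

```python
import cv2
import numpy as np

def grabcut_from_h_channel(bgr, iters=10):
    """Sketch of step 1: build a grabCut mask from the h-channel histogram
    and run mask-initialised grabCut (10 iterations, as in the embodiment)."""
    h = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    hist = cv2.calcHist([h], [0], None, [256], [0, 256]).ravel()
    m1 = int(np.argmax(hist[:75]))                 # peak of the lower lobe
    m2 = 75 + int(np.argmax(hist[75:]))            # peak of the upper lobe
    v = m1 + int(np.argmin(hist[m1:m2 + 1]))       # valley between the two peaks

    mask = np.full(h.shape, cv2.GC_PR_BGD, np.uint8)   # default: possible background
    mask[h < m1] = cv2.GC_BGD                          # determined background
    mask[h > m2] = cv2.GC_FGD                          # determined foreground
    mask[(h >= m1) & (h < v)] = cv2.GC_PR_FGD          # possible foreground

    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return fg.astype(np.uint8)                         # preliminary cotton foreground
```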
In this embodiment 2, step 2 performs the preliminary morphological processing and the extraction of connected-domain attributes.
The connected regions of the segmentation map obtained in the previous step are retrieved and the area of each connected region is calculated; regions whose area is below the set area threshold of 800 are deleted. Fine holes in each connected region are then filled so that the foreground regions become more complete and smooth.
Then all connected regions are retrieved again, and the edge contour and the minimum circumscribed rectangle of each connected region are obtained. For the minimum circumscribed rectangle of each region, its length-width ratio is calculated as the long-axis to short-axis ratio e of the region, and the inclination angle and the midpoint position cen of each side of the minimum circumscribed rectangle are calculated, where the inclination angle is obtained from the slope k of the side as follows:
angle = tan⁻¹(k) × (180° / π).
In this embodiment 2, step 3 performs region merging:
for a circumscribed rectangle of a certain area, calculating the angle difference value between four sides of the circumscribed rectangle and each side of all other circumscribed rectangles, wherein the calculation of the angle difference is represented as follows:
diffa=|anglem-anglen|,m∈boxi,n∈boxi
if diffaIf the angle is less than 20 degrees, the two edges are considered to have a parallel relation, the distance between the two edges is further judged, and when the distance between the two edges is calculated, a method for calculating the Euclidean distance of the middle point of the line segment is adopted, and the method is represented as follows:
Figure BDA0003075762980000121
if diffd is less than 50, the distance between the two sides is considered to be close enough, the two sides are parallel at the moment, and the distance between the two sides is very close, the connected areas in the two minimum circumscribed rectangles are considered to be separated by objects such as stalks, so that fine gaps exist during the division, and the two minimum circumscribed rectangles need to be combined into one area.
In this embodiment 2, when the edges of two regions are compared, a list is used to store the information of the two regions and of the qualifying edges, to be processed together later. For any two regions, only one entry is maintained in the list; if a closer pair of edges between the same two regions is detected later, the entry in the list is updated. This ensures that each pair of regions undergoes only one merging operation in one merging pass and avoids repeated operations.
The two regions to be merged are extracted separately so that the subsequent operations do not affect other regions. Because the two regions are adjacent and very close, the gap between them is bridged with a closing operation, whose kernel size is set to 16 in the invention. The above operations are repeated until no parallel and close edges are detected.
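The following sketch illustrates one possible implementation of this merging step under the thresholds stated above (20°, 50 pixels, kernel size 16). It assumes the region attributes produced by the earlier sketch and performs a single merging pass, whereas the embodiment repeats the pass until no qualifying edges remain.

```python
def merge_separated_regions(mask, regions, angle_thr=20.0, dist_thr=50.0, ksize=16):
    """Sketch of step 3: find pairs of regions with near-parallel, close
    bounding-rectangle edges, keep at most one entry per pair, then bridge
    each pair's gap with a closing operation."""
    best = {}                                            # (i, j) -> closest edge distance seen
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            for a1, c1 in regions[i]["edges"]:
                for a2, c2 in regions[j]["edges"]:
                    diff_a = abs(a1 - a2) % 180.0
                    diff_a = min(diff_a, 180.0 - diff_a)  # treat 0 and 180 deg as parallel
                    if diff_a >= angle_thr:
                        continue
                    diff_d = float(np.hypot(c1[0] - c2[0], c1[1] - c2[1]))
                    if diff_d < dist_thr and diff_d < best.get((i, j), np.inf):
                        best[(i, j)] = diff_d             # one entry per region pair
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    for i, j in best:
        pair = np.zeros_like(mask)
        cv2.drawContours(pair, [regions[i]["contour"], regions[j]["contour"]], -1, 255, cv2.FILLED)
        closed = cv2.morphologyEx(pair, cv2.MORPH_CLOSE, kernel)  # bridge the fine gap
        mask = cv2.bitwise_or(mask, closed)                       # write the bridged pair back
    return mask
```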
In this embodiment 2, step 4 performs region splitting.
If the area of a connected domain is greater than 8500 and its long-axis to short-axis ratio is greater than 1.5, the region is considered to contain overlapping or adhering cotton and needs to be split. The connected domain S to be split is first extracted separately for the splitting operation, so that other regions are not affected.
When two cotton regions adhere, the boundary between them usually forms a bottleneck, i.e. the region is narrower there than elsewhere. Therefore, in this embodiment 2, a distance transformation algorithm is used to calculate, for each point in the region, the distance to the nearest background point, which is taken as the value of that pixel. The distance transformation algorithm is as follows:
the Step1 input picture is a binary image F, the foreground is 1, and the background is 0;
step2 uses mask L to traverse matrix F from top left, left to right, and top to bottom, and updates the value of the pixel point P element in the matrix as follows:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskL
where D(P, q) is the distance between the pixels at points P and q; the Euclidean distance is used in this embodiment 2.
Step 3: traverse the matrix F with mask R from the bottom-right corner, from right to left and from bottom to top, updating the values of the P elements in the matrix as follows:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskR
step4 finally obtains a matrix updated twice, namely the result of distance transformation.
The distance transformation yields a gray-scale map that usually shows two bright central spots with dark edge areas. A cyclic threshold segmentation is then applied to the gray-scale map, raising the threshold each time, until two connected components s_area_1 and s_area_2 are detected in the segmented image. The coordinates of the center points of the two connected components, center1 and center2, are taken as the centers of the two separated regions after splitting; each center is computed as the centroid (the mean of the pixel coordinates) of the corresponding connected component.
The two center points are connected, the areas sa1 and sa2 of the two connected components are compared, and the split point is shifted slightly toward the region with the larger area; the split point is calculated as:
coords = (x_c1 × 0.75 + x_c2 × 0.35, y_c1 × 0.75 + y_c2 × 0.35),
where (x_c1, y_c1) is the center of the component with the larger area and (x_c2, y_c2) that of the smaller one.
the connected region S is split into two separate connected regions along a direction perpendicular to the connecting line with the split point as a reference point.
The above process is repeated until no region with an area greater than 8500 and a ratio of the major axis to the minor axis greater than 1.5 is detected.
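The sketch below strings the splitting sub-steps together for a single extracted connected domain; the function name is illustrative, the 0.75/0.35 weights are taken verbatim from the text, and realizing the perpendicular cut by erasing a thin line through the split point is an assumption about how the split is drawn.

```python
def split_adhered_region(region_mask):
    """Sketch of step 4 for one extracted connected domain S: distance transform,
    raise a threshold until two cores appear, then cut perpendicular to the
    line joining their centres."""
    dist = cv2.distanceTransform(region_mask, cv2.DIST_L2, 5)
    dist = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    cores = None
    for t in range(1, 255):                       # cyclic threshold segmentation
        _, th = cv2.threshold(dist, t, 255, cv2.THRESH_BINARY)
        n, _, stats, cents = cv2.connectedComponentsWithStats(th, 8)
        if n - 1 == 2:                            # exactly two bright cores found
            cores = (stats, cents)
            break
    if cores is None:
        return region_mask                        # no split point found; leave untouched

    stats, cents = cores
    c1, c2 = cents[1], cents[2]
    if stats[2, cv2.CC_STAT_AREA] > stats[1, cv2.CC_STAT_AREA]:
        c1, c2 = c2, c1                           # give the larger core the 0.75 weight
    sx = c1[0] * 0.75 + c2[0] * 0.35              # split-point coordinates (weights from the text)
    sy = c1[1] * 0.75 + c2[1] * 0.35

    d = np.array([c2[0] - c1[0], c2[1] - c1[1]], dtype=float)
    perp = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-6)   # perpendicular direction
    p1 = (int(sx + perp[0] * 1e4), int(sy + perp[1] * 1e4))
    p2 = (int(sx - perp[0] * 1e4), int(sy - perp[1] * 1e4))
    out = region_mask.copy()
    cv2.line(out, p1, p2, 0, 3)                   # erase a thin band as the cut
    return out
```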
And after the steps of region merging and splitting are completed, re-detecting all connected regions in the image, calculating the minimum external rectangle of each connected region, drawing a minimum external rectangle frame by using opencv, presenting the detection result of the white cotton, and counting the number of the connected regions to be used as the output result of cotton counting.
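A short sketch of this final drawing and counting step, under the same assumptions as the previous sketches:

```python
def draw_and_count(image, mask):
    """Sketch of the final step: box every remaining connected region with its
    minimum circumscribed rectangle and report the region count as the boll count."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = image.copy()
    for c in contours:
        box = cv2.boxPoints(cv2.minAreaRect(c)).astype(np.int32)
        cv2.drawContours(out, [box], 0, (0, 0, 255), 2)   # red minimum-area rectangle
    return out, len(contours)                             # detection image, cotton count
```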
Example 3
Embodiment 3 of the present invention provides a non-transitory computer-readable storage medium including instructions for executing a cotton detection segmentation counting method, the method including:
combining color features, calculating an h-channel histogram to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm, and segmenting;
performing primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region, and extracting the relevant attribute of each connected domain;
combining the areas, namely combining two connected areas meeting the conditions into one area;
splitting the region into two independent regions for a connected region with the area larger than a preset first area threshold value and the ratio of the long axis to the short axis larger than a preset ratio threshold value;
performing cotton counting on the individual connected domains after splitting.
Example 4
Embodiment 4 of the present invention provides an electronic device, including a non-transitory computer-readable storage medium; and one or more processors capable of executing the instructions of the non-transitory computer-readable storage medium. The non-transitory computer readable storage medium includes instructions for performing a cotton detection segmentation counting method, the method comprising:
combining color features, calculating an h-channel histogram to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm, and segmenting;
performing primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region, and extracting the relevant attribute of each connected domain;
combining the areas, namely combining two connected areas meeting the conditions into one area;
splitting the region into two independent regions for a connected region with the area larger than a preset first area threshold value and the ratio of the long axis to the short axis larger than a preset ratio threshold value;
performing cotton counting on the individual connected domains after splitting.
In summary, the cotton detection, segmentation and counting method and system provided by the embodiments of the invention segment, detect and locate mature white cotton based on color and morphological features combined with the grabCut segmentation algorithm, and finally count the cotton according to the detection result. Compared with current traditional methods and deep neural network methods, the model is lightweight and fast and convenient to deploy while accuracy is guaranteed. Before segmentation with the grabCut algorithm, a mask map is computed from the h-channel color histogram and used to initialize grabCut, which improves the segmentation effect of the algorithm and thereby the efficiency of the subsequent merging and splitting processes. During region merging, at most one piece of information is maintained for every pair of regions, which avoids repeated operations and speeds up merging. The splitting process is based on a distance transformation algorithm and requires no width measurement at bottleneck positions, making it simple and fast, and the subsequent cyclic threshold segmentation ensures that the detected brightest positions belong to the two separate regions, giving better robustness.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to the specific embodiments shown in the drawings, it is not intended to limit the scope of the present disclosure, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive faculty based on the technical solutions disclosed in the present disclosure.

Claims (10)

1. A cotton detection, segmentation and counting method is characterized by comprising the following steps:
combining color features, calculating an h-channel histogram to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm, and segmenting;
performing primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region, and extracting the relevant attribute of each connected domain;
combining the areas, namely combining two connected areas meeting the conditions into one area;
splitting the region into two independent regions for a connected region with the area larger than a preset first area threshold value and the ratio of the long axis to the short axis larger than a preset ratio threshold value;
performing cotton counting on the individual connected domains after splitting.
2. The cotton detection division counting method according to claim 1, characterized in that:
for each cotton image, calculating a color histogram of an h channel of the cotton image, respectively calculating a maximum value on two intervals of the histogram to obtain two peak values, and calculating a minimum value coordinate between the two peak values of the interval;
taking the point of which the pixel value is smaller than the small peak value in the h-channel image as a determined background point; taking the part larger than the large peak value as a determined foreground point; taking the part which is larger than the small peak value and smaller than the minimum value coordinate as a possible foreground point; taking the part which is larger than the minimum coordinate and smaller than the large peak value as a pending background point; marking is completed to obtain a mask matrix;
and then, segmenting by using a grabCut segmentation algorithm, initializing the grabCut algorithm by using the mask matrix obtained in the last step, segmenting, iterating the algorithm for a certain number of times, and obtaining a cotton foreground region after primary segmentation after segmentation is completed.
3. The cotton detection division counting method according to claim 2, characterized in that:
searching the connected domains of the segmentation graph, respectively calculating the area of each connected domain, deleting the connected domains with the areas smaller than a second area threshold value, and filling holes in each connected domain;
and retrieving all connected domains again, respectively obtaining the edge outline and the minimum circumscribed rectangle of each connected domain, calculating the ratio of the length to the width of the minimum circumscribed rectangle of each region as the long-axis to short-axis ratio of the region, and calculating the inclination angle and the midpoint position of each edge of the minimum circumscribed rectangle, as the related attributes of each connected domain.
4. The cotton detection division counting method according to claim 3, characterized in that:
calculating the angle difference between the four sides of a circumscribed rectangle of a connected domain and each side of all other circumscribed rectangles;
if the angle difference of the two edges is less than 20 degrees, the two edges are considered to have a parallel relation, and then the distance between the two edges is judged;
if the distance between the two edges is less than 50, the two edges are considered to be close enough; the two edges are then parallel and their distance is very small, so the connected domains in the two minimum circumscribed rectangles are considered to be separated by an object and need to be merged into one region;
two areas needing to be combined are respectively and independently extracted, and the gap between the two areas is connected by adopting closed operation;
the above operations are repeated until no parallel and close edges are detected.
5. The cotton detection division counting method according to claim 4, characterized in that:
when the edges of the two connected domains are compared and detected, the information of the two regions and the edges which meet the conditions is stored by adopting a list; and for two connected domains, only one piece of information is maintained in the list, and if the closer edges of the two regions are detected subsequently, the information in the list is updated, so that the two regions are only subjected to one merging operation in one merging, and the repeated operation is avoided.
6. The cotton detection division counting method according to claim 5, characterized in that:
if the area of one connected domain is larger than the first area threshold value and the ratio of the long axis to the short axis is larger than the ratio threshold value of the long axis to the short axis, the area is considered to have the condition of cotton overlapping or adhesion, and the division of the area is required;
wherein:
extracting connected domains needing to be split separately; calculating the distance from each point in the connected domain to the nearest background point by adopting a distance transformation algorithm, and taking the distance as the value of the pixel point;
obtaining a gray level image after distance conversion, performing cyclic threshold segmentation on the gray level image, and increasing the threshold value each time until two connected domains are detected in the segmented image;
calculating the coordinates of the central points of the two connected domains as the centers of the two separated areas after the splitting;
connecting the two central points, comparing the areas of the two connected domains, and shifting the position of the split point toward the region with the larger area;
splitting the connected domain to be split into two independent connected domains along the direction perpendicular to the connecting line by taking the splitting point as a reference point;
and repeating the process until no region is detected whose area is larger than the first area threshold and whose long-axis to short-axis ratio is larger than the ratio threshold.
7. The cotton detection division counting method according to claim 6, wherein:
the distance transformation algorithm flow is as follows:
the input picture is a binary image F, the foreground is 1, and the background is 0;
and updating the value of the P point element of the pixel point in the matrix as follows by using a mask L traversing the matrix F from top left, from left to right and from top to bottom:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskL;
wherein D (P, q) is the distance between the point P and the pixel where q is located;
with the mask R traversing the matrix F from bottom right, from right to left, and from bottom to top, the values of the P point elements in the matrix are updated as follows:
F(P)=min{F(P),D(P,q)+F(q)},P∈F,q∈maskR;
and finally obtaining a matrix updated twice, namely the result of distance transformation.
8. The cotton detection division counting method according to claim 7, wherein:
and after the steps of region merging and splitting are completed, re-detecting all connected regions in the image, calculating the minimum external rectangle of each connected region, drawing a minimum external rectangle frame by using opencv, presenting the detection result of the white cotton, and counting the number of the connected regions to be used as the output result of cotton counting.
9. A cotton testing, segmenting and counting system, comprising:
the segmentation module is used for calculating an h-channel histogram by combining color features to obtain a mask image matrix of an image background and a foreground, initializing a grabCut algorithm and segmenting;
the extraction module is used for carrying out primary morphological processing on the segmented picture, filtering out a region with a smaller area, filling closed holes in the region and extracting the relevant attribute of each connected domain;
the merging module is used for merging the areas, and merging the two connected areas meeting the condition into one area;
the splitting module is used for splitting the region into two independent regions for a connected region, the area of which is larger than a preset first area threshold value and the ratio of the long axis to the short axis of which is larger than a preset ratio threshold value;
and the counting module is used for counting the cotton in the separated connected domains after the splitting.
10. An electronic device comprising a non-transitory computer-readable storage medium; and one or more processors capable of executing the instructions of the non-transitory computer-readable storage medium; wherein the non-transitory computer readable storage medium includes instructions for performing the cotton detection segment count method of any of claims 1-8.
CN202110551755.0A 2021-05-20 2021-05-20 Cotton detection, segmentation and counting method and system Active CN113298768B (en)

Priority Applications (1)

Application Number: CN202110551755.0A (granted as CN113298768B) | Priority Date: 2021-05-20 | Filing Date: 2021-05-20 | Title: Cotton detection, segmentation and counting method and system

Applications Claiming Priority (1)

Application Number: CN202110551755.0A (granted as CN113298768B) | Priority Date: 2021-05-20 | Filing Date: 2021-05-20 | Title: Cotton detection, segmentation and counting method and system

Publications (2)

Publication Number: CN113298768A | Publication Date: 2021-08-24
Publication Number: CN113298768B | Publication Date: 2022-11-08

Family

ID=77323125

Family Applications (1)

Application Number: CN202110551755.0A (Active, granted as CN113298768B) | Title: Cotton detection, segmentation and counting method and system | Priority Date: 2021-05-20 | Filing Date: 2021-05-20

Country Status (1)

Country Link
CN (1) CN113298768B (en)


Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441721A (en) * 2008-11-28 2009-05-27 江苏大学 Device and method for counting overlapped circular particulate matter
CN101980249A (en) * 2010-11-12 2011-02-23 中国气象局气象探测中心 Automatic observation method and device for crop development and growth
CN102129576A (en) * 2011-02-28 2011-07-20 西安电子科技大学 Method for extracting duty ratio parameter of all-sky aurora image
CN102915530A (en) * 2011-08-01 2013-02-06 佳能株式会社 Method and device for segmentation of input image
CN102509086A (en) * 2011-11-22 2012-06-20 西安理工大学 Pedestrian object detection method based on object posture projection and multi-features fusion
CN104112153A (en) * 2014-07-17 2014-10-22 上海透云物联网科技有限公司 Method for bar code recognition based on mobile terminal and system thereof
CN104751187A (en) * 2015-04-14 2015-07-01 山西科达自控股份有限公司 Automatic meter-reading image recognition method
CN106295789A (en) * 2015-06-10 2017-01-04 浙江托普云农科技股份有限公司 A kind of crop seed method of counting based on image procossing
US20170228826A1 (en) * 2016-02-08 2017-08-10 Apdn (B.V.I.) Inc. Identifying marked articles in the international stream of commerce
CN106251336A (en) * 2016-07-20 2016-12-21 南方电网科学研究院有限责任公司 A kind of method by USFPF feature identification fault wire jumper yoke plate
CN107220647A (en) * 2017-06-05 2017-09-29 中国农业大学 Crop location of the core method and system under a kind of blade crossing condition
CN109472221A (en) * 2018-10-25 2019-03-15 辽宁工业大学 A kind of image text detection method based on stroke width transformation
CN110472598A (en) * 2019-08-20 2019-11-19 齐鲁工业大学 SVM machine pick cotton flower based on provincial characteristics contains miscellaneous image partition method and system
CN110490861A (en) * 2019-08-22 2019-11-22 石河子大学 A kind of recognition methods and system of the aphid on yellow plate
CN111627059A (en) * 2020-05-28 2020-09-04 桂林市思奇通信设备有限公司 Method for positioning center point position of cotton blade
CN111738159A (en) * 2020-06-23 2020-10-02 桂林市思奇通信设备有限公司 Cotton terminal bud positioning method based on vector calibration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shangpeng Sun et al.: "Image processing algorithms for infield single cotton boll counting and yield prediction", Computers and Electronics in Agriculture *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294099A (en) * 2022-09-26 2022-11-04 南通宝丽金属科技有限公司 Method and system for detecting hairline defect in steel plate rolling process

Also Published As

Publication Number: CN113298768B (en) | Publication Date: 2022-11-08

Similar Documents

Publication Publication Date Title
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
Gai et al. A detection algorithm for cherry fruits based on the improved YOLO-v4 model
Tian et al. Apple detection during different growth stages in orchards using the improved YOLO-V3 model
Maheswari et al. Intelligent fruit yield estimation for orchards using deep learning based semantic segmentation techniques—a review
CN106875406B (en) Image-guided video semantic object segmentation method and device
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
Anjna et al. Review of image segmentation technique
CN110992381B (en) Moving object background segmentation method based on improved Vibe+ algorithm
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
Li et al. An overlapping-free leaf segmentation method for plant point clouds
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
Chen et al. Citrus fruits maturity detection in natural environments based on convolutional neural networks and visual saliency map
Lv et al. A visual identification method for the apple growth forms in the orchard
CN110176024A (en) Method, apparatus, equipment and the storage medium that target is detected in video
WO2020029915A1 (en) Artificial intelligence-based device and method for tongue image splitting in traditional chinese medicine, and storage medium
CN110222582A (en) A kind of image processing method and camera
CN113298768B (en) Cotton detection, segmentation and counting method and system
Jia et al. Accurate segmentation of green fruit based on optimized mask RCNN application in complex orchard
CN113313692B (en) Automatic banana young plant identification and counting method based on aerial visible light image
CN111191531A (en) Rapid pedestrian detection method and system
CN108664968B (en) Unsupervised text positioning method based on text selection model
CN112446417B (en) Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
Lin et al. A novel approach for estimating the flowering rate of litchi based on deep learning and UAV images
CN113705579A (en) Automatic image annotation method driven by visual saliency
KR102283452B1 (en) Method and apparatus for disease classification of plant leafs

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant