CN112183230A - Identification and central point positioning method for pears in natural pear orchard environment - Google Patents
- Publication number
- CN112183230A CN112183230A CN202010938804.1A CN202010938804A CN112183230A CN 112183230 A CN112183230 A CN 112183230A CN 202010938804 A CN202010938804 A CN 202010938804A CN 112183230 A CN112183230 A CN 112183230A
- Authority
- CN
- China
- Prior art keywords
- pear
- image
- area
- images
- convex hull
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/10 — Scenes; scene-specific elements; terrestrial scenes
- G06F18/24 — Pattern recognition; classification techniques
- G06T7/11 — Image analysis; region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/187 — Segmentation involving region growing, region merging or connected component labelling
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V10/267 — Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/507 — Summing image-intensity values; histogram projection analysis
- G06T2207/10004 — Image acquisition modality: still image; photographic image
- G06T2207/10024 — Image acquisition modality: color image
Abstract
The invention discloses a method for identifying pears and locating their center points in a natural pear orchard environment. An RGB-D binocular camera captures RGB images of the pears, which are converted to the HSV color space; 10 template images are read, their histograms computed and normalized; the pear regions in the image are identified by a back-projection algorithm driven by the template histogram information; the identified image is refined by hole filling, morphological processing and a convex-hull operation, and overlapped pears are segmented using the concave parts obtained by subtracting the original region from the convex hull; candidate pear regions are then screened by area and circle-area ratio; finally, ellipse fitting and center-point calculation are performed on the screened regions. The algorithm is efficient and adapts well to the environment, quickly identifying pears and locating their center points with high final accuracy.
Description
Technical Field
The invention relates to the field of picking machinery, in particular to image recognition and positioning algorithms for fruit and vegetable picking robots, and specifically to a method for identifying pears and locating their center points in a natural pear orchard environment.
Background
In China, pear cultivation is very mature and annual yield is huge. However, most pear picking is still carried out manually: picking efficiency is low and labor costs are high. Moreover, pears must be picked within a short window after ripening, so the workers' workload is intense, and the height of some pear trees makes the work dangerous. With continuing advances in automation technology, many research institutes and companies at home and abroad have begun developing different types of fruit and vegetable picking robots to reduce picking costs and improve picking efficiency, and the pear picking robot has become one such direction.
Research on a pear picking robot must first solve the problem of machine identification of the pears, but the growing environment in a natural pear orchard is extremely complex. First, illumination changes in the orchard have the greatest impact on visual identification: over-bright illumination can prevent pears from being identified correctly. Second, occlusion by leaves, branches and other interfering objects affects the completeness of fruit identification. Finally, overlap between fruits hinders the machine's identification and positioning of individual pears. How to rapidly identify and locate a single complete pear is therefore one of the key problems the picking robot urgently needs to solve.
Disclosure of Invention
The invention aims to overcome the problems of the prior art by providing a method for identifying pears and locating their center points in a natural pear orchard environment; the method quickly identifies pears and locates their center points, with high algorithmic efficiency, strong environmental adaptability and high final accuracy.
In order to realize the purpose of the invention, the following invention conception is adopted:
First, a picture of a pear on a pear tree is taken by a camera and converted to the HSV color space. Template pictures containing pear features are then read and their histograms calculated. The histogram of the picture to be identified is compared with the template histograms by a back-projection algorithm, thereby finding the regions in the picture containing pear features. Hole filling, morphological processing and a convex-hull operation then fill in the unrecognized and occluded parts of each pear and eliminate noise, after which regions that do not match experimentally determined pear criteria are removed. Finally, ellipse fitting is performed on each identified region and the coordinates of its center point are calculated.
According to the inventive concept, the invention adopts the following technical scheme:
a method for identifying and positioning a central point of a pear under a natural pear garden environment is characterized by comprising the following specific operation steps:
a. collecting images of the pears to be picked: the mechanical arm moves to a designated area and a program controls the binocular camera to shoot RGB color images of the pears on the pear trees and convert them into HSV color images; compared with the RGB color space, the HSV color space is more intuitive, closer to human perception of color, and better suited to image processing;
b. calculating a histogram of the template image: reading 10 RGB template images containing pear characteristics, converting the images from RGB color space images into HSV color space images, calculating histograms of the images, and performing normalization processing, so that the calculation amount of a computer can be reduced, and the image recognition speed is improved;
c. identification of pear regions: using a back-projection algorithm, identify the pear regions in the image obtained in step a via the histogram information of the template images obtained in step b;
d. processing of the identified regions: using a hole-filling algorithm, remove the unidentified holes in the image obtained in step c; remove fine noise points with a morphological opening; remove the occlusion of branches and leaves with a convex-hull operation; and segment the overlapped pears using the part obtained by subtracting the original region from the convex hull, as specified by the overlapped-pear segmentation in the detailed embodiments;
e. screening of pear regions: screen the regions obtained in step d; a region whose area is smaller or larger than the set values, or whose circle-area ratio is below the set value, is treated as a non-pear region and excluded;
f. contour fitting and center-point calculation: perform ellipse fitting on the screened regions and calculate the center-point coordinates (X, Y) of each pear region.
Preferably, the specific method of step a above:
a-1. control the camera to shoot an RGB color image and split the three-color-channel image into R, G, B single-channel images, i.e. decompose each pixel by color into R, G, B channels, each channel graded 0-255 by the brightness of the color point (the same principle as a gray value);
a-2. convert each pixel of the image into H, S, V single-channel values and combine the three single channels into an HSV image, where H, S, V denote hue (H), saturation (S) and value/brightness (V) respectively. First, R′, G′, B′ are defined as R′ = R/255, G′ = G/255, B′ = B/255, with max = max(R′, G′, B′) and min = min(R′, G′, B′).
The H channel value of each pixel falls into four cases:
1) when max(R′, G′, B′) = min(R′, G′, B′), H = 0°;
2) when max(R′, G′, B′) = R′, H = 60° × (((G′ − B′)/(max − min)) mod 6);
3) when max(R′, G′, B′) = G′, H = 60° × ((B′ − R′)/(max − min) + 2);
4) when max(R′, G′, B′) = B′, H = 60° × ((R′ − G′)/(max − min) + 4).
The S channel value of each pixel falls into two cases:
1) when max(R′, G′, B′) = 0, S = 0;
2) when max(R′, G′, B′) ≠ 0, S = (max − min)/max.
The V channel value of each pixel is: V = max(R′, G′, B′).
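The four H cases and two S cases above can be condensed into a short per-pixel sketch (a minimal Python rendering of this standard RGB-to-HSV conversion; the function name and the 0-1 output scale for S and V are illustrative assumptions, not the patent's):

```python
def rgb_to_hsv_pixel(r, g, b):
    """Convert one 8-bit RGB pixel to HSV following the per-channel
    cases of step a-2 (H in degrees, S and V in [0, 1])."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0   # R', G', B'
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)
    delta = cmax - cmin
    if delta == 0:                       # H case 1: max == min -> H = 0 degrees
        h = 0.0
    elif cmax == rp:                     # H case 2: maximum is R'
        h = 60.0 * (((gp - bp) / delta) % 6)
    elif cmax == gp:                     # H case 3: maximum is G'
        h = 60.0 * ((bp - rp) / delta + 2)
    else:                                # H case 4: maximum is B'
        h = 60.0 * ((rp - gp) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax   # S cases 1 and 2
    v = cmax                                  # V = max(R', G', B')
    return h, s, v
```

For example, pure red maps to H = 0°, pure green to H = 120° and pure blue to H = 240°, matching the four-case definition above.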
Preferably, the specific method of step b above:
b-1, reading 10 pre-prepared pear feature template images, and converting the images from an RGB color space to an HSV color space;
b-2, dividing the color range of 0-255 into 16 groups of color intervals, and calculating a histogram of the image;
b-3, carrying out normalization treatment on the histogram obtained in the step b-2:
dst(i, j) = src(i, j) / ||src||₂, where src(i, j) is the original pixel (bin) value, dst(i, j) is the normalized value, and ||src||₂ is the 2-norm of all pixel values in the image.
Preferably, the specific method of step c above:
c-1, comparing the template image histogram obtained in the step b with the image obtained in the step a by using a back projection algorithm, wherein the template histogram is regarded as prior probability distribution of pear features, the back projection is to calculate the probability that a certain specific part in the image obtained in the step a belongs to the prior distribution, and the brighter part in the obtained back projection image indicates that the probability that the part belongs to a pear region is higher;
c-2. perform threshold segmentation on the back-projection map; a threshold of 0.05, selected experimentally, gives the best effect;
c-3. because an image is in essence a matrix, mathematical operations can be performed on it; a more complete recognition result is therefore obtained by adding the 10 threshold-segmented back-projection images into 1 total back-projection image.
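A minimal sketch of steps c-1 to c-3, again with OpenCV. The function and parameter names are hypothetical; the 0.05 threshold is the patent's, while `scale=255` is an assumed mapping of the normalized 0-1 histogram values onto the 0-255 back-projection range, and the per-template binary maps are combined with a logical OR (equivalent to adding binary images):

```python
import numpy as np
import cv2

def pear_mask(bgr_scene, bgr_templates, thresh=0.05):
    """Back-project each template's 16-bin hue histogram onto the
    scene (step c-1), threshold at 0.05 (step c-2), and combine the
    binary maps into one total mask (step c-3)."""
    hsv = cv2.cvtColor(bgr_scene, cv2.COLOR_BGR2HSV)
    total = np.zeros(hsv.shape[:2], np.uint8)
    for tmpl in bgr_templates:
        thsv = cv2.cvtColor(tmpl, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([thsv], [0], None, [16], [0, 180])
        cv2.normalize(hist, hist, 1.0, 0, cv2.NORM_L2)
        # scale=255 stretches the 0-1 probabilities onto 0-255,
        # so the 0.05 cut-off becomes int(0.05 * 255) = 12
        bp = cv2.calcBackProject([hsv], [0], hist, [0, 180], scale=255)
        _, binary = cv2.threshold(bp, int(thresh * 255), 255, cv2.THRESH_BINARY)
        total = cv2.bitwise_or(total, binary)   # summing binary maps == OR
    return total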
Preferably, the specific method of step d above:
d-1. owing to the influence of illumination, some parts of the pear regions are not identified and appear as hole parts in the back-projection map; these are filled with a hole-filling algorithm:
1. let the original image be A;
2. pad A outward by one to two pixels and fill the new border with the background color (0); label the result B;
3. flood-fill the large background of B with the foreground color (255), using seed point (0, 0) (step 2 guarantees that (0, 0) lies in the large background); label the result C;
4. crop the filled image back to the original image size; label the result D;
5. add the negation of D to A to complete the hole filling;
d-2. apply a morphological opening to the hole-filled image, using a 7×7 elliptical kernel with its anchor at the center, to remove fine noise;
d-3. compute the contour of each region after hole filling; the contour may have recesses caused by occluding leaves and branches, so compute its convex hull with a convex-hull algorithm to fill the unrecognized recessed parts;
d-4. segment the overlapped pears using the concave parts obtained by subtracting the original overlapped region from the convex hull of the overlapped part, as specified by the overlapped-pear segmentation in the detailed embodiments.
Preferably, the specific method of step e above:
e-1. first define the two screening criteria, area and circle-area ratio: the area is the area of the convex hull drawn in step d, and the circle-area ratio is defined as follows:
circle-area ratio = convex-hull area / area of the convex hull's minimum enclosing circle;
and e-2, calculating the area of each convex hull region in the graph, drawing a minimum enclosing circle of each convex hull and calculating the area of the enclosing circle, wherein the region with the convex hull area larger or smaller than a set value or the circle area ratio smaller than the set value is regarded as a non-pear region, and the region is excluded.
Preferably, the specific method of step f above:
f-1. compute the contour of each pear region and the contour's zeroth-order moment m00 and first-order moments m10, m01;
f-2. compute the center point (X, Y) of each pear region, defined as X = m10/m00, Y = m01/m00;
and f-3, fitting the positions of the pear areas by using an ellipse fitting algorithm and drawing a fitted ellipse and a central point in the original image.
The invention performs pear identification by back projection; the algorithm is efficient and environmentally adaptable, quickly identifying pears and locating their center points with high final accuracy.
Compared with the prior art, the invention has the following obvious and prominent substantive characteristics and remarkable technical progress:
1. the method collects RGB images of the pears with an RGB-D binocular camera and converts them into HSV color images; reads 10 template images, calculates their histograms and normalizes them; identifies the pear regions in the image with a back-projection algorithm driven by the template histogram information; refines the identified image by hole filling, morphological processing and a convex-hull operation, and segments overlapped parts using the concave region obtained by subtracting the original area from the convex hull of overlapped pears; then screens the pear regions by area and circle-area ratio; and finally performs ellipse fitting and center-point calculation on the screened regions;
2. the method has high algorithm efficiency and strong environmental adaptability, can quickly realize the identification and the central point positioning of the pears, and has higher final accuracy.
Drawings
FIG. 1 is a flow chart of pear identification and location.
Fig. 2 shows a pear in the natural pear orchard environment, where fig. 2a is the original picture and fig. 2b the picture converted to HSV.
Fig. 3 shows the 10 template pictures containing pear features.
Fig. 4 shows the binary images obtained by back projection of the template pictures onto fig. 2b, where fig. 4a is the result computed from a single template and fig. 4b is the sum of the 10 result pictures.
Fig. 5 is a picture after hole filling.
Fig. 6 is the picture after the opening operation.
Fig. 7 is a picture after the convex hull operation.
Fig. 8 shows the overlapped-pear segmentation, where fig. 8a is the picture after the opening operation, fig. 8b the picture after the convex-hull operation, fig. 8c the concave parts obtained by subtracting the former from the latter together with their center points, and fig. 8d the picture after segmentation.
Fig. 9 is a picture of the final recognition effect that shows the ellipse fitting and the center point.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings:
the first embodiment is as follows:
referring to fig. 1, a method for identifying and positioning a central point of a pear under a natural pear garden environment comprises the following specific operation steps:
a. pear image acquisition:
the mechanical arm moves to a designated area, and an RGB-D binocular camera is controlled by a program to shoot RGB color images of the pears on the pear tree and convert the RGB color images into HSV color images;
b. calculating a histogram of the template image:
reading 10 RGB template images containing pear characteristics, converting the images from RGB color space images into HSV color space images, calculating histograms of the images, and performing normalization processing;
c. identification of pear regions:
Using a back-projection algorithm, identify the pear regions in the image obtained in step a via the histogram information of the template images obtained in step b;
d. processing of the identified regions:
Using a hole-filling algorithm, remove the unidentified holes in the image obtained in step c; remove fine noise points with a morphological opening; remove the occlusion of branches and leaves with a convex-hull operation; and segment the overlapped pears using the part obtained by subtracting the original region from the convex hull.
e. Screening pear areas:
Screen the regions obtained in step d; regions whose area is smaller or larger than the set values, or whose circle-area ratio is below the set value, are treated as non-pear regions and excluded from the identification result;
f. contour fitting and center point calculation:
Perform ellipse fitting on the screened regions and calculate the center-point coordinates (X, Y) of each pear region.
According to the identification and central point positioning method for the pears in the natural pear garden environment, the identification and central point positioning of the pears can be quickly achieved, the algorithm efficiency is high, the environmental adaptability is strong, and the final accuracy is high.
Example two:
this embodiment is substantially the same as the first embodiment, and is characterized in that:
in this embodiment, the specific method of step a includes:
a-1. control the camera to shoot an RGB color image and split the three-color-channel image into R, G, B single-channel images, i.e. decompose each pixel by color into R, G, B channels, each channel graded 0-255 by the brightness of the color points;
a-2. convert each pixel of the image into H, S, V single-channel values and synthesize the three single channels into an HSV image, where H, S, V denote hue (H), saturation (S) and value/brightness (V) respectively; first, R′ = R/255, G′ = G/255, B′ = B/255, with max = max(R′, G′, B′) and min = min(R′, G′, B′);
The H channel value of each pixel falls into four cases: 1) when max(R′, G′, B′) = min(R′, G′, B′), H = 0°; 2) when max(R′, G′, B′) = R′, H = 60° × (((G′ − B′)/(max − min)) mod 6); 3) when max(R′, G′, B′) = G′, H = 60° × ((B′ − R′)/(max − min) + 2); 4) when max(R′, G′, B′) = B′, H = 60° × ((R′ − G′)/(max − min) + 4).
The S channel value of each pixel falls into two cases: 1) when max(R′, G′, B′) = 0, S = 0; 2) when max(R′, G′, B′) ≠ 0, S = (max − min)/max.
The V channel value of each pixel is: V = max(R′, G′, B′).
In this embodiment, the specific method of step b includes:
b-1, reading 10 pre-prepared pear feature template images, and converting the images from an RGB color space to an HSV color space;
b-2, dividing the color range of 0-255 into 16 groups of color intervals, and calculating a histogram of the image;
b-3, carrying out normalization treatment on the histogram obtained in the step b-2:
dst(i, j) = src(i, j) / ||src||₂, where src(i, j) is the original pixel (bin) value, dst(i, j) is the normalized value, and ||src||₂ is the 2-norm of all pixel values in the image.
In this embodiment, the specific method of step c includes:
c-1, comparing the template image histogram obtained in the step b with the image obtained in the step a by using a back projection algorithm, wherein the template histogram is regarded as prior probability distribution of pear features, the back projection is to calculate the probability that a certain specific part in the image obtained in the step a belongs to the prior distribution, and the brighter part in the obtained back projection image indicates that the probability that the part belongs to a pear region is higher;
c-2. perform threshold segmentation on the back-projection map; a threshold of 0.05, selected experimentally, gives the best effect;
c-3. add the 10 threshold-segmented back-projection images to obtain 1 total back-projection image.
In this embodiment, the specific method of step d includes:
d-1. fill the unidentified hole parts in the pear regions with a hole-filling algorithm: let the original image be A; pad A outward by one to two pixels and fill the border with the background color (0), labeled B; flood-fill the large background of B with the foreground color (255) from seed point (0, 0) (the padding guarantees that (0, 0) lies in the large background), labeled C; crop the filled image back to the original size, labeled D; add the negation of D to A to complete the hole filling;
d-2. apply a morphological opening to the total back-projection map obtained in step c, using a 7×7 elliptical kernel with its anchor at the center, to remove tiny noise points;
d-3, identifying the area outline after the hole is filled, and completing the shielded concave part of the pear by using a convex hull algorithm;
and d-4, dividing the overlapped pears by the concave part obtained by subtracting the original overlapped area from the convex hull of the overlapped part.
In this embodiment, the specific method of step e includes:
e-1. first define the two screening criteria, area and circle-area ratio: the area is the area of the convex hull drawn in step d, and the circle-area ratio is defined as follows:
area ratio of circle is equal to area of convex hull/area of minimum enclosing circle of convex hull
And e-2, calculating the area of each convex hull region in the graph, drawing a minimum enclosing circle of each convex hull and calculating the area of the enclosing circle, wherein the region with the convex hull area larger or smaller than a set value or the circle area ratio smaller than the set value is regarded as a non-pear region, and the region is excluded.
In this embodiment, the specific method of step f includes:
f-1. compute the contour of each pear region and the contour's zeroth-order moment m00 and first-order moments m10, m01;
f-2. compute the center point (X, Y) of each pear region, defined as X = m10/m00, Y = m01/m00;
and f-3, fitting the positions of the pear areas by using an ellipse fitting algorithm and drawing a fitted ellipse and a central point in the original image.
In the method, an RGB image of the pear is collected through an RGB-D binocular camera and is converted into an HSV color image; reading 10 template images, calculating histograms of the template images and carrying out normalization processing; identifying a pear region in the image by using a back projection algorithm through the histogram information of the template image; and (3) filling holes in the identified image, performing morphological processing and convex hull operation to obtain a better identification effect, and realizing segmentation processing of an overlapped part by the concave part formed by subtracting the convex hull of the overlapped pear from the original area.
Example three:
the flow chart of this embodiment is shown in fig. 1, and the following describes the specific steps of the identification and center point location method for pears in natural pear orchard environment.
1. Pear image acquisition:
In this embodiment, an RGB-D binocular camera is used to collect images of the pear regions on a pear tree; the image size is 640×480 and the color space is RGB, as shown in fig. 2a. The images are converted into the HSV color space by formulas (1), (2) and (3), as shown in fig. 2b (with R′ = R/255, G′ = G/255, B′ = B/255):
H = 0° if max(R′,G′,B′) = min(R′,G′,B′); otherwise H = 60°×(((G′−B′)/(max−min)) mod 6), 60°×((B′−R′)/(max−min)+2) or 60°×((R′−G′)/(max−min)+4) according as the maximum is R′, G′ or B′ (1)
S = 0 if max(R′,G′,B′) = 0; otherwise S = (max−min)/max (2)
V = max(R′,G′,B′) (3)
2. Calculating a template image histogram:
The 10 template pictures containing pear features prepared in advance are read, as shown in fig. 3, and likewise converted from the RGB color space to the HSV color space. The value range of HSV in OpenCV is 0-255; computing a 256-value histogram directly gives too fine a classification and too large a computation, and experiments showed that most pear regions then went unrecognized and the program ran slowly. After experimentation, the color range was therefore divided into 16 interval groups, which largely preserves the pear's histogram features while reducing computation and improving the program's speed. The histogram is then normalized with the formula dst(i, j) = src(i, j)/||src||₂, mapping the interval 0-255 to 0-1 and further reducing computation, where src(i, j) is the original pixel value, dst(i, j) is the normalized value, and ||src||₂ is the 2-norm of all pixel values in the image.
3. Identifying pear regions:
The template image histograms are compared with fig. 2b by a back-projection algorithm: each template histogram is treated as a prior probability distribution of the pear features, and back projection computes the probability that a given part of fig. 2b belongs to that prior distribution, so brighter parts of the back-projection map are more likely to belong to the template region (i.e. a pear). Threshold segmentation is then applied to the back-projection map; experiments show that a threshold of 0.05 gives the best identification, and the back-projection result from a single template picture is shown in fig. 4a. Since a picture is essentially a matrix, mathematical operations can be performed on it, so the back-projection maps of the templates are added to obtain a better recognition result; as fig. 4b shows, the summed result is clearly better than that of a single template.
4. Processing the identified regions:
Due to the influence of illumination, some parts of the pear regions cannot be identified and appear as holes in the back projection image. These holes are filled with the following algorithm: 1. Let the original image be A. 2. Pad A outward by one to two pixels, filling the new pixels with the background color (0); label the result B. 3. Fill the large background of B using a flood filling algorithm with fill value foreground (255) and seed point (0,0) (step 2 guarantees that (0,0) lies on the large background); label the result C. 4. Crop the filled image back to the original image size; label it D. 5. Add the inverse of D to A to complete the hole filling, as shown in fig. 5. A morphological opening operation with a 7x7 elliptical kernel anchored at the center is then applied to remove noise, as shown in fig. 6. The contours of the regions in fig. 6 are drawn and the convex hull of each contour region is computed to fill the concave parts that cannot be identified due to occlusion by branches and leaves, as shown in fig. 7.
5. Splitting overlapped pears:
Some pears overlap one another and need to be divided. Figs. 8a and 8b were obtained by the previous processing. Subtracting fig. 8a from fig. 8b yields the concave parts on the two sides of the overlapped pears; after an opening operation, the center points of the two concave parts are computed as shown in fig. 8c, and connecting the two center points divides the overlapped pears, as shown in fig. 8d.
6. Screening pear areas:
two criteria for screening are first defined: area and circle area ratio. The area refers to the area of the convex hull obtained in the previous step; the circle area ratio is defined as follows:
circle area ratio = convex hull area / area of the minimum enclosing circle of the convex hull
The area of each convex hull region in the image is calculated, the minimum enclosing circle of each convex hull is drawn, and its area computed; a region whose convex hull area falls above or below the set values, or whose circle area ratio is below the set value, is regarded as a non-pear region and excluded.
7. Contour fitting and center point calculation:
The regions remaining after screening are taken to be pears, and contour fitting and center point calculation are performed on them. Since the pear is a round fruit in shape, an ellipse is chosen to fit each identified part. The 0th-order moment m00 and 1st-order moments m10, m01 of each identified contour are then calculated, and the center point (X, Y) of each pear region is obtained from the following definition: X = m10/m00, Y = m01/m00.
finally, the fitted ellipse and center point are plotted in the original drawing, as shown in fig. 9.
With the identification and center point positioning method for pears in a natural pear orchard environment described above, RGB images of pears are collected with an RGB-D binocular camera and converted into HSV color images; 10 template images are read, their histograms calculated and normalized; pear regions in the image are identified with a back projection algorithm using the template histogram information; holes in the identified image are filled, and morphological processing and convex hull operations are applied for a better recognition result, with overlapped pears segmented along the concave parts formed by subtracting the original region from the convex hull; pear regions are then screened by area and circle area ratio; and finally ellipse fitting and center point calculation are performed on the screened regions. The method is algorithmically efficient and adapts well to the environment; it quickly identifies pears, locates their center points, and achieves high final accuracy.
The embodiments of the present invention have been described with reference to the accompanying drawings, but the invention is not limited to these embodiments. Various changes and modifications may be made according to the purpose of the invention, and any change, modification, substitution, combination or simplification made according to the spirit and principle of the technical solution of the invention is an equivalent substitution and falls within the protection scope of the invention, as long as it meets the purpose of the invention and does not depart from the technical principle and inventive concept of the invention.
Claims (7)
1. A method for identifying and positioning the center point of pears in a natural pear orchard environment, characterized by comprising the following specific operation steps:
a. pear image acquisition:
the mechanical arm moves to a designated area, and a program controls an RGB-D binocular camera to shoot RGB color images of the pears on the pear tree and convert them into HSV color images;
b. calculating a histogram of the template image:
reading 10 RGB template images containing pear characteristics, converting them from the RGB color space to the HSV color space, calculating their histograms, and performing normalization processing;
c. identification of pear regions:
identifying pear regions in the image obtained in step a by a back projection algorithm, using the template image histogram information obtained in step b;
d. and (3) identifying areas:
filling unidentified holes in the image obtained in step c with a hole filling algorithm, removing fine noise with a morphological opening operation, removing the occlusion influence of branches and leaves with a convex hull operation, and segmenting overlapped pears along the part obtained by subtracting the original region from the convex hull;
e. screening pear areas:
screening the regions obtained in step d: regions whose area is smaller or larger than the set values, or whose circle area ratio is smaller than the set value, are regarded as non-pear regions and excluded from the identified regions;
f. contour fitting and center point calculation:
performing ellipse fitting on the screened regions and calculating the center point coordinates (X, Y) of each pear region.
2. The method for identifying and locating the central point of a pear in a natural pear garden environment according to claim 1, wherein the specific method of step a comprises:
a-1, controlling the camera to shoot an RGB color image and splitting the three-color-channel image into R, G, B single-color-channel images, i.e. converting each pixel point into R, G, B channels by color, with each channel graded 0-255 by the brightness of the color point;
a-2, converting each pixel point of the image into H, S, V single-channel values and synthesizing an HSV image, where H, S, V denote hue (H), saturation (S) and brightness (V) respectively; first, R', G', B' are defined as follows:
R' = R/255, G' = G/255, B' = B/255, with Cmax = max(R', G', B'), Cmin = min(R', G', B') and Δ = Cmax - Cmin;
the value of each pixel point of the H channel falls into four cases:
1) when Cmax = Cmin, H = 0°;
2) when Cmax = R', H = 60° × ((G' - B')/Δ mod 6);
3) when Cmax = G', H = 60° × ((B' - R')/Δ + 2);
4) when Cmax = B', H = 60° × ((R' - G')/Δ + 4);
the value of each pixel point of the S channel falls into two cases:
1) when Cmax = 0, S = 0;
2) when Cmax ≠ 0, S = Δ/Cmax;
and the value of each pixel point of the V channel is V = Cmax.
3. The method for identifying and locating the central point of a pear in a natural pear orchard environment according to claim 1, wherein the method of step b comprises the following steps:
b-1, reading 10 pre-prepared pear feature template images, and converting the images from an RGB color space to an HSV color space;
b-2, dividing the color range of 0-255 into 16 groups of color intervals, and calculating a histogram of the image;
b-3, normalizing the histogram obtained in step b-2 with dst(i, j) = src(i, j)/‖src‖₂, mapping the interval 0-255 to 0-1, where ‖src‖₂ is the 2-norm of all values.
4. The method for identifying and locating the central point of a pear in a natural pear orchard environment according to claim 1, wherein the specific method of the step c comprises the following steps:
c-1, comparing the template image histograms obtained in step b with the image obtained in step a by a back projection algorithm: the template histogram is treated as the prior probability distribution of the pear features, back projection computes the probability that each part of the image from step a belongs to that prior distribution, and the brighter a part of the resulting back projection image, the higher the probability that it belongs to a pear region;
c-2, performing threshold segmentation on the back projection image, with the threshold set to 0.05, chosen experimentally to give the best effect;
and c-3, adding the 10 threshold-segmented back projection images to obtain 1 total back projection image.
5. The method for identifying and locating the central point of a pear in a natural pear garden environment according to claim 1, wherein the specific method of step d comprises:
d-1, filling unidentified hole parts in the pear sub-area by using a hole filling algorithm:
setting the original image as A; padding A outward by one to two pixels filled with the background color (0), labeled B; filling the large background of B with a flood filling algorithm, the fill value being the foreground color (255) and the seed point (0,0) (the padding guarantees that (0,0) lies on the large background), labeled C; cropping the filled image to the original image size, labeled D; and adding the inverse of D to A to complete the hole filling;
d-2, performing a morphological opening operation with a 7x7 elliptical kernel anchored at the center on the total back projection image obtained in step c to remove fine noise;
d-3, extracting the region contours after hole filling and completing the occluded concave parts of the pears with a convex hull algorithm;
and d-4, dividing the overlapped pears along the concave parts obtained by subtracting the original overlapped region from the convex hull of the overlapped part.
6. The method for identifying and locating the central point of a pear in a natural pear orchard environment according to claim 1, wherein the specific method of the step e comprises the following steps:
e-1, first defining two screening criteria, area and circle area ratio: the area is the area of the convex hull drawn in step d, and the circle area ratio is defined as follows:
circle area ratio = convex hull area / area of the minimum enclosing circle of the convex hull;
and e-2, calculating the area of each convex hull region in the image, drawing the minimum enclosing circle of each convex hull and calculating its area; a region whose convex hull area is larger or smaller than the set values, or whose circle area ratio is smaller than the set value, is regarded as a non-pear region and excluded.
7. The method of claim 1, wherein the step f comprises the following steps:
f-1, calculating the contour of each pear region and its 0th-order moment m00 and 1st-order moments m10, m01;
f-2, calculating the center point (X, Y) of each pear region, defined as follows: X = m10/m00, Y = m01/m00;
and f-3, fitting each pear region with an ellipse fitting algorithm and drawing the fitted ellipse and center point in the original image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010938804.1A CN112183230A (en) | 2020-09-09 | 2020-09-09 | Identification and central point positioning method for pears in natural pear orchard environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112183230A true CN112183230A (en) | 2021-01-05 |
Family
ID=73920034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010938804.1A Pending CN112183230A (en) | 2020-09-09 | 2020-09-09 | Identification and central point positioning method for pears in natural pear orchard environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112183230A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112907516A (en) * | 2021-01-27 | 2021-06-04 | 山东省计算中心(国家超级计算济南中心) | Sweet corn seed identification method and device for plug seedling |
CN113361315A (en) * | 2021-02-23 | 2021-09-07 | 仲恺农业工程学院 | Banana string identification method based on background saturation compression and difference threshold segmentation fusion |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316043A (en) * | 2017-07-04 | 2017-11-03 | 上海大学 | A kind of stacking mushroom method for quickly identifying of picking robot |
CN108416814A (en) * | 2018-02-08 | 2018-08-17 | 广州大学 | Quick positioning and recognition methods and the system on a kind of pineapple head |
CN108447068A (en) * | 2017-12-22 | 2018-08-24 | 杭州美间科技有限公司 | Ternary diagram automatic generation method and the foreground extracting method for utilizing the ternary diagram |
CN110334692A (en) * | 2019-07-17 | 2019-10-15 | 河南科技大学 | A kind of blind way recognition methods based on image procossing |
CN110363784A (en) * | 2019-06-28 | 2019-10-22 | 青岛理工大学 | A kind of recognition methods being overlapped fruit |
CN111046782A (en) * | 2019-12-09 | 2020-04-21 | 上海海洋大学 | Fruit rapid identification method for apple picking robot |
Non-Patent Citations (5)
Title |
---|
占求香: "Research and implementation of mature citrus recognition and trunk contour extraction methods", China Masters' Theses Full-text Database, Information Science and Technology |
周小军: "Research on ripe fruit localization and obstacle detection for a citrus picking robot", China Masters' Theses Full-text Database, Information Science and Technology |
宋怀波: "Segmentation and reconstruction algorithm for overlapped apple targets based on convex hull", Transactions of the Chinese Society of Agricultural Engineering |
宋鑫: "Design of a target recognition system based on spatial relationships of contour fragments", China Masters' Theses Full-text Database, Information Science and Technology |
贾伟宽: "Research on target recognition for apple picking robots based on intelligent optimization", China Doctoral Dissertations Full-text Database, Information Science and Technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112136505B (en) | Fruit picking sequence planning method based on visual attention selection mechanism | |
CN105718945B (en) | Apple picking robot night image recognition method based on watershed and neural network | |
CN111666883B (en) | Grape picking robot target identification and fruit stalk clamping and cutting point positioning method | |
CN107527343B (en) | A kind of agaricus bisporus stage division based on image procossing | |
CN105701829B (en) | A kind of bagging green fruit image partition method | |
Das et al. | Detection and classification of acute lymphocytic leukemia | |
CN112183230A (en) | Identification and central point positioning method for pears in natural pear orchard environment | |
CN105184216A (en) | Cardiac second region palm print digital extraction method | |
CN112990103B (en) | String mining secondary positioning method based on machine vision | |
CN112132153B (en) | Tomato fruit identification method and system based on clustering and morphological processing | |
CN104700417A (en) | Computer image based automatic identification method of timber knot flaws | |
CN112883881B (en) | Unordered sorting method and unordered sorting device for strip-shaped agricultural products | |
CN105574514A (en) | Greenhouse immature tomato automatic identification method | |
CN116843581B (en) | Image enhancement method, system, device and storage medium for multi-scene graph | |
CN107886493A (en) | A kind of wire share split defect inspection method of transmission line of electricity | |
CN111612797B (en) | Rice image information processing system | |
CN117456358A (en) | Method for detecting plant diseases and insect pests based on YOLOv5 neural network | |
CN112068705A (en) | Bionic robot fish interaction control method and system based on gesture recognition | |
CN111401121A (en) | Method for realizing citrus segmentation based on super-pixel feature extraction | |
CN106897989A (en) | A kind of fingerprint image dividing method calculated based on line density | |
CN115601690A (en) | Edible fungus environment detection method based on intelligent agriculture | |
Hu et al. | Research on the location of citrus based on RGB-D binocular camera | |
CN111507995B (en) | Image segmentation method based on color image pyramid and color channel classification | |
CN107194320A (en) | A kind of greenhouse green pepper picking robot target identification method based on image characteristic analysis | |
CN113269750A (en) | Banana leaf disease image detection method and system, storage medium and detection device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210105 |