CN110648359A - Fruit target positioning and identifying method and system - Google Patents
- Publication number: CN110648359A (application CN201910901551.8A, CN 110648359 A)
- Authority
- CN
- China
- Prior art keywords
- fruit target
- fruit
- target
- image
- positioning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/00—Image analysis; G06T7/70—Determining position or orientation of objects or cameras
- G06T5/70
- G06T7/0002—Inspection of images, e.g. flaw detection; G06T7/0004—Industrial image inspection
- G06T7/10—Segmentation; Edge detection; G06T7/11—Region-based segmentation
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- G06T2207/10024—Color image
- G06T2207/30108—Industrial image inspection
- G06T2207/30128—Food products
Abstract
The invention discloses a fruit target positioning and identifying method and system that address the poor positioning and identification efficiency of a picking robot's vision system. The method is simple, fast in practice, yields comparatively accurate identification results, and is suitable for real-time operation of a picking robot. It comprises the following steps: acquiring an RGB image and a depth image of a fruit target from a camera; drawing a contour map of the fruit target depth image, quantizing the contour map to obtain a vector gradient field, rotating all vectors in the gradient field to form a vortex, and locating the position of maximum vorticity as the center of the fruit target; segmenting the RGB image of the fruit target by a graph-theory-based segmentation method; and identifying the fruit target region from the located center and the regions obtained from the RGB image segmentation.
Description
Technical Field
The invention relates to the technical field of agricultural machinery, in particular to a fruit target positioning and identifying method and system suitable for a fruit picking robot.
Background
The machine vision system of an agricultural robot senses environmental information and identifies and locates target objects, and is widely applied to target recognition for picking robots. Rapid, accurate identification and positioning of the target object directly determine the reliability and real-time performance of the picking robot.
Accurate identification and positioning of objects is the key task of the vision system. In recent years, considerable progress has been made on static fruit target recognition, dynamic fruit target recognition, and recognition of occluded or overlapping fruit targets, and efficient recognition of fruit targets directly affects the working efficiency of agricultural robots. Most current methods identify first and then locate, and most identify the target fruit through segmentation, feature extraction, and classifier recognition. Scholars have proposed segmentation algorithms based on various optimized clustering schemes and neural networks, as well as recognition algorithms based on support vector machines and deep networks. These algorithms emphasize the recognition effect on the target fruit but neglect recognition and positioning efficiency, so most related research struggles to meet real-time operation requirements.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a fruit target positioning and identifying method and system that remedy the poor positioning and identification efficiency of a picking robot's vision system.
The technical scheme of the fruit target positioning and identifying method provided by the invention on one hand is as follows:
a fruit target positioning and identifying method comprises the following steps:
acquiring an RGB image and a depth image of a fruit target acquired by a camera;
drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum position of the vorticity as the center of a circle of a fruit target;
segmenting the RGB image of the fruit target by a graph theory-based segmentation method;
and identifying the fruit target area by using the circle center of the fruit target and the image area obtained after the RGB image segmentation.
The technical scheme of the fruit target positioning and identifying system provided by the invention on the other hand is as follows:
a fruit target location identification system, the system comprising:
the image acquisition module is used for acquiring an RGB image and a depth image of the fruit target acquired by the camera;
the fruit target positioning module is used for drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum vorticity position as the center of a circle of a fruit target;
the image segmentation module is used for segmenting the RGB image of the fruit target based on a graph theory segmentation method;
and the fruit target identification module is used for identifying the fruit target area based on the circle center of the fruit target and the image area obtained after the RGB image segmentation.
Another aspect of the present invention provides a computer-readable storage medium, wherein:
a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an RGB image and a depth image of a fruit target acquired by a camera;
drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum position of the vorticity as the center of a circle of a fruit target;
segmenting the RGB image of the fruit target by a graph theory-based segmentation method;
and identifying the fruit target area by using the circle center of the fruit target and the image area obtained after the RGB image segmentation.
Another aspect of the present invention provides a processing apparatus, including:
a processing apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps when executing the program:
acquiring an RGB image and a depth image of a fruit target acquired by a camera;
drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum position of the vorticity as the center of a circle of a fruit target;
segmenting the RGB image of the fruit target by a graph theory-based segmentation method;
and identifying the fruit target area by using the circle center of the fruit target and the image area obtained after the RGB image segmentation.
Through the technical scheme, the invention has the beneficial effects that:
(1) The method can quickly and accurately recognize fruit targets against a complex background; it is fast and stable with a high recognition rate, can be used for real-time recognition by a picking robot, and improves recognition efficiency during the robot's operation;
(2) The fruit is located from the depth image using its three-dimensional shape features, so the method is not limited by the fruit's color characteristics, considers only the target's shape, needs no repeated tuning of color thresholds, and can be applied to various robot vision systems for fruit picking or pre-harvest estimation;
(3) The invention locates the target fruit first and then identifies the target region, which greatly improves identification and positioning efficiency and better meets the picking robot's real-time operation requirements.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, provide a further understanding of the invention; they illustrate embodiments of the invention and, together with the description, serve to explain the application without limiting the invention.
FIG. 1 is a flowchart of a fruit target location identification method according to an embodiment;
FIG. 2(a) is a depth image taken at a sampling point in the first embodiment;
FIG. 2(b) is an RGB image taken at the same sampling point in the first embodiment;
FIG. 3 is a contour diagram obtained according to the three-dimensional characteristics of the depth image in the first embodiment;
FIG. 4 shows the depth map of the first embodiment quantized and mapped into a two-dimensional plane, yielding direction vectors that diverge outward along the surface of the apple;
FIG. 5 is a vorticity center diagram obtained by rotating the vector of FIG. 4 by 90 degrees in the first embodiment;
FIG. 6 is a segmented image of an RGB image after segmentation according to an embodiment I;
FIG. 7 is a diagram illustrating the result of region identification in the first embodiment;
FIG. 8 is a flowchart illustrating an embodiment of an image segmentation process.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; the terms "comprises" and/or "comprising" specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Example one
Fig. 1 is a flowchart of a fruit target positioning and identifying method according to the present embodiment. As shown in fig. 1, the fruit target positioning and identifying method includes the following steps:
s101, acquiring an RGB image and a corresponding depth image of a fruit target.
In this embodiment, a Kinect camera is used; in the same environment, the RGB image and the corresponding depth image are captured from the same angle at the same sampling point.
The RGB and depth images are collected under natural light, including front-light and back-light conditions. The images were acquired with a Kinect V2 (Microsoft) sensor: RGB resolution 1920 × 1080, depth resolution 512 × 424, output format JPG. At each sampling point the RGB image and the depth image of the fruit target are output simultaneously. The RGB image is shown in fig. 2(b) and the depth image in fig. 2(a).
The working principle by which the Kinect camera obtains a depth image is as follows:
An infrared speckle emitter projects infrared beams onto the fruit target; the beams reflect back to the depth camera, and the distance between the camera and the fruit target is computed from the geometric relationship of the returned speckles. The depth camera senses the surroundings as a grayscale map: the darker a pixel, the farther the target; the lighter, the closer. The gray levels between black and white represent the physical distance between the target and the sensor, so the depth image reflects the three-dimensional shape of the fruit target.
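The distance-to-grayscale mapping described above can be sketched in NumPy; the function name and sample distances are illustrative, not from the patent:

```python
import numpy as np

def distance_to_gray(dist):
    """Map distances to an 8-bit depth image: farther targets render
    darker, closer targets lighter, as described above."""
    d = dist.astype(float)
    # Normalise so the nearest point is white (255), the farthest black (0).
    scaled = (d.max() - d) / max(d.max() - d.min(), 1e-9)
    return (scaled * 255).round().astype(np.uint8)

dist = np.array([[0.5, 1.0],
                 [1.5, 2.0]])      # metres from the sensor
gray = distance_to_gray(dist)
```

The exact scale used by the Kinect firmware differs; only the "darker = farther" convention is taken from the text.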
And S102, preprocessing the depth image of the obtained fruit target.
Specifically, because the depth image of the fruit target obtained directly from the Kinect camera is noisy and has relatively blurred edges, the edge information of the depth image is first smoothed, and the image is then de-noised with a mean filter.
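The mean-filter de-noising step can be sketched as a minimal NumPy box filter; this is an illustrative implementation, not the patent's exact one:

```python
import numpy as np

def mean_filter(depth, k=3):
    """De-noise a depth map with a k x k mean (box) filter.
    Borders are handled by edge replication, so the output keeps
    the input's shape."""
    pad = k // 2
    padded = np.pad(depth.astype(float), pad, mode="edge")
    out = np.zeros(depth.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + depth.shape[0], dx:dx + depth.shape[1]]
    return out / (k * k)

# A flat depth map with a single noise spike: the filter should
# pull the spike toward the local mean.
noisy = np.full((8, 8), 100.0)
noisy[4, 4] = 160.0
smoothed = mean_filter(noisy, k=3)
```

A separable or integral-image formulation would be faster on full 512 × 424 frames; the nested loop keeps the sketch readable.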
S103, analyzing the three-dimensional gradient field information of the fruit target depth image by using the equal depth information, performing two-dimensional projection on the three-dimensional gradient field information, and rotating to obtain the vorticity of the fruit target depth image, wherein the center of each collected vorticity is the circle center of the fruit target.
Specifically, the specific implementation method of step 103 is as follows:
(1) and drawing a contour map of the distance between the two-dimensional fruit target and the camera according to the three-dimensional information of the depth image.
And calculating the three-dimensional geometrical characteristics of the fruit target depth image by using the mapped depth information, and drawing a distance contour map of the fruit target from the camera.
As the depth-sensor principle implies, pixels closer to the camera have smaller distance values. As shown in fig. 3, because the surface of the fruit target is a convex spherical surface, the distance value at the center of the fruit target is smaller and the values around it are relatively larger.
(2) And carrying out quantization processing on the depth information of the contour map.
A vector gradient field is established in the contour map. Let $\vec{g} = (u, v)$ denote a group of vectors of the gradient field in three-dimensional space, where $u = \partial D / \partial x$ and $v = \partial D / \partial y$ are the partial derivatives of the depth $D$ in the $x$ and $y$ directions of the three-dimensional coordinate system; this gives the gradient direction of the fruit target.
Because the fruit target surface is a convex paraboloid, each vector $\vec{g} = (u, v)$ of the gradient field, i.e. the partial derivatives of the depth $D$ along the $(x, y)$ plane directions, is mapped onto the two-dimensional plane. The resulting direction vectors diverge outward in the gradient field along the fruit target surface direction, i.e. they point outward perpendicular to the contour lines, as shown in fig. 4.
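The gradient field of step (2) can be illustrated on a synthetic convex depth surface; `np.gradient` supplies the partial derivatives of the depth $D$, and the resulting vectors diverge from the center as described. The surface and the center coordinates are illustrative assumptions:

```python
import numpy as np

# Synthetic convex "fruit" depth surface: distance is smallest at the
# centre (cx, cy) and grows outward, as in the contour map of fig. 3.
h, w, cx, cy = 64, 64, 40, 24
y, x = np.mgrid[0:h, 0:w]
depth = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)

# np.gradient returns derivatives along axis 0 (rows, y) then axis 1
# (columns, x), so (u, v) = (dDdx, dDdy).
dDdy, dDdx = np.gradient(depth)
```

At any pixel right of the center `dDdx` is positive, left of it negative, and likewise for `dDdy` above and below: the vectors point outward, perpendicular to the contours.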
(3) And locating the center of a circle of the fruit target.
Rotating all vectors in fig. 4 uniformly about a fixed origin by 90° clockwise, the angular velocity (direction only) at a pixel referenced by the adjacent vectors is

$$\vec{\omega} = \nabla \times \vec{v} = \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) \hat{k}$$

where $\nabla = (\partial/\partial x, \partial/\partial y, \partial/\partial z)$ is the partial-derivative operator in the $x$, $y$, $z$ directions and $\hat{k}$ is the unit vector along the $z$ axis. Because the rate of change of the vortex magnitude increases from the edge toward the center, the vectors join into an approximately circular arc in the gradient field. If the vector magnitudes are equal, the angular velocity is

$$\omega = \frac{\mathrm{d}\varphi}{\mathrm{d}t}$$

where $\varphi$ is the arc length (analogous to the vector magnitude) and $t$ is the time. The gradient vectors of an originally convex paraboloid rotate clockwise, while those of an originally concave paraboloid rotate counterclockwise and appear disordered. In the depth image, the vorticity is largest where the distance changes fastest, so the position of maximum angular velocity change is the center of the fruit target, as shown in fig. 5.
The vector after rotation is in a clockwise vortex shape on the surface of the fruit target, and the position where the vorticity is maximum is positioned as the center of the fruit target.
In the embodiment, in the depth image mode, according to the gradient information of the depth image of the fruit target, the circle center of the fruit target is searched through the vorticity center, and the quick and accurate positioning of the fruit target is realized.
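The localization just described can be sketched end to end: rotate the gradient vectors 90° clockwise, take the z-component of the curl of the rotated field, and read off the vorticity peak. The synthetic surface and all variable names are illustrative assumptions, not the patent's code:

```python
import numpy as np

# Synthetic convex depth surface with its centre at (cx, cy).
h, w, cx, cy = 64, 64, 40, 24
y, x = np.mgrid[0:h, 0:w]
depth = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)

dDdy, dDdx = np.gradient(depth)          # gradient field (u, v)

# Rotate every gradient vector 90 degrees clockwise: (u, v) -> (v, -u).
u_rot, v_rot = dDdy, -dDdx

# z-component of the curl of the rotated field (the "vorticity").
d_urot_dy, d_urot_dx = np.gradient(u_rot)
d_vrot_dy, d_vrot_dx = np.gradient(v_rot)
vorticity = d_vrot_dx - d_urot_dy

# The vorticity magnitude peaks where the distance changes fastest,
# which is the centre of the quasi-circular fruit target.
peak = np.unravel_index(np.argmax(np.abs(vorticity)), vorticity.shape)
```

For this rotated gradient field the vorticity reduces to the negative Laplacian of the depth, which is why it spikes at the convex center.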
And S104, carrying out graph theory-based segmentation on the RGB image of the fruit target to obtain a super pixel region of the fruit target.
Referring to fig. 8, the step 104 is implemented as follows:
and S104-1, performing Gaussian filtering noise reduction on the RGB image of the fruit target before segmentation, wherein sigma is 0.8.
S104-2, based on graph theory, the input picture is represented as a graph G(V, E) with M vertices V and N edges E; the edges are sorted by decreasing weight to obtain $e_1, e_2, \ldots, e_N$.
S104-3, a segmentation evaluation mechanism is defined to control whether pixels are merged. Starting from $e_1$: if the two vertices of the edge are not in the same region and the merging condition is satisfied, namely the dissimilarity between the two regions is not greater than the dissimilarity within them, i.e. $w_{i,j} \le MInt(C_i, C_j)$, the two regions are merged.
The internal difference of a superpixel region is defined as

$$Int(C) = \max_{e \in MST(C, E)} w(e)$$

i.e. the maximum dissimilarity among the edges of the minimum spanning tree connecting the vertices of the region.
The difference between two superpixel regions is defined as

$$Dif(C_1, C_2) = \min_{v_i \in C_1,\; v_j \in C_2,\; (v_i, v_j) \in E} w\big((v_i, v_j)\big)$$

i.e. the minimum dissimilarity over edges joining a vertex $v_i$ in one region to a vertex $v_j$ in the other.
Area comparison: by examining two regions Dif (C)1,C2) With respect to at least one region Int (C)1) And Int (C)2) Whether the difference inside is large is evaluated whether a boundary exists between the two regions. The threshold function is used to control the extent to which the difference between two regions must be greater than the minimum internal difference, i.e.:
wherein the minimum internal difference MInt is defined as:
M Int(C1,C2)=min(Int(C1)+τ(C1),Int(C2)+τ(C2)
and S104-4, if not, updating the threshold value and the area label until the condition is met, and merging to all edges to finish traversing. The resulting segmented image is shown in fig. 6.
In the embodiment, under an RGB image mode, the RGB image of the fruit target is efficiently segmented by constructing an evaluation constraint mechanism to optimize a graph theory-based segmentation algorithm, and the fruit target area is searched.
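Steps S104-2 through S104-4 follow the classic Felzenszwalb-Huttenlocher graph-based segmentation. The sketch below runs it on a grayscale image with a union-find; it sorts edges in the canonical non-decreasing order and uses the common threshold function τ(C) = k/|C|, details the patent text leaves unstated, so treat this as an assumption rather than the patent's exact procedure:

```python
import numpy as np

def felzenszwalb_gray(img, k=1.0):
    """Minimal Felzenszwalb-Huttenlocher segmentation on a grayscale
    image. Edges join 4-connected pixels, weighted by intensity
    difference; two regions merge while the joining edge weight does
    not exceed MInt = min(Int(C) + k/|C|) over the two regions."""
    h, w = img.shape
    n = h * w

    # Build 4-connected grid edges as (weight, vertex_i, vertex_j).
    edges = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:
                edges.append((abs(float(img[r, c]) - float(img[r, c + 1])), i, i + 1))
            if r + 1 < h:
                edges.append((abs(float(img[r, c]) - float(img[r + 1, c])), i, i + w))
    edges.sort()                      # non-decreasing weight (canonical form)

    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n              # Int(C): max merged edge weight so far

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        # Merge only if the joining edge is no heavier than MInt.
        if wgt <= min(internal[ra] + k / size[ra],
                      internal[rb] + k / size[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            internal[ra] = max(internal[ra], internal[rb], wgt)

    return np.array([find(i) for i in range(n)]).reshape(h, w)

# Two flat halves separated by a strong intensity step: the algorithm
# should keep them as two distinct superpixel regions.
img = np.zeros((6, 6))
img[:, 3:] = 200.0
labels = felzenszwalb_gray(img, k=1.0)
```

`skimage.segmentation.felzenszwalb` provides a production implementation of the same idea for RGB images.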
S105, determining the target area of the fruit.
Regarding the fruit target as quasi-circular: step S103 yields the center of the fruit target from the depth-image gradient field; step S104 then applies graph-theory-based segmentation to the acquired fruit target RGB image, labelling the different segmented regions with different RGB colors to obtain superpixel regions.
The superpixel region containing the center of the fruit target is taken as the region where the fruit target lies. That superpixel region is scanned to obtain its area, and a circle of equal area, centered at the located center of the fruit target, is fitted as the fruit target region.
Specifically, the specific implementation manner of step 105 is as follows:
s105-1, traversing the segmented image obtained in the step 104, and finding out a super pixel region containing the central point of the fruit target calibrated in the step 103.
S105-2, scanning the super pixel region where the center point of the fruit target is located to obtain the area of the super pixel region.
And S105-3, fitting the outline of the fruit target by taking the center point of the fruit target as the center of a circle and the radius of the circle with the area equal to that of the super pixel region as the radius to obtain the fruit target region.
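The fit of S105-3 reduces to taking the radius of a circle whose area equals the superpixel area, from area = πr². A small NumPy sketch with a hypothetical function name and a synthetic disc-shaped region:

```python
import numpy as np

def fit_fruit_circle(mask, center):
    """mask: boolean superpixel region containing the located centre;
    center: (row, col) from the vorticity step. Returns the circle
    (center, radius) whose area equals the region's pixel area."""
    area = int(mask.sum())             # pixel count of the region
    radius = np.sqrt(area / np.pi)     # from area = pi * r^2
    return center, radius

# A synthetic disc-shaped superpixel of radius ~10 around (24, 40).
y, x = np.mgrid[0:64, 0:64]
mask = (x - 40) ** 2 + (y - 24) ** 2 <= 10 ** 2
center, radius = fit_fruit_circle(mask, (24, 40))
```

The recovered radius closely matches the disc's true radius; on real segmentations the fitted circle only approximates the fruit outline, as the text notes.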
In the embodiment, two image modes are combined, the boundary of the fruit target area is scanned from the circle center, the maximum radius is found, the fruit target contour is fitted, and finally the fruit target is identified and positioned.
In the method provided by this embodiment, a Kinect camera captures a depth image and an RGB image simultaneously at the same sampling point under natural light. Fruit target center positioning is performed on the depth image by two-dimensional mapping of its three-dimensional depth information: the gradient vectors of the acquired fruit target depth image are rotated 90° clockwise, and the vortex center gives the center of the fruit target, determining its position. A graph-theory-based segmentation algorithm with an adaptive threshold is then introduced to search for the fruit target region. Finally, the fruit target region is fitted from the area of the superpixel region containing the located center, achieving identification and positioning of the fruit target.
The fruit targets addressed by the positioning and identifying method of this embodiment can be quasi-spherical fruits such as apples, oranges and pears.
Example two
This embodiment provides a fruit target location identification system, and this system includes:
the image acquisition module is used for acquiring an RGB image and a depth image of the fruit target acquired by the camera;
the fruit target positioning module is used for drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum vorticity position as the center of a circle of a fruit target;
the image segmentation module is used for segmenting the RGB image of the fruit target based on a graph theory segmentation method;
and the fruit target identification module is used for identifying the fruit target area based on the circle center of the fruit target and the image area obtained after the RGB image segmentation.
In this embodiment, the specific implementation method of the fruit target positioning module is as follows:
calculating three-dimensional geometric characteristic information of the fruit target depth image by using the distance value between the fruit target and the camera;
drawing a distance contour map of the fruit target from the camera according to the three-dimensional geometric feature information of the target depth image;
establishing a vector gradient field in the contour map, performing partial derivatives on any one group of vectors in the vector gradient field along a set depth in the direction of a two-dimensional plane, mapping all vector groups in the vector gradient field to the two-dimensional plane, and enabling the obtained vectors to be in an outward divergence shape in the gradient field along the direction of the surface of a fruit target;
rotating all vectors in the vector gradient field by a set angle in a clockwise direction according to a fixed origin;
the vector after rotation is in a clockwise vortex shape on the surface direction of the fruit target, and the position with the maximum vorticity is positioned as the center of the circle of the fruit target.
In this embodiment, a specific implementation method of the image segmentation module is as follows:
carrying out Gaussian filtering denoising processing on the RGB image of the fruit target;
based on graph theory, the edges of the RGB image are sorted by decreasing weight;
defining a segmentation evaluation mechanism comprising a difference inside a super-pixel region and a difference between the two regions;
according to the sorted order, traversal starts from the edge with the largest weight; for each edge it is judged whether its two vertices lie in the same region and whether the merging condition is met. If they are not in the same region and the condition is met, the two pixels' regions are merged; if the condition is not met, the threshold and region labels are updated. This continues until all edges have been traversed, yielding the segmented image.
In this embodiment, the specific implementation method of the fruit target identification module is as follows:
traversing the RGB image after segmentation, and finding out a super-pixel region containing the circle center of the fruit target;
scanning a super pixel region where the circle center of the fruit target is located to obtain the area of the super pixel region;
and fitting the outline of the fruit target by taking the circle center of the fruit target as the circle center and the radius of a circle with the area equal to that of the super pixel region as the radius to obtain the fruit target region.
EXAMPLE III
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an RGB image and a depth image of a fruit target acquired by a camera;
drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum position of the vorticity as the center of a circle of a fruit target;
segmenting the RGB image of the fruit target by a graph theory-based segmentation method;
and identifying the fruit target area by using the circle center of the fruit target and the image area obtained after the RGB image segmentation.
Example four
The embodiment provides a processing apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the following steps:
acquiring an RGB image and a depth image of a fruit target acquired by a camera;
drawing a contour map of a fruit target depth image, carrying out quantization processing on the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and positioning the maximum position of the vorticity as the center of a circle of a fruit target;
segmenting the RGB image of the fruit target by a graph theory-based segmentation method;
and identifying the fruit target area by using the circle center of the fruit target and the image area obtained after the RGB image segmentation.
Although embodiments of the present invention have been described with reference to the accompanying drawings, this does not limit the scope of the invention; those skilled in the art should understand that various modifications and variations may be made, without inventive effort, on the basis of the technical solution of the invention.
Claims (10)
1. A fruit target positioning and identifying method is characterized by comprising the following steps:
acquiring an RGB image and a depth image of a fruit target acquired by a camera;
drawing a contour map of the fruit target depth image, quantizing the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and locating the position of maximum vorticity as the circle center of the fruit target;
segmenting the RGB image of the fruit target by a graph theory-based segmentation method;
and identifying the fruit target area by using the circle center of the fruit target and the image area obtained after the RGB image segmentation.
2. The fruit target positioning and identifying method according to claim 1, further comprising: smoothing the edge information of the fruit target depth image and denoising the fruit target depth image using a mean filter.
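As an illustrative sketch only (the patent publishes no code), the smoothing and denoising step of claim 2 can be realized with a k × k mean filter; the function name `mean_filter` and the 3 × 3 default kernel size are assumptions:

```python
import numpy as np

def mean_filter(depth, k=3):
    # k x k mean filter built from shifted partial sums over an
    # edge-padded copy; smooths edges and suppresses depth noise
    pad = k // 2
    d = np.asarray(depth, dtype=float)
    p = np.pad(d, pad, mode="edge")
    h, w = d.shape
    out = np.zeros((h, w))
    for di in range(k):
        for dj in range(k):
            out += p[di:di + h, dj:dj + w]
    return out / (k * k)
```

Edge padding keeps the output the same size as the input while avoiding darkened borders; a single depth spike of 9 in an otherwise flat 5 × 5 patch is attenuated to 1.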
3. The fruit target positioning and identifying method according to claim 1, wherein the contour map is drawn by:
calculating three-dimensional geometric characteristic information of the fruit target depth image by using the distance value between the fruit target and the camera;
and drawing a contour map of the distance from the fruit target to the camera according to the three-dimensional geometric characteristic information of the target depth image.
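A minimal sketch of the contour-map step of claim 3, assuming the three-dimensional geometric information reduces to per-pixel camera-to-target distances; `draw_contour_map` and the eight-band default are illustrative choices, not from the patent:

```python
import numpy as np

def draw_contour_map(depth, n_levels=8):
    # bin the camera-to-fruit distance into n_levels iso-distance bands;
    # each band plays the role of one contour line of the depth image
    d = np.asarray(depth, dtype=float)
    lo, hi = d.min(), d.max()
    step = (hi - lo) / n_levels or 1.0   # guard against a flat image
    return np.minimum((d - lo) // step, n_levels - 1).astype(int)
```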
4. The fruit target positioning and identifying method according to claim 1, wherein the method for quantizing the contour map comprises:
and establishing a vector gradient field in the contour map, taking partial derivatives of each group of vectors in the vector gradient field along a set depth toward the two-dimensional plane, and mapping all vector groups in the vector gradient field onto the two-dimensional plane, so that the obtained vectors diverge outward in the gradient field along the surface of the fruit target.
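The quantization of claim 4 can be approximated with discrete partial derivatives; here `np.gradient` stands in for the patent's per-group partial derivatives, and the function name `vector_gradient_field` is an assumption. With depth minimal at the fruit apex, the mapped vectors point away from the apex, i.e. the field diverges outward as the claim describes:

```python
import numpy as np

def vector_gradient_field(depth):
    # partial derivatives of the depth surface, mapped onto the
    # image plane (gx along columns, gy along rows)
    gy, gx = np.gradient(np.asarray(depth, dtype=float))
    return gx, gy
```

On a synthetic convex fruit (a Gaussian dome nearest the camera at its apex), vectors below the apex point down, vectors above point up, and likewise left/right, confirming the outward divergence.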
5. The fruit target positioning and identifying method according to claim 1, wherein the fruit target center positioning method comprises:
rotating all vectors in the vector gradient field clockwise by a set angle about a fixed origin;
the rotated vectors form a clockwise vortex along the surface of the fruit target, and the position of maximum vorticity is located as the circle center of the fruit target.
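A hedged end-to-end sketch of claim 5 on a synthetic depth image: the diverging gradient field is rotated 90 degrees (clockwise in image coordinates, x right and y down) into a vortex, and the point of maximum discrete curl is taken as the center. The Gaussian-dome depth map and all names are illustrative, not the patent's implementation:

```python
import numpy as np

def center_by_vorticity(depth):
    # gradient of the depth map diverges away from the fruit apex
    gy, gx = np.gradient(np.asarray(depth, dtype=float))
    # rotate every vector 90 degrees: (gx, gy) -> (-gy, gx);
    # the diverging field becomes a vortex around the apex
    vx, vy = -gy, gx
    # discrete vorticity (z-component of the curl) of the rotated field
    curl = np.gradient(vy, axis=1) - np.gradient(vx, axis=0)
    i, j = np.unravel_index(np.argmax(curl), curl.shape)
    return int(i), int(j)
```

For this rotation the vorticity equals the Laplacian of the depth surface, which peaks exactly at the dome's apex, so the located circle center coincides with the true fruit center.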
6. The fruit target positioning and identifying method according to claim 1, wherein segmenting the RGB image of the fruit target by the graph-theory-based segmentation method comprises:
performing Gaussian filtering on the RGB image of the fruit target to remove noise;
based on graph theory, sorting the edges of the RGB image by weight from largest to smallest;
defining a segmentation evaluation mechanism comprising the difference inside a super-pixel region and the difference between two regions; and
traversing the edges from the largest weight according to the sorting result, and judging whether the two vertices of each edge lie in the same region and whether a merging condition is met; if the two pixel points are not in the same region and the merging condition is met, merging them; if the merging condition is not met, updating the threshold and the region label; repeating until all edges have been traversed, thereby obtaining the segmented image.
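The merging rule of claim 6 resembles the graph-based segmentation of Felzenszwalb and Huttenlocher, which the description cites. The sketch below follows that classic formulation (edges processed in ascending weight order, two components merged when the edge weight does not exceed either component's internal-difference threshold); it is a simplified grayscale stand-in with assumed names, not the patent's implementation:

```python
import numpy as np

class DSU:
    # union-find over pixel indices; tracks component sizes
    def __init__(self, n):
        self.p = list(range(n))
        self.size = [1] * n
    def find(self, a):
        while self.p[a] != a:
            self.p[a] = self.p[self.p[a]]
            a = self.p[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        self.p[rb] = ra
        self.size[ra] += self.size[rb]
        return ra

def felzenszwalb_gray(img, k=100.0):
    # 4-connected grid graph; edge weight = absolute intensity difference
    h, w = img.shape
    idx = lambda i, j: i * w + j
    edges = []
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                edges.append((abs(float(img[i, j]) - float(img[i, j + 1])),
                              idx(i, j), idx(i, j + 1)))
            if i + 1 < h:
                edges.append((abs(float(img[i, j]) - float(img[i + 1, j])),
                              idx(i, j), idx(i + 1, j)))
    edges.sort()                       # ascending weight order
    dsu = DSU(h * w)
    thr = [k] * (h * w)                # per-component merge threshold
    for wgt, a, b in edges:
        ra, rb = dsu.find(a), dsu.find(b)
        if ra != rb and wgt <= thr[ra] and wgt <= thr[rb]:
            r = dsu.union(ra, rb)
            thr[r] = wgt + k / dsu.size[r]   # internal diff + k/|C|
    return np.array([dsu.find(idx(i, j)) for i in range(h)
                     for j in range(w)]).reshape(h, w)
```

On an image made of two flat halves with a large intensity step between them, the zero-weight edges inside each half merge first, and the high-weight edges across the step fail the threshold test, yielding exactly two superpixel regions.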
7. The fruit target positioning and identifying method according to claim 1, wherein the fruit target region identification method comprises:
traversing the RGB image after segmentation, and searching a super-pixel region containing the circle center of the fruit target;
scanning a super pixel region where the circle center of the fruit target is located to obtain the area of the super pixel region;
and fitting the contour of the fruit target, taking the circle center of the fruit target as the center and, as the radius, that of a circle whose area equals the area of the super-pixel region, to obtain the fruit target region.
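Claim 7's area-equivalent circle fit is straightforward; `fit_fruit_circle` below is a sketch assuming `labels` is the segmented image and `center` is the located circle center (both names are illustrative):

```python
import numpy as np

def fit_fruit_circle(labels, center):
    # superpixel region containing the located circle center
    region = labels == labels[center]
    area = int(region.sum())                # scanned region area in pixels
    radius = float(np.sqrt(area / np.pi))   # pi * r^2 == area
    return radius, area
```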
8. A fruit target positioning and identifying system, the system comprising:
the image acquisition module is used for acquiring an RGB image and a depth image of the fruit target acquired by the camera;
the fruit target positioning module is used for drawing a contour map of the fruit target depth image, quantizing the contour map to obtain a vector gradient field, rotating all vectors in the vector gradient field to form a vortex, and locating the position of maximum vorticity as the circle center of the fruit target;
the image segmentation module is used for segmenting the RGB image of the fruit target based on a graph theory segmentation method;
and the fruit target identification module is used for identifying the fruit target area based on the circle center of the fruit target and the image area obtained after the RGB image segmentation.
9. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the fruit target positioning and identifying method according to any one of claims 1 to 7.
10. A processing apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps of the fruit target positioning and identifying method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910901551.8A CN110648359B (en) | 2019-09-23 | 2019-09-23 | Fruit target positioning and identifying method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110648359A true CN110648359A (en) | 2020-01-03 |
CN110648359B CN110648359B (en) | 2022-07-08 |
Family
ID=68992503
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910901551.8A Active CN110648359B (en) | 2019-09-23 | 2019-09-23 | Fruit target positioning and identifying method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110648359B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180092304A1 (en) * | 2016-09-15 | 2018-04-05 | Francis Wilson Moore | Methods for Pruning Fruit Plants and Methods for Harvesting Fruit |
CN108668637A (en) * | 2018-04-25 | 2018-10-19 | 江苏大学 | A kind of machine vision places grape cluster crawl independent positioning method naturally |
WO2019100647A1 (en) * | 2017-11-21 | 2019-05-31 | 江南大学 | Rgb-d camera-based object symmetry axis detection method |
Non-Patent Citations (3)
Title |
---|
DAEUN CHOI et al.: "Machine vision system for early yield estimation of citrus in a site-specific manner", 2015 ASABE Annual International Meeting * |
PEDRO F. FELZENSZWALB et al.: "Efficient graph-based image segmentation", International Journal of Computer Vision * |
JIA Gengyun et al.: "Superpixel-based Graph-Based image segmentation algorithm", Journal of Beijing University of Posts and Telecommunications * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112132785A (en) * | 2020-08-25 | 2020-12-25 | 华东师范大学 | Transmission electron microscope image recognition and analysis method and system for two-dimensional material |
CN112132785B (en) * | 2020-08-25 | 2023-12-15 | 华东师范大学 | Transmission electron microscope image identification and analysis method and system for two-dimensional material |
CN113344844A (en) * | 2021-04-14 | 2021-09-03 | 山东师范大学 | Target fruit detection method and system based on RGB-D multimode image information |
CN114067206A (en) * | 2021-11-16 | 2022-02-18 | 哈尔滨理工大学 | Spherical fruit identification and positioning method based on depth image |
CN114067206B (en) * | 2021-11-16 | 2024-04-26 | 哈尔滨理工大学 | Spherical fruit identification positioning method based on depth image |
CN114260895A (en) * | 2021-12-22 | 2022-04-01 | 江苏大学 | Method and system for determining grabbing obstacle avoidance direction of mechanical arm of picking machine |
CN114260895B (en) * | 2021-12-22 | 2023-08-22 | 江苏大学 | Method and system for determining grabbing obstacle avoidance direction of mechanical arm of picking robot |
Also Published As
Publication number | Publication date |
---|---|
CN110648359B (en) | 2022-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zermas et al. | Fast segmentation of 3d point clouds: A paradigm on lidar data for autonomous vehicle applications | |
CN110648359B (en) | Fruit target positioning and identifying method and system | |
CN106981077B (en) | Infrared image and visible light image registration method based on DCE and LSS | |
Adam et al. | H-RANSAC: A hybrid point cloud segmentation combining 2D and 3D data | |
JP6955783B2 (en) | Information processing methods, equipment, cloud processing devices and computer program products | |
WO2015017941A1 (en) | Systems and methods for generating data indicative of a three-dimensional representation of a scene | |
Holz et al. | Towards semantic scene analysis with time-of-flight cameras | |
Abbeloos et al. | Point pair feature based object detection for random bin picking | |
Hu et al. | An automatic 3D registration method for rock mass point clouds based on plane detection and polygon matching | |
Yogeswaran et al. | 3d surface analysis for automated detection of deformations on automotive body panels | |
CN114783068A (en) | Gesture recognition method, gesture recognition device, electronic device and storage medium | |
US11468609B2 (en) | Methods and apparatus for generating point cloud histograms | |
CN111798453A (en) | Point cloud registration method and system for unmanned auxiliary positioning | |
WO2022111682A1 (en) | Moving pedestrian detection method, electronic device and robot | |
Yuan et al. | 3D point cloud recognition of substation equipment based on plane detection | |
Ückermann et al. | Realtime 3D segmentation for human-robot interaction | |
Lu et al. | Long range traversable region detection based on superpixels clustering for mobile robots | |
CN113628170A (en) | Laser line extraction method and system based on deep learning | |
Tabkha et al. | Semantic enrichment of point cloud by automatic extraction and enhancement of 360° panoramas | |
CN107274477B (en) | Background modeling method based on three-dimensional space surface layer | |
Dryanovski et al. | Real-time pose estimation with RGB-D camera | |
CN114897999B (en) | Object pose recognition method, electronic device, storage medium, and program product | |
Novacheva | Building roof reconstruction from LiDAR data and aerial images through plane extraction and colour edge detection | |
Ma et al. | Depth image denoising and key points extraction for manipulation plane detection | |
WO2021114775A1 (en) | Object detection method, object detection device, terminal device, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||