CN112802113A - Method for determining grabbing point of object in any shape - Google Patents

Method for determining grabbing point of object in any shape

Info

Publication number
CN112802113A
Authority
CN
China
Prior art keywords
connected domain
determining
convexity
center
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110165940.6A
Other languages
Chinese (zh)
Other versions
CN112802113B (en)
Inventor
Duan Wenjie (段文杰)
Xia Dongqing (夏冬青)
Ding Youshuang (丁有爽)
Shao Tianlan (邵天兰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mech Mind Robotics Technologies Co Ltd
Original Assignee
Mech Mind Robotics Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Mech Mind Robotics Technologies Co Ltd
Priority to CN202110165940.6A
Publication of CN112802113A
Application granted
Publication of CN112802113B
Status: Active

Classifications

    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods
    • G06T1/0014 Image feed-back for automatic industrial control, e.g. robot with camera
    • G06T7/187 Segmentation; edge detection involving region growing, region merging, or connected component labelling
    • G06T7/64 Analysis of geometric attributes of convexity or concavity
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30164 Workpiece; machine component (industrial image inspection)

Abstract

The invention provides a method for determining a grabbing point of an object of arbitrary shape, comprising the following steps: acquiring a probability map of the graspable portion of a target object; binarizing the probability map to obtain a binary image of a connected domain; identifying the connected domain, calculating its convexity, and comparing the convexity with a preset value V1; when the convexity is greater than or equal to V1, judging that the center of gravity of the target object lies on the object body, and taking the centroid of the connected domain as the grabbing point; when the convexity is less than V1, judging that the center of gravity lies outside the object body, and taking the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point. The method can determine a grabbing point for an object of any shape, whether its center of gravity lies on the body or, as for a non-convex object, outside it.

Description

Method for determining grabbing point of object in any shape
Technical Field
The invention belongs to the field of robots, relates to techniques for identifying grabbing points of objects, and particularly relates to a method for determining a grabbing point of an object of arbitrary shape.
Background
With the development of industrial intelligence, industrial robots are gradually replacing workers in handling objects: operations such as grabbing, moving, and assembling are carried out by industrial robots, which improves working efficiency, reduces the labor intensity of workers, lowers the error rate, and ensures stable product quality.
At present, operations such as grabbing, moving, and assembling performed by an industrial robot on a target object are all guided by a grabbing point identified on the target object, and a deep learning model is generally used to determine the grabbing point. Objects come in a wide variety of shapes, including objects whose center of gravity lies on the body and objects whose center of gravity does not. In practice, for a target object whose center of gravity lies on the body, the grabbing point predicted by the model falls on the body, so the industrial robot can grab it; however, for a non-convex object such as a ring or a horseshoe, whose center of gravity does not lie on the body, the grabbing point predicted by the model often falls in the hollow portion rather than on the body, so the industrial robot cannot perform the grabbing operation.
Therefore, it is necessary to improve the existing method of confirming an object's grabbing point, so that a grabbing point can be determined for an object of any shape and the grabbing operation of the industrial robot can be carried out.
Disclosure of Invention
The invention aims to provide a method for determining a grabbing point of an object of arbitrary shape, capable of handling objects of any shape, whether the center of gravity lies on the body or, as for a non-convex object, outside it.
The technical solution for realizing the purpose of the invention is as follows:
In a first aspect, the invention provides a method for determining a grabbing point of an object of arbitrary shape, comprising the following steps:
acquiring a probability map of the graspable portion of a target object;
binarizing the probability map of the graspable portion to obtain a binary image of a connected domain;
identifying the connected domain, calculating its convexity, and comparing the convexity with a preset value V1;
when the convexity is greater than or equal to the preset value V1, judging that the center of gravity of the target object lies on the object body, and determining the centroid of the connected domain as the grabbing point;
and when the convexity is less than the preset value V1, judging that the center of gravity of the target object lies outside the object body, and determining the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point.
Optionally, whether the grabbing point is the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain is determined as follows:
when the convexity is less than the preset value V1, determining the minimum circumscribed rectangle of the connected domain;
calculating the ratio a of the long side to the short side of the minimum circumscribed rectangle;
when the ratio a is greater than a preset value V2, determining the center of the maximum inscribed rectangle of the connected domain as the grabbing point;
and when the ratio a is less than or equal to the preset value V2, determining the center of the maximum inscribed circle of the connected domain as the grabbing point.
Optionally, the connected domain, the minimum circumscribed rectangle, the maximum inscribed rectangle, and the maximum inscribed circle are identified by pixel gradients.
Optionally, obtaining the probability map of the graspable portion of the target object comprises the following steps:
acquiring an image of the target object;
and inputting the image into a trained deep learning model and outputting the probability map of the graspable portion of the target object.
Optionally, the convexity of the connected domain is defined and calculated as follows:
obtaining an enclosing convex polygon of the connected domain;
calculating the area b of the connected domain and the area c of the enclosing convex polygon, respectively;
and calculating the ratio b/c, which is defined as the convexity.
In a second aspect, the invention provides a method for determining a grabbing point of an object of arbitrary shape, comprising the following steps:
acquiring a probability map of the graspable portions of an object set;
binarizing the probability map of the graspable portions to obtain a binary image, wherein the binary image comprises at least two connected domains, each corresponding to one target object;
identifying the connected domain of each target object, calculating its convexity, and comparing the convexity with a preset value V1;
when the convexity is greater than or equal to the preset value V1, judging that the center of gravity of the target object corresponding to the connected domain lies on the object body, and determining the centroid of the connected domain as the grabbing point;
and when the convexity is less than the preset value V1, judging that the center of gravity of the target object corresponding to the connected domain lies outside the object body, and determining the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point.
Optionally, whether the grabbing point is the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain is determined as follows:
when the convexity is less than the preset value V1, determining the minimum circumscribed rectangle of the connected domain;
calculating the ratio a of the long side to the short side of the minimum circumscribed rectangle;
when the ratio a is greater than a preset value V2, determining the center of the maximum inscribed rectangle of the connected domain as the grabbing point;
and when the ratio a is less than or equal to the preset value V2, determining the center of the maximum inscribed circle of the connected domain as the grabbing point.
Optionally, the connected domain, the minimum circumscribed rectangle, the maximum inscribed rectangle, and the maximum inscribed circle are identified by pixel gradients.
Optionally, obtaining the probability map of the graspable portions of the object set comprises the following steps:
acquiring an image of the object set;
and inputting the image into a trained deep learning model and outputting the probability map of the graspable portions of the object set.
Optionally, the convexity of the connected domain is defined and calculated as follows:
obtaining an enclosing convex polygon of the connected domain;
calculating the area b of the connected domain and the area c of the enclosing convex polygon, respectively;
and calculating the ratio b/c, which is defined as the convexity.
Compared with the prior art, the invention has the following beneficial effects: the method for determining a grabbing point of an object of arbitrary shape calculates the convexity of the connected domain and compares it with the preset value V1, thereby judging whether the center of gravity of an arbitrarily shaped target object lies on its body. For a target object whose center of gravity is not on the body, the minimum circumscribed rectangle of the connected domain is introduced and, in combination with the preset value V2, the grabbing point is determined to be either the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain.
Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. The drawings described below illustrate only some embodiments of the present invention or technical solutions in the prior art; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of the method for determining a grabbing point of an object of arbitrary shape according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the minimum enclosing convex polygon of a connected domain in Embodiments 1 and 2 of the present invention;
FIG. 3 is a schematic diagram of the maximum inscribed rectangle of a connected domain in Embodiments 1 and 2 of the present invention;
FIG. 4 is a schematic diagram of the maximum inscribed circle of a connected domain in Embodiments 1 and 2 of the present invention;
FIG. 5 is a flowchart of the method for determining a grabbing point of an object of arbitrary shape according to Embodiment 2 of the present invention.
Detailed Description
The invention will be further described below with reference to specific embodiments, and its advantages and features will become apparent from the description. The embodiments are merely exemplary and do not limit the scope of the invention in any way. It will be understood by those skilled in the art that changes and substitutions may be made to the details and forms of the technical solution without departing from the spirit and scope of the invention, all of which fall within the protection scope of the invention.
In the description of the embodiments, it is to be understood that terms such as "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be construed as limiting the present invention.
Furthermore, the terms "first", "second", "third", and the like are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the invention, "a plurality" means two or more unless otherwise specified.
Embodiment 1:
This embodiment provides a method for determining a grabbing point of an object of arbitrary shape, as shown in FIG. 1, comprising the following steps:
S1, acquiring a probability map of the graspable portion of the target object.
Specifically, obtaining the probability map of the graspable portion of the target object comprises:
S101, acquiring an image of the target object;
S102, inputting the image into the trained deep learning model and outputting the probability map of the graspable portion of the target object.
Optionally, the image contains a single target object. The acquired image may be a color image or a depth image; it is input into the deep learning model, which, based on deep-learning semantic segmentation, predicts for each pixel the probability that the object can be successfully picked up by a suction cup at that point. The resulting 2D probability map is the probability map of the graspable portion.
S2, binarizing the probability map of the graspable portion to obtain a binary image of the connected domain.
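To make step S2 concrete, the following is a minimal sketch in Python with OpenCV, under assumed conventions not fixed by this description: the probability map from S102 is a float array "prob" with values in [0, 1], and the threshold 0.5 and 8-connectivity are illustrative choices.

import cv2
import numpy as np

def binarize_probability_map(prob: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    # Per-pixel threshold turns the graspable-portion probability map into a binary image.
    return (prob >= thresh).astype(np.uint8) * 255

def largest_connected_domain(binary: np.ndarray) -> np.ndarray:
    # Single-object case: keep the largest foreground connected domain as a mask.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]            # stats[0] is the background
    best = 1 + int(np.argmax(areas))
    return (labels == best).astype(np.uint8) * 255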
S3, identifying the connected domain, calculating its convexity, and comparing the convexity with the preset value V1.
Specifically, the convexity of the connected domain is defined and calculated as follows:
first, obtaining the enclosing convex polygon of the connected domain;
then, calculating the area b of the connected domain and the area c of the enclosing convex polygon, respectively;
finally, calculating the ratio b/c, which is defined as the convexity.
It should be noted that the concept of convexity is introduced in the invention to express the ratio between the connected-domain area of the target object and the area of the enclosing convex polygon; this quantity can also be described in other ways, which are not limited here.
Specifically, the minimum enclosing convex polygon is preferably selected as the enclosing convex polygon of the connected domain of the target object. As shown in FIG. 2, the solid line outlines the connected domain of the target object, and the dashed line shows the minimum enclosing convex polygon obtained for that connected domain; in FIG. 2, the minimum enclosing convex polygon is obtained by spanning the recessed portion of the concave object with a straight line.
In this embodiment, when the minimum enclosing convex polygon is used as the enclosing convex polygon, the area b of the connected domain is less than or equal to the area c of the minimum enclosing convex polygon: the larger the recessed or protruding area of the object, the larger the difference between b and c, and the smaller that area, the smaller the difference. Accordingly, in this embodiment the preset value V1 for the ratio b/c is selected in the range 0.6-1.
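As an illustration of the convexity calculation, the sketch below uses OpenCV's convex hull as the minimum enclosing convex polygon; the pixel count of the mask serves as area b and the hull's polygon area as area c. This is one possible realization under the assumptions above, not the only one:

import cv2
import numpy as np

def convexity(mask: np.ndarray) -> float:
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)   # outline of the connected domain
    hull = cv2.convexHull(contour)                 # minimum enclosing convex polygon
    b = float(cv2.countNonZero(mask))              # area b of the connected domain
    c = float(cv2.contourArea(hull))               # area c of the enclosing polygon
    return b / c if c > 0 else 0.0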
S301, when the convexity is greater than or equal to the preset value V1, judging that the center of gravity of the target object lies on the object body, and determining the centroid of the connected domain as the grabbing point.
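A sketch of S301 under the same assumed mask conventions: the centroid (center of gravity) of the connected domain is obtained from first-order image moments.

import cv2
import numpy as np

def centroid_grab_point(mask: np.ndarray) -> tuple:
    m = cv2.moments(mask, binaryImage=True)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) center of gravity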
S302, when the convexity is less than the preset value V1, judging that the center of gravity of the target object lies outside the object body, i.e., the target object is non-convex, and determining the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point.
Optionally, when the center of gravity of the target object is judged to lie outside the object body, it must further be decided whether the grabbing point is the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain. The decision is made as follows:
S3021, when the convexity is less than the preset value V1, determining the minimum circumscribed rectangle of the connected domain.
S3022, calculating the ratio a of the long side to the short side of the minimum circumscribed rectangle. In general, when the long side is much longer than the short side, the rectangle is long and narrow and extends predominantly in the length direction; when the two sides are close in length, the rectangle is nearly square and extends comparably in the width direction. In this embodiment, the preset value V2 for the ratio a is preferably selected in the range 1.5-2.0.
When the ratio a is greater than the preset value V2, as shown in FIG. 3, the minimum circumscribed rectangle extends predominantly in the length direction, and the center of the maximum inscribed rectangle of the connected domain is therefore determined as the grabbing point;
when the ratio a is less than or equal to the preset value V2, as shown in FIG. 4, the minimum circumscribed rectangle extends comparably in the width direction, and the center of the maximum inscribed circle of the connected domain is therefore determined as the grabbing point.
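A sketch of S3021/S3022 and the inscribed-circle branch under the same assumptions. The side ratio comes from cv2.minAreaRect; the center of the maximum inscribed circle is taken as the interior pixel farthest from the boundary, found with a distance transform:

import cv2
import numpy as np

def side_ratio(mask: np.ndarray) -> float:
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.minAreaRect(contour)   # minimum circumscribed rectangle
    long_side, short_side = max(w, h), min(w, h)
    return long_side / short_side if short_side > 0 else float("inf")

def inscribed_circle_center(mask: np.ndarray) -> tuple:
    # Distance of each foreground pixel to the nearest background pixel; the
    # maximum is the center of the maximum inscribed circle of the connected domain.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, _, _, max_loc = cv2.minMaxLoc(dist)
    return max_loc                                  # (x, y)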
Optionally, the connected domain, the minimum circumscribed rectangle, the maximum inscribed rectangle, and the maximum inscribed circle are identified by pixel gradients.
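The description does not spell out an algorithm for the maximum inscribed rectangle; one common substitute, sketched below for the axis-aligned case, is the "largest rectangle in a histogram" dynamic program over the mask rows (an oriented variant would first rotate the mask by the minimum circumscribed rectangle's angle):

import numpy as np

def max_inscribed_rect_center(mask: np.ndarray) -> tuple:
    h, w = mask.shape
    heights = np.zeros(w, dtype=int)
    best = (0, 0, 0, 0, 0)                           # (area, top, left, bottom, right)
    for y in range(h):
        heights = np.where(mask[y] > 0, heights + 1, 0)  # consecutive foreground above
        stack = []                                   # column indices, heights increasing
        for x in range(w + 1):
            cur = heights[x] if x < w else 0
            while stack and heights[stack[-1]] >= cur:
                top_h = heights[stack.pop()]
                left = stack[-1] + 1 if stack else 0
                area = top_h * (x - left)
                if area > best[0]:
                    best = (area, y - top_h + 1, left, y, x - 1)
            stack.append(x)
    _, top, left, bottom, right = best
    return ((left + right) / 2.0, (top + bottom) / 2.0)  # (x, y) center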
Embodiment 2:
This embodiment provides a method for determining a grabbing point of an object of arbitrary shape, as shown in FIG. 5, comprising the following steps:
S1, acquiring a probability map of the graspable portions of an object set.
Specifically, obtaining the probability map of the graspable portions of the object set comprises:
S101, acquiring an image of the object set;
S102, inputting the image into the trained deep learning model and outputting the probability map of the graspable portions of the object set.
Optionally, the image contains a plurality of target objects. The acquired image may be a color image, a depth image, or both; the image is input into the deep learning model, which, based on deep-learning semantic segmentation, predicts for each pixel the probability that an object can be successfully picked up by a suction cup at that point. The resulting 2D probability map is the probability map of the graspable portions.
S2, binarizing the probability map of the graspable portions to obtain a binary image, wherein the binary image comprises at least two connected domains, each corresponding to one target object.
Since the object set may contain two or more target objects, the binary image obtained includes the connected domain of each target object.
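A sketch of the per-object decomposition in this embodiment, under the same assumed conventions as in Embodiment 1: each foreground connected domain of the binary image is extracted as its own mask and then processed with the single-object routines (the min_area filter is an illustrative guard against speckle noise):

import cv2
import numpy as np

def masks_per_object(binary: np.ndarray, min_area: int = 50):
    # Yield one mask per connected domain; label 0 is the background.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    for label in range(1, num):
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            yield (labels == label).astype(np.uint8) * 255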
S3, identifying the connected domain of each target object, calculating its convexity, and comparing the convexity with the preset value V1.
Specifically, the convexity of a connected domain is defined and calculated as follows:
first, obtaining the enclosing convex polygon of the connected domain;
then, calculating the area b of the connected domain and the area c of the enclosing convex polygon, respectively;
finally, calculating the ratio b/c, which is defined as the convexity.
It should be noted that the concept of convexity is introduced in the invention to express the ratio between the connected-domain area of the target object and the area of the enclosing convex polygon; this quantity can also be described in other ways, which are not limited here.
Specifically, the minimum enclosing convex polygon is preferably selected as the enclosing convex polygon of the connected domain of the target object. As shown in FIG. 2, the solid line outlines the connected domain of the target object, and the dashed line shows the minimum enclosing convex polygon obtained for that connected domain; in FIG. 2, the minimum enclosing convex polygon is obtained by spanning the recessed portion of the concave object with a straight line.
In this embodiment, when the minimum enclosing convex polygon is used as the enclosing convex polygon, the area b of the connected domain is less than or equal to the area c of the minimum enclosing convex polygon: the larger the recessed or protruding area of the object, the larger the difference between b and c, and the smaller that area, the smaller the difference. Accordingly, in this embodiment the preset value V1 for the ratio b/c is selected in the range 0.6-1.
S301, when the convexity is greater than or equal to the preset value V1, judging that the center of gravity of the target object corresponding to the connected domain lies on the object body, and determining the centroid of the connected domain as the grabbing point;
S302, when the convexity is less than the preset value V1, judging that the center of gravity of the target object corresponding to the connected domain lies outside the object body, i.e., the target object is non-convex, and determining the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point.
Optionally, when the center of gravity of the target object is judged to lie outside the object body, it must further be decided whether the grabbing point is the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain. The decision is made as follows:
S3021, when the convexity is less than the preset value V1, determining the minimum circumscribed rectangle of the connected domain.
S3022, calculating the ratio a of the long side to the short side of the minimum circumscribed rectangle;
when the ratio a is greater than the preset value V2, as shown in FIG. 3, the minimum circumscribed rectangle extends predominantly in the length direction, and the center of the maximum inscribed rectangle of the connected domain is therefore determined as the grabbing point;
when the ratio a is less than or equal to the preset value V2, as shown in FIG. 4, the minimum circumscribed rectangle extends comparably in the width direction, and the center of the maximum inscribed circle of the connected domain is therefore determined as the grabbing point.
Optionally, the connected domain, the minimum circumscribed rectangle, the maximum inscribed rectangle, and the maximum inscribed circle are identified by pixel gradients.
In the following, the determination of a grabbing point for an object whose center of gravity lies on its body is illustrated with an elongated object, using the grabbing-point determination methods of Embodiments 1 and 2. The process is as follows:
S101, photographing an elongated object to acquire its image;
S102, inputting the image into the trained deep learning model and outputting the probability map of the graspable portion of the elongated object.
S2, binarizing the probability map of the graspable portion of the elongated object to obtain a binary image of the connected domain. Since an elongated object has a regular shape, the connected domain obtained is likewise regular.
S3, identifying the connected domain, calculating its convexity, and comparing the convexity with the preset value V1.
Specifically, in one case, when the elongated object has uniform thickness throughout, the shape of its connected domain is essentially identical to that of the minimum enclosing convex polygon of the connected domain, and the area ratio b/c between the two is close to 1. The convexity of the elongated object is therefore approximately 1, which is greater than the preset value V1 (selected within 0.6-1). The center of gravity of the elongated object is judged to be on its body and to coincide with its center, so the centroid (i.e., the center) of the connected domain is determined as the grabbing point; that is, the center of gravity of the elongated object is the grabbing point.
In another case, when the thickness of the elongated object varies from place to place, the shape of its connected domain still does not differ greatly from that of the minimum enclosing convex polygon. The area ratio b/c, i.e., the convexity, is about 0.75-0.85, which is not smaller than a preset value V1 selected at the lower end of the 0.6-1 range. The center of gravity of the elongated object is judged to be on its body, near its thicker portion, and the centroid of the connected domain is determined as the grabbing point; that is, the center of gravity of the elongated object is the grabbing point.
The following illustrates the grabbing-point determination methods of Embodiments 1 and 2 with the objects shaped as in FIGS. 2, 3, and 4. The process is as follows:
S101, photographing the objects shown in FIGS. 2, 3, and 4 to acquire their images;
S102, inputting the images into the trained deep learning model and outputting the probability maps of the graspable portions of the objects in FIGS. 2, 3, and 4.
S2, binarizing the probability maps of the graspable portions in FIGS. 2, 3, and 4 to obtain binary images of the connected domains.
S3, identifying each connected domain, calculating its convexity, and comparing the convexity with the preset value V1.
Specifically, for the object in FIG. 2, the ratio of the connected-domain area b to the area c of the minimum enclosing convex polygon, i.e., the convexity, is 0.8, which is greater than the preset value V1 (selected within 0.6-1); the centroid of the connected domain, i.e., the center of gravity of the target object, is therefore determined as the grabbing point.
For the object in FIG. 3, the convexity is 0.3, which is less than the preset value V1 (0.6-1); for the object in FIG. 4, the convexity is 0.42, likewise less than V1. The centers of gravity of the objects in FIGS. 3 and 4 are therefore judged to lie outside the object bodies, i.e., the objects are non-convex, and the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain is determined as the grabbing point.
S3021, since the convexity is less than the preset value V1 (0.6-1), determining the minimum circumscribed rectangles of the connected domains of the objects in FIGS. 3 and 4 (the construction of the enclosing geometry is illustrated in FIG. 2).
S3022, calculating the ratio a of the long side to the short side of each minimum circumscribed rectangle.
Specifically, for the object in FIG. 3, the ratio a is 2.5, which is greater than the preset value V2 (1.5-2.0), indicating that the minimum circumscribed rectangle extends predominantly in the length direction; therefore, as shown in FIG. 3, the center of the maximum inscribed rectangle of the object's connected domain is determined as the grabbing point.
For the object in FIG. 4, the ratio a is 1.13, which is less than the preset value V2 (1.5-2.0); when the ratio a is less than or equal to V2, the minimum circumscribed rectangle extends comparably in the width direction, and therefore, as shown in FIG. 4, the center of the maximum inscribed circle of the object's connected domain is determined as the grabbing point.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in its protection scope.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted merely for clarity; the specification should be taken as a whole, and the technical solutions of the embodiments may be combined as appropriate to form other implementations understandable to those skilled in the art.

Claims (10)

1. A method for determining a grabbing point of an object of arbitrary shape, characterized by comprising the following steps:
acquiring a probability map of the graspable portion of a target object;
binarizing the probability map of the graspable portion to obtain a binary image of a connected domain;
identifying the connected domain, calculating its convexity, and comparing the convexity with a preset value V1;
when the convexity is greater than or equal to the preset value V1, judging that the center of gravity of the target object lies on the object body, and determining the centroid of the connected domain as the grabbing point;
and when the convexity is less than the preset value V1, judging that the center of gravity of the target object lies outside the object body, and determining the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point.
2. The method for determining a grabbing point of an object of arbitrary shape according to claim 1, wherein whether the grabbing point is the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain is determined as follows:
when the convexity is less than the preset value V1, determining the minimum circumscribed rectangle of the connected domain;
calculating the ratio a of the long side to the short side of the minimum circumscribed rectangle;
when the ratio a is greater than a preset value V2, determining the center of the maximum inscribed rectangle of the connected domain as the grabbing point;
and when the ratio a is less than or equal to the preset value V2, determining the center of the maximum inscribed circle of the connected domain as the grabbing point.
3. The method according to claim 2, wherein the connected domain, the minimum circumscribed rectangle, the maximum inscribed rectangle, and the maximum inscribed circle are identified by pixel gradients.
4. The method for determining a grabbing point of an object of arbitrary shape according to claim 1, wherein obtaining the probability map of the graspable portion of the target object comprises the following steps:
acquiring an image of the target object;
and inputting the image into a trained deep learning model and outputting the probability map of the graspable portion of the target object.
5. The method for determining a grabbing point of an object of arbitrary shape according to claim 1, wherein the convexity of the connected domain is defined and calculated as follows:
obtaining an enclosing convex polygon of the connected domain;
calculating the area b of the connected domain and the area c of the enclosing convex polygon, respectively;
and calculating the ratio b/c, which is defined as the convexity.
6. A method for determining a grabbing point of an object of arbitrary shape, characterized by comprising the following steps:
acquiring a probability map of the graspable portions of an object set;
binarizing the probability map of the graspable portions to obtain a binary image, wherein the binary image comprises at least two connected domains, each corresponding to one target object;
identifying the connected domain of each target object, calculating its convexity, and comparing the convexity with a preset value V1;
when the convexity is greater than or equal to the preset value V1, judging that the center of gravity of the target object corresponding to the connected domain lies on the object body, and determining the centroid of the connected domain as the grabbing point;
and when the convexity is less than the preset value V1, judging that the center of gravity of the target object corresponding to the connected domain lies outside the object body, and determining the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain as the grabbing point.
7. The method for determining a grabbing point of an object of arbitrary shape according to claim 6, wherein whether the grabbing point is the center of the maximum inscribed rectangle or the center of the maximum inscribed circle of the connected domain is determined as follows:
when the convexity is less than the preset value V1, determining the minimum circumscribed rectangle of the connected domain;
calculating the ratio a of the long side to the short side of the minimum circumscribed rectangle;
when the ratio a is greater than a preset value V2, determining the center of the maximum inscribed rectangle of the connected domain as the grabbing point;
and when the ratio a is less than or equal to the preset value V2, determining the center of the maximum inscribed circle of the connected domain as the grabbing point.
8. The method according to claim 7, wherein the connected domain, the minimum circumscribed rectangle, the maximum inscribed rectangle, and the maximum inscribed circle are identified by pixel gradients.
9. The method for determining a grabbing point of an object of arbitrary shape according to claim 6, wherein obtaining the probability map of the graspable portions of the object set comprises the following steps:
acquiring an image of the object set;
and inputting the image into a trained deep learning model and outputting the probability map of the graspable portions of the object set.
10. The method for determining a grabbing point of an object of arbitrary shape according to claim 6, wherein the convexity of the connected domain is defined and calculated as follows: obtaining an enclosing convex polygon of the connected domain; calculating the area b of the connected domain and the area c of the enclosing convex polygon, respectively; and calculating the ratio b/c, which is defined as the convexity.
CN202110165940.6A 2021-02-05 2021-02-05 Method for determining grabbing points of object in any shape Active CN112802113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110165940.6A CN112802113B (en) 2021-02-05 2021-02-05 Method for determining grabbing points of object in any shape

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110165940.6A CN112802113B (en) 2021-02-05 2021-02-05 Method for determining grabbing points of object in any shape

Publications (2)

Publication Number Publication Date
CN112802113A true CN112802113A (en) 2021-05-14
CN112802113B CN112802113B (en) 2024-03-19

Family

ID=75814587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110165940.6A Active CN112802113B (en) 2021-02-05 2021-02-05 Method for determining grabbing points of object in any shape

Country Status (1)

Country Link
CN (1) CN112802113B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022252562A1 (en) * 2021-05-31 2022-12-08 Mech-Mind (Beijing) Robotics Technologies Co., Ltd. Robot-based non-planar structure determination method and apparatus, electronic device, and storage medium
CN115741690A (en) * 2022-11-14 2023-03-07 中冶赛迪技术研究中心有限公司 Material bag grabbing method and system, electronic equipment and storage medium


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102126221A (en) * 2010-12-23 2011-07-20 中国科学院自动化研究所 Method for grabbing object by mechanical hand based on image information
CN108510062A (en) * 2018-03-29 2018-09-07 东南大学 A kind of robot irregular object crawl pose rapid detection method based on concatenated convolutional neural network
CN108668637A (en) * 2018-04-25 2018-10-19 江苏大学 A kind of machine vision places grape cluster crawl independent positioning method naturally
CN108818586A (en) * 2018-07-09 2018-11-16 山东大学 A kind of object center of gravity detection method automatically grabbed suitable for manipulator
US20200017317A1 (en) * 2018-07-16 2020-01-16 XYZ Robotics Global Inc. Robotic system for picking, sorting, and placing a plurality of random and novel objects
CN109859208A (en) * 2019-01-03 2019-06-07 北京化工大学 Scene cut and Target Modeling method based on concavity and convexity and RSD feature
KR20200097572A (en) * 2019-02-08 2020-08-19 한양대학교 산학협력단 Training data generation method and pose determination method for grasping object
CN111462232A (en) * 2020-03-13 2020-07-28 广州大学 Object grabbing method and device and storage medium
CN111553949A (en) * 2020-04-30 2020-08-18 张辉 Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN111724444A (en) * 2020-06-16 2020-09-29 中国联合网络通信集团有限公司 Method and device for determining grabbing point of target object and grabbing system
CN112077842A (en) * 2020-08-21 2020-12-15 上海明略人工智能(集团)有限公司 Clamping method, clamping system and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ZHANG JIAHAO ET AL.: "Robotic grasp detection based on image processing and random forest", Multimedia Tools and Applications, vol. 79, pp. 2427, XP037029727, DOI: 10.1007/s11042-019-08302-9 *
Zhang Xinyi: "Research on grasping methods for unknown stacked objects based on 3D reconstruction and relational reasoning" (in Chinese), China Master's Theses Full-text Database (Information Science and Technology), pp. 138-1798 *
Zhu Pengfei: "Research on a fabric sorting system based on a vision robot" (in Chinese), China Master's Theses Full-text Database (Engineering Science and Technology I), 15 June 2018, pp. 024-35 *
Du Xuedan et al.: "A robotic arm grasping method based on deep learning" (in Chinese), Robot, vol. 39, no. 6, pp. 820-828 *
Xie Jinlu: "Motion control of a sorting robot based on image vision" (in Chinese), China Master's Theses Full-text Database (Information Science and Technology), 15 January 2019, pp. 140-1943 *


Also Published As

Publication number Publication date
CN112802113B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN112802113A (en) Method for determining grabbing point of object in any shape
CN109493313B (en) Vision-based steel coil positioning method and equipment
CN112883820A (en) Road target 3D detection method and system based on laser radar point cloud
CN111428731A (en) Multi-class target identification and positioning method, device and equipment based on machine vision
JP6177541B2 (en) Character recognition device, character recognition method and program
CN115880296B (en) Machine vision-based prefabricated part quality detection method and device
CN113985830A (en) Feeding control method and device for sealing nail, electronic equipment and storage medium
CN115035034A (en) Automatic positioning method and system for thin-wall welding
CN113762159B (en) Target grabbing detection method and system based on directional arrow model
CN111414907A (en) Data set labeling method, data set labeling device and computer-readable storage medium
KR102623766B1 (en) Method, computing device and computer program for recognizing plane through 3d sensor data analysis and curved weld line using the same
CN109313708B (en) Image matching method and vision system
CN113610833A (en) Material grabbing method and device, electronic equipment and storage medium
CN113538557B (en) Box volume measuring device based on three-dimensional vision
CN113034526A (en) Grabbing method, grabbing device and robot
CN112025693A (en) Pixel-level target capture detection method and system of asymmetric three-finger grabber
CN107710229A (en) Shape recognition process, device, equipment and computer-readable storage medium in image
CN115690364A (en) AR model acquisition method, electronic device and readable storage medium
CN111861997A (en) Method, system and device for detecting circular hole size of pattern board
CN112824307A (en) Crane jib control method based on machine vision
CN111507287A (en) Method and system for extracting road zebra crossing corner points in aerial image
CN116243716B (en) Intelligent lifting control method and system for container integrating machine vision
JP2000194861A (en) Method and device for recognizing image
CN114043531B (en) Table tilt angle determination, use method, apparatus, robot, and storage medium
JP6314464B2 (en) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 1100, 1st Floor, No. 6 Chuangye Road, Shangdi Information Industry Base, Haidian District, Beijing 100085

Applicant after: MECH-MIND (BEIJING) ROBOTICS TECHNOLOGIES CO.,LTD.

Address before: 100085 1001, floor 1, building 3, No.8 Chuangye Road, Haidian District, Beijing

Applicant before: MECH-MIND (BEIJING) ROBOTICS TECHNOLOGIES CO.,LTD.

Country or region before: China

GR01 Patent grant