CN109955244B - Grabbing control method and device based on visual servo and robot - Google Patents


Info

Publication number
CN109955244B
CN109955244B
Authority
CN
China
Prior art keywords
target object
grabbing
pose information
relative
acquiring
Prior art date
Legal status
Active
Application number
CN201711431821.0A
Other languages
Chinese (zh)
Other versions
CN109955244A (en)
Inventor
熊友军
安昭辉
唐靖华
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201711431821.0A
Publication of CN109955244A
Application granted
Publication of CN109955244B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/08: Programme-controlled manipulators characterised by modular constructions
    • B25J9/10: Programme-controlled manipulators characterised by positioning means for manipulator elements
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J9/1679: Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention is applicable to the technical field of robots and provides a grabbing control method and device based on visual servoing, and a robot. The method comprises the following steps: acquiring pose information of a target object relative to a grabbing device; judging, based on the pose information, whether the target object in the current pose information is in a grabbing area; if the target object is in the grabbing area, adjusting the motion state of the grabbing device; and when the motion state belongs to a preset grabbing state, grabbing the target object in the grabbing area. Because the motion state of the grabbing device is actively adjusted according to the pose information of the target object, the method is not limited by the motion trajectory of the object, achieves effective grabbing of a moving object, and has strong practicability and usability.

Description

Grabbing control method and device based on visual servo and robot
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a grabbing control method and device based on visual servo and a robot.
Background
Conventional gripping devices, such as industrial robots, mostly grip a static object by determining its position and planning a path to that position. However, such devices cannot effectively grab a moving object, especially one whose motion trajectory is uncertain.
Therefore, it is necessary to provide a solution to the above problems.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for controlling grabbing based on visual servoing, and a robot, so as to achieve effective grabbing of a moving object.
The first aspect of the embodiments of the present invention provides a grabbing control method based on visual servoing, including:
acquiring pose information of a target object relative to a grabbing device;
judging whether the target object in the current pose information is in a grabbing area or not based on the pose information;
if the target object is in the grabbing area, adjusting the motion state of the grabbing device;
and when the motion state belongs to a preset grabbing state, grabbing the target object in the grabbing area.
Optionally, before acquiring pose information of the target object with respect to the grasping apparatus, the method further includes:
acquiring photos of a target object at multiple angles;
constructing a three-dimensional model of the target object based on the picture, and extracting feature information of the three-dimensional model;
and matching the image information acquired by the camera with the characteristic information to perform visual servo tracking on the target object.
Optionally, the acquiring pose information of the target object relative to the grasping apparatus includes:
acquiring a relative coordinate of a target object relative to a camera, and taking the relative coordinate as first position and attitude information of the target object;
and converting the first pose information into pose information of the target object relative to the grabbing device through coordinate transformation.
Optionally, the acquiring pose information of the target object relative to the grasping apparatus includes:
moving the target object for a preset distance in the direction opposite to the grabbing direction, and acquiring a centroid coordinate of the target object;
and taking the relative coordinate of the centroid coordinate of the target object relative to the grabbing device as the pose information of the target object relative to the grabbing device.
Optionally, the determining, based on the pose information, whether the target object in the current pose information is in the capture area includes:
judging whether the relative coordinate of the centroid coordinate of the target object relative to the grabbing device is (0, 0, 0);
and if so, determining that the target object in the current pose information is in the grabbing area.
A second aspect of an embodiment of the present invention provides a capture control device based on visual servoing, including:
the first acquisition module is used for acquiring pose information of the target object relative to the grabbing device;
the judging module is used for judging whether the target object in the current pose information is in the grabbing area or not based on the pose information;
the adjusting module is used for adjusting the motion state of the grabbing device if the target object is in the grabbing area;
and the grabbing module is used for grabbing the target object in the grabbing area when the motion state belongs to a preset grabbing state.
Optionally, before acquiring pose information of the target object with respect to the grasping apparatus, the method further includes:
the second acquisition module is used for acquiring photos of the target object from multiple angles;
the extraction module is used for constructing a three-dimensional model of the target object based on the picture and extracting the characteristic information of the three-dimensional model;
and the tracking module is used for matching the image information acquired by the camera with the characteristic information to perform visual servo tracking on the target object.
Optionally, the first obtaining module includes:
the acquisition unit is used for acquiring the relative coordinate of the target object relative to the camera and taking the relative coordinate as the first position and attitude information of the target object;
and the conversion unit is used for converting the first pose information into pose information of the target object relative to the grabbing device through coordinate transformation.
Optionally, the first obtaining module includes:
the moving unit is used for moving the target object for a preset distance in the direction opposite to the grabbing direction and then acquiring the centroid coordinate of the target object;
and the setting unit is used for taking the relative coordinates of the centroid coordinates of the target object relative to the grabbing device as the pose information of the target object relative to the grabbing device.
Optionally, the determining module includes:
and the judging unit is used for judging whether the relative coordinate of the centroid coordinate of the target object relative to the grabbing device is (0, 0, 0);
and the determining unit is used for determining, if so, that the target object in the current pose information is in the grabbing area.
A third aspect of embodiments of the present invention provides a robot, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of the first aspect.
In the embodiment of the invention, the pose information of the target object relative to the grabbing device is acquired, and whether the target object in the current pose information is in the grabbing area is judged based on that pose information. If the target object is in the grabbing area, the motion state of the grabbing device is adjusted, and when the motion state belongs to the preset grabbing state, the target object in the grabbing area is grabbed. In other words, the motion state of the grabbing device is actively adjusted according to the pose information of the target object, so the method is not limited by the motion trajectory of the object, effective grabbing of a moving object is achieved, and the method has strong practicability and usability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart illustrating an implementation of a capture control method based on visual servoing according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a specific implementation of step 101 according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another specific implementation of step 101 according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a specific implementation of step 102 according to an embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating an implementation of a capture control method based on visual servoing according to a second embodiment of the present invention;
fig. 6 is a block diagram of a grabbing control device based on visual servoing according to a third embodiment of the present invention;
fig. 7 is a schematic view of a robot according to a fourth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when … …" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one
Fig. 1 is a schematic flow chart illustrating an implementation of a capture control method based on visual servoing according to an embodiment of the present invention. As shown in fig. 1, the grabbing control method based on visual servoing may specifically include the following steps:
step 101: and acquiring pose information of the target object relative to the grabbing device.
Here, the target object is the grasping target of the robot, and the grabbing device may be a mechanical arm tool of the robot; what is acquired in this step is the pose information of the target object relative to that grabbing device.
As shown in fig. 2, specifically, acquiring pose information of the target object with respect to the grasping apparatus includes:
step 201: and acquiring the relative coordinate of the target object relative to the camera, and taking the relative coordinate as the first attitude information of the target object.
Step 202: and converting the first pose information into pose information of the target object relative to the grabbing device through coordinate transformation.
For step 201 and step 202, the robot acquires the pose information of the object through a camera (for example, a depth camera), and this pose is the pose of the centroid of the target object relative to the camera coordinate system, so it must first be transformed. To calculate the relative pose between the target object and the grasping apparatus (for example, a robot arm tool), the pose of the camera relative to the base is determined by hand-eye calibration; the pose of the target object relative to the grasping apparatus is then

    T_gripper_object = (T_base_gripper)^(-1) * T_base_camera * T_camera_object

wherein T_base_camera denotes the pose of the camera relative to the base, T_base_gripper denotes the pose of the grasping apparatus relative to the base, T_gripper_object denotes the pose of the target object relative to the grasping apparatus, and T_camera_object denotes the pose of the target object relative to the camera.
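The transformation chain above can be evaluated directly with 4x4 homogeneous transformation matrices. The following sketch is illustrative only and is not prescribed by the patent; it assumes Python with NumPy and reuses the names defined above, with T_base_camera coming from hand-eye calibration and T_base_gripper from the robot's forward kinematics:

    import numpy as np

    def invert_transform(T):
        # Invert a rigid transform [R t; 0 1] using R^T, avoiding a general matrix inverse.
        R, t = T[:3, :3], T[:3, 3]
        T_inv = np.eye(4)
        T_inv[:3, :3] = R.T
        T_inv[:3, 3] = -R.T @ t
        return T_inv

    def target_pose_in_gripper(T_base_camera, T_base_gripper, T_camera_object):
        # T_gripper_object = (T_base_gripper)^(-1) * T_base_camera * T_camera_object
        # T_base_camera   : camera pose in the base frame (hand-eye calibration)
        # T_base_gripper  : grabbing-device pose in the base frame (forward kinematics)
        # T_camera_object : target-object pose in the camera frame (visual tracking)
        return invert_transform(T_base_gripper) @ T_base_camera @ T_camera_object

Using the transpose of the rotation block keeps the inversion numerically clean and guarantees the result remains a valid rigid transform.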
As shown in fig. 3, further, the acquiring pose information of the target object with respect to the grasping apparatus includes:
step 301: and moving the target object for a preset distance in the direction opposite to the grabbing direction, and then obtaining the centroid coordinates of the target object.
Step 302: and taking the relative coordinate of the centroid coordinate of the target object relative to the grabbing device as the pose information of the target object relative to the grabbing device.
For step 301 and step 302, the acquired coordinates of the target object are moved by a predetermined distance in the direction opposite to the grabbing direction, and the shifted coordinates are taken as the coordinates of the target object; equivalently, the centroid coordinate of the target object is obtained as if the object had been moved by the preset distance opposite to the grabbing direction. The purpose of this offset is that, after the target object continues to move by the predetermined distance, the centroid coordinate of the grabbing device is already close to the centroid coordinate of the target object, so a motion planning algorithm can start the grabbing device smoothly; the trajectory output by the motion planning is simultaneously input to the visual servo control system of the robot.
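A minimal sketch of the offset described above follows; the function name, the 0.05 m default value, and the assumption that all quantities are expressed in the grabbing-device frame are illustrative choices, not taken from the patent:

    import numpy as np

    def pre_grasp_point(target_centroid, grasp_direction, offset=0.05):
        # Shift the observed centroid by a predetermined distance opposite to the
        # grabbing direction; the shifted point is treated as the (virtual) target
        # coordinate that the motion planner drives the grabbing device towards.
        d = np.asarray(grasp_direction, dtype=float)
        d = d / np.linalg.norm(d)
        return np.asarray(target_centroid, dtype=float) - offset * d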
Step 102: and judging whether the target object in the current pose information is in the grabbing area or not based on the pose information.
As shown in fig. 4, on the basis of the above steps 301 and 302, specifically, the judging whether the target object at the current pose information is in the grab area based on the pose information includes:
step 401: and judging whether the relative coordinate of the centroid coordinate of the target object relative to the grabbing device is (0,0, 0).
Step 402: and if so, determining that the target object in the current pose information is in the grabbing area.
For steps 401 and 402, when the relative coordinate of the centroid of the target object with respect to the grasping apparatus is (0, 0, 0), the centroid coordinate of the target object after the predetermined-distance offset coincides with the coordinate of the grasping apparatus; that is, the actual target object is at the predetermined distance from the grasping apparatus. As can be appreciated, the predetermined distance is small.
step 103: and if the target object is in the grabbing area, adjusting the motion state of the grabbing device.
It will be appreciated that the speed of movement of the target object relative to the gripping device is not necessarily zero when the target object is at the predetermined distance from the gripping device. Therefore, when the target object reaches the preset distance from the gripping device, the motion state of the gripping device starts to be adjusted.
Step 104: and when the motion state belongs to a preset grabbing state, grabbing the target object in the grabbing area.
Illustratively, the preset grabbing state is that the movement speed of the target object relative to the grabbing device is zero. When the motion state of the grabbing device belongs to the preset grabbing state, the target object in the grabbing area is grabbed. It should be noted that, when the motion state belongs to the preset grabbing state, the distance between the grabbing device and the target object has changed from the preset distance to zero, that is, the coordinates of the grabbing device coincide with the original (un-offset) centroid coordinates of the target object.
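The logic of steps 102 to 104 can be condensed into a short control-cycle sketch. It is a hedged illustration only: the tolerances, the gripper object, and its track_velocity/close methods are hypothetical stand-ins for whatever interface the real grabbing device exposes, and the exact (0, 0, 0) test of the patent is replaced by small tolerances, as a real controller would require:

    import numpy as np

    POSITION_TOL = 1e-3   # metres; stands in for the exact (0, 0, 0) test
    VELOCITY_TOL = 1e-2   # metres per second; the preset grabbing state (relative speed ~ zero)

    def in_grab_area(rel_centroid):
        # Step 102: the offset target centroid coincides with the grabbing device.
        return np.linalg.norm(rel_centroid) < POSITION_TOL

    def control_cycle(rel_centroid, rel_velocity, gripper):
        # One visual-servo cycle covering steps 102 to 104 (gripper API is assumed).
        if not in_grab_area(rel_centroid):
            return
        # Step 103: adjust the motion state so the device follows the moving object.
        gripper.track_velocity(rel_velocity)
        # Step 104: grab once the relative speed is effectively zero.
        if np.linalg.norm(rel_velocity) < VELOCITY_TOL:
            gripper.close()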
In the embodiment of the invention, the pose information of the target object relative to the grabbing device is acquired, and whether the target object in the current pose information is in the grabbing area is judged based on that pose information. If the target object is in the grabbing area, the motion state of the grabbing device is adjusted, and when the motion state belongs to the preset grabbing state, the target object in the grabbing area is grabbed. In other words, the motion state of the grabbing device is actively adjusted according to the pose information of the target object, so the method is not limited by the motion trajectory of the object, effective grabbing of a moving object is achieved, and the method has strong practicability and usability.
Example two
Fig. 5 is a schematic flow chart illustrating an implementation of the capture control method based on visual servoing according to the second embodiment of the present invention. As shown in fig. 5, the method may comprise the following steps:
step 501: and acquiring photos of the target object from multiple angles.
Illustratively, the target object is photographed from all around in a simple background environment, so that photos of the object at multiple angles are obtained.
Step 502: and constructing a three-dimensional model of the target object based on the picture, and extracting the characteristic information of the three-dimensional model.
Based on the photos of the target object from multiple angles in step 501, a three-dimensional object model of the target object is generated by a three-dimensional reconstruction technique, and then SIFT (Scale Invariant Feature Transform) features are extracted from the object model.
Step 503: and matching the image information acquired by the camera with the characteristic information to perform visual servo tracking on the target object.
Using an SIFT feature matching algorithm, the image information acquired by the depth camera is matched in real time with the feature information (for example, the SIFT features) extracted from the object model, so that the pose information of the object is obtained in real time. Even if the object is partially occluded, recognition and pose estimation can still be carried out from the unoccluded features, and the target object is thus tracked by visual servoing.
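A minimal sketch of such matching and pose estimation is given below. It assumes OpenCV (the patent does not name a library), assumes that every stored model descriptor is paired with a 3-D point on the reconstructed model (model_points_3d), and takes the camera intrinsics K and distortion coefficients dist as further assumed inputs:

    import cv2
    import numpy as np

    def estimate_target_pose(frame_gray, model_descriptors, model_points_3d, K, dist):
        # Match SIFT features of the current frame against features extracted from the
        # reconstructed object model, then recover the object pose with PnP + RANSAC.
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
        if descriptors is None:
            return None

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(descriptors, model_descriptors, k=2)
        good = [m for m, n in (p for p in matches if len(p) == 2)
                if m.distance < 0.75 * n.distance]          # Lowe ratio test
        if len(good) < 6:
            return None                                      # too few unoccluded features

        img_pts = np.float32([keypoints[m.queryIdx].pt for m in good])
        obj_pts = np.float32([model_points_3d[m.trainIdx] for m in good])
        ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
        return (rvec, tvec) if ok else None

Because the match set is built feature by feature, a partially occluded object can still yield a pose estimate as long as enough unoccluded model features remain visible.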
Step 504: and acquiring pose information of the target object relative to the grabbing device.
Step 505: and judging whether the target object in the current pose information is in the grabbing area or not based on the pose information.
Step 506: and if the target object is in the grabbing area, adjusting the motion state of the grabbing device.
Step 507: and when the motion state belongs to a preset grabbing state, grabbing the target object in the grabbing area.
The steps 504 to 507 are the same as the steps 101 to 104, and reference may be made to the related description of the steps 101 to 104, which is not repeated herein.
In the embodiment of the invention, photos of the target object at multiple angles are obtained, a three-dimensional model of the target object is constructed based on the photos, the feature information of the three-dimensional model is extracted, and the image information acquired by the camera is matched with the feature information to perform visual servo tracking on the target object. Even if the object is partially occluded, recognition and pose estimation can be performed from the unoccluded features, so that the recognition accuracy is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
EXAMPLE III
Referring to fig. 6, a block diagram of a capture control device based on visual servoing according to a third embodiment of the present invention is shown. The visual servo-based grip control apparatus 60 includes: a first obtaining module 61, a judging module 62, an adjusting module 63 and a grabbing module 64. The specific functions of each module are as follows:
and the first acquisition module 61 is used for acquiring the pose information of the target object relative to the grabbing device.
And a judging module 62, configured to judge whether the target object in the current pose information is in the capture area based on the pose information.
And the adjusting module 63 is configured to adjust a motion state of the grabbing device if the target object is in the grabbing area.
And the grabbing module 64 is configured to grab the target object in the grabbing area when the motion state belongs to a preset grabbing state.
Optionally, before acquiring pose information of the target object with respect to the grasping apparatus, the method further includes:
the second acquisition module is used for acquiring photos of the target object from multiple angles;
the extraction module is used for constructing a three-dimensional model of the target object based on the picture and extracting the characteristic information of the three-dimensional model;
and the tracking module is used for matching the image information acquired by the camera with the characteristic information to perform visual servo tracking on the target object.
Optionally, the first obtaining module 61 includes:
the acquisition unit is used for acquiring the relative coordinate of the target object relative to the camera and taking the relative coordinate as the first position and attitude information of the target object;
and the conversion unit is used for converting the first pose information into pose information of the target object relative to the grabbing device through coordinate transformation.
Optionally, the first obtaining module 61 includes:
the moving unit is used for moving the target object for a preset distance in the direction opposite to the grabbing direction and then acquiring the centroid coordinate of the target object;
and the setting unit is used for taking the relative coordinates of the centroid coordinates of the target object relative to the grabbing device as the pose information of the target object relative to the grabbing device.
Optionally, the determining module 62 includes:
and the judging unit is used for judging whether the relative coordinate of the centroid coordinate of the target object relative to the grabbing device is (0,0, 0).
And the determining unit is used for determining that the target object in the current pose information is in the grabbing area if the target object is in the grabbing area.
In the embodiment of the invention, the pose information of the target object relative to the grabbing device is acquired, and whether the target object in the current pose information is in the grabbing area is judged based on that pose information. If the target object is in the grabbing area, the motion state of the grabbing device is adjusted, and when the motion state belongs to the preset grabbing state, the target object in the grabbing area is grabbed. In other words, the motion state of the grabbing device is actively adjusted according to the pose information of the target object, so the method is not limited by the motion trajectory of the object, effective grabbing of a moving object is achieved, and the method has strong practicability and usability.
Example four
Fig. 7 is a schematic view of a robot according to a fourth embodiment of the present invention. As shown in fig. 7, the robot 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72, such as a grab control method program based on visual servoing, stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps of the above-described embodiments of the visual servoing-based grabbing control method, such as the steps 101 to 104 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the modules in the above-described device embodiments, such as the functions of the modules 61 to 64 shown in fig. 6.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the robot 7. For example, the computer program 72 may be divided into a first acquisition module, a judging module, an adjusting module and a grabbing module, and the specific functions of each module are as follows:
the first acquisition module is used for acquiring pose information of the target object relative to the grabbing device.
And the judging module is used for judging whether the target object in the current pose information is in the grabbing area or not based on the pose information.
And the adjusting module is used for adjusting the motion state of the grabbing device if the target object is in the grabbing area.
And the grabbing module is used for grabbing the target object in the grabbing area when the motion state belongs to a preset grabbing state.
The robot 7 may be a desktop computer, a notebook, a palm computer, or other computing devices. The robot may include, but is not limited to, a processor 70, a memory 71. Those skilled in the art will appreciate that fig. 7 is merely an example of a robot and is not intended to be limiting and may include more or fewer components than those shown, or some components in combination, or different components, for example the robot may also include input output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the robot 7, such as a hard disk or a memory of the robot 7. The memory 71 may also be an external storage device of the robot 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the robot 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the robot 7. The memory 71 is used for storing the computer program and other programs and data required by the robot. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/robot are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (7)

1. A grabbing control method based on visual servo is characterized by comprising the following steps:
acquiring pose information of a target object relative to a grabbing device;
judging whether the target object in the current pose information is in a grabbing area or not based on the pose information;
if the target object is in the grabbing area, adjusting the motion state of the grabbing device;
when the motion state belongs to a preset grabbing state, grabbing the target object in the grabbing area;
the acquiring pose information of the target object relative to the grabbing device comprises:
acquiring a relative coordinate of a target object relative to a camera, and taking the relative coordinate as first position and attitude information of the target object;
converting the first pose information into pose information of the target object relative to the grabbing device through coordinate transformation;
or after moving the target object for a preset distance in the direction opposite to the grabbing direction, acquiring the centroid coordinate of the target object;
and taking the relative coordinate of the centroid coordinate of the target object relative to the grabbing device as the pose information of the target object relative to the grabbing device.
2. The vision-servo-based grab control method of claim 1, further comprising, before acquiring pose information of the target object with respect to the grab device:
acquiring photos of a target object at multiple angles;
constructing a three-dimensional model of the target object based on the picture, and extracting feature information of the three-dimensional model;
and matching the image information acquired by the camera with the characteristic information to perform visual servo tracking on the target object.
3. The vision-servo-based grip control method according to claim 1, wherein said determining, based on the pose information, whether the target object at the current pose information is in the grip area includes:
judging whether the relative coordinate of the centroid coordinate of the target object relative to the grabbing device is (0, 0, 0);
and if so, determining that the target object in the current pose information is in the grabbing area.
4. A visual servo-based grip control apparatus, comprising:
the first acquisition module is used for acquiring pose information of the target object relative to the grabbing device;
the judging module is used for judging whether the target object in the current pose information is in the grabbing area or not based on the pose information;
the adjusting module is used for adjusting the motion state of the grabbing device if the target object is in the grabbing area;
the grabbing module is used for grabbing the target object in the grabbing area when the motion state belongs to a preset grabbing state;
the first obtaining module comprises:
the acquisition unit is used for acquiring the relative coordinate of the target object relative to the camera and taking the relative coordinate as the first position and attitude information of the target object;
the conversion unit is used for converting the first pose information into pose information of the target object relative to the grabbing device through coordinate transformation;
or the moving unit is used for moving the target object for a preset distance in the direction opposite to the grabbing direction and then acquiring the centroid coordinate of the target object;
and the setting unit is used for taking the relative coordinates of the centroid coordinates of the target object relative to the grabbing device as the pose information of the target object relative to the grabbing device.
5. The vision-servo-based grip control apparatus according to claim 4, further comprising, before acquiring pose information of the target object with respect to the grip apparatus:
the second acquisition module is used for acquiring photos of the target object from multiple angles;
the extraction module is used for constructing a three-dimensional model of the target object based on the picture and extracting the characteristic information of the three-dimensional model;
and the tracking module is used for matching the image information acquired by the camera with the characteristic information to perform visual servo tracking on the target object.
6. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 3 are implemented when the computer program is executed by the processor.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN201711431821.0A 2017-12-26 2017-12-26 Grabbing control method and device based on visual servo and robot Active CN109955244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711431821.0A CN109955244B (en) 2017-12-26 2017-12-26 Grabbing control method and device based on visual servo and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711431821.0A CN109955244B (en) 2017-12-26 2017-12-26 Grabbing control method and device based on visual servo and robot

Publications (2)

Publication Number Publication Date
CN109955244A CN109955244A (en) 2019-07-02
CN109955244B true CN109955244B (en) 2020-12-15

Family

ID=67022007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711431821.0A Active CN109955244B (en) 2017-12-26 2017-12-26 Grabbing control method and device based on visual servo and robot

Country Status (1)

Country Link
CN (1) CN109955244B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110509275B (en) * 2019-08-26 2022-11-15 广东弓叶科技有限公司 Article clamping method and robot
CN111015655B (en) * 2019-12-18 2022-02-22 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
CN111225554B (en) * 2020-02-19 2021-10-29 鲁班嫡系机器人(深圳)有限公司 Bulk object grabbing and assembling method, device, controller and system
CN111483803B (en) * 2020-04-17 2022-03-04 湖南视比特机器人有限公司 Control method, capture system and storage medium
CN113062697B (en) * 2021-04-29 2023-10-31 北京三一智造科技有限公司 Drill rod loading and unloading control method and device and drill rod loading and unloading equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009501939A (en) * 2005-07-18 2009-01-22 オハイオ ステイト ユニバーシティ Method and system for ultra-precise measurement and control of object motion with 6 degrees of freedom by projection and measurement of interference fringes
CN101402199B (en) * 2008-10-20 2011-01-26 北京理工大学 Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
JP6322949B2 (en) * 2013-10-10 2018-05-16 セイコーエプソン株式会社 Robot control apparatus, robot system, robot, robot control method, and robot control program
CN105751200B (en) * 2014-12-05 2017-10-20 济南鲁智电子科技有限公司 The method of operating of all-hydraulic autonomous mechanical arm
CN104596502B (en) * 2015-01-23 2017-05-17 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN205905026U (en) * 2016-08-26 2017-01-25 沈阳工学院 Robot system based on two mesh stereovisions
CN106485746A (en) * 2016-10-17 2017-03-08 广东技术师范学院 Visual servo mechanical hand based on image no demarcation and its control method
CN107234625B (en) * 2017-07-07 2019-11-26 中国科学院自动化研究所 The method of visual servo positioning and crawl

Also Published As

Publication number Publication date
CN109955244A (en) 2019-07-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Youbixuan Technology Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.