CN109079786B - Mechanical arm grabbing self-learning method and equipment

Info

Publication number
CN109079786B
CN109079786B (application number CN201810942390.2A)
Authority
CN
China
Prior art keywords
point cloud
mechanical arm
cloud group
candidate object
candidate
Prior art date
Legal status
Active
Application number
CN201810942390.2A
Other languages
Chinese (zh)
Other versions
CN109079786A (en)
Inventor
Cewu Lu (卢策吾)
Hao-Shu Fang (方浩树)
Current Assignee
Flexiv Robotics Ltd
Original Assignee
Flexiv Robotics Ltd
Priority date
Filing date
Publication date
Application filed by Flexiv Robotics Ltd filed Critical Flexiv Robotics Ltd
Priority to CN201810942390.2A priority Critical patent/CN109079786B/en
Publication of CN109079786A publication Critical patent/CN109079786A/en
Application granted granted Critical
Publication of CN109079786B publication Critical patent/CN109079786B/en


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention aims to provide a mechanical arm grabbing self-learning method and equipment. A force is applied through the end of the mechanical arm to a point cloud group that may be an object, and whether it is an object is judged from the readings of a force sensor and the motion of the arm end. After the object has been pushed, the method verifies whether the point cloud group of the candidate object belongs to a single object and, at the same time, derives the actual boundary of the object from the change of the point cloud before and after the motion. The mechanical arm can thus explore on its own and learn graspable positions efficiently, reducing cost while achieving good results.

Description

Mechanical arm grabbing self-learning method and equipment
Technical Field
The invention relates to the field of computers, in particular to a mechanical arm grabbing self-learning method and equipment.
Background
Given an object, a task and a gripper, how should the object best be grasped for the task? Existing methods learn object grasping automatically through reinforcement learning. The biggest problem with this approach, however, is data collection: in one existing method for automatically learning grasping with reinforcement learning, 50 mechanical arms were needed to acquire data over more than a month. Grasping by a conventional robot therefore requires a very large number of training samples, at very high cost.
Disclosure of Invention
The invention aims to provide a mechanical arm grabbing self-learning method and equipment.
According to one aspect of the invention, a mechanical arm grabbing self-learning method is provided, and the method comprises the following steps:
step S1, obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
step S2, using the depth information in the RGBD image of the scene, obtaining the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
step S3, moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and, if the point cloud group of the candidate object moves before the force reaches a preset threshold, determining that the candidate is an object to be grabbed;
step S4, verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and step S5, obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a model-free grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
Further, in the above method, obtaining a point cloud group of candidate objects in the RGBD image of the scene includes:
obtaining a point cloud group of candidate objects in the RGBD image of the scene through a deep neural network or a prior on geometric information, wherein a point cloud group of a candidate object is a point cloud group that may be an object.
Further, in the above method, the deep neural network includes VoxelNet.
Further, in the above method, the prior on geometric information includes Real-Time 3D Segmentation of Cluttered Scenes for Robot Grasping.
Further, in the above method, the contact point of the mechanical arm on the surface of the candidate object is obtained according to the motion direction and by using path planning.
Further, in the above method, after the force sensor connected to the mechanical arm detects the applied force, the method further includes: if the point cloud group of the candidate object has not moved when the force reaches the preset threshold, determining that the point cloud group of the candidate object is not an object, ending the process, and returning to step S1.
Further, in the above method, verifying the point cloud group of the candidate object includes:
verifying the point cloud group of the candidate object using optical flow or dense correspondence.
Further, in the above method, after verifying the point cloud group of the candidate object, the method further includes:
and if the verification result shows that the point cloud group of the candidate object belongs to several objects, randomly selecting one of them and obtaining the actual boundary of the selected object from the change of the point cloud before and after the movement.
According to another aspect of the present invention, there is also provided a computing-based device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
obtaining, from the depth information in the RGBD image of the scene, the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and judging the candidate to be an object to be grabbed if the point cloud group of the candidate object moves before the force reaches a preset threshold;
verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a model-free grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
According to another aspect of the present invention, there is also provided a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, cause the processor to:
obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
obtaining, from the depth information in the RGBD image of the scene, the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and judging the candidate to be an object to be grabbed if the point cloud group of the candidate object moves before the force reaches a preset threshold;
verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a model-free grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
Compared with the prior art, the method and equipment of the invention apply a force through the end of the mechanical arm to a point cloud group that may be an object, and judge whether it is an object from the readings of a force sensor and the motion of the arm end; after the object has been pushed, they verify whether the point cloud group of the candidate object belongs to a single object and, at the same time, derive the actual boundary of the object from the change of the point cloud before and after the motion, so that the mechanical arm can explore on its own and learn graspable positions efficiently, reducing cost while achieving good results.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 illustrates a flow diagram of a robotic arm grasping self-learning method in accordance with an aspect of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
As shown in fig. 1, the present application provides a mechanical arm grabbing self-learning method, which includes: step S1, obtaining an RGBD image (R: red, G: green, B: blue, D: depth) of a scene, and obtaining a point cloud group of candidate objects in the RGBD image of the scene;
in some embodiments, the point cloud groups of the candidate objects in the RGBD image of the scene may be obtained through a deep neural network (e.g., VoxelNet) or a prior on geometric information (e.g., Real-Time 3D Segmentation of Cluttered Scenes for Robot Grasping); a point cloud group of a candidate object is a point cloud group that may be an object;
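By way of non-limiting illustration, step S1 could be sketched as follows. The sketch assumes the Open3D library and uses table-plane removal plus DBSCAN clustering as a stand-in for the geometric prior; the function name extract_candidate_clusters and all parameter values are illustrative, not prescribed by the invention:

```python
# A minimal sketch of step S1 under the stated assumptions: Open3D is
# used, the dominant plane is removed by RANSAC, and DBSCAN clustering
# stands in for the geometric prior. Names and parameters are illustrative.
import numpy as np
import open3d as o3d

def extract_candidate_clusters(color_path, depth_path, intrinsic,
                               eps=0.02, min_points=50):
    """Return a list of point cloud groups that may be objects."""
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, convert_rgb_to_intensity=False)
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
    # Drop the dominant plane (e.g. the table) so only candidates remain.
    _, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3, num_iterations=1000)
    rest = pcd.select_by_index(inliers, invert=True)
    labels = np.array(rest.cluster_dbscan(eps=eps, min_points=min_points))
    if labels.size == 0:
        return []
    return [rest.select_by_index(np.flatnonzero(labels == k))
            for k in range(labels.max() + 1)]
```

Each returned cluster plays the role of one "point cloud group of a candidate object" in the steps that follow.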
step S2, using the depth information in the RGBD image of the scene, obtaining the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
here, the contact point is a position at which the mechanical arm can contact the surface of the object; in some embodiments, the contact point of the mechanical arm on the surface of the candidate object may be obtained according to the motion direction and by using path planning;
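A minimal sketch of how the surface normal and contact point of step S2 might be computed, again assuming Open3D; the heuristic of touching the point nearest the cluster centroid, and a camera at the origin for normal orientation, are our assumptions:

```python
# A minimal sketch of step S2, assuming Open3D and a camera at the
# origin; the contact-point heuristic (nearest point to the centroid)
# is illustrative only.
import numpy as np
import open3d as o3d

def contact_point_and_approach(cluster):
    """Return (contact point, approach direction) for a candidate group."""
    cluster.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02,
                                                          max_nn=30))
    # Make normals point back toward the camera so "outward" is known.
    cluster.orient_normals_towards_camera_location(np.zeros(3))
    pts = np.asarray(cluster.points)
    normals = np.asarray(cluster.normals)
    i = int(np.argmin(np.linalg.norm(pts - pts.mean(axis=0), axis=1)))
    # Approach along the negative outward normal, i.e. into the surface.
    return pts[i], -normals[i]
```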
step S3, moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm; if the point cloud group of the candidate object moves before the force reaches a preset threshold, the candidate is determined to be an object to be grabbed, that is, an object that can be grabbed;
in some embodiments, after the force sensor connected to the mechanical arm detects the applied force, the method further includes: if the point cloud group of the candidate object has not moved when the force reaches the preset threshold, determining that the point cloud group of the candidate object is not an object, ending the process, and returning to step S1;
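The decision rule of step S3 can be illustrated with the following sketch; read_force, push_step and cloud_displacement are hypothetical stand-ins for the force-sensor and arm-motion interfaces, which the patent leaves unspecified, and both threshold values are assumed:

```python
# A sketch of the step-S3 decision rule. `read_force`, `push_step` and
# `cloud_displacement` are hypothetical placeholders for the sensor and
# robot interfaces; both thresholds are assumed values.
FORCE_THRESHOLD_N = 8.0   # assumed preset force threshold, in newtons
MOVE_EPSILON_M = 0.005    # assumed displacement that counts as movement

def probe_candidate(read_force, push_step, cloud_displacement):
    """Push along the contact normal and classify the candidate."""
    while read_force() < FORCE_THRESHOLD_N:
        push_step()  # advance the arm end a small step along the normal
        if cloud_displacement() > MOVE_EPSILON_M:
            return "object_to_grab"   # moved before the threshold
    return "not_an_object"            # threshold reached, nothing moved
```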
step S4, verifying the point cloud group of the candidate object, by using optical flow or dense correspondence, according to its movement while the force is applied; if the verification result shows that the point cloud group belongs to a single object, the actual boundary of the object is obtained from the change of the point cloud before and after the movement;
in some embodiments, after verifying the point cloud group of the candidate object using optical flow or dense correspondence, the method further includes: if the verification result shows that the point cloud group of the candidate object belongs to several objects, one of them is randomly selected, and the actual boundary of the selected object is obtained from the change of the point cloud before and after the movement;
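As a non-limiting illustration of the optical-flow variant of step S4, the following sketch uses OpenCV's Farneback dense optical flow; the coherence test (most moving pixels sharing one flow direction implies a single object) and all thresholds are our assumptions:

```python
# A sketch of step-S4 verification with OpenCV Farneback optical flow.
# The coherence test and its thresholds are assumptions, not the
# patent's prescribed criterion.
import cv2
import numpy as np

def moved_as_single_object(frame_before, frame_after, candidate_mask,
                           min_flow_px=1.0, coherence=0.8):
    g0 = cv2.cvtColor(frame_before, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_after, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vecs = flow[candidate_mask > 0]               # flow inside candidate
    moving = vecs[np.linalg.norm(vecs, axis=1) > min_flow_px]
    if moving.size == 0:
        return False                              # nothing moved at all
    unit = moving / np.linalg.norm(moving, axis=1, keepdims=True)
    mean_dir = unit.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    # Fraction of moving pixels roughly aligned with the mean direction.
    return float((unit @ mean_dir > 0.9).mean()) > coherence
```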
step S5, obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a model-free grabbing method (such as grasp pose detection), and controlling the mechanical arm to grab the object at the grabbing point.
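A full grasp-pose-detection pipeline is beyond a short example; the following sketch illustrates the model-free idea of step S5 with a naive antipodal-pair search over the object's points and normals, where all names and thresholds are illustrative:

```python
# A sketch of step S5's model-free idea: a naive antipodal-pair search,
# standing in for a full grasp-pose-detection pipeline.
import numpy as np

def find_grasp_pair(points, normals, max_width=0.08, antipodal_cos=-0.95):
    """Return (i, j) indexing two points with roughly opposing normals
    separated by less than the gripper opening, or None."""
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        cos = normals @ normals[i]            # alignment of normals
        ok = (d > 1e-3) & (d < max_width) & (cos < antipodal_cos)
        idx = np.flatnonzero(ok)
        if idx.size:
            j = int(idx[np.argmin(d[idx])])
            return i, j  # grasp point = midpoint (points[i]+points[j])/2
    return None
```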
In the method, a force is applied through the end of the mechanical arm to a point cloud group that may be an object, and whether it is an object is judged from the readings of the force sensor and the motion of the arm end; after the object has been pushed, the method verifies whether the point cloud group of the candidate object belongs to a single object and, at the same time, derives the actual boundary of the object from the change of the point cloud before and after the motion, so that the mechanical arm can explore on its own and learn graspable positions efficiently, reducing cost while achieving good results.
According to another aspect of the present invention, there is also provided a computing-based device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
obtaining, from the depth information in the RGBD image of the scene, the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and judging the candidate to be an object to be grabbed if the point cloud group of the candidate object moves before the force reaches a preset threshold;
verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a model-free grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
According to another aspect of the present invention, there is also provided a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, cause the processor to:
obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
obtaining, from the depth information in the RGBD image of the scene, the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and judging the candidate to be an object to be grabbed if the point cloud group of the candidate object moves before the force reaches a preset threshold;
verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a model-free grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
For details of the embodiments of the apparatus and the computer-readable storage medium, reference may be made to corresponding parts of the embodiments of the methods, and details are not described herein again.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, part of the present invention may be implemented as a computer program product, such as computer program instructions which, when executed by a computer, may invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Program instructions that invoke the methods of the present invention may be stored on a fixed or removable recording medium, and/or transmitted via a data stream on a broadcast or other signal-bearing medium, and/or stored in a working memory of a computer device operating according to the program instructions. An embodiment according to the invention comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions according to the embodiments of the invention described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (9)

1. A mechanical arm grabbing self-learning method comprises the following steps:
step S1, obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
step S2, using the depth information in the RGBD image of the scene, obtaining the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
step S3, moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and, if the point cloud group of the candidate object moves before the force reaches a preset threshold value, determining that the candidate is an object to be grabbed; after the force sensor connected to the mechanical arm detects the applied force, the method further comprises: if the point cloud group of the candidate object has not moved when the force reaches the preset threshold, determining that the point cloud group of the candidate object is not an object, and returning to step S1;
step S4, verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and step S5, obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
2. The method of claim 1, wherein obtaining a point cloud set of object candidates in an RGBD image of the scene comprises:
and obtaining a point cloud group of candidate objects in the RGBD image of the scene through a deep neural network or a prior on geometric information, wherein a point cloud group of a candidate object is a point cloud group that may be an object.
3. The method of claim 2, wherein the deep neural network comprises VoxelNet.
4. The method of claim 2, wherein the prior on geometric information comprises Real-Time 3D Segmentation of Cluttered Scenes for Robot Grasping.
5. The method according to claim 1, wherein the contact point of the mechanical arm on the surface of the candidate object is obtained according to the motion direction and by using path planning.
6. The method of claim 1, wherein validating the point cloud set of candidate objects comprises:
verifying the point cloud group of the candidate object using optical flow or dense correspondence.
7. The method of claim 1, wherein after validating the point cloud set of candidate objects, further comprising:
and if the verification result shows that the point cloud group of the candidate object belongs to several objects, randomly selecting one of them and obtaining the actual boundary of the selected object from the change of the point cloud before and after the movement.
8. A computing-based device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
obtaining, from the depth information in the RGBD image of the scene, the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and judging the candidate to be an object to be grabbed if the point cloud group of the candidate object moves before the force reaches a preset threshold value; after the force sensor connected to the mechanical arm detects the applied force, the following is further performed: if the point cloud group of the candidate object has not moved when the force reaches the preset threshold, determining that the point cloud group of the candidate object is not an object, and returning to step S1;
verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
9. A computer-readable storage medium having computer-executable instructions stored thereon, wherein the computer-executable instructions, when executed by a processor, cause the processor to:
obtaining an RGBD image of a scene to obtain a point cloud group of candidate objects in the RGBD image of the scene;
obtaining, from the depth information in the RGBD image of the scene, the normal direction of the surface of the candidate object's point cloud group as the direction in which the mechanical arm approaches the point cloud group, obtaining a contact point for the mechanical arm on the surface of the candidate object according to the motion direction, and, after moving the mechanical arm to the vicinity of the contact point, touching the contact point on the surface of the candidate object with the mechanical arm;
moving the end of the mechanical arm so that it begins to apply a force along the normal direction of the contact point, the applied force being detected by a force sensor connected to the mechanical arm, and judging the candidate to be an object to be grabbed if the point cloud group of the candidate object moves before the force reaches a preset threshold value; after the force sensor connected to the mechanical arm detects the applied force, the following is further performed: if the point cloud group of the candidate object has not moved when the force reaches the preset threshold, determining that the point cloud group of the candidate object is not an object, and returning to step S1;
verifying the point cloud group of the candidate object according to its movement while the force is applied, and, if the verification result shows that the point cloud group belongs to a single object, obtaining the actual boundary of the object from the change of the point cloud before and after the movement;
and obtaining the point cloud of the object from its actual boundary, calculating a grabbing point for the mechanical arm from the obtained point cloud using a grabbing method, and controlling the mechanical arm to grab the object at the grabbing point.
CN201810942390.2A 2018-08-17 2018-08-17 Mechanical arm grabbing self-learning method and equipment Active CN109079786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810942390.2A CN109079786B (en) 2018-08-17 2018-08-17 Mechanical arm grabbing self-learning method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810942390.2A CN109079786B (en) 2018-08-17 2018-08-17 Mechanical arm grabbing self-learning method and equipment

Publications (2)

Publication Number Publication Date
CN109079786A CN109079786A (en) 2018-12-25
CN109079786B (en) 2021-08-27

Family

ID=64793889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810942390.2A Active CN109079786B (en) 2018-08-17 2018-08-17 Mechanical arm grabbing self-learning method and equipment

Country Status (1)

Country Link
CN (1) CN109079786B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110450153B (en) * 2019-07-08 2021-02-19 清华大学 Mechanical arm object active picking method based on deep reinforcement learning
CN111906782B (en) * 2020-07-08 2021-07-13 西安交通大学 Intelligent robot grabbing method based on three-dimensional vision
CN112053398B (en) * 2020-08-11 2021-08-27 浙江大华技术股份有限公司 Object grabbing method and device, computing equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018051704A (en) * 2016-09-29 2018-04-05 セイコーエプソン株式会社 Robot control device, robot, and robot system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105415370A (en) * 2014-09-16 2016-03-23 发那科株式会社 Article Pickup Apparatus For Picking Up Randomly Piled Articles
CN105598965A (en) * 2015-11-26 2016-05-25 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN107336234A (en) * 2017-06-13 2017-11-10 赛赫智能设备(上海)股份有限公司 A kind of reaction type self study industrial robot and method of work
CN107748890A (en) * 2017-09-11 2018-03-02 汕头大学 A kind of visual grasping method, apparatus and its readable storage medium storing program for executing based on depth image
CN108127666A (en) * 2017-12-29 2018-06-08 深圳市越疆科技有限公司 A kind of grasping means of mechanical arm, system and mechanical arm

Also Published As

Publication number Publication date
CN109079786A (en) 2018-12-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant