CN108189032B - Automatic fetching method based on visual recognition and mechanical arm - Google Patents

Automatic fetching method based on visual recognition and mechanical arm

Info

Publication number
CN108189032B
CN108189032B (application CN201711479273.9A)
Authority
CN
China
Prior art keywords
target object
taking
face
gravity
origin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711479273.9A
Other languages
Chinese (zh)
Other versions
CN108189032A (en)
Inventor
郎需林
王旭照
刘培超
刘主福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rizhao Yuejiang Intelligent Technology Co ltd
Original Assignee
Rizhao Yuejiang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rizhao Yuejiang Intelligent Technology Co ltd filed Critical Rizhao Yuejiang Intelligent Technology Co ltd
Priority to CN201711479273.9A priority Critical patent/CN108189032B/en
Publication of CN108189032A publication Critical patent/CN108189032A/en
Application granted granted Critical
Publication of CN108189032B publication Critical patent/CN108189032B/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of automation and discloses an automatic fetching method based on visual recognition, and a mechanical arm. The method comprises the following steps: visually acquiring the shape and size of a target object to obtain a digital model; calculating the center of gravity of the digital model; and driving an execution structure to pick up the target object according to the position of the center of gravity. Because the center of gravity of the target object is intelligently determined by visual recognition and calculation before pickup, and the execution structure is then driven according to the center-of-gravity position, the target object can be picked up stably and does not fall after being picked up.

Description

Automatic fetching method based on visual recognition and mechanical arm
Technical Field
The invention relates to the technical field of automation, and in particular to automatic object pickup by a mechanical arm.
Background
A mechanical arm is a mechanical structure that mimics a human hand, with an execution end that can move along three coordinate axes in space. With dedicated position detection and power control, the mechanical arm can intelligently simulate a human hand and realize some of its functions; picking up objects is one of the most important.
In a pickup operation, the execution end moves to the vicinity of the target object, picks it up by vacuum suction, jaw clamping, or similar means, moves to the destination, and puts the object down, completing the pickup. To make pickup more intelligent, prior-art mechanical arms automatically identify the position of the target object by visual recognition and move the execution end to approach and pick it up. However, visual recognition in such pickup processes usually identifies only the position of the target object and cannot accurately judge its shape. Consequently, only objects with preset parameters can be handled, for example products of known size and weight being sorted on a production line. When facing objects without preset parameters, for example building blocks of different sizes and shapes stacked together, a method that simply identifies the position visually and then picks cannot determine the correct pickup position, because the shapes and sizes of the objects differ; pickup then easily fails, or the object falls after being picked up.
Disclosure of Invention
The invention aims to provide an automatic fetching method based on visual recognition, and a mechanical arm, to solve the prior-art problem that a mechanical arm cannot judge the correct pickup position of a target object, so that pickup easily fails or the object falls after being picked up.
To this end, the invention provides an automatic fetching method based on visual recognition, used by a mechanical arm to intelligently pick up a target object, the mechanical arm having an execution structure for pickup. The method comprises the following steps: visually acquiring the shape and size of the target object to obtain a digital model; and calculating the center of gravity of the digital model and driving the execution structure to pick up the target object according to the position of the center of gravity.
Further, a vertical straight line through the center of gravity of the target object is taken as the gravity axis; the execution structure comprises two clamping jaws that act simultaneously on the side walls of the target object, and the line connecting the contact points of the two jaws with the side walls passes through the gravity axis.
Further, a vertical straight line through the center of gravity of the target object is taken as the gravity axis; the execution structure comprises a plurality of clamping jaws that act simultaneously on the side walls of the target object, the contact points of the jaws with the side walls enclose an action pattern, and the gravity axis passes through the action pattern.
Further, the gravity axis passes through the centroid of the action pattern.
Further, the execution structure comprises a suction cup acting on the upper end face of the target object; the target object is a prism with a regular end face, and the centroid of the upper end face serves as the point of action of the suction cup.
Further, the execution structure comprises a suction cup acting on the upper end face of the target object, and the target object is a prism with an irregular end face. The minimum regular pattern circumscribing the upper end face of the target object is determined; with the geometric center of this pattern as the origin, a transformation vector a is obtained with reference to the vacant areas between the minimum regular pattern and the edge of the upper end face, and the origin moved by the transformation vector a serves as the point of action of the suction cup.
Further, the transformation vector a is calculated as follows: a plane coordinate system is established at the origin; for each vacant area, its area is calculated and a direction is determined from its position relative to the origin; an offset vector b is determined from the area and the direction; and all offset vectors b are added to obtain the transformation vector a.
Further, the execution structure comprises a plurality of suction cups acting simultaneously on the upper end face of the target object, and the projection of the center of gravity of the target object onto its upper end face lies within the figure enclosed by the suction cups.
The invention also provides a mechanical arm comprising an execution end, a visual acquisition structure for acquiring the shape and size of the target object, a processing circuit for establishing a digital model from the shape and size of the target object and calculating the position of its center of gravity, and an execution structure arranged at the execution end for picking up the target object according to the center-of-gravity position.
Further, the execution structure comprises clamping jaws or suction cups.
Compared with the prior art, the automatic fetching method and the mechanical arm intelligently determine the center of gravity of the target object by visual recognition and calculation before pickup, and then drive the execution structure according to the center-of-gravity position, so the target object can be picked up stably and does not fall after being picked up.
Drawings
Fig. 1 is a schematic diagram of the visual recognition automatic fetching method according to Embodiment 1 of the invention;
Fig. 2 is a schematic diagram of the visual recognition automatic fetching method according to Embodiment 2 of the invention;
Fig. 3 is a schematic diagram of the visual recognition automatic fetching method according to Embodiment 3 of the invention;
Fig. 4 is a schematic diagram of the visual recognition automatic fetching method according to Embodiment 4 of the invention;
Fig. 5 is a schematic diagram of the visual recognition automatic fetching method according to Embodiment 5 of the invention;
Fig. 6 is a schematic structural diagram of the mechanical arm according to Embodiment 6 of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
In the description of the present invention, it is to be understood that the terms "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are merely for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The implementation of the present embodiment is described in detail below with reference to specific drawings.
The first embodiment is as follows:
as shown in fig. 1 and fig. 6, the present embodiment provides a visual recognition automatic picking method for intelligently picking up an object 2 by a robot arm 1, and the robot arm 1 has an executing structure 11 for picking up.
The visual recognition automatic fetching method comprises the following steps:
visually acquiring the shape and the size of the target object 2 to obtain a digital model of the target object;
the center of gravity of the digital model is calculated, and then the actuator 11 is driven to take the object 2 in accordance with the position of the center of gravity.
Because the execution structure 11 acts directly at the center-of-gravity position of the target object 2, it can apply force there stably during pickup, and the object is unlikely to slip or fall.
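As an illustration only (the patent describes the steps but specifies no algorithm), the center of gravity of a planar polygonal footprint in the digital model can be computed with the standard shoelace-based centroid formula; for a prism of uniform density this point projects directly onto the pickup plane:

```python
def polygon_centroid(vertices):
    """Centroid of a simple (non-self-intersecting) polygon.

    vertices: ordered list of (x, y) tuples (clockwise or
    counter-clockwise). Uses the shoelace-based centroid formula.
    """
    a = 0.0   # twice the signed area, accumulated edge by edge
    cx = 0.0
    cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6.0 * a), cy / (6.0 * a))

# A 2 x 1 rectangular footprint: centroid at (1.0, 0.5)
print(polygon_centroid([(0, 0), (2, 0), (2, 1), (0, 1)]))
```

The formula works for any simple polygon, not only convex ones, which matters for the irregular end faces of Embodiment 4.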
In this embodiment, the actuator 11 is two jaws 11a that act simultaneously on the side wall of the object 2, i.e. by two points of contact with the side wall of the object 2.
After the center of gravity of the target object 2 is determined, the vertical straight line through it is taken as the gravity axis 21. The line connecting the contact points of the two clamping jaws 11a with the side walls of the target object 2 passes through the gravity axis 21, so the two jaws apply force accurately on the gravity axis 21 to clamp the target object 2.
The second embodiment:
As shown in fig. 2 and 6, the execution structure 11 in this embodiment is a plurality of clamping jaws 11b acting simultaneously on the side walls of the target object 2. Since there are several jaws 11b, there are several contact points with the side walls, and these contact points enclose an action pattern 11b1; the gravity axis 21 passes through the action pattern 11b1. The gravity axis of the target object 2 thus lies among the action points of the jaws 11b, and force is applied accurately about the gravity axis 21 to clamp the target object 2.
Preferably, the gravity axis 21 intersects the action pattern 11b1 at the centroid of the action pattern 11b1 itself.
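A minimal sketch, not taken from the patent, of the geometric check this embodiment implies: given the jaw contact points as a polygon and the projection of the gravity axis 21 onto the clamping plane, a standard ray-casting test decides whether the axis passes through the action pattern 11b1:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside `polygon`?

    polygon: ordered list of (x, y) vertices, e.g. jaw contact points.
    point: e.g. the projection of the gravity axis onto that plane.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # Does a horizontal ray from `point` cross this edge?
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

# Three jaw contact points enclosing the gravity-axis projection (1, 1):
contacts = [(0, 0), (3, 0), (1.5, 3)]
print(point_in_polygon((1, 1), contacts))   # inside  -> True
print(point_in_polygon((5, 5), contacts))   # outside -> False
```

The same test also covers Embodiment 5, where the polygon is the figure enclosed by the suction cups 11d instead of jaw contact points.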
Example three:
as shown in fig. 3 and 6, the actuator 11 in this embodiment is a suction cup 11c acting on the upper end surface of the object 2, the object 2 has a prism structure with regular end surfaces, and the center of gravity of the upper end surface of the object 2 serves as the acting point of the suction cup 11 c.
Because a prism has spatial regularity, the projection of its center of gravity onto the upper end face is the centroid of that face; choosing this point as the point of action of the suction cup 11c allows force to be applied accurately on the gravity axis 21 to suck the target object 2 firmly.
Example four:
as shown in fig. 4 and 6, the actuator 11 in this embodiment is also a suction cup 11c acting on the upper end surface of the object 2a, and is different from the third embodiment in that the object 2a in this embodiment has a prism structure with irregular end surfaces.
Because the end face is irregular, it is difficult to determine the center-of-gravity position directly, so this embodiment uses an approximate simulation. Specifically: the minimum regular pattern 2a1 circumscribing the upper end face of the target object 2a is determined; with the geometric center of the minimum regular pattern 2a1 as the origin, a transformation vector a is obtained with reference to the vacant areas 2a11 between the minimum regular pattern 2a1 and the edge of the upper end face; the origin moved by the transformation vector a then serves as the point of action of the suction cup 11c.
The idea is that the larger the area of a vacant region 2a11 in a given direction, the farther the actual center of gravity of the target object 2a lies from the origin in the direction opposite that region. On this principle the transformation vector a is obtained, the point reached by moving the origin along a is taken as the center-of-gravity position of the end face, and this point is used as the point of action of the suction cup 11c to hold the target object 2a.
The transformation vector a is calculated as follows: a plane coordinate system is established at the origin; the area of each vacant region 2a11 is calculated and its direction determined from its position relative to the origin; an offset vector b is then determined from the area and the direction. The larger the area of the corresponding vacant region 2a11, the larger the magnitude of the offset vector b; the direction of b points from the corresponding vacant region 2a11 toward the origin.
After the offset vectors b of all vacant regions 2a11 are determined, they are added to obtain the transformation vector a. Moving the origin by the transformation vector a then gives the center-of-gravity position of the end face of the target object 2a.
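The patent does not give a numeric mapping from region area to vector magnitude, so the sketch below assumes a hypothetical proportionality constant `k`. Each offset vector b points from its vacant region toward the origin with magnitude proportional to the region's area, and the transformation vector a is their sum:

```python
import math

def transformation_vector(vacant_regions, k=0.5):
    """Sum the offset vectors b of Embodiment 4 into vector a.

    vacant_regions: list of (area, centroid) pairs, one per vacant
    region, with centroids in a coordinate system whose origin is the
    geometric center of the circumscribed regular pattern.
    k: hypothetical area-to-magnitude constant (not specified in the
    patent; an assumption for illustration).
    """
    ax, ay = 0.0, 0.0
    for area, (cx, cy) in vacant_regions:
        dist = math.hypot(cx, cy)
        if dist == 0:
            continue  # region centered on the origin contributes no direction
        # b points from the vacant region toward the origin,
        # with magnitude proportional to the region's area.
        ax += k * area * (-cx / dist)
        ay += k * area * (-cy / dist)
    return (ax, ay)

# One vacant region of area 2 centered at (1, 0): a points in -x,
# magnitude k * 2 = 1.0
print(transformation_vector([(2.0, (1.0, 0.0))]))
```

The point of action of the suction cup is then the origin translated by the returned vector, matching the construction described above.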
Example five:
As shown in fig. 5 and 6, the execution structure 11 in this embodiment is a plurality of suction cups 11d acting simultaneously on the upper end face of the target object 2, and the projection of the center of gravity of the target object 2 onto its upper end face (i.e., the projection of its gravity axis 21) lies within the figure enclosed by the suction cups 11d. In other words, the gravity axis 21 lies among the suction cups; the principle is similar to that of the second embodiment, so the target object 2 can be sucked up firmly and with balanced force.
Example six:
as shown in fig. 6, the present embodiment provides a robot arm 1, comprising an execution end 12, a vision acquisition structure 14, a processing circuit 13, and an execution structure 11.
The visual acquisition structure 14 acquires the shape and size of the target object 2 and should include at least two cameras positioned in different directions to capture the target object 2 and collect its information.
The processing circuit 13 is used to build a digital model based on the shape and size of the object 2 and to computationally find its position of the center of gravity.
And the execution structure 11 is arranged at the execution end 12 and is used for taking the target object 2 according to the gravity center position.
Because the mechanical arm 1 of this embodiment acts directly on the calculated center-of-gravity position when picking up the target object 2, it has a degree of intelligent pickup capability, and the object is unlikely to fall during pickup.
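As an illustrative end-to-end sketch of what the processing circuit 13 computes (the patent describes behavior, not an implementation; all names below are assumptions), a uniform-density prism's center of gravity can be taken as the footprint centroid at half the object's height:

```python
from dataclasses import dataclass

@dataclass
class DigitalModel:
    """Hypothetical digital model built by the visual acquisition step."""
    footprint: list   # ordered (x, y) vertices of the object's footprint
    height: float     # prism height

def center_of_gravity(model):
    """Center of gravity of a uniform-density prism: the footprint
    centroid (shoelace formula) at half the height."""
    a = cx = cy = 0.0
    n = len(model.footprint)
    for i in range(n):
        x0, y0 = model.footprint[i]
        x1, y1 = model.footprint[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a), model.height / 2)

model = DigitalModel(footprint=[(0, 0), (2, 0), (2, 2), (0, 2)], height=1.0)
print(center_of_gravity(model))  # (1.0, 1.0, 0.5)
```

The returned point is what the execution structure 11 would be driven toward: jaws clamp about the vertical axis through it, or a suction cup acts at its projection on the upper end face.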
The execution structure 11 may comprise clamping jaws or suction cups: two or more clamping jaws act on the side walls of the target object 2 to clamp it, while one or more suction cups act simultaneously on its upper end face to suck it up.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. An automatic fetching method based on visual recognition, used by a mechanical arm to intelligently take a target object, the mechanical arm having an execution structure for taking, characterized by comprising:
visually acquiring the shape and the size of the target object to obtain a digital model;
calculating the gravity center of the digital model, and driving the execution structure to take the target object according to the position of the gravity center;
the executing structure comprises a sucker acting on the upper end face of the target object, and the target object is of a prism structure with irregular end faces;
and determining a minimum regular pattern which is externally connected with the upper end face of the target object, taking the geometric center of the minimum regular pattern as an origin, obtaining a transformation vector a by referring to a plurality of vacant areas between the minimum regular pattern and the edge of the upper end face of the target object, and moving the origin according to the transformation vector a to be used as an action point of the sucker.
2. The visual recognition automatic fetching method of claim 1, wherein the transformation vector a is calculated by: and establishing a plane coordinate system by using the origin, calculating the area of each vacant area, determining the direction according to the position of the vacant area relative to the origin, determining an offset vector b by using the area and the direction, and adding all the offset vectors b to obtain the transformation vector a.
3. The visual recognition automatic picking method according to claim 1, wherein the execution structure is a plurality of suction cups acting on the upper end face of the object simultaneously, and the projection point of the center of gravity of the object on the upper end face thereof is located within a figure formed by surrounding the plurality of suction cups.
4. A mechanical arm, comprising an execution end, characterized by further comprising:
a visual acquisition structure for acquiring the shape and the size of a target object;
a processing circuit for establishing a digital model according to the shape and the size of the target object and calculating the position of its center of gravity; and
an execution structure arranged at the execution end for taking the target object according to the center-of-gravity position;
the executing structure comprises a sucker acting on the upper end surface of a target object, and the target object is of a prism structure with an irregular end surface;
and determining a minimum regular pattern which circumscribes the upper end face of the target object, taking the geometric center of the minimum regular pattern as an origin, obtaining a transformation vector a with reference to a plurality of vacant areas between the minimum regular pattern and the edge of the upper end face of the target object, and moving the origin according to the transformation vector a to serve as an action point of the suction cup.
CN201711479273.9A 2017-12-29 2017-12-29 Automatic fetching method based on visual recognition and mechanical arm Active CN108189032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711479273.9A CN108189032B (en) 2017-12-29 2017-12-29 Automatic fetching method based on visual recognition and mechanical arm


Publications (2)

Publication Number Publication Date
CN108189032A CN108189032A (en) 2018-06-22
CN108189032B (en) 2023-01-03

Family

ID=62586641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711479273.9A Active CN108189032B (en) 2017-12-29 2017-12-29 Automatic fetching method based on visual recognition and mechanical arm

Country Status (1)

Country Link
CN (1) CN108189032B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110220882B (en) * 2019-05-30 2022-05-17 深圳前海达闼云端智能科技有限公司 Sample detection method, sample detection device, sample calculation device, and computer storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005169564A (en) * 2003-12-11 2005-06-30 Toyota Motor Corp Method for gripping object of optional shape by robot
JP2005231710A (en) * 2004-02-23 2005-09-02 Yanmar Co Ltd Boxing machine
KR20120111245A (en) * 2011-03-31 2012-10-10 성균관대학교산학협력단 Finger gait planning method of robotic hands and finger gait planning apparatus of robotic hands
JP2014210310A (en) * 2013-04-18 2014-11-13 ファナック株式会社 Robot system equipped with robot for carrying work
CN105184019A (en) * 2015-10-12 2015-12-23 中国科学院自动化研究所 Robot grabbing method and system
CN105598965A (en) * 2015-11-26 2016-05-25 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN106485746A (en) * 2016-10-17 2017-03-08 广东技术师范学院 Visual servo mechanical hand based on image no demarcation and its control method
CN106934813A (en) * 2015-12-31 2017-07-07 沈阳高精数控智能技术股份有限公司 A kind of industrial robot workpiece grabbing implementation method of view-based access control model positioning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on a workpiece sorting system based on machine vision; 李亚伟; Shanghai University of Engineering Science; 2016-11-15; full text *
Research on image processing algorithms in a parts sorting system; 单晓杭; Journal of Mechanical & Electrical Engineering; May 2010; Vol. 27, No. 5; full text *

Also Published As

Publication number Publication date
CN108189032A (en) 2018-06-22

Similar Documents

Publication Publication Date Title
JP6793428B1 (en) Robot multi-gripper assembly and method for gripping and holding objects
JP7352260B2 (en) Robot system with automatic object detection mechanism and its operating method
CN111791239B (en) Method for realizing accurate grabbing by combining three-dimensional visual recognition
JP2021030439A (en) Robotic multi-gripper assemblies and methods for gripping and holding objects
WO2017015898A1 (en) Control system for robotic unstacking equipment and method for controlling robotic unstacking
US9707682B1 (en) Methods and systems for recognizing machine-readable information on three-dimensional objects
US9205558B1 (en) Multiple suction cup control
JP2023155399A (en) Robotic system with piece-loss management mechanism
CN106965180A (en) The mechanical arm grabbing device and method of bottle on streamline
CN112025701B (en) Method, device, computing equipment and storage medium for grabbing object
CN107009358A (en) A kind of unordered grabbing device of robot based on one camera and method
JP2019198949A (en) Robot system for taking out work-piece loaded in bulk state and robot system control method
CN110666801A (en) Grabbing industrial robot for matching and positioning complex workpieces
US20180333857A1 (en) Workpiece picking system
US20220292702A1 (en) Image processor, imaging device, robot and robot system
CN113538459B (en) Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection
CN108189032B (en) Automatic fetching method based on visual recognition and mechanical arm
JP2023154055A (en) Robotic multi-surface gripper assemblies and methods for operating the same
US20230173660A1 (en) Robot teaching by demonstration with visual servoing
Pan et al. Manipulator package sorting and placing system based on computer vision
CN113800270A (en) Robot control method and system for logistics unstacking
CN114074331A (en) Disordered grabbing method based on vision and robot
CN114193440A (en) Robot automatic grabbing system and method based on 3D vision
CN206645534U (en) A kind of unordered grabbing device of robot based on double camera
Zhang et al. Aligning micro-gripper to ring object in high precision with microscope vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210202

Address after: 276800 north of Shantou Road West of Hangzhou Road East of Luzhou road Rizhao Economic Development Zone Shandong Province

Applicant after: Rizhao Yuejiang Intelligent Technology Co.,Ltd.

Address before: 518000 4th floor, building 8, area a, Tanglang Industrial Zone, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN YUEJIANG TECHNOLOGY Co.,Ltd.

GR01 Patent grant