CN110962121A - Movement device for loading 3D detection unit and material grabbing method thereof - Google Patents


Info

Publication number
CN110962121A
CN110962121A (application CN201811163269.6A)
Authority
CN
China
Prior art keywords
pose
eye
unit
matrix
grabbing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811163269.6A
Other languages
Chinese (zh)
Other versions
CN110962121B (en)
Inventor
周帅骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Micro Electronics Equipment Co Ltd
Original Assignee
Shanghai Micro Electronics Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Micro Electronics Equipment Co Ltd filed Critical Shanghai Micro Electronics Equipment Co Ltd
Priority to CN201811163269.6A priority Critical patent/CN110962121B/en
Publication of CN110962121A publication Critical patent/CN110962121A/en
Application granted granted Critical
Publication of CN110962121B publication Critical patent/CN110962121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a movement device loaded with a 3D detection unit and a material grabbing method thereof. Under a selected working condition, the grasp pose, the photographing pose and the eye-object pose, together with their corresponding pose matrices, are acquired and used to compute the hand-eye pose between the 3D detection unit and the end execution unit. During actual grabbing, a new photographing pose and eye-object pose are acquired, the control unit computes the complete 3D information of the material and, from it, the accurate grasp pose, and the robot arm unit moves to that grasp pose under the guidance of the 3D detection unit so that the end execution unit grasps the material. The positioning accuracy and reliability of the robot arm unit with respect to the material are thereby improved.

Description

Movement device for loading 3D detection unit and material grabbing method thereof
Technical Field
The invention relates to the field of machine vision, and in particular to a movement device loaded with a 3D detection unit and a material grabbing method thereof.
Background
With the development of industrial robots, machine vision systems are now widely used in modern automated production, in fields such as material transportation, working-condition monitoring, finished-product inspection and quality control. The purpose of machine vision is to provide the robot with information about the object being operated on. Machine vision research covers: object recognition, i.e. detecting the object in an image; pose estimation, i.e. computing the position and orientation of the object in the camera coordinate system; and camera calibration, i.e. determining the position and orientation of the camera relative to the robot. In this way, an object pose can be converted into a robot pose.
In a robot system, camera installations fall into two major categories: either the camera is installed outside the robot arm, fixed relative to the robot base (the world coordinate system), and does not move with the arm; or the camera is mounted on the arm and moves with it.
In the prior art, a vision sensor measures the object to obtain its pose information. A vision sensor installed at the end of the robot arm moves with the arm, which gives it a wider application range and suits it to more working occasions. However, due to technical limitations it is extremely difficult to move the arm end at a constant speed, and its error is far larger than that of a workpiece stage. The accuracy obtained by a common vision sensor is therefore not high, the algorithms of existing automatic handling machines cannot obtain complete 3D information, and the robot cannot grasp accurately, which affects its positioning accuracy and reliability.
Disclosure of Invention
The invention aims to provide a movement device loaded with a 3D detection unit and a material grabbing method thereof that can obtain complete 3D information, thereby improving the positioning accuracy and reliability of the robot arm with respect to the material.
In order to achieve the above object, the invention provides a movement device loaded with a 3D detection unit, comprising a robot arm unit, an end execution unit, a control unit and a 3D detection unit. The end execution unit is disposed at the end of the robot arm unit, the 3D detection unit is disposed on the end execution unit to acquire 3D information of a material, and the control unit guides the robot arm unit to the position of the material according to that 3D information and causes the end execution unit to grasp the material.
Optionally, the 3D detection unit includes a dot matrix camera.
Optionally, a flange is disposed at the end of the robot arm unit, and the robot arm unit and the end execution unit are connected through the flange.
Optionally, the robot arm unit includes a robot arm with n degrees of freedom, where n is greater than or equal to 3.
The invention also provides a material grabbing method, which comprises the following steps:
s1: acquiring the grabbing pose P of the mechanical arm unit and the material under a specific working conditionboThe photographing position of the mechanical arm unit and the tail end execution unitPosture PbtAnd the 3D detection unit and the eye pose P of the materialcoAnd according to the grabbing pose PboAnd a photographing pose PbtAnd the pose P of the eye objectcoObtaining the hand-eye pose P of the 3D detection unit and the tail end execution unittcAnd performs step S2;
s2: when actually grabbing, acquiring the photographing pose PbtPosition and posture of eye object PcoAccording to the hand-eye pose PtcObtaining the grabbing pose PboAnd performs step S3;
s3: the control unit is used for grabbing the pose P according to the pose PboGuiding the robot arm unit to move to the position of the material, and causing the end effector unit to grasp the material, and performing step S2.
Optionally, in step S3, when the installation position of the 3D detection unit and/or the end execution unit is changed, step S1 is performed again.
Optionally, in step S1, obtaining the hand-eye pose P_tc from the grasp pose P_bo, the photographing pose P_bt and the eye-object pose P_co comprises the following steps:
converting the grasp pose P_bo, the photographing pose P_bt and the eye-object pose P_co into a grasp pose matrix M_bo, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a hand-eye pose matrix M_tc from M_bo, M_bt and M_co;
converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
Optionally, the hand-eye pose matrix M_tc is obtained according to the formula M_tc = M_bt^-1 · M_bo · M_co^-1.
Optionally, in step S2, obtaining the grasp pose P_bo from the hand-eye pose P_tc comprises the following steps:
converting the hand-eye pose P_tc, the photographing pose P_bt and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a grasp pose matrix M_bo from M_tc, M_bt and M_co;
converting the grasp pose matrix M_bo into the grasp pose P_bo.
Optionally, the grasp pose matrix M_bo is obtained according to the formula M_bo = M_bt · M_tc · M_co.
Optionally, the robot arm unit has an encoder, and the photographing pose P_bt is acquired by reading the encoder.
Optionally, the 3D detection unit acquires the 3D information of the material, and the control unit obtains the eye-object pose P_co from that 3D information.
According to the invention, the 3D detection unit acquires the 3D information of a material, the control unit guides the robot arm unit to the position of the material according to that information, and the end execution unit grasps the material, so that the working accuracy and reliability of the robot arm unit are improved.
Drawings
Fig. 1 is a schematic view of the movement device loaded with a 3D detection unit according to the invention;
FIG. 2 is a schematic diagram of a dot matrix camera according to the present invention;
In the figures: 1, robot arm unit; 2, 3D detection unit; 3, end execution unit; 4, material.
Detailed Description
Embodiments of the invention are described in more detail below with reference to the schematic drawings. The advantages and features of the invention will become more apparent from this description. Note that the drawings are in a greatly simplified form and not to precise scale; they serve only to illustrate the embodiments of the invention conveniently and clearly.
Referring to Fig. 1, the movement device loaded with a 3D detection unit provided by the invention includes a robot arm unit 1, an end execution unit 3, a control unit and a 3D detection unit 2. The end execution unit 3 is disposed at the end of the robot arm unit 1, and the 3D detection unit 2 is disposed on the end execution unit 3 to acquire 3D information of a material 4. The control unit guides the robot arm unit 1 to the position of the material 4 according to that 3D information and causes the end execution unit 3 to grasp the material 4.
It should be noted that in the present invention there are plural materials 4, all of identical shape.
Further, a flange is provided at the end of the robot arm unit 1, and the robot arm unit 1 and the end execution unit 3 are connected through this flange.
Further, the robot arm unit 1 includes a robot arm with n degrees of freedom, where n is 3 or more.
Taking a 6-degree-of-freedom robot arm as an example, the material grabbing method of the movement device loaded with a 3D detection unit comprises the following steps:
s1: acquiring the grabbing pose P of the mechanical arm unit 1 and the material 4 under a specific working conditionboThe shooting pose P of the mechanical arm unit 1 and the tail end execution unit 3btAnd the 3D detection unit 2 and the eye pose P of the material 4coAnd according to the grabbing pose PboAnd a photographing pose PbtAnd the pose P of the eye objectcoObtaining the hand-eye pose P of the 3D detection unit 2 and the tail end execution unit 3tcAnd performs step S2;
s2: when actually grabbing, acquiring the photographing pose PbtPosition and posture of eye object PcoAccording to the hand-eye pose PtcObtaining the grabbing pose PboAnd performs step S3;
s3: the control unit is used for grabbing the pose P according to the pose PboGuiding the robot arm unit 1 to move to theThe position of the material 4 and the end effector 3 to grasp the material 4 and perform step S2.
Further, in step S3, when the installation position of the 3D detection unit 2 and/or the end execution unit 3 is changed, step S1 must be executed again to reacquire the hand-eye pose P_tc.
Here, the grasp pose P_bo is the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the material 4; the photographing pose P_bt is the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the end execution unit 3; the eye-object pose P_co is the relative pose from the coordinate system of the 3D detection unit 2 to the coordinate system of the material 4; and the hand-eye pose P_tc is the relative pose from the coordinate system of the end execution unit 3 to the coordinate system of the 3D detection unit 2. All coordinate systems in this example follow the right-hand rule.
Specifically, the robot arm unit 1 is taught manually to move to a photographing position and the 3D detection unit 2 is triggered to photograph. Since the 3D detection unit 2 is mounted on the robot arm unit 1 and the motion posture of the robot arm unit 1 follows the Euler transformation, the photographing pose P_bt of the robot arm unit 1 and the end execution unit 3 can be described as (x_bt, y_bt, z_bt, Rx_bt, Ry_bt, Rz_bt). When the 3D detection unit 2 operates, the photographing pose P_bt can be read from the encoder of the robot arm unit 1 and uploaded to the control unit.
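For illustration, a minimal sketch of the pose-to-matrix conversion used throughout the method is given below. The Z·Y·X rotation order and radians are assumptions, since the patent does not state which Euler convention or angle units the arm controller uses; the helper names pose_to_matrix and matrix_to_pose are likewise hypothetical.

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert a pose (x, y, z, Rx, Ry, Rz) into a 4x4 homogeneous
    transform. Rotation order Z*Y*X and radians are assumptions; the
    patent does not fix the Euler convention."""
    x, y, z, rx, ry, rz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # rotation block
    M[:3, 3] = (x, y, z)       # translation block
    return M

def matrix_to_pose(M):
    """Inverse of pose_to_matrix for the same Z*Y*X convention (the
    gimbal-lock singularity at Ry = +/-90 degrees is ignored here)."""
    R = M[:3, :3]
    ry = -np.arcsin(R[2, 0])
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return (M[0, 3], M[1, 3], M[2, 3], rx, ry, rz)
```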
In the embodiment of the present invention, the end execution unit 3 is mounted on the end flange of the robot arm unit 1, so the relative pose from the base coordinate system of the robot arm unit 1 to the coordinate system of the material 4, i.e. the grasp pose P_bo, can be expressed as (x_bo, y_bo, z_bo, Rx_bo, Ry_bo, Rz_bo). In this embodiment it is obtained by manually teaching the robot arm unit 1 to grasp the material 4: a path is specified manually with the teach pendant attached to the robot arm unit 1, and after the 3D detection unit 2 has finished photographing at the photographing position, the robot arm unit 1 is moved to the position for grasping the material 4. At this moment the grasp pose P_bo can be obtained and uploaded to the control unit.
Further, the 3D detection unit 2 includes a dot matrix camera. Specifically, the 3D detection unit 2 in this example may be a dot matrix camera comprising an infrared camera and an infrared dot matrix light source. Its working principle is shown in Fig. 2:
In the figure, A is the projection plane of the infrared dot matrix light source O, and A' is the imaging plane of the infrared camera O'. The relative position of the infrared dot matrix light source and the infrared camera is fixed. The ray Op is one beam emitted by the infrared dot matrix light source; if point p lies on plane A, the position where it is projected onto the object lies on the ray Op. Points at different depths on the object project to different coordinates on the imaging plane of the infrared camera: for example, the points P, P1 and P2 at different depths on the ray Op in Fig. 2 project onto the imaging plane as P', P1' and P2', respectively. The depth information of point P2 in Fig. 2 can be expressed as:
[Equation omitted in the source (image BDA0001820470210000051): the depth of point P2, obtained by triangulation from the projector-camera geometry of Fig. 2.]
where c is the distance between the center of the infrared dot matrix light source and the center of the infrared camera. The coordinates of point P2 in the infrared camera coordinate system can then be obtained from the geometric relations.
In fact, the infrared dot matrix light source emits many rays, which form points of different depths when projected onto the material 4. By encoding the dot matrix image projected by the light source, the points of different depths on its many projection rays are put into one-to-one correspondence with points on the imaging plane of the infrared camera, so that point cloud data of the photographed material 4 can be obtained.
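As a sketch of this geometry: for a projector-camera pair with baseline c and a pinhole camera of focal length f, a dot whose image is displaced by disparity d relative to the reference pattern lies at depth z = f·c/d, and its camera-frame coordinates follow by back-projection. Since the patent's own equation survives only as an image, the symbols f, d and the principal point (u0, v0) below are assumptions.

```python
import numpy as np

def depth_from_disparity(d, f, c):
    """Structured-light triangulation: disparity d (pixels) between
    the reference dot pattern and the live image, focal length f
    (pixels) and projector-camera baseline c give depth z = f*c/d."""
    return f * c / d

def backproject(u, v, z, f, u0, v0):
    """Camera-frame coordinates of a dot imaged at pixel (u, v) with
    depth z, for a pinhole camera with principal point (u0, v0)."""
    return np.array([(u - u0) * z / f, (v - v0) * z / f, z])
```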
The point cloud data is matched against the surface information of the material 4, and the control unit can then calculate the relative pose from the dot-matrix structured-light camera coordinate system to the coordinate system of the material 4, i.e. the eye-object pose P_co, which can be expressed as (x_co, y_co, z_co, Rx_co, Ry_co, Rz_co). Thus the eye-object pose P_co is obtained by the control unit from the image data photographed by the 3D detection unit 2.
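The matching step itself is not spelled out in the patent. A common core of such matching is the least-squares rigid transform between corresponding points (the Kabsch/SVD method), sketched below under the assumption that correspondences between the measured point cloud and the material's surface model have already been found; in practice an ICP-style loop would alternate this estimation with correspondence search.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping the Nx3
    model points src onto the Nx3 measured points dst, returned as a
    4x4 matrix; rows of src and dst are assumed to correspond."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = c_dst - R @ c_src
    return M
```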
Further, in step S1, obtaining the hand-eye pose P_tc from the grasp pose P_bo, the photographing pose P_bt and the eye-object pose P_co comprises the following steps:
converting the grasp pose P_bo, the photographing pose P_bt and the eye-object pose P_co into a grasp pose matrix M_bo, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a hand-eye pose matrix M_tc from M_bo, M_bt and M_co;
converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
In a specific implementation, when the robot arm unit 1 moves to the photographing position and the 3D detection unit 2 photographs, the control unit obtains in sequence the photographing pose P_bt and the eye-object pose P_co; when the robot arm unit 1 is taught to move to the grasping position of the material 4, the control unit obtains the grasp pose P_bo. Once the control unit has acquired P_bt, P_co and P_bo, it converts them into the corresponding three-dimensional transformation matrices, namely the grasp pose matrix M_bo, the photographing pose matrix M_bt and the eye-object pose matrix M_co.
Since all information obtained by the 3D detection unit 2 is described in the coordinate system of the 3D detection unit 2, the robot arm unit 1 can use the information from the vision system only after the relative relationship between the coordinate system of the 3D detection unit 2 and the base coordinate system of the robot arm unit 1 has been determined, i.e. after the 3D detection unit 2 has been calibrated. Step S1 is this calibration process.
In the present embodiment, the motion of the robot arm unit 1 is referenced to its base coordinate system, while the eye-object pose P_co acquired by the 3D detection unit 2 is referenced to the coordinate system of the 3D detection unit 2. To ensure that the end execution unit 3 can grasp the material 4 accurately, the pose relationship between the 3D detection unit 2 and the robot arm unit 1, or equivalently its pose relationship with the end execution unit 3, must be established.
From the grasp pose matrix M_bo, the photographing pose matrix M_bt and the eye-object pose matrix M_co, the control unit calculates the hand-eye pose matrix M_tc between the coordinate system of the end execution unit 3 and the coordinate system of the 3D detection unit 2, where the calculation formula is:
M_tc = M_bt^-1 · M_bo · M_co^-1
The control unit then converts the hand-eye pose matrix M_tc into the hand-eye pose P_tc, which gives the pose relationship between the 3D detection unit 2 and the end execution unit 3 and completes the calibration of the 3D detection unit 2.
Here, the hand-eye pose matrix M_tc is converted into the hand-eye pose P_tc so that the operator can refer to the actual numerical values.
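Numerically, step S1 then reduces to one line of matrix algebra; a minimal sketch, reusing the hypothetical pose_to_matrix / matrix_to_pose helpers from above:

```python
import numpy as np

def calibrate_hand_eye(P_bt, P_bo, P_co):
    """Step S1: M_tc = M_bt^-1 . M_bo . M_co^-1 (the patent's
    calibration formula), returned as the hand-eye pose P_tc."""
    M_bt = pose_to_matrix(P_bt)
    M_bo = pose_to_matrix(P_bo)
    M_co = pose_to_matrix(P_co)
    M_tc = np.linalg.inv(M_bt) @ M_bo @ np.linalg.inv(M_co)
    return matrix_to_pose(M_tc)
```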
Step S2 is then performed: during actual grabbing, acquire the photographing pose P_bt and the eye-object pose P_co, obtain the grasp pose P_bo from the hand-eye pose P_tc, and perform step S3.
In step S2, obtaining the grasp pose P_bo from the hand-eye pose P_tc comprises the following steps:
converting the hand-eye pose P_tc, the photographing pose P_bt and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a grasp pose matrix M_bo from M_tc, M_bt and M_co;
converting the grasp pose matrix M_bo into the grasp pose P_bo.
Specifically, the end execution unit 3 is driven by the robot arm unit 1 to any photographing position; the photographing position can be set manually, and the robot arm unit 1 then drives the end execution unit 3 to it. The photographing positions in successive executions of step S2 may be the same or different, as long as the 3D detection unit 2 can photograph the material 4 to be grasped. When the end execution unit 3 reaches a photographing position, the control unit reads the photographing pose P_bt from the encoder of the robot arm unit 1 and converts it into the photographing pose matrix M_bt.
The 3D detection unit 2 photographs and uploads the image data to the control unit; the control unit processes the received 3D information of the material 4, calculates the eye-object pose P_co, and converts it into the eye-object pose matrix M_co.
The control unit reads the hand-eye pose P_tc obtained in step S1 and converts it into the hand-eye pose matrix M_tc.
From the hand-eye pose matrix M_tc, the photographing pose matrix M_bt and the eye-object pose matrix M_co, the control unit calculates the grasp pose matrix M_bo, where the calculation formula is:
M_bo = M_bt · M_tc · M_co
The control unit then converts the grasp pose matrix M_bo into the grasp pose P_bo, thereby obtaining the specific position information of the material 4.
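Correspondingly, step S2 is one matrix product; a sketch with the same hypothetical helpers:

```python
def grasp_pose(P_bt, P_tc, P_co):
    """Step S2: M_bo = M_bt . M_tc . M_co, giving the grasp pose P_bo
    that the control unit sends to the robot arm unit."""
    M_bo = (pose_to_matrix(P_bt)
            @ pose_to_matrix(P_tc)
            @ pose_to_matrix(P_co))
    return matrix_to_pose(M_bo)
```

Substituting the calibration result back gives M_bt · (M_bt^-1 · M_bo · M_co^-1) · M_co = M_bo, so the two formulas are mutually consistent.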
Step S3 is then executed: the control unit guides the robot arm unit 1 to the position of the material 4 according to the grasp pose P_bo and causes the end execution unit 3 to grasp the material 4, after which step S2 is executed again.
Specifically, the control unit sends the grasp pose P_bo to the robot arm unit 1, which moves to the position of the material 4 and then drives the end execution unit 3 to grasp the material 4, completing the grasping operation.
Specifically, the end execution unit 3 can be a pneumatic gripper or another component; grasping of the material 4 is realized by controlling the action of the end execution unit 3.
Since the hand-eye pose P_tc has already been found in step S1, and P_tc remains constant as long as the relative mounting positions of the 3D detection unit 2 and the end execution unit 3 are unchanged, step S1 needs to be executed only once to acquire the hand-eye pose matrix M_tc; thereafter steps S2 and S3 can be repeated many times. That is, steps S2 and S3 may be repeated for each material 4 until the robot arm unit 1 has grasped every material 4. If the relative mounting position of the 3D detection unit 2 and/or the end execution unit 3 changes, step S1 must be executed again to recalibrate.
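Putting S1 through S3 together, the overall flow can be sketched as the loop below; robot, camera, matcher and the taught poses are placeholders standing in for the arm controller, the dot matrix camera, the point-cloud matching step and the teaching procedure of the patent, not APIs it defines:

```python
# S1 runs once per mounting of the camera/gripper pair.
P_tc = calibrate_hand_eye(P_bt_taught, P_bo_taught, P_co_taught)

while materials_remaining():                  # hypothetical predicate
    P_bt = robot.read_encoder_pose()          # S2: photographing pose
    P_co = matcher.match(camera.capture())    # S2: eye-object pose
    P_bo = grasp_pose(P_bt, P_tc, P_co)       # S2: grasp pose
    robot.move_to(P_bo)                       # S3: guide the arm
    robot.close_gripper()                     # S3: grasp the material
```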
In summary, in the movement device loaded with a 3D detection unit and the material grabbing method thereof provided by the embodiment of the invention, the pose relationship between the 3D detection unit and the end execution unit is established so that complete 3D information of any material can be obtained and its accurate grasp pose derived. The robot arm unit then moves to the position of the material under the guidance of the 3D detection unit, and the end execution unit grasps it, improving the accuracy and reliability of the robot arm unit's operation.
The above description is only a preferred embodiment of the present invention, and does not limit the present invention in any way. It will be understood by those skilled in the art that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A movement device loaded with a 3D detection unit, characterized by comprising a robot arm unit, an end execution unit, a control unit and the 3D detection unit, wherein the end execution unit is disposed at the end of the robot arm unit, the 3D detection unit is disposed on the end execution unit to acquire 3D information of a material, and the control unit guides the robot arm unit to the position of the material according to the 3D information of the material and causes the end execution unit to grasp the material.
2. The movement device loaded with a 3D detection unit according to claim 1, wherein the 3D detection unit comprises a dot matrix camera.
3. The movement device loaded with a 3D detection unit according to claim 1, wherein a flange is provided at the end of the robot arm unit, and the robot arm unit and the end execution unit are connected through the flange.
4. The movement device loaded with a 3D detection unit according to claim 1, wherein the robot arm unit comprises a robot arm with n degrees of freedom, where n is 3 or more.
5. A material grabbing method, characterized by comprising the following steps:
S1: under a specific working condition, acquire the grasp pose P_bo of the robot arm unit and the material, the photographing pose P_bt of the robot arm unit and the end execution unit, and the eye-object pose P_co of the 3D detection unit and the material; obtain from P_bo, P_bt and P_co the hand-eye pose P_tc of the 3D detection unit and the end execution unit; then perform step S2;
S2: during actual grabbing, acquire the photographing pose P_bt and the eye-object pose P_co, obtain the grasp pose P_bo from the hand-eye pose P_tc, and perform step S3;
S3: the control unit guides the robot arm unit to the position of the material according to the grasp pose P_bo and causes the end execution unit to grasp the material, then performs step S2.
6. The material grabbing method according to claim 5, wherein in step S3, when the installation position of the 3D detection unit and/or the end execution unit is changed, step S1 is performed again.
7. The material grabbing method according to claim 5, wherein in step S1, obtaining the hand-eye pose P_tc from the grasp pose P_bo, the photographing pose P_bt and the eye-object pose P_co comprises the following steps:
converting the grasp pose P_bo, the photographing pose P_bt and the eye-object pose P_co into a grasp pose matrix M_bo, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a hand-eye pose matrix M_tc from M_bo, M_bt and M_co;
converting the hand-eye pose matrix M_tc into the hand-eye pose P_tc.
8. The material grabbing method according to claim 7, wherein the hand-eye pose matrix M_tc is obtained according to the formula M_tc = M_bt^-1 · M_bo · M_co^-1.
9. The material grabbing method according to claim 5, wherein in step S2, obtaining the grasp pose P_bo from the hand-eye pose P_tc comprises the following steps:
converting the hand-eye pose P_tc, the photographing pose P_bt and the eye-object pose P_co into a hand-eye pose matrix M_tc, a photographing pose matrix M_bt and an eye-object pose matrix M_co, respectively;
obtaining a grasp pose matrix M_bo from M_tc, M_bt and M_co;
converting the grasp pose matrix M_bo into the grasp pose P_bo.
10. The material grabbing method according to claim 9, wherein the grasp pose matrix M_bo is obtained according to the formula M_bo = M_bt · M_tc · M_co.
11. The material grabbing method according to claim 5, wherein the robot arm unit has an encoder, and the photographing pose P_bt is acquired by reading the encoder.
12. The material grabbing method according to claim 5, wherein the 3D detection unit acquires the 3D information of the material, and the control unit obtains the eye-object pose P_co from the 3D information.
CN201811163269.6A 2018-09-30 2018-09-30 Movement device for loading 3D detection unit and material grabbing method thereof Active CN110962121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811163269.6A CN110962121B (en) 2018-09-30 2018-09-30 Movement device for loading 3D detection unit and material grabbing method thereof


Publications (2)

Publication Number Publication Date
CN110962121A (en) 2020-04-07
CN110962121B CN110962121B (en) 2021-05-07

Family

ID=70029545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811163269.6A Active CN110962121B (en) 2018-09-30 2018-09-30 Movement device for loading 3D detection unit and material grabbing method thereof

Country Status (1)

Country Link
CN (1) CN110962121B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103759716A (en) * 2014-01-14 2014-04-30 清华大学 Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm
CN103895042A (en) * 2014-02-28 2014-07-02 华南理工大学 Industrial robot workpiece positioning grabbing method and system based on visual guidance
CN106272424A (en) * 2016-09-07 2017-01-04 华中科技大学 A kind of industrial robot grasping means based on monocular camera and three-dimensional force sensor
CN107053173A (en) * 2016-12-29 2017-08-18 芜湖哈特机器人产业技术研究院有限公司 The method of robot grasping system and grabbing workpiece
CN107443377A (en) * 2017-08-10 2017-12-08 埃夫特智能装备股份有限公司 Sensor robot coordinate system conversion method and Robotic Hand-Eye Calibration method


Also Published As

Publication number Publication date
CN110962121B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN108109174B (en) Robot monocular guidance method and system for randomly sorting scattered parts
JP6966582B2 (en) Systems and methods for automatic hand-eye calibration of vision systems for robot motion
JP6180087B2 (en) Information processing apparatus and information processing method
CN108098762A (en) A kind of robotic positioning device and method based on novel visual guiding
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
JP2020011339A5 (en) Robot system control method, control program, recording medium, control device, robot system, article manufacturing method
US20050273199A1 (en) Robot system
CN110621447B (en) Robot conveyor calibration method, robot system and control system
CN112010024B (en) Automatic container grabbing method and system based on laser and vision fusion detection
JP2016185572A (en) Robot, robot control device, and robot system
CN104827480A (en) Automatic calibration method of robot system
JP2019113895A (en) Imaging apparatus with visual sensor for imaging work-piece
US20190030722A1 (en) Control device, robot system, and control method
CN112958960B (en) Robot hand-eye calibration device based on optical target
JP2019063955A (en) Robot system, operation control method and operation control program
KR20110095700A (en) Industrial robot control method for workpiece object pickup
JP2020089963A (en) Robot system and coordinate conversion method
CN110962121B (en) Movement device for loading 3D detection unit and material grabbing method thereof
CN116852359A (en) TCP (Transmission control protocol) quick calibration device and method based on robot hand teaching device
US20230123629A1 (en) 3d computer-vision system with variable spatial resolution
JP2015132523A (en) Position/attitude measurement apparatus, position/attitude measurement method, and program
WO2023013740A1 (en) Robot control device, robot control system, and robot control method
JP7477633B2 (en) Robot System
CN110977950B (en) Robot grabbing and positioning method
CN113733078A (en) Method for interpreting fine control quantity of mechanical arm and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant