CN107471218B - Binocular vision-based hand-eye coordination method for double-arm robot - Google Patents


Info

Publication number
CN107471218B
CN107471218B CN201710802484.5A
Authority
CN
China
Prior art keywords
target object
mechanical arm
cameras
coordinate system
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710802484.5A
Other languages
Chinese (zh)
Other versions
CN107471218A (en)
Inventor
罗雨龙
崔宪阳
虞文杰
吴巍
郭毓
黄颖
苏鹏飞
郭飞
陈宝存
肖潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201710802484.5A priority Critical patent/CN107471218B/en
Publication of CN107471218A publication Critical patent/CN107471218A/en
Application granted granted Critical
Publication of CN107471218B publication Critical patent/CN107471218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Abstract

The invention discloses a binocular vision-based hand-eye coordination method for a double-arm robot, belonging to the field of robot vision. Two binocular cameras are used to locate a target, the mechanical arm closer to the target object is selected to grab it, and the target object is repositioned continuously during the motion, so that it is grabbed and delivered to a fixed area. The method solves the problem in existing schemes that, when only a single camera of a binocular pair captures the object, binocular ranging and positioning cannot be performed, and it improves the precision with which the mechanical arm grabs the target object.

Description

Binocular vision-based hand-eye coordination method for double-arm robot
Technical Field
The invention relates to the field of robot vision, in particular to a multi-eye vision-based hand-eye coordination method for a double-arm robot.
Background
Computer vision refers to using a camera in place of the human eye to identify, track and measure a target; it is currently one of the most active research areas, and robot technology based on computer vision is widely applied in the field of robot control. Binocular stereo vision acquires three-dimensional geometric information of an object by capturing two images of it from different positions and computing the position deviation (disparity) between corresponding image points. Vision provides the robot with rich environment and target information and a basis for its judgment and decision-making.
In actual operation, ranging with parallel fixed binocular stereo vision and grabbing with a manipulator suffer from several problems. A common situation is that the target object is occluded or lies outside the field of view, so the capture is incomplete or fails entirely, and accurate positioning and grabbing become impossible. The patent "binocular ranging method, apparatus, and panoramic image stitching method and system thereof" introduces the principle of binocular ranging in detail; in practice, however, a visual blind area often exists when capturing the target object, i.e. the target object cannot be captured by both cameras of the binocular pair at the same time, and the mechanical arm consequently fails to grab it.
Disclosure of Invention
The invention provides a multi-vision-based hand-eye coordination method for a double-arm robot.
The technical solution for realizing the invention is as follows: a method for coordinating hands and eyes of a double-arm robot based on multi-eye vision comprises the following specific steps:
step 1, respectively installing two binocular cameras on mechanical arms of a double-arm robot, and respectively calibrating a transfer matrix from the cameras to the tail ends of the mechanical arms provided with the cameras, a transformation matrix from the tail ends of the mechanical arms to a base and a transformation matrix between two mechanical arm bases;
step 2, controlling the mechanical arm to move, searching for a target object through the binocular cameras, and storing the characteristics of the target object until at least two monocular cameras shoot the target object, so as to complete the search for the target object;
step 3, the upper computer extracts and processes two images with target characteristics when the scanning is terminated, three-dimensional coordinate information of the target object is obtained, and the target object is positioned;
step 4, controlling the mechanical arm to grab based on the three-dimensional information of the target object to finish grabbing the target object;
and 5, moving the target object to a fixed area, and returning the mechanical arm to an initial state to finish the grabbing task.
Compared with the prior art, the invention has the following remarkable advantages:
the invention uses two multi-view cameras for shooting, which resolves the visual blind areas of existing binocular and monocular camera setups and greatly raises the probability that the cameras successfully capture the target. In addition, during grabbing the binocular cameras keep shooting the target in real time, so its position is obtained continuously and its three-dimensional coordinates are updated, which correspondingly improves the grabbing precision of the mechanical arm.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
FIG. 1 is a schematic diagram of two-dimensional coordinate transformation according to the present invention.
FIG. 2 is a schematic diagram of the transformation between an image coordinate system and a gray scale coordinate system according to the present invention.
FIG. 3 is a schematic diagram showing the relationship among the camera coordinate system, the world coordinate system, and the image coordinate system according to the present invention.
FIG. 4 is a schematic diagram of the positioning of a target object according to the present invention.
FIG. 5 is a flow chart of an embodiment of the present invention.
Detailed Description
A method for coordinating hands and eyes of a double-arm robot based on multi-eye vision comprises the following specific steps:
step 1, respectively installing two binocular cameras on mechanical arms of a double-arm robot, and respectively calibrating a transfer matrix from the cameras to the tail ends of the mechanical arms provided with the cameras, a transformation matrix from the tail ends of the mechanical arms to a base and a transformation matrix between two mechanical arm bases;
step 2, controlling the mechanical arm to move, searching for the target object through the binocular cameras, and storing the characteristics of the target object until at least two monocular cameras shoot the target object, completing the search for the target object, wherein the specific steps are as follows:
step 2-1, controlling any mechanical arm base steering engine to slowly rotate to drive a binocular camera to scan and detect a target object;
step 2-2, storing the characteristics of the target object;
step 2-3, controlling the base steering engine to keep the original scanning direction to continue rotating for 30 degrees, and rotating according to the rotatable angle when the rotatable angle is less than 30 degrees;
step 2-4, judging whether the two cameras on the mechanical arm shoot the target object, and stopping scanning if the two cameras on the mechanical arm shoot the target object; if only one camera of the two cameras on the mechanical arm shoots the target object, controlling the other mechanical arm to drive the binocular camera to scan and detect the target object, and repeating the steps 2-2 and 2-3 until at least two monocular cameras shoot the target object to complete the search of the target object.
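The search procedure of steps 2-1 to 2-4 can be sketched as a scanning loop. Everything below (the `Arm` class, the `sees` predicate and the function names) is an illustrative stand-in, not an API from the patent:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

SCAN_STEP_DEG = 30  # step 2-3: rotate the base servo 30 degrees per step

@dataclass
class Arm:
    """Minimal stand-in for one robot arm carrying a binocular (two-camera) head."""
    name: str
    rotatable_range_deg: float
    # sees(angle, cam_index) -> bool: whether camera cam_index detects the target
    sees: Callable[[float, int], bool]
    angle: float = 0.0

    def cameras_seeing_target(self) -> List[int]:
        return [i for i in range(2) if self.sees(self.angle, i)]

def search_target(arms: List[Arm]) -> Optional[List[Tuple[str, int]]]:
    """Scan until at least two monocular cameras (both on one arm, or one per
    arm) have the target in view; returns [(arm_name, cam_index), ...] or None."""
    for k, arm in enumerate(arms):
        remaining = arm.rotatable_range_deg
        while True:
            hits = [(arm.name, i) for i in arm.cameras_seeing_target()]
            if len(hits) == 2:                 # both cameras on this arm: stop scanning
                return hits
            if len(hits) == 1:                 # only one camera: bring in the other arm
                other = arms[1 - k]
                rem2 = other.rotatable_range_deg
                while rem2 > 0:
                    other_hits = other.cameras_seeing_target()
                    if other_hits:
                        return [hits[0], (other.name, other_hits[0])]
                    step = min(SCAN_STEP_DEG, rem2)   # clamp when < 30 degrees remain
                    other.angle += step
                    rem2 -= step
            if remaining <= 0:
                break
            step = min(SCAN_STEP_DEG, remaining)       # clamp when < 30 degrees remain
            arm.angle += step
            remaining -= step
    return None                                        # target not found
```

The clamping `min(SCAN_STEP_DEG, remaining)` mirrors step 2-3's rule of rotating by the remaining rotatable angle when it is less than 30 degrees.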
And 3, extracting and processing two images with target characteristics when the scanning is terminated by the upper computer to obtain three-dimensional coordinate information of the target object and finish the positioning of the target object, wherein the method comprises the following specific steps:
step 3-1, extracting two images with target characteristics to obtain pixel coordinates (u, v) of a target object in image coordinate systems of the two images respectively;
step 3-2, obtaining length coordinates (x, y) of the target object in the image coordinate systems of the two images respectively according to the relationship between the length and the pixels in the coordinate systems, specifically:
$$x = (u - u_0)\,d_x,\qquad y = (v - v_0)\,d_y$$
where $d_x$, $d_y$ denote the width and height of one pixel, and $(u_0, v_0)$ are the pixel coordinates of the image projection point (principal point);
step 3-3, obtaining the three-dimensional coordinates $(x_c, y_c, z_c)$ of the target object in the coordinate system of each camera that shot the two images, according to the relation between the image coordinate system and the camera coordinate system, specifically:
$$z_c\begin{pmatrix}x\\y\\1\end{pmatrix}=\begin{pmatrix}f&0&0&0\\0&f&0&0\\0&0&1&0\end{pmatrix}\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
step 3-4, obtaining the three-dimensional coordinates $(x_w, y_w, z_w)$ of the target object in the world coordinate system from its three-dimensional coordinates $(x_c, y_c, z_c)$ in the two camera coordinate systems, specifically:
$$\begin{pmatrix}x_w\\y_w\\z_w\\1\end{pmatrix}=M\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
in the formula, the matrix M represents a transfer matrix from the camera to the origin of the world coordinate system.
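Steps 3-1 to 3-4 chain into a single pixel-to-world conversion. The sketch below assumes an ideal pinhole camera and, for illustration, a known depth $z_c$ (in the patent the depth comes from binocular ranging over the two images); the function name and argument layout are assumptions, not part of the patent:

```python
import numpy as np

def pixel_to_world(u, v, u0, v0, dx, dy, f, z_c, M):
    """Map a pixel (u, v) to world coordinates, following steps 3-2 to 3-4.
    dx, dy: pixel width/height; (u0, v0): principal point; f: focal length.
    z_c (depth along the optical axis) is assumed known here for illustration.
    M: 4x4 homogeneous transfer matrix from the camera frame to the world frame."""
    # step 3-2: pixel coordinates -> metric image coordinates
    x = (u - u0) * dx
    y = (v - v0) * dy
    # step 3-3: image coordinates -> camera coordinates (pinhole model)
    x_c = x * z_c / f
    y_c = y * z_c / f
    p_cam = np.array([x_c, y_c, z_c, 1.0])
    # step 3-4: camera coordinates -> world coordinates via M
    p_world = M @ p_cam
    return p_world[:3]
```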
And 4, controlling the mechanical arm to grab based on the three-dimensional information of the target object to finish grabbing the target object, wherein the method specifically comprises the following steps:
step 4-1, controlling the mechanical arm with the minimum distance to the target object to grab according to the acquired three-dimensional coordinate information of the target object;
4-2, shooting a target object in real time by using a binocular camera to obtain new three-dimensional coordinate information of the target object;
and 4-3, repeating the step 4-1 and the step 4-2 until the mechanical arm grabs the target object, and performing the step 5.
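Steps 4-1 to 4-3 amount to a look-then-move feedback loop: re-localize the target, pick the nearer arm, step toward the latest coordinates, repeat. A minimal sketch with purely illustrative names and a toy proportional motion model (the patent does not specify the arm controller):

```python
import numpy as np

def grab_with_feedback(arms, locate_target, max_iters=50, tol=1e-3):
    """arms: dict mapping arm name -> current end-effector position (3-vector).
    locate_target(): returns the freshly triangulated target position (step 4-2).
    Returns the name of the arm that reached the target, or None on failure."""
    for _ in range(max_iters):
        target = np.asarray(locate_target())            # re-shoot and re-localize
        # step 4-1: choose the arm currently closest to the target
        name = min(arms, key=lambda k: np.linalg.norm(arms[k] - target))
        err = target - arms[name]
        if np.linalg.norm(err) < tol:                   # close enough: grasp
            return name
        arms[name] = arms[name] + 0.5 * err             # move a fraction toward target
    return None                                         # failed to converge
```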
And 5, moving the target object to a fixed area, and returning the mechanical arm to an initial state to finish the grabbing task.
Binocular stereo vision captures the surrounding scene with two cameras at different positions, analyses the part of the scene common to both views, and infers its three-dimensional information from geometric and parallax principles.
To describe the imaging process of the camera quantitatively, knowledge of coordinate-system transformation and of three coordinate systems is needed: the image coordinate system, the camera coordinate system and the world coordinate system.
1. Transformation of two-dimensional coordinate system
As shown in FIG. 1, the new coordinate system O'-X'Y' is obtained from the original coordinate system O-XY by a rotation through angle θ combined with a translation, taking an arbitrary point $(x_0, y_0)$ of the original two-dimensional coordinate system O-XY as the origin of the new system. The transformation between coordinates in the two systems is then:
$$\begin{pmatrix}x'\\y'\end{pmatrix}=\begin{pmatrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}x-x_0\\y-y_0\end{pmatrix}$$
2. transformation of three-dimensional coordinate systems
Similar to the two-dimensional case, the angles through which the three coordinate axes x, y, z of a three-dimensional coordinate system are rotated are denoted α, β, γ respectively.
Through simplification, a transition matrix can be obtained:
$$R=\begin{pmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&-\sin\alpha&\cos\alpha\end{pmatrix}\begin{pmatrix}\cos\beta&0&-\sin\beta\\0&1&0\\\sin\beta&0&\cos\beta\end{pmatrix}\begin{pmatrix}\cos\gamma&\sin\gamma&0\\-\sin\gamma&\cos\gamma&0\\0&0&1\end{pmatrix}$$
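As a concrete illustration, the rotation-plus-translation transition matrix can be assembled from the three axis rotations. The composition order below ($R_z R_y R_x$, written as active rotations; the coordinate-transform form is their transpose) is one common convention, assumed here since the patent does not fix it:

```python
import numpy as np

def rot_x(a):
    """Active rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    """Active rotation about the y-axis by angle b (radians)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    """Active rotation about the z-axis by angle g (radians)."""
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def transition_matrix(alpha, beta, gamma, t):
    """4x4 homogeneous transform: rotate by alpha, beta, gamma about x, y, z
    (one common composition order) and translate by the 3-vector t."""
    T = np.eye(4)
    T[:3, :3] = rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)
    T[:3, 3] = t
    return T
```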
in both two-dimensional and three-dimensional situations, the coordinates of the same object in both coordinate systems can be transformed into each other when the relationship between the two coordinate systems is known.
3. Image coordinate system
Referring to fig. 2, the image taken by the camera is transmitted to the upper computer in digital form and exists as an m × n array; each element of the array is a pixel, i.e. a grey value of the image. The array is recorded as a grey-scale coordinate system O-UV, where (u, v) denotes an arbitrary point; the unit of this coordinate system is the pixel. Taking a point $O_1$ with pixel coordinates $(u_0, v_0)$ as origin, an image coordinate system $O_1$-XY is established with axes parallel to u and v; the coordinates (x, y) of any point in this system are in mm.
At this time, the width and height of one pixel in the image coordinate system $O_1$-XY are denoted $d_x$ and $d_y$; a point with grey-scale coordinates (u, v) then has the following coordinates in $O_1$-XY:
$$x=(u-u_0)\,d_x,\qquad y=(v-v_0)\,d_y$$
4. camera coordinate system
As shown in FIG. 3, $O_c$-$X_cY_cZ_c$ is the camera coordinate system, i.e. the coordinate system whose origin is the camera optical centre $O_c$.
5. World coordinate system
As shown in FIG. 4, the world coordinate system is the absolute coordinate system of the objective world, introduced to describe the specific position of the camera; the position of any object in space can be expressed in it. It is denoted $O$-$X_wY_wZ_w$; in the present invention its origin is placed at the end of the mechanical arm.
If the coordinates of the target object p in the grey-scale coordinate system of the image captured by the camera are (u, v), its coordinates in the image coordinate system are:
$$x=(u-u_0)\,d_x,\qquad y=(v-v_0)\,d_y$$
Let the coordinates of the target object p in each of the two camera coordinate systems be $(x_c, y_c, z_c)^T$; the relation between the image coordinates and the camera coordinates of p is:
$$z_c\begin{pmatrix}x\\y\\1\end{pmatrix}=\begin{pmatrix}f&0&0&0\\0&f&0&0\\0&0&1&0\end{pmatrix}\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
in the formula, f represents the focal length of the camera.
Let the absolute coordinates of p in the world coordinate system be $(x_w, y_w, z_w)^T$; then there is the coordinate transformation:
$$\begin{pmatrix}x_w\\y_w\\z_w\\1\end{pmatrix}=M\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
In the formula, the matrix M is the transfer matrix from the camera to the origin of the world coordinate system. The specific coordinates cannot be determined from the values in a single picture alone; at least two images, shot by two cameras, are required. Two situations then arise: either the two images are taken by the two monocular cameras on the same mechanical arm, or each image is taken by one monocular camera on each of the two mechanical arms.
When the two images are shot by the two monocular cameras on the same mechanical arm, the end of that arm serves as the origin of the world coordinate system, and the matrix M is the transfer matrix $M_1$ from the camera to the end of the arm, which is obtained by calibration when the arm and camera are installed.
When the two images are shot by one monocular camera on each of the two mechanical arms, the end of either arm is taken as the origin of the world coordinate system. For the image shot by the camera on the arm that establishes the world coordinate system, the matrix M is the transfer matrix $M_1$ from that camera to the end of the arm. For the image shot by the camera on the other arm, $M = M_2T_1T_2T_3$, where $T_1$ is the transformation matrix from the end of the other arm to its base, $T_2$ is the transformation matrix between the two arm bases, $T_3$ is the transformation matrix from the base of the arm establishing the world coordinate system to its end, and $M_2$ is the transfer matrix from the camera on the other arm to the end of that arm. $T_1$, $T_2$, $T_3$ and $M_2$ are obtained by calibration.
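Numerically, M for the cross-arm image is just a chain of 4x4 homogeneous matrices. The sketch below applies the calibrated factors in camera-to-world order (the patent writes the product as M2T1T2T3; reading those factors in application order and left-multiplying, as done here, is our assumption about the convention), with pure translations standing in for real calibration results:

```python
import numpy as np

def trans(x, y, z):
    """4x4 homogeneous pure translation (illustrative calibration stand-in)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def camera_to_world(M2, T1, T2, T3):
    """Chain the calibrated transforms for an image taken by the other arm's
    camera: camera -> that arm's end (M2), that end -> its base (T1),
    its base -> the other base (T2), that base -> the world-frame arm's
    end (T3). A point in camera coordinates is mapped to world coordinates
    by multiplying with the returned matrix."""
    return T3 @ T2 @ T1 @ M2
```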
Therefore, the binocular vision-based hand-eye coordination method for the double-arm robot disclosed by the invention comprehensively uses the information from both binocular cameras, the binocular ranging principle and hand-eye-coordinated control of the mechanical arms to grab the object. It overcomes the failure of accurate grabbing caused by the visual blind area that exists when a single mechanical arm ranges the target object, and improves the grabbing efficiency of the mechanical arms.
The following is a more detailed description with reference to examples.
Example 1
As shown in fig. 4, a stereo-vision model of two cameras at arbitrary positions is established, with the two single cameras arranged at random. In a traditional binocular system the world coordinate system is fixed to a camera coordinate system to reduce computation; in the present invention, however, which camera collects the images is not known in advance, so fixing the world origin to a camera coordinate system is not meaningful. The invention therefore establishes the world coordinate system with the end of either mechanical arm as origin.
In this embodiment, the two images with target features are captured by the two cameras on the same mechanical arm, and the end of that arm is used as the origin of the world coordinate system. The coordinates $(x_c, y_c, z_c)^T$ of the target object p in the two camera coordinate systems are obtained from its coordinates (x, y) in the image coordinate systems of the two images. The transfer matrix M from camera to world origin is then the transfer matrix $M_1$ from the camera on the arm to the end of the arm, so that, from the relation between the camera and world coordinate systems:
$$\begin{pmatrix}x_w\\y_w\\z_w\\1\end{pmatrix}=M_1\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
the absolute coordinates $(x_w, y_w, z_w)^T$ of the target object in the world coordinate system are obtained.
Example 2
In this embodiment, the two images with target features are shot by a camera on each of the two mechanical arms. One image is shot by the camera on the arm that establishes the world coordinate system; for it, the relation between the coordinates of the target object in the camera coordinate system and in the world coordinate system is:
$$\begin{pmatrix}x_w\\y_w\\z_w\\1\end{pmatrix}=M_1\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
The other image comes from the other arm, for which $M = M_2T_1T_2T_3$, where $T_1$ is the transformation matrix from the end of the other arm to its base, $T_2$ is the transformation matrix between the two arm bases, $T_3$ is the transformation matrix from the base of the arm establishing the world coordinate system to its end, and $M_2$ is the transfer matrix from the camera on the other arm to the end of that arm. The relation between the coordinates of the target object in this camera coordinate system and in the world coordinate system is:
$$\begin{pmatrix}x_w\\y_w\\z_w\\1\end{pmatrix}=M_2T_1T_2T_3\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
Combining the two relations yields the three-dimensional coordinates $(x_w, y_w, z_w)^T$ of the target object in the world coordinate system.
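"Combining the two relations" is a triangulation: each view contributes two linear equations in $(x_w, y_w, z_w)$, which are solved jointly by least squares. A sketch under an ideal pinhole model with a shared focal length f (the 3x4 world-to-camera matrices W1, W2 and the function name are illustrative, not the patent's notation):

```python
import numpy as np

def triangulate(f, x1, y1, W1, x2, y2, W2):
    """Least-squares triangulation from two views. (x1, y1) and (x2, y2) are
    metric image coordinates of the same point; W1, W2 are 3x4 world->camera
    transforms [R | t]. From x = f*x_c/z_c, each view gives two linear
    equations: x*(r3.[p;1]) = f*(r1.[p;1]) and y*(r3.[p;1]) = f*(r2.[p;1]),
    where r1, r2, r3 are the rows of W."""
    rows, rhs = [], []
    for (x, y, W) in ((x1, y1, W1), (x2, y2, W2)):
        rows.append(f * W[0, :3] - x * W[2, :3]); rhs.append(x * W[2, 3] - f * W[0, 3])
        rows.append(f * W[1, :3] - y * W[2, :3]); rhs.append(y * W[2, 3] - f * W[1, 3])
    A, b = np.array(rows), np.array(rhs)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)   # 4 equations, 3 unknowns
    return p                                    # (x_w, y_w, z_w)
```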

Claims (5)

1. A method for coordinating hands and eyes of a double-arm robot based on multi-eye vision is characterized by comprising the following specific steps:
step 1, respectively installing two binocular cameras on mechanical arms of a double-arm robot, and respectively calibrating a transfer matrix from the cameras to the tail ends of the mechanical arms provided with the cameras, a transformation matrix from the tail ends of the mechanical arms to a base and a transformation matrix between the bases of the two mechanical arms;
step 2, controlling the mechanical arm to move, searching for a target object through the binocular cameras, and storing the characteristics of the target object until at least two monocular cameras shoot the target object, so as to complete the search for the target object;
step 3, the upper computer extracts and processes two images with target characteristics when the scanning is terminated, three-dimensional coordinate information of the target object is obtained, and the target object is positioned;
step 4, controlling the mechanical arm to grab based on the three-dimensional information of the target object to finish grabbing the target object;
and 5, moving the target object to a fixed area, and returning the mechanical arm to an initial state to finish the grabbing task.
2. The binocular vision based robot hand-eye coordination method according to claim 1, wherein in the step 2, the mechanical arm is controlled to move, the target object is searched through the binocular cameras, the characteristics of the target object are stored until at least two monocular cameras shoot the target object, and the search for the target object is completed, and the specific steps are as follows:
step 2-1, controlling any mechanical arm base steering engine to slowly rotate to drive a binocular camera to scan and detect a target object;
step 2-2, storing the characteristics of the target object;
step 2-3, controlling the base steering engine to keep the original scanning direction to continue rotating for 30 degrees, and rotating according to the rotatable angle when the rotatable angle is less than 30 degrees;
step 2-4, judging whether the two cameras on the mechanical arm shoot the target object, and stopping scanning if the two cameras on the mechanical arm shoot the target object; if only one camera of the two cameras on the mechanical arm shoots the target object, controlling the other mechanical arm to drive the binocular camera to scan and detect the target object, and repeating the steps 2-2 and 2-3 until at least two monocular cameras shoot the target object to complete the search of the target object.
3. The binocular vision based robot hand-eye coordination method according to claim 1, wherein the two images with target features that the upper computer extracts and processes at the end of scanning are the first two captured images containing the target features.
4. The binocular vision based robot hand-eye coordination method according to claim 1, wherein in step 3, the upper computer extracts and processes the image with the target characteristics when the scanning is terminated, obtains the three-dimensional coordinates of the target object in the world coordinate system, and completes the positioning of the target object, and the specific steps are as follows:
step 3-1, extracting two images with target characteristics to obtain pixel coordinates (u, v) of a target object in image coordinate systems of the two images respectively;
step 3-2, obtaining length coordinates (x, y) of the target object in the image coordinate systems of the two images respectively according to the relationship between the length and the pixels in the coordinate systems, specifically:
$$x=(u-u_0)\,d_x,\qquad y=(v-v_0)\,d_y$$
where $d_x$, $d_y$ denote the width and height of one pixel, and $(u_0, v_0)$ are the pixel coordinates of the image projection point (principal point);
step 3-3, obtaining the three-dimensional coordinates $(x_c, y_c, z_c)$ of the target object in the coordinate system of each camera that shot the two images, according to the relation between the image coordinate system and the camera coordinate system, specifically:
$$z_c\begin{pmatrix}x\\y\\1\end{pmatrix}=\begin{pmatrix}f&0&0&0\\0&f&0&0\\0&0&1&0\end{pmatrix}\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
step 3-4, obtaining the three-dimensional coordinates $(x_w, y_w, z_w)$ of the target object in the world coordinate system from its three-dimensional coordinates $(x_c, y_c, z_c)$ in the two camera coordinate systems, specifically:
$$\begin{pmatrix}x_w\\y_w\\z_w\\1\end{pmatrix}=M\begin{pmatrix}x_c\\y_c\\z_c\\1\end{pmatrix}$$
in the formula, the matrix M represents a transfer matrix from the camera to the origin of the world coordinate system.
5. The multi-vision based hand-eye coordination method for the double-arm robot according to claim 1, wherein in step 4 the mechanical arm is controlled to grab based on the three-dimensional information of the target object, completing the grabbing of the target object, with the following specific steps:
step 4-1, controlling the mechanical arm with the minimum distance to the target object to grab according to the acquired three-dimensional coordinate information of the target object;
4-2, shooting a target object in real time by using a binocular camera to obtain new three-dimensional coordinate information of the target object;
and 4-3, repeating the step 4-1 and the step 4-2 until the mechanical arm grabs the target object, and performing the step 5.
CN201710802484.5A 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot Active CN107471218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710802484.5A CN107471218B (en) 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710802484.5A CN107471218B (en) 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot

Publications (2)

Publication Number Publication Date
CN107471218A CN107471218A (en) 2017-12-15
CN107471218B true CN107471218B (en) 2020-09-11

Family

ID=60583694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710802484.5A Active CN107471218B (en) 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot

Country Status (1)

Country Link
CN (1) CN107471218B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109129483A (en) * 2018-08-31 2019-01-04 厦门理工学院 A kind of method, apparatus and robot based on multi-robot Cooperation under cloud platform
CN109341718B (en) * 2018-09-27 2020-12-29 北京伟景智能科技有限公司 Calibration method and device based on multi-view vision
CN109986541A (en) * 2019-05-06 2019-07-09 深圳市恒晟智能技术有限公司 Manipulator
CN110340901B (en) * 2019-06-28 2022-09-27 深圳盈天下视觉科技有限公司 Control method, control device and terminal equipment
CN110605714B (en) * 2019-08-06 2021-08-03 华中科技大学 Hand-eye coordination grabbing method based on human eye fixation point
CN111823223B (en) * 2019-08-19 2023-12-29 北京伟景智能科技有限公司 Robot arm grabbing control system and method based on intelligent stereoscopic vision
CN110640748A (en) * 2019-11-07 2020-01-03 南京工程学院 Binocular vision-based coordinated control method for double-arm robot
CN110825088B (en) * 2019-11-29 2021-10-01 燕山大学 Multi-view vision guiding ship body cleaning robot system and cleaning method
CN113066122B (en) * 2020-05-15 2022-05-13 支付宝(杭州)信息技术有限公司 Image processing method and device
CN112060083B (en) * 2020-08-20 2021-10-29 中国农业大学 Binocular stereoscopic vision system for mechanical arm and measuring method thereof
CN111932448B (en) * 2020-09-09 2021-03-26 南昌虚拟现实研究院股份有限公司 Data processing method, device, storage medium and equipment
CN113804106A (en) * 2021-08-18 2021-12-17 南京理工大学 Double-arm cooperative assembly method based on twice calibration of vision
CN114589698B (en) * 2022-04-07 2023-06-06 北京信息科技大学 Model-free real-time calibration method and device for mechanical arm based on multi-target vision measurement and machine learning
CN115008477B (en) * 2022-08-09 2023-03-21 苏州华兴源创科技股份有限公司 Manipulator movement compensation method, manipulator movement compensation device and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105157680A (en) * 2015-08-28 2015-12-16 北京控制工程研究所 Vision measurement system and method based on combination of global feature and local feature
CN105537824A (en) * 2016-01-27 2016-05-04 华南理工大学 Automatic welding control method based on hand-eye coordination of mechanical arm
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance
CN106426186A (en) * 2016-12-14 2017-02-22 国网江苏省电力公司常州供电公司 Electrified operation robot autonomous operation method based on multi-sensor information fusion
CN106695748A (en) * 2016-12-09 2017-05-24 南京理工大学 Hot-line robot with double mechanical arms

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI530375B (en) * 2014-02-05 2016-04-21 廣明光電股份有限公司 Teaching device and method for robot arm

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105157680A (en) * 2015-08-28 2015-12-16 北京控制工程研究所 Vision measurement system and method based on combination of global feature and local feature
CN105537824A (en) * 2016-01-27 2016-05-04 华南理工大学 Automatic welding control method based on hand-eye coordination of mechanical arm
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance
CN106695748A (en) * 2016-12-09 2017-05-24 南京理工大学 Hot-line robot with double mechanical arms
CN106426186A (en) * 2016-12-14 2017-02-22 国网江苏省电力公司常州供电公司 Electrified operation robot autonomous operation method based on multi-sensor information fusion

Also Published As

Publication number Publication date
CN107471218A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107471218B (en) Binocular vision-based hand-eye coordination method for double-arm robot
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
WO2019114339A1 (en) Method and device for correcting motion of robotic arm
CN111151463B (en) Mechanical arm sorting and grabbing system and method based on 3D vision
CN110751691B (en) Automatic pipe fitting grabbing method based on binocular vision
CN111801198B (en) Hand-eye calibration method, system and computer storage medium
CN110555889A (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN110782394A (en) Panoramic video rapid splicing method and system
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN103971375B (en) A kind of panorama based on image mosaic stares camera space scaling method
CN108161931A (en) The workpiece automatic identification of view-based access control model and intelligent grabbing system
JP2011198349A (en) Method and apparatus for processing information
CN111199556B (en) Indoor pedestrian detection and tracking method based on camera
JP2006010376A (en) Correlation method of stereo image and three-dimensional data generation device
CN106403924B (en) Robot based on depth camera quickly positions and Attitude estimation method
CN112132874B (en) Calibration-plate-free heterogeneous image registration method and device, electronic equipment and storage medium
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN112775959A (en) Method and system for determining grabbing pose of manipulator and storage medium
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
CN112164112A (en) Method and device for acquiring pose information of mechanical arm
CN110544278A (en) rigid body motion capture method and device and AGV pose capture system
JP2004239791A (en) Position measuring method by zooming
CN110445982B (en) Tracking shooting method based on six-degree-of-freedom equipment
CN112288801A (en) Four-in-one self-adaptive tracking shooting method and device applied to inspection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant