CN107471218A - Dual-arm robot hand-eye coordination method based on multi-camera vision - Google Patents

Dual-arm robot hand-eye coordination method based on multi-camera vision

Info

Publication number
CN107471218A
CN107471218A (application CN201710802484.5A; granted as CN107471218B)
Authority
CN
China
Prior art keywords
target object
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710802484.5A
Other languages
Chinese (zh)
Other versions
CN107471218B (en)
Inventor
罗雨龙
崔宪阳
虞文杰
吴巍
郭毓
黄颖
苏鹏飞
郭飞
陈宝存
肖潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201710802484.5A priority Critical patent/CN107471218B/en
Publication of CN107471218A publication Critical patent/CN107471218A/en
Application granted granted Critical
Publication of CN107471218B publication Critical patent/CN107471218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dual-arm robot hand-eye coordination method based on multi-camera vision, belonging to the field of robot vision. Target localization is carried out with more than two cameras, the mechanical arm closer to the target object is chosen to perform the grasp, and the target object is re-localized during the grasp, so that the target object can be grasped and delivered to a fixed region. The dual-arm robot hand-eye coordination method based on multi-camera vision of the present invention solves the problem in existing schemes that binocular ranging and localization cannot be achieved when only one camera of a binocular pair photographs the object, and improves the accuracy with which the mechanical arm grasps the target object.

Description

Dual-arm robot hand-eye coordination method based on multi-camera vision
Technical field
The present invention relates to the field of robot vision, and in particular to a dual-arm robot hand-eye coordination method based on multi-camera vision.
Background technology
Computer vision refers to using cameras in place of the human eye to identify, track and measure targets. As one of the most active research frontiers today, robot technology based on computer vision is widely applied in the field of robot control. Binocular stereo vision uses imaging devices to acquire two images of the measured object from different positions and obtains the three-dimensional geometric information of the object by computing the positional deviation between corresponding points of the images. Vision provides the robot with rich environment and target information and supplies the basis for the robot's judgement and decision making.
In actual operation, ranging with a parallel, fixed binocular stereo vision rig and grasping with a manipulator suffer from various problems. A common situation is that the target object is occluded or lies outside the field of view, so that it is photographed incompletely or not at all and cannot be accurately located and grasped. For example, the patent "Binocular distance-finding method, device, and panorama stitching method and system" describes the principle of binocular ranging in detail. In real ranging, however, there is often a blind-zone problem in catching the target object: the two cameras of the binocular pair cannot capture the target object simultaneously, so the mechanical arm fails to grasp it.
Content of the invention
The present invention proposes a dual-arm robot hand-eye coordination method based on multi-camera vision.
The technical solution realizing the present invention is a dual-arm robot hand-eye coordination method based on multi-camera vision, whose concrete steps are:
Step 1, mount one binocular camera on each mechanical arm of the dual-arm robot, and calibrate, respectively, the transfer matrix from each camera to the end of the mechanical arm carrying it, the transformation matrix from each mechanical arm end to its base, and the transformation matrix between the two mechanical arm bases;
Step 2, control the mechanical arms to move, search for the target object with the binocular cameras, and save the features of the target object, until at least two monocular cameras have photographed the target object, completing the search for the target object;
Step 3, the host computer extracts and processes the two images with the target feature obtained when scanning ends, obtains the three-dimensional coordinate information of the target object, and completes the localization of the target object;
Step 4, based on the three-dimensional information of the target object, control a mechanical arm to grasp, completing the grasp of the target object;
Step 5, move the target object to the fixed region and return the mechanical arm to its initial state, completing the grasping task.
Compared with the prior art, the remarkable advantages of the present invention are:
The present invention shoots with more than two cameras, which solves the blind-zone problem of existing binocular and monocular cameras and greatly increases the probability that a camera successfully catches the target. During the grasping process the binocular camera also shoots the target in real time, continuously obtaining the target position and updating its three-dimensional coordinates, which correspondingly improves the grasping accuracy of the mechanical arm.
The present invention is described in further detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is the two-dimensional coordinate conversion diagram of the present invention.
Fig. 2 is the conversion diagram between the image coordinate system and the gray-scale coordinate system of the present invention.
Fig. 3 is a schematic diagram of the relation between the camera coordinate system, the world coordinate system and the image coordinate system of the present invention.
Fig. 4 is a schematic diagram of target object localization in the present invention.
Fig. 5 is the flow chart of one embodiment of the present invention.
Embodiment
A dual-arm robot hand-eye coordination method based on multi-camera vision comprises the following concrete steps:
Step 1, mount one binocular camera on each mechanical arm of the dual-arm robot, and calibrate, respectively, the transfer matrix from each camera to the end of the mechanical arm carrying it, the transformation matrix from each mechanical arm end to its base, and the transformation matrix between the two mechanical arm bases;
Step 2, control the mechanical arms to move, search for the target object with the binocular cameras, and save the features of the target object, until at least two monocular cameras have photographed the target object, completing the search for the target object; the concrete steps are:
Step 2-1, control the base servo of either mechanical arm to rotate slowly, driving the binocular camera to scan and detect the target object;
Step 2-2, save the features of the target object;
Step 2-3, control the base servo to keep the original scanning direction and continue rotating by 30°; when the rotatable angle is less than 30°, rotate by the rotatable angle;
Step 2-4, judge whether both cameras on the mechanical arm have photographed the target object; if both cameras on the mechanical arm have photographed the target object, stop scanning; if only one of the two cameras on the mechanical arm has photographed the target object, control the other mechanical arm to drive its binocular camera to scan and detect the target object, repeating steps 2-2 and 2-3, until at least two monocular cameras have photographed the target object, completing the search for the target object.
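The scanning strategy of steps 2-1 to 2-4 can be summarised by the following minimal sketch. The arm objects, the helpers rotate_base_servo(), capture() and detect_target(), and the 180° sweep limit are hypothetical names and values introduced only for illustration, not part of the patent:

    # Minimal sketch of the step 2 search loop (assumed helpers, not the patent's own code).
    def search_target(arms, step_deg=30.0, max_angle_deg=180.0):
        """Rotate each arm's base servo in 30-degree steps until at least two
        monocular cameras (on one arm or spread over both arms) see the target."""
        views = {}                           # (arm id, camera id) -> pixel position (u, v)
        for arm in arms:                     # scan with one arm first, then the other
            swept = 0.0
            while swept < max_angle_deg:
                for cam in arm.cameras:      # the two monocular cameras of the binocular pair
                    found, uv = detect_target(capture(cam))
                    if found:
                        views[(arm.id, cam.id)] = uv   # step 2-2: save the target feature
                if len(views) >= 2:
                    return views             # at least two cameras photographed the target
                step = min(step_deg, max_angle_deg - swept)   # step 2-3: at most 30 degrees
                rotate_base_servo(arm, step)
                swept += step
        return views                         # fewer than two views: search failed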
Step 3, the host computer extracts and processes the two images with the target feature obtained when scanning ends, obtains the three-dimensional coordinate information of the target object, and completes the localization of the target object; the concrete steps are:
Step 3-1, extract the two images with the target feature and obtain the pixel coordinates (u, v) of the target object in the image coordinate system of each of the two images;
Step 3-2, according to the relation between length and pixels in the coordinate system, obtain the length coordinates (x, y) of the target object in the image coordinate system of each of the two images, specifically:
x = u·dx − u0·dx,  y = v·dy − v0·dy
where dx and dy denote the width and height of one pixel respectively, and (u0, v0) are the coordinates of the image projection point of the target object;
Step 3-3, according to the relation between the image coordinate system and the camera coordinate system, obtain the three-dimensional coordinates (xc, yc, zc) of the target object in the coordinate system of each of the cameras that shot the two images, specifically:
zc · [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [xc, yc, zc, 1]^T
where f denotes the focal length of the camera;
Step 3-4, according to the three-dimensional coordinates (xc, yc, zc) of the target object in the two camera coordinate systems, obtain the three-dimensional coordinates (xw, yw, zw) of the target object in the world coordinate system, specifically:
[xc, yc, zc, 1]^T = M · [xw, yw, zw, 1]^T
where the matrix M denotes the transfer matrix from the camera to the origin of the world coordinate system.
Step 4, based on the three-dimensional information of the target object, control a mechanical arm to grasp, completing the grasp of the target object; the concrete steps are:
Step 4-1, according to the obtained three-dimensional coordinate information of the target object, control the mechanical arm closest to the target object to grasp it;
Step 4-2, the binocular camera shoots the target object in real time and obtains new three-dimensional coordinate information of the target object;
Step 4-3, repeat steps 4-1 and 4-2 until the mechanical arm grasps the target object, then proceed to step 5 (a sketch of this loop is given after step 5).
Step 5, move the target object to the fixed region and return the mechanical arm to its initial state, completing the grasping task.
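The grasp-and-deliver loop of steps 4-1 to 4-3 and step 5 can likewise be sketched as follows; locate_target(), move_toward(), gripper_closed_on_target(), distance() and the arm methods are hypothetical helpers standing in for the robot's own control interface, so this shows only the control flow described above:

    # Minimal sketch of the closed-loop grasp of steps 4-1 to 4-3 and the delivery of step 5.
    def grasp_and_deliver(arms, fixed_region):
        while True:
            p_world = locate_target(arms)          # step 3 repeated: re-triangulate the 3-D position
            arm = min(arms, key=lambda a: distance(a.end_effector_position(), p_world))
            move_toward(arm, p_world)              # step 4-1: the nearest arm approaches the target
            if gripper_closed_on_target(arm):      # step 4-3: stop once the grasp has succeeded
                break                              # otherwise step 4-2: new coordinates, try again
        move_toward(arm, fixed_region)             # step 5: deliver the object to the fixed region
        arm.open_gripper()
        arm.go_home()                              # return to the initial state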
Binocular stereo vision shoots the surrounding scene with two cameras at different positions, analyses the part of the scene common to both camera views, and derives the three-dimensional information of that common part from geometric principles and the parallax principle.
To describe the imaging process of the camera quantitatively, knowledge of coordinate-system conversion and three kinds of coordinate systems is needed: the image coordinate system, the camera coordinate system and the world coordinate system.
1. Conversion between two-dimensional coordinate systems
As shown in Fig. 1, the new coordinate system O-X'Y' is obtained from the original coordinate system O-XY by a rotation plus a translation: the axes are rotated by an angle θ, and an arbitrary point (x0, y0) of the original two-dimensional coordinate system O-XY is taken as the origin of the new coordinate system O-X'Y'. From this, the coordinate transformation relation of an arbitrary point between the two coordinate systems can be obtained.
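As a worked illustration of this two-dimensional conversion, the following sketch maps a point from O-XY into O-X'Y' for a given rotation angle θ and new origin (x0, y0); the counter-clockwise sign convention is an assumption, since the text does not fix one:

    import numpy as np

    def to_new_frame_2d(p, origin, theta):
        """Coordinates of point p = (x, y), given in O-XY, expressed in the frame whose
        origin is `origin` = (x0, y0) and whose axes are rotated by theta."""
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])   # rotation of the axes by theta
        # the inverse (transpose) of the axis rotation converts point coordinates
        return R.T @ (np.asarray(p, dtype=float) - np.asarray(origin, dtype=float))

    # Example: the point (1, 0), seen from a frame rotated by 90 degrees about the same
    # origin, lies approximately at (0, -1):
    # to_new_frame_2d((1.0, 0.0), (0.0, 0.0), np.pi / 2)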
2. Conversion between three-dimensional coordinate systems
Similarly to the two-dimensional case, the rotation angles about the three coordinate axes x, y, z of the three-dimensional coordinate system are denoted α, β, γ respectively.
By simplification, the transfer matrix can be obtained.
Whether in two or three dimensions, once the relation between two coordinate systems is known, the coordinates of the same object in the two coordinate systems can be converted into each other.
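A corresponding three-dimensional sketch is given below; the composition order Rz·Ry·Rx is an assumed convention, since the text does not specify the multiplication order of the three axis rotations:

    import numpy as np

    def rot_x(a):
        return np.array([[1, 0, 0],
                         [0, np.cos(a), -np.sin(a)],
                         [0, np.sin(a),  np.cos(a)]])

    def rot_y(b):
        return np.array([[ np.cos(b), 0, np.sin(b)],
                         [0, 1, 0],
                         [-np.sin(b), 0, np.cos(b)]])

    def rot_z(g):
        return np.array([[np.cos(g), -np.sin(g), 0],
                         [np.sin(g),  np.cos(g), 0],
                         [0, 0, 1]])

    def to_new_frame_3d(p, origin, alpha, beta, gamma):
        """Coordinates of a 3-D point p, given in the original frame, expressed in a frame
        translated to `origin` and whose axes are rotated by alpha, beta, gamma about x, y, z."""
        R = rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)      # rotation of the axes
        return R.T @ (np.asarray(p, dtype=float) - np.asarray(origin, dtype=float))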
3. Image coordinate system
As shown in Fig. 2, the image shot by the camera is transferred to the host computer in digital form and exists as an m × n array; each element of the array is the gray level of a pixel of the image. This array is denoted the gray-scale coordinate system O-UV, in which (u, v) denotes an arbitrary point; the coordinate unit of this coordinate system is the pixel. For convenience of calculation, an image coordinate system O1-XY, with axes parallel to u and v, is established at the image projection point O1(u0, v0) of the target object; the coordinates of an arbitrary point in this coordinate system are (x, y), in mm.
Letting dx, dy denote the width and height of one pixel in the image coordinate system O1-XY, the coordinates of an arbitrary point (u, v) of the gray-scale coordinate system in the O1-XY coordinate system are:
x = u·dx − u0·dx,  y = v·dy − v0·dy
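A short sketch of this pixel-to-millimetre conversion, matching the formula of step 3-2 (a minimal illustration, not the patent's own code):

    def pixel_to_image(u, v, u0, v0, dx, dy):
        """Map a pixel (u, v) of the gray-scale coordinate system O-UV to the physical
        image coordinates (x, y) in O1-XY, where dx, dy are the pixel width and height
        and (u0, v0) is the image projection point of the target object."""
        x = (u - u0) * dx          # x = u*dx - u0*dx
        y = (v - v0) * dy          # y = v*dy - v0*dy
        return x, y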
4. Camera coordinate system
As shown in Fig. 3, the coordinate system O-XcYcZc is the camera coordinate system, i.e. the coordinate system established with the position of the camera's optical centre Oc as origin.
5. World coordinate system
As shown in Fig. 4, the world coordinate system is the absolute coordinate system of the objective world; it is introduced to describe the specific position of the camera, and the position of any object in space can be expressed in it. The world coordinate system is denoted O-XwYwZw; in the present invention its origin is the end of a mechanical arm.
Let the coordinates of the target object p in the gray-scale coordinate system of the image shot by the camera be (u, v); the coordinates of this point in the image coordinate system are then:
x = u·dx − u0·dx,  y = v·dy − v0·dy
Let the coordinates of the target object p in the camera coordinate system be (xc, yc, zc)^T; the relation between the image coordinate system and the camera coordinate system for the target object p is:
zc · [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [xc, yc, zc, 1]^T
where f denotes the focal length of the camera.
Let the absolute coordinates of p in the world coordinate system be (xw, yw, zw)^T; then the coordinate transformation
[xc, yc, zc, 1]^T = M · [xw, yw, zw, 1]^T
holds, where the matrix M denotes the transfer matrix from the camera to the origin of the world coordinate system. The values in a single picture, however, cannot determine the specific coordinates; at least two images, shot by two cameras, are required. Two situations then arise: in one, the two images are shot by the two monocular cameras on the same mechanical arm; in the other, the two images are shot by one monocular camera on each of the two mechanical arms.
When the two images are shot by the two monocular cameras of the same mechanical arm, the end of that mechanical arm is taken as the origin of the world coordinate system, and the matrix M is the transfer matrix M1 from the camera to the end of that mechanical arm, which can be obtained by calibration when the mechanical arm and cameras are installed.
When the two images are shot by one monocular camera on each of the two mechanical arms, the end of either mechanical arm is taken as the origin of the world coordinate system. For the image shot by the camera on the mechanical arm that establishes the world coordinate system, the matrix M is the transfer matrix M1 from that camera to the end of the mechanical arm; for the other image, shot by the camera on the other mechanical arm, the matrix M = M2·T1·T2·T3, where T1 denotes the transformation matrix from the end of the other mechanical arm to its base, T2 denotes the transformation matrix between the two mechanical arm bases, T3 denotes the transformation matrix from the base of the mechanical arm that establishes the world coordinate system to the end of the mechanical arm, and M2 denotes the transfer matrix from the camera on the mechanical arm that establishes the world coordinate system to the end of the mechanical arm; T1, T2, T3 and M2 can all be obtained by calibration.
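The assembly of the matrix M for the two situations can be sketched as follows, taking M1, M2, T1, T2, T3 as the calibrated 4×4 homogeneous matrices named above; the function name is introduced only for illustration:

    def transfer_matrix_M(same_arm, M1=None, M2=None, T1=None, T2=None, T3=None):
        """Return the matrix M such that [xc, yc, zc, 1]^T = M @ [xw, yw, zw, 1]^T."""
        if same_arm:
            return M1                    # image shot by a camera on the arm whose end is the world origin
        return M2 @ T1 @ T2 @ T3         # image shot by the camera on the other mechanical arm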
In summary, the dual-arm robot hand-eye coordination method based on multi-camera vision disclosed by the present invention makes integrated use of multi-camera information, the binocular ranging principle and mechanical-arm control to grasp the object. It overcomes the problem that a single mechanical arm has a vision blind zone when ranging the target object, which would otherwise prevent an accurate grasp, and improves the grasping efficiency of the mechanical arm.
The invention is described in more detail below with reference to the embodiments.
Embodiment 1
As shown in Fig. 4, a stereo-vision model of two cameras at arbitrary positions is established, with the two single cameras placed at random. In a traditional binocular system the world coordinate system is fixed on one of the camera coordinate systems to reduce the amount of computation; in the present invention, however, it is not known in advance which cameras will collect the images, so fixing the world coordinate origin on a camera coordinate system is meaningless. The present invention therefore establishes the world coordinate system with the end of either mechanical arm as origin.
The two images with the target feature used in this embodiment are shot by the two cameras on the same mechanical arm, and the end of that mechanical arm is taken as the origin of the world coordinate system. From the coordinates (x, y) of the target object p in the image coordinate system of each of the two images, the coordinates (xc, yc, zc)^T of the target object p in each of the two camera coordinate systems are obtained. In this case the transfer matrix M from the camera to the origin of the world coordinate system is the transfer matrix M1 from the camera on the mechanical arm to the end of that mechanical arm, so that, according to the relation between the camera coordinate system and the world coordinate system,
[xc, yc, zc, 1]^T = M1 · [xw, yw, zw, 1]^T,
the absolute coordinates (xw, yw, zw)^T of the target object in the world coordinate system are obtained.
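A minimal triangulation sketch for this embodiment is given below. It assumes that the image coordinates (x, y) of the target in each view (after the conversion of step 3-2), the focal lengths f1, f2 and the camera-to-world matrices are available from calibration; the linear, DLT-style solver is a standard choice and is not prescribed by the patent:

    import numpy as np

    def projection_matrix(f, M):
        """Combine the projection of step 3-3 with the transfer matrix M of step 3-4,
        so that zc * [x, y, 1]^T = (K @ M) @ [xw, yw, zw, 1]^T."""
        K = np.array([[f, 0, 0, 0],
                      [0, f, 0, 0],
                      [0, 0, 1, 0]], dtype=float)
        return K @ M

    def triangulate(xy_a, P_a, xy_b, P_b):
        """Recover (xw, yw, zw) from two views by stacking, for each view, the linear
        constraints (P[0] - x*P[2]) . X = 0 and (P[1] - y*P[2]) . X = 0."""
        rows = []
        for (x, y), P in ((xy_a, P_a), (xy_b, P_b)):
            rows.append(P[0] - x * P[2])
            rows.append(P[1] - y * P[2])
        _, _, Vt = np.linalg.svd(np.stack(rows))
        X = Vt[-1]                               # homogeneous least-squares solution
        return X[:3] / X[3]

    # Embodiment 1: both cameras sit on the arm whose end defines the world frame, so
    # P_a = projection_matrix(f1, M1_a) and P_b = projection_matrix(f2, M1_b), where
    # M1_a and M1_b are each camera's calibrated transfer matrix to that arm's end.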
Embodiment 2
The two images with the target feature used in this embodiment are shot by one camera on each of the two mechanical arms. For the image shot by the camera on the mechanical arm that establishes the world coordinate system, the relation between the coordinates of the target object in that camera coordinate system and its coordinates in the world coordinate system is:
[xc, yc, zc, 1]^T = M1 · [xw, yw, zw, 1]^T
The other image comes from the other mechanical arm; in this case the matrix M = M2·T1·T2·T3, where T1 denotes the transformation matrix from the end of the other mechanical arm to its base, T2 denotes the transformation matrix between the two mechanical arm bases, T3 denotes the transformation matrix from the base of the mechanical arm that establishes the world coordinate system to the end of the mechanical arm, and M2 denotes the transfer matrix from the camera on the mechanical arm that establishes the world coordinate system to the end of the mechanical arm. The relation between the coordinates of the target object in this camera coordinate system and its coordinates in the world coordinate system is then:
[xc, yc, zc, 1]^T = M2·T1·T2·T3 · [xw, yw, zw, 1]^T
Combining the two relations, the three-dimensional coordinate information (xw, yw, zw)^T of the target object in the world coordinate system is obtained.
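For this embodiment the triangulation itself is unchanged; only the matrix of the second camera differs. The sketch below reuses projection_matrix() and triangulate() from the Embodiment 1 sketch, with M1, M2, T1, T2, T3 the calibrated matrices named above and f1, f2 the two cameras' focal lengths (all assumed available from step 1):

    def locate_target_cross_arm(xy1, xy2, f1, f2, M1, M2, T1, T2, T3):
        """World coordinates (xw, yw, zw) of the target when one image comes from the
        arm that defines the world frame and the other from the opposite arm."""
        P1 = projection_matrix(f1, M1)                   # camera on the world-frame arm
        P2 = projection_matrix(f2, M2 @ T1 @ T2 @ T3)    # camera on the other mechanical arm
        return triangulate(xy1, P1, xy2, P2)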

Claims (5)

  1. A dual-arm robot hand-eye coordination method based on multi-camera vision, characterised in that the concrete steps are:
    Step 1, mount one binocular camera on each mechanical arm of the dual-arm robot, and calibrate, respectively, the transfer matrix from each camera to the end of the mechanical arm carrying it, the transformation matrix from each mechanical arm end to its base, and the transformation matrix between the two mechanical arm bases;
    Step 2, control the mechanical arms to move, search for the target object with the binocular cameras, and save the features of the target object, until at least two monocular cameras have photographed the target object, completing the search for the target object;
    Step 3, the host computer extracts and processes the two images with the target feature obtained when scanning ends, obtains the three-dimensional coordinate information of the target object, and completes the localization of the target object;
    Step 4, based on the three-dimensional information of the target object, control a mechanical arm to grasp, completing the grasp of the target object;
    Step 5, move the target object to the fixed region and return the mechanical arm to its initial state, completing the grasping task.
  2. The dual-arm robot hand-eye coordination method based on multi-camera vision according to claim 1, characterised in that in step 2 the mechanical arms are controlled to move, the target object is searched for with the binocular cameras, and the features of the target object are saved, until at least two monocular cameras have photographed the target object, completing the search for the target object, the concrete steps being:
    Step 2-1, control the base servo of either mechanical arm to rotate slowly, driving the binocular camera to scan and detect the target object;
    Step 2-2, save the features of the target object;
    Step 2-3, control the base servo to keep the original scanning direction and continue rotating by 30°; when the rotatable angle is less than 30°, rotate by the rotatable angle;
    Step 2-4, judge whether both cameras on the mechanical arm have photographed the target object; if both cameras on the mechanical arm have photographed the target object, stop scanning; if only one of the two cameras on the mechanical arm has photographed the target object, control the other mechanical arm to drive its binocular camera to scan and detect the target object, repeating steps 2-2 and 2-3, until at least two monocular cameras have photographed the target object, completing the search for the target object.
  3. The dual-arm robot hand-eye coordination method based on multi-camera vision according to claim 1, characterised in that in step 3 the host computer extracts and processes the two images with the target feature obtained when scanning ends, namely the two most recent images containing the target feature captured before scanning ended.
  4. The dual-arm robot hand-eye coordination method based on multi-camera vision according to claim 1, characterised in that in step 3 the host computer extracts and processes the images with the target feature obtained when scanning ends, obtains the three-dimensional coordinates of the target object in the world coordinate system, and completes the localization of the target object, the concrete steps being:
    Step 3-1, extract the two images with the target feature and obtain the pixel coordinates (u, v) of the target object in the image coordinate system of each of the two images;
    Step 3-2, according to the relation between length and pixels in the coordinate system, obtain the length coordinates (x, y) of the target object in the image coordinate system of each of the two images, specifically:
    x = u·dx − u0·dx,  y = v·dy − v0·dy
    where dx and dy denote the width and height of one pixel respectively, and (u0, v0) are the coordinates of the image projection point of the target object;
    Step 3-3, according to the relation between the image coordinate system and the camera coordinate system, obtain the three-dimensional coordinates (xc, yc, zc) of the target object in the coordinate system of each of the cameras that shot the two images, specifically:
    zc · [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [xc, yc, zc, 1]^T
    Step 3-4, according to the three-dimensional coordinates (xc, yc, zc) of the target object in the two camera coordinate systems, obtain the three-dimensional coordinates (xw, yw, zw) of the target object in the world coordinate system, specifically:
    [xc, yc, zc, 1]^T = M · [xw, yw, zw, 1]^T
    where the matrix M denotes the transfer matrix from the camera to the origin of the world coordinate system.
  5. The dual-arm robot hand-eye coordination method based on multi-camera vision according to claim 1, characterised in that in step 4, based on the three-dimensional information of the target object, a mechanical arm is controlled to grasp, completing the grasp of the target object, the concrete steps being:
    Step 4-1, according to the obtained three-dimensional coordinate information of the target object, control the mechanical arm closest to the target object to grasp it;
    Step 4-2, the binocular camera shoots the target object in real time and obtains new three-dimensional coordinate information of the target object;
    Step 4-3, repeat steps 4-1 and 4-2 until the mechanical arm grasps the target object, then proceed to step 5.
CN201710802484.5A 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot Active CN107471218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710802484.5A CN107471218B (en) 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710802484.5A CN107471218B (en) 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot

Publications (2)

Publication Number Publication Date
CN107471218A true CN107471218A (en) 2017-12-15
CN107471218B CN107471218B (en) 2020-09-11

Family

ID=60583694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710802484.5A Active CN107471218B (en) 2017-09-07 2017-09-07 Binocular vision-based hand-eye coordination method for double-arm robot

Country Status (1)

Country Link
CN (1) CN107471218B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150217450A1 (en) * 2014-02-05 2015-08-06 Quanta Storage Inc. Teaching device and method for robotic arm
CN105157680A (en) * 2015-08-28 2015-12-16 北京控制工程研究所 Vision measurement system and method based on combination of global feature and local feature
CN105537824A (en) * 2016-01-27 2016-05-04 华南理工大学 Automatic welding control method based on hand-eye coordination of mechanical arm
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance
CN106695748A (en) * 2016-12-09 2017-05-24 南京理工大学 Hot-line robot with double mechanical arms
CN106426186A (en) * 2016-12-14 2017-02-22 国网江苏省电力公司常州供电公司 Electrified operation robot autonomous operation method based on multi-sensor information fusion

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109129483A (en) * 2018-08-31 2019-01-04 厦门理工学院 A kind of method, apparatus and robot based on multi-robot Cooperation under cloud platform
CN109341718A (en) * 2018-09-27 2019-02-15 北京伟景智能科技有限公司 Scaling method and device based on multi-vision visual
CN109341718B (en) * 2018-09-27 2020-12-29 北京伟景智能科技有限公司 Calibration method and device based on multi-view vision
CN109986541A (en) * 2019-05-06 2019-07-09 深圳市恒晟智能技术有限公司 Manipulator
CN110340901A (en) * 2019-06-28 2019-10-18 深圳盈天下视觉科技有限公司 A kind of control method, control device and terminal device
CN110605714B (en) * 2019-08-06 2021-08-03 华中科技大学 Hand-eye coordination grabbing method based on human eye fixation point
CN110605714A (en) * 2019-08-06 2019-12-24 华中科技大学 Hand-eye coordination grabbing method based on human eye fixation point
CN111823223B (en) * 2019-08-19 2023-12-29 北京伟景智能科技有限公司 Robot arm grabbing control system and method based on intelligent stereoscopic vision
CN111823223A (en) * 2019-08-19 2020-10-27 北京伟景智能科技有限公司 Robot arm grabbing control system and method based on intelligent stereoscopic vision
CN110640748A (en) * 2019-11-07 2020-01-03 南京工程学院 Binocular vision-based coordinated control method for double-arm robot
CN110825088A (en) * 2019-11-29 2020-02-21 燕山大学 Multi-view vision guiding ship body cleaning robot system and cleaning method
CN110825088B (en) * 2019-11-29 2021-10-01 燕山大学 Multi-view vision guiding ship body cleaning robot system and cleaning method
CN111340878A (en) * 2020-05-15 2020-06-26 支付宝(杭州)信息技术有限公司 Image processing method and device
CN112060083A (en) * 2020-08-20 2020-12-11 中国农业大学 Binocular stereoscopic vision system for mechanical arm and measuring method thereof
CN112060083B (en) * 2020-08-20 2021-10-29 中国农业大学 Binocular stereoscopic vision system for mechanical arm and measuring method thereof
CN111932448B (en) * 2020-09-09 2021-03-26 南昌虚拟现实研究院股份有限公司 Data processing method, device, storage medium and equipment
CN111932448A (en) * 2020-09-09 2020-11-13 南昌虚拟现实研究院股份有限公司 Data processing method, device, storage medium and equipment
CN113804106A (en) * 2021-08-18 2021-12-17 南京理工大学 Double-arm cooperative assembly method based on twice calibration of vision
CN114589698A (en) * 2022-04-07 2022-06-07 北京信息科技大学 Mechanical arm model-free real-time calibration method and device based on multi-target visual measurement and machine learning
CN114589698B (en) * 2022-04-07 2023-06-06 北京信息科技大学 Model-free real-time calibration method and device for mechanical arm based on multi-target vision measurement and machine learning
CN115008477A (en) * 2022-08-09 2022-09-06 苏州华兴源创科技股份有限公司 Manipulator movement compensation method, manipulator movement compensation device and computer-readable storage medium

Also Published As

Publication number Publication date
CN107471218B (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN107471218A Dual-arm robot hand-eye coordination method based on multi-camera vision
CN110728715B (en) Intelligent inspection robot camera angle self-adaptive adjustment method
CN108109174B (en) Robot monocular guidance method and system for randomly sorting scattered parts
CN106041937B (en) A kind of control method of the manipulator crawl control system based on binocular stereo vision
CN110815213B (en) Part identification and assembly method and device based on multi-dimensional feature fusion
CN110751691B (en) Automatic pipe fitting grabbing method based on binocular vision
CN104018850B (en) Section of jurisdiction based on the duct piece assembling machine of binocular vision positions and grasping system
US11100671B2 (en) Image generation apparatus, image generation system, image generation method, and image generation program
CN108399639A (en) Fast automatic crawl based on deep learning and arrangement method
CN107590835A (en) Mechanical arm tool quick change vision positioning system and localization method under a kind of nuclear environment
CN103971375B (en) A kind of panorama based on image mosaic stares camera space scaling method
CN110211180A (en) A kind of autonomous grasping means of mechanical arm based on deep learning
CN108648237A (en) A kind of space-location method of view-based access control model
CN104552341B (en) Mobile industrial robot single-point various visual angles pocket watch position and attitude error detection method
CN106530276A (en) Manipulator positioning method and system for grabbing of non-standard component
CN109739239A (en) A kind of planing method of the uninterrupted Meter recognition for crusing robot
CN109459984A (en) A kind of positioning grasping system and its application method based on three-dimensional point cloud
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN108839035A (en) A method of it takes and borrows books
CN109472778B (en) Appearance detection method for towering structure based on unmanned aerial vehicle
CN107092905A (en) A kind of instrument localization method to be identified of electric inspection process robot
CN106846385B (en) Multi-sensing remote sensing image matching method, device and system based on unmanned aerial vehicle
CN108470165A (en) A kind of picking robot fruit vision collaboratively searching method
CN110514114A (en) A kind of small objects space position calibration method based on binocular vision
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant