CN103925879A - Indoor robot vision hand-eye relation calibration method based on 3D image sensor - Google Patents
- Publication number
- CN103925879A (application CN201410166077.6A)
- Authority
- CN
- China
- Prior art keywords
- hand
- robot
- vision
- hand-eye
- sensor
- Prior art date
Abstract
Description
Technical field
The present invention relates to a robot vision method, and in particular to a hand-eye relation calibration method for an indoor robot based on a 3D image sensor.
Background technology
With the application and development of 3D sensors, more and more robots adopt a 3D sensor as the robot vision system. Like a traditional binocular vision system, the 3D sensor collects point-cloud image information of the current scene; this information consists of a large number of 3D coordinate values and directly reflects both depth and image content. A traditional binocular vision system instead uses two industrial cameras to collect plane images of the current scene; its image information consists of pixel values, and depth must be computed from the parallax between the two images, which inevitably introduces calculation errors and reduces precision. In the robot hand-eye calibration system of this invention, the 3D sensor returns the 3D coordinate values of marker points, a three-dimensional dynamic measuring instrument measures the 3D coordinate values of the same marker points with respect to the arm base coordinate frame, and the transition matrix between the two sets of values yields the hand-eye relation calibration matrix. Point-cloud images are simple and efficient to obtain, the calibration operation is simple, and the calibration precision is high. Among existing robot hand-eye calibration methods, the paper "Algorithm and Realization of Hand-Eye Stereo Vision" (Xiong Chunshan et al., Robot, 2001, 23(2), pp. 113-117) applies the "black box" idea of mapping image coordinates directly to robot reference coordinates, which simplifies the calibration process but remains relatively complex to compute. The new indoor-robot hand-eye calibration method based on a 3D sensor presented herein also applies the "black box" idea; its calibration process is simple, its computation load is small, and its precision is high, so it can effectively meet the demands of robot hand-eye calibration.
Summary of the invention
The features and advantages of the present invention are in part stated in the following description, are in part obvious from that description, or may be learned by practicing the present invention.
To overcome the problems of the prior art, the invention provides a hand-eye relation calibration method for an indoor robot based on a 3D image sensor, comprising: step S1, marking a plurality of marker points on the end joint of the robot's hand, and collecting point-cloud image information of the hand's end joint with the robot's 3D vision sensor to obtain several groups of 3D coordinate values with respect to the vision sensor coordinate frame; step S2, collecting with an external three-dimensional measuring device the 3D coordinate values of the plurality of marker points on the hand's end joint under the world coordinate system, these values being collected with respect to the arm base coordinate frame of the robot; and step S3, deriving the hand-eye relation calibration matrix from the coordinate values obtained in steps S1 and S2.
According to one embodiment of the present invention, the hand's end joint is marked with four marker points, three located at the fingertips and one at the center of the palm, arranged so that during measurement no three of the points are collinear and the four points are not coplanar.
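Whether a candidate marker layout satisfies this non-degeneracy condition can be checked numerically. The following is a minimal sketch, not part of the patent (the function name and tolerance are illustrative): three points are collinear when the cross product of two edge vectors vanishes, and four points are coplanar when the scalar triple product (six times the tetrahedron volume) vanishes.

```python
import numpy as np
from itertools import combinations

def markers_well_posed(pts, tol=1e-9):
    """Check the marker layout: no three of the four points collinear,
    and the four points not coplanar."""
    pts = np.asarray(pts, dtype=float)
    assert pts.shape == (4, 3)
    # Three points are collinear iff the cross product of two edge vectors vanishes.
    for i, j, k in combinations(range(4), 3):
        if np.linalg.norm(np.cross(pts[j] - pts[i], pts[k] - pts[i])) < tol:
            return False
    # Four points are coplanar iff the scalar triple product of the three
    # edge vectors from one vertex vanishes (zero tetrahedron volume).
    triple = np.dot(np.cross(pts[1] - pts[0], pts[2] - pts[0]), pts[3] - pts[0])
    return bool(abs(triple) > tol)
```

A degenerate layout would make the transition matrix under-determined, which is why the embodiment insists on it.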
According to one embodiment of the present invention, in step S1 the 3D vision sensor obtains the point-cloud image information of the hand's end joint through the mouse/keyboard-triggered point-picking event of the Point Cloud Library (PCL).
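What such a picking event hands back is the 3D coordinate stored at the clicked pixel of the organized point cloud. As a hedged illustration of that lookup only — this is not the PCL API, and the function name and array layout are assumptions — the cloud can be modeled as an H×W×3 array in the sensor frame:

```python
import numpy as np

def pick_point(organized_cloud, u, v):
    """Return the (x, y, z) stored at pixel column u, row v of an organized
    point cloud, mimicking the value a point-picking event hands back.
    organized_cloud: H x W x 3 array in the sensor frame; NaN marks no depth."""
    xyz = organized_cloud[v, u]
    if np.any(np.isnan(xyz)):
        raise ValueError("no depth measured at this pixel")
    return tuple(float(c) for c in xyz)
```

Clicking the four marker points in turn thus yields four such coordinate triples in the sensor frame.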
According to one embodiment of the present invention, when the external three-dimensional measuring device collects the three-dimensional coordinates of the marker points on the hand's end joint, it takes the robot arm base coordinate frame as its reference frame.
According to one embodiment of the present invention, in step S3 the mutual positions of the plurality of marker points on the hand's end are varied, values are recorded repeatedly and transition matrices are calculated from them, and the optimal matrix thus obtained serves as the hand-eye relation calibration matrix.
The hand-eye relation calibration method for indoor robot vision based on a 3D sensor provided by the invention is simple, relatively accurate, and easy to popularize. It effectively meets the needs of indoor-robot hand-eye calibration and simplifies the calibration process: it requires no calibration of camera intrinsic or extrinsic parameters, involves little computation, and achieves high hand-eye calibration precision.
By reading the description, those of ordinary skill in the art will better understand the features and content of these technical schemes.
Description of the drawings
The advantages and implementation of the present invention will become more obvious from the following detailed description with reference to the accompanying drawings and examples, in which the content shown is merely illustrative of the present invention and does not constitute a limitation of it in any sense. In the drawings:
Fig. 1 is a structural schematic diagram of the robot and the external three-dimensional measuring device of an embodiment of the present invention.
Fig. 2 is a schematic flow chart of the hand-eye relation calibration method for an indoor robot based on a 3D image sensor of an embodiment of the present invention.
Fig. 3 is a detailed flow chart of the hand-eye relation calibration method for an indoor robot based on a 3D image sensor of an embodiment of the present invention.
Embodiment
Referring to Fig. 1, a structural schematic diagram of the robot and the external three-dimensional measuring device of an embodiment of the present invention: the indoor mobile robot 4 is provided with a 3D sensor (a Kinect), the end joint 5 of the robot's hand is marked with four dots 2 serving as marker points, located respectively at the three fingertips and the center of the palm of the hand's end joint, and an NDI three-dimensional dynamic measuring instrument 3 is placed beside the indoor mobile robot. During measurement, each joint of the robot's arm and hand is enabled so that the hand's end is at a suitable position in front of the robot, within the field of view of the Kinect, and so that among the four marker points no three are collinear and the four are not coplanar. The robot's terminal runs a program that collects point-cloud images from the Kinect; its main function is the mouse/keyboard-triggered point-picking event function of the Point Cloud Library (PCL), whose behavior is to return the 3D coordinate value of any point clicked in the point-cloud image.
As shown in Fig. 2, the invention provides a hand-eye relation calibration method for an indoor robot based on a 3D image sensor, comprising: step S1, marking a plurality of marker points 2 on the end joint 5 of the hand of the robot 4, and collecting point-cloud image information of the hand's end joint with the 3D vision sensor 1 of the robot to obtain several groups of 3D coordinate values with respect to the vision sensor coordinate frame; step S2, collecting with the external three-dimensional measuring device 3 the 3D coordinate values of the plurality of marker points 2 on the end joint 5 under the world coordinate system, these values being collected with respect to the arm base coordinate frame of the robot; and step S3, deriving the hand-eye relation calibration matrix from the coordinate values obtained in steps S1 and S2.
In the present embodiment, in step S1 the 3D vision sensor obtains the point-cloud image information of the hand's end joint through the mouse/keyboard-triggered point-picking event of the Point Cloud Library (PCL); in step S2, when the external three-dimensional measuring device collects the three-dimensional coordinates of the marker points on the hand's end joint, it takes the robot arm base coordinate frame as its reference frame; in step S3, the mutual positions of the marker points on the hand's end are varied, values are recorded repeatedly and transition matrices calculated, and the optimal matrix thus obtained serves as the hand-eye relation calibration matrix.
Referring again to Fig. 3, in a specific implementation, four feature points are first marked on the end joint of the robot's hand, and the robot arm is enabled so that the hand's end joint lies within the fields of view of both the 3D sensor and the three-dimensional dynamic measuring instrument. The terminal starts the Kinect image-acquisition program, which displays a real-time point-cloud image stream containing the hand's end; the four marker points are clicked in turn in the image, triggering the mouse/keyboard event function, which returns the 3D coordinate values of the four marker points. In the measurement space of the NDI three-dimensional dynamic measuring instrument, with the robot arm base coordinate frame as the source coordinate frame, the 3D coordinate values of the four marker points on the hand's end joint are recorded. From these two groups of 3D coordinate values the hand-eye relation matrix can be obtained.
Specifically, let {x, y, z} be the world coordinate system, let {x_K, y_K, z_K} be the coordinate frame of the point-cloud image collected by the 3D sensor (Kinect), let {x_N, y_N, z_N} be the base coordinate frame of the robot arm in the NDI three-dimensional dynamic measurement space, and let (x_0, y_0, z_0) be the translation vector of the Kinect with respect to the {x_N, y_N, z_N} frame. The four marker points are clicked in turn in the currently collected point-cloud image, and the four returned 3D coordinate values are recorded as {x_K^i, y_K^i, z_K^i}, i = 1, 2, 3, 4. At the same time, in the measurement space of the NDI three-dimensional dynamic measuring instrument, with the robot arm base coordinate frame as the source coordinate frame, the 3D coordinate values of the four marker points on the hand's end are recorded as {x_N^i, y_N^i, z_N^i}, i = 1, 2, 3, 4.
By the formula

    [x_N^i, y_N^i, z_N^i, 1]^T = T · [x_K^i, y_K^i, z_K^i, 1]^T,  i = 1, 2, 3, 4,

the transition matrix T is derived: a 4×4 homogeneous transformation whose upper-left 3×3 block is the rotation between the two frames and whose last column contains the translation vector (x_0, y_0, z_0).
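The patent does not spell out the solver, but with four (or more) paired marker coordinates the transition matrix can be estimated by the standard SVD-based least-squares rigid fit (the Kabsch construction). The following is a sketch under that assumption, with an illustrative function name; it is one common way to realize the step, not necessarily the patent's own computation:

```python
import numpy as np

def hand_eye_matrix(P_K, P_N):
    """Least-squares rigid transform T (4x4, homogeneous) with p_N ~= R @ p_K + t
    for each paired marker, via the SVD-based Kabsch construction.
    P_K: Nx3 marker coordinates in the Kinect point-cloud frame {x_K, y_K, z_K}.
    P_N: Nx3 coordinates of the same markers in the arm base frame {x_N, y_N, z_N}."""
    P_K, P_N = np.asarray(P_K, float), np.asarray(P_N, float)
    cK, cN = P_K.mean(axis=0), P_N.mean(axis=0)
    H = (P_K - cK).T @ (P_N - cN)                 # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cN - R @ cK
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

With exact correspondences the fit is exact; with measurement noise it returns the rotation and translation minimizing the summed squared residuals.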
By changing the mutual positions of the four marker points on the hand's end, recording values repeatedly, and calculating a transition matrix each time, the optimal matrix is obtained. This matrix is the required hand-eye relation calibration transition matrix.
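The patent does not define how the "optimal matrix" is selected from the repeated recordings. One plausible reading, sketched here purely as an assumption, is to fuse the per-recording transition matrices: average the translations, average the rotation blocks, and project the averaged rotation back onto the nearest proper rotation (the orthogonal Procrustes projection):

```python
import numpy as np

def optimal_transition(transforms):
    """Fuse transition matrices from repeated recordings into one calibration
    matrix: average the translations, average the rotation blocks, and project
    the averaged rotation back onto the nearest proper rotation via SVD."""
    Ts = np.asarray(transforms, dtype=float)      # k x 4 x 4 homogeneous transforms
    R_mean = Ts[:, :3, :3].mean(axis=0)
    U, _, Vt = np.linalg.svd(R_mean)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                                # orthogonal Procrustes projection
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = Ts[:, :3, 3].mean(axis=0)
    return T
```

This averaging scheme suppresses per-recording measurement noise while guaranteeing the fused matrix remains a valid rigid transform.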
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings; without departing from the scope and spirit of the present invention, those skilled in the art may implement the present invention in many variant schemes. For example, a feature illustrated or described as part of one embodiment may be used in another embodiment to obtain a further embodiment. The above are merely preferred feasible embodiments of the present invention and do not thereby limit the scope of the rights of the present invention; all equivalent changes made using the description and drawings of the present invention are encompassed within the scope of the rights of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410166077.6A CN103925879A (en) | 2014-04-24 | 2014-04-24 | Indoor robot vision hand-eye relation calibration method based on 3D image sensor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103925879A true CN103925879A (en) | 2014-07-16 |
Family
ID=51144180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410166077.6A CN103925879A (en) | 2014-04-24 | 2014-04-24 | Indoor robot vision hand-eye relation calibration method based on 3D image sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103925879A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1672881A (en) * | 2005-04-21 | 2005-09-28 | 上海交通大学 | On-line robot hand and eye calibrating method based on motion selection |
CN1904806A (en) * | 2006-07-28 | 2007-01-31 | 上海大学 | System and method of contactless position input by hand and eye relation guiding |
CN101186038A (en) * | 2007-12-07 | 2008-05-28 | 北京航空航天大学 | Method for demarcating robot stretching hand and eye |
US20090076655A1 (en) * | 2007-09-14 | 2009-03-19 | Zimmer, Inc. | Robotic calibration method |
CN101630409A (en) * | 2009-08-17 | 2010-01-20 | 北京航空航天大学 | Hand-eye vision calibration method for robot hole boring system |
- 2014-04-24 CN CN201410166077.6A patent/CN103925879A/en not_active Application Discontinuation
Non-Patent Citations (2)
Title |
---|
QIANG ZHANG ET AL.: "Real-time general object recognition for indoor robot based on PCL", International Conference on Robotics and Biomimetics, 31 December 2013 (2013-12-31), pages 651-655, XP032579468, DOI: 10.1109/ROBIO.2013.6739534 *
WANG JUN ET AL.: "Research on vision localization method for elderly service robots", Journal of Huazhong University of Science and Technology (Natural Science Edition), vol. 39, 30 November 2011 (2011-11-30), pages 255-258 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104236456B (en) * | 2014-09-04 | 2016-10-26 | 中国科学院合肥物质科学研究院 | A kind of Robotic Hand-Eye Calibration method based on two-freedom 3D vision sensor |
CN105818167A (en) * | 2015-01-22 | 2016-08-03 | 通用汽车环球科技运作有限责任公司 | Method for calibrating an articulated end effector employing a remote digital camera |
CN105818167B (en) * | 2015-01-22 | 2018-10-23 | 通用汽车环球科技运作有限责任公司 | The method that hinged end effector is calibrated using long distance digital camera |
CN106679590A (en) * | 2016-12-29 | 2017-05-17 | 中国科学院长春光学精密机械与物理研究所 | Three-dimensional scanning equipment and three-dimensional scanner |
CN107993227A (en) * | 2017-12-15 | 2018-05-04 | 深圳先进技术研究院 | A kind of method and apparatus of acquisition 3D laparoscope trick matrixes |
CN107993227B (en) * | 2017-12-15 | 2020-07-24 | 深圳先进技术研究院 | Method and device for acquiring hand-eye matrix of 3D laparoscope |
CN108942922A (en) * | 2018-06-11 | 2018-12-07 | 杭州灵西机器人智能科技有限公司 | Mechanical arm hand and eye calibrating method, apparatus and system based on circular cone calibration object |
CN109470138A (en) * | 2018-10-22 | 2019-03-15 | 江苏集萃微纳自动化系统与装备技术研究所有限公司 | The On-line Measuring Method of part |
CN109737871A (en) * | 2018-12-29 | 2019-05-10 | 南方科技大学 | A kind of scaling method of the relative position of three-dimension sensor and mechanical arm |
CN109737871B (en) * | 2018-12-29 | 2020-11-17 | 南方科技大学 | Calibration method for relative position of three-dimensional sensor and mechanical arm |
CN109893833A (en) * | 2019-03-27 | 2019-06-18 | 深圳市瑞源祥橡塑制品有限公司 | Aiming spot acquisition methods, device and its application |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200096317A1 (en) | Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium | |
Pérez et al. | Robot guidance using machine vision techniques in industrial environments: A comparative review | |
EP3143596B1 (en) | Method and apparatus for scanning and printing a 3d object | |
Pauwels et al. | Simtrack: A simulation-based framework for scalable real-time object pose detection and tracking | |
CN104908038B (en) | The robot simulation device that the removal process of workpiece is emulated | |
US9460517B2 (en) | Photogrammetric methods and devices related thereto | |
Gudmundsson et al. | Fusion of stereo vision and time-of-flight imaging for improved 3d estimation | |
CN103279186B (en) | Merge the multiple goal motion capture system of optical alignment and inertia sensing | |
CN104260112B (en) | A kind of Robot Hand-eye localization method | |
US10899014B2 (en) | Multiple lens-based smart mechanical arm and positioning and assembly method thereof | |
US10909770B2 (en) | Capturing and aligning three-dimensional scenes | |
EP1886281B1 (en) | Image processing method and image processing apparatus | |
Montiel et al. | Unified inverse depth parametrization for monocular SLAM | |
CN103135755B (en) | Interactive system and method | |
JP5378374B2 (en) | Method and system for grasping camera position and direction relative to real object | |
US10341633B2 (en) | Systems and methods for correcting erroneous depth information | |
CN103353758B (en) | A kind of Indoor Robot navigation method | |
US9482515B2 (en) | Stereoscopic measurement system and method | |
CN102722249B (en) | Control method, actuation means and electronic installation | |
CN103941851B (en) | A kind of method and system for realizing virtual touch calibration | |
CN106041937A (en) | Control method of manipulator grabbing control system based on binocular stereoscopic vision | |
CN106272423A (en) | A kind of multirobot for large scale environment works in coordination with the method for drawing and location | |
CN106371281A (en) | Multi-module 360-degree space scanning and positioning 3D camera based on structured light | |
CN107255476A (en) | A kind of indoor orientation method and device based on inertial data and visual signature | |
US9454822B2 (en) | Stereoscopic measurement system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20140716 |