CN104515502A - Robot hand-eye stereo vision measurement method - Google Patents
- Publication number: CN104515502A
- Application number: CN201310451831.6A
- Authority: CN (China)
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G — Physics
- G01 — Measuring; Testing
- G01C — Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry
- G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
Abstract
The invention relates to a robot hand-eye stereo vision measurement method. An industrial camera mounted on the end effector of an industrial robot photographs the target scene from two different positions. After camera calibration and image preprocessing, conjugate matching points between the two images are found, and three-dimensional reconstruction is then performed from these matching points to obtain the three-dimensional coordinates of points on the workpiece surface. Compared with the hand-eye vision measurement currently common on stationary industrial robots, the method acquires three-dimensional information about the workpiece rather than only its two-dimensional coordinates, which greatly broadens the application of vision systems in industrial robot assembly, sorting, and similar systems, and has enormous application value over a broad application field.
Description
Technical field
The present invention relates to the field of robot vision measurement, and in particular to a hand-eye stereo vision measurement method.
Background technology
On a traditional robot production line, the vast majority of industrial robots performing grasping and handling tasks work by precise teaching or offline programming. The initial pose and the final pose of the target object are both assumed fixed, so the robot can only perform point-to-point work. In many situations, however, particularly on assembly lines, the workpiece pose is not fixed: the actual pose of the target object always deviates from the ideal target pose, and even a very small deviation can cause the robot's task to fail. This inability to cope with changes in the environment significantly limits the practical scope of robot applications. With the progress of modern manufacturing technology, the demand for more flexible production lines grows ever more urgent, and the requirements on the applicability, flexibility, and autonomy of industrial robot systems keep rising.
Small-batch, multi-variety production is the future trend in manufacturing. To ensure that industrial robots complete production tasks efficiently and in real time, machine vision is introduced into the robot operating system, greatly increasing the flexibility and intelligence of the production line.
Current machine vision implementations mainly use planar (2D) vision systems. In "Research on Industrial Robot Target Recognition Technology Based on Monocular Vision" (Wang Xiuyan et al., Machinery Design & Manufacture, 2011, No. 4, pp. 155-157), a CCD camera fixed above the robot photographs the target in the workpiece environment and locates it through correlation operations. The shortcoming of this approach is that it can obtain only two-dimensional information about the workpiece and cannot recover its depth or height, so it cannot meet the more demanding job tasks of industrial robots such as assembly and palletizing.
Summary of the invention
In view of the above, it is necessary to propose a robot vision measurement method that can obtain three-dimensional information about the target workpiece with good practicality and reliability.
A hand-eye stereo vision measurement method involves a shooting device and a robot, and is characterized in that the method comprises the following steps:
The shooting device acquires images of the target object in real time;
The images are preprocessed by filtering to obtain clear images of the target object;
Feature points are extracted from the clear images of the target object and stereo matching is performed to obtain conjugate matching points;
Three-dimensional reconstruction is performed from the conjugate matching points to obtain the three-dimensional information of the target object;
The robot end effector is controlled according to the three-dimensional information to grasp the target object.
Further, before the shooting device acquires images, the shooting device is first calibrated, the calibration adopting the pinhole imaging model.
Further, extracting the feature points from the clear images of the target object and performing stereo matching to obtain the conjugate matching points specifically comprises: first using the Harris corner detection method to obtain the points with locally maximal interest values in the clear images, and then performing stereo matching of the feature points according to the difference of Gaussians.
Further, using the Harris corner detection method to obtain the points with locally maximal interest values specifically comprises: calculating the interest value IV of each pixel; finding the local extreme points;
and, according to the calculated interest values, extracting the points in the clear images whose interest values are local maxima.
Further, the points with locally maximal interest values are stereo-matched using the difference of Gaussians, and the final conjugate matching feature points are determined by the following steps: a pyramid-structured scale space is built with Gaussian convolution; Harris corners are extracted on each middle layer of the pyramid; for each extracted Harris corner, the difference of Gaussians is computed against the corresponding pixels in the layers above and below, and a Harris corner whose difference of Gaussians reaches an extremum and exceeds a threshold is selected as a final conjugate matching feature point.
Compared with the prior art, the beneficial effects achieved by the present invention are as follows:
The proposed industrial robot hand-eye stereo vision measurement method can obtain three-dimensional information about the target workpiece on the production line with good practicality and reliability. It greatly broadens the application of vision systems in industrial robot assembly, sorting, and similar systems, and has great application value and a wide range of applications.
Accompanying drawing explanation
Fig. 1 is a flowchart of a preferred embodiment of the hand-eye stereo vision measurement method of the present invention;
Fig. 2 is a spatial diagram of the three-dimensional reconstruction in the hand-eye stereo vision measurement method of the present invention.
Embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 1, a hand-eye stereo vision measurement method involves a shooting device and a robot, and comprises the following steps:
S10: the shooting device acquires images of the target object in real time.
The shooting device is fixed on the end effector of the robot arm and moves together with it, so that it can image the same target object from multiple different positions (two in this embodiment, yielding two images). In this embodiment, the shooting device is a video camera.
Before the shooting device acquires images, it is first calibrated; the camera calibration adopts the pinhole imaging model.
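The pinhole model underlying the calibration maps a 3D point into pixel coordinates through the intrinsic matrix K and the camera pose [R | t]. A minimal NumPy sketch of this projection follows; the intrinsic values, pose, and test point are illustrative assumptions, not calibration results from the patent:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (world frame) to pixel coordinates (pinhole model)."""
    Xc = R @ X + t           # world frame -> camera frame
    u = K @ (Xc / Xc[2])     # perspective divide, then apply intrinsics
    return u[:2]

# Assumed intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # camera aligned with world frame
t = np.zeros(3)
X = np.array([0.1, -0.05, 2.0])    # a point 2 m in front of the camera
uv = project(K, R, t, X)           # -> pixel (360.0, 220.0)
```

Calibration estimates K (and the lens distortion) from known targets; once K is known, each pixel defines a ray in space, which is what the reconstruction step in S40 intersects.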
S20: the images are preprocessed by filtering to obtain clear images of the target object.
Median filtering is chosen here to remove the noise signal in the images: all pixels in the neighborhood of the current pixel are sorted by gray level, and the median gray value replaces the gray value of that pixel, yielding a clear image of the target object. Median filtering can be expressed as:
g(m, n) = Median{ f(m - k, n - l), (k, l) ∈ W }
Wherein g(m, n) is the image after median filtering, f is the original image, and W is the filter window. When the window contains an odd number N of pixels, the median is the ((N + 1)/2)-th value of the sorted gray-level sequence. Common window shapes are line, cross, square, diamond, and circle.
S30: feature points are extracted from the clear images of the target object and stereo matching is performed to obtain conjugate matching points.
A feature point in one image may have many candidate matches in the other image; in addition, unfavorable factors such as illumination conditions, scene shape, interference noise, and distortion can make the matching ambiguous. To avoid ambiguous matches, Harris corner detection combined with difference-of-Gaussians detection is used here to extract features from the image of each viewpoint.
First, a number of feature points are obtained with the Harris corner detection method. The Harris operator is a signal-based interest point detector: when the curvature at a pixel is large along every direction, the pixel is judged to be a corner. The Harris operator involves only the first derivatives of the image, and proceeds as follows:
S301: calculate the interest value IV of each pixel:
IV = Det(M) - k · trace²(M),  k = 0.04
where M = G(σ) ⊗ [gx², gx·gy; gx·gy, gy²], gx is the gradient in the x direction, gy is the gradient in the y direction, G(σ) is the Gaussian template, Det(M) and trace(M) are respectively the determinant and the trace of the matrix M, and k is a default constant;
S302: find the local extreme points: according to the calculated interest values, extract the points in the clear image whose interest values are local maxima;
S303: perform stereo matching on the points with locally maximal interest values.
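The interest value of step S301 can be sketched in plain NumPy. This is an illustrative sketch rather than the patent's implementation: a 3x3 box sum stands in for the Gaussian template G(σ), and the function name and test image are invented for the example:

```python
import numpy as np

def harris_iv(img, k=0.04):
    """Compute the Harris interest value IV = Det(M) - k*trace(M)^2 per pixel."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                 # first derivatives only
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box(a):
        # 3x3 neighborhood sum, a stand-in for the Gaussian weighting (assumption)
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy               # Det(M)
    trace = Sxx + Syy                         # trace(M)
    return det - k * trace ** 2               # interest value IV per pixel

# At a corner the gradients vary in both directions, so IV is positive there
# and zero in flat regions:
corner_img = np.zeros((10, 10))
corner_img[5:, 5:] = 1.0
iv = harris_iv(corner_img)    # iv[5, 5] > 0, iv[1, 1] == 0
```

Local maxima of IV (step S302) would then be kept as the candidate feature points passed on to the difference-of-Gaussians matching stage.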
The points with locally maximal interest values are stereo-matched using the difference of Gaussians, and the final conjugate matching feature points are determined by the following steps:
S304: build a pyramid-structured scale space with Gaussian convolution;
S305: extract Harris corners on each middle layer of the pyramid;
S306: for each extracted Harris corner, compute the difference of Gaussians against the corresponding pixels in the layers above and below; a Harris corner whose difference of Gaussians reaches an extremum and exceeds a threshold is selected as a final conjugate matching feature point.
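Steps S304-S306 can be sketched as follows. This is a hedged sketch, not the patent's code: the sigma ladder, the threshold, and the hand-rolled separable Gaussian are assumptions made to keep the example dependency-free, and `corners` would come from the Harris stage above:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution with replicated edges."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                     # normalized 1-D kernel
    pad = np.pad(img.astype(float), r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, rows)

def dog_keep(img, corners, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=1.0):
    """Keep a corner only if its middle-level DoG value is a scale-space
    extremum relative to the levels above and below AND exceeds a threshold."""
    levels = [gaussian_blur(img, s) for s in sigmas]        # S304: scale space
    dogs = [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]
    kept = []
    for (y, x) in corners:                                  # S305: Harris corners
        v = dogs[1][y, x]                                   # middle DoG level
        lo, hi = dogs[0][y, x], dogs[2][y, x]               # levels below/above
        # S306: extremum across scale and above threshold -> final feature point
        if abs(v) > thresh and (v > max(lo, hi) or v < min(lo, hi)):
            kept.append((y, x))
    return kept
```

Filtering corners by their scale-space DoG response keeps only features that are distinctive at a definite scale, which reduces ambiguous matches between the two views.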
S40: three-dimensional reconstruction is performed from the conjugate matching points.
As shown in Fig. 2, once the conjugate matching points of the two images have been obtained, three-dimensional reconstruction can recover the three-dimensional coordinates of the corresponding spatial points. Let P1 and P2 be one pair of conjugate matching points. The spatial point projecting to P1 must lie on the line O1P1, and likewise the spatial point projecting to P2 must lie on the line O2P2. The spatial point P corresponding to P1 and P2 is therefore the intersection of the two lines, so its three-dimensional position is uniquely determined. Applying this procedure point by point recovers the three-dimensional information of the spatial object.
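The ray-intersection step can be written as a linear (DLT) triangulation, one common way to intersect the two rays O1P1 and O2P2. The intrinsics, baseline, and test point below are illustrative assumptions; in practice the projection matrices M1 and M2 come from the calibrated camera poses at the two shooting positions:

```python
import numpy as np

def triangulate(M1, M2, p1, p2):
    """Recover a 3D point from pixel observations p1, p2 and projection
    matrices M1, M2 via the direct linear transform (least-squares ray intersection)."""
    A = np.vstack([
        p1[0] * M1[2] - M1[0],
        p1[1] * M1[2] - M1[1],
        p2[0] * M2[2] - M2[0],
        p2[1] * M2[2] - M2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
M1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at O1
M2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0], [0]])])    # O2: 100 mm baseline
X_true = np.array([50.0, 20.0, 1000.0])                            # assumed test point

p1 = M1 @ np.append(X_true, 1); p1 = p1[:2] / p1[2]                # projection in view 1
p2 = M2 @ np.append(X_true, 1); p2 = p2[:2] / p2[2]                # projection in view 2
X_rec = triangulate(M1, M2, p1, p2)                                # ~ X_true
```

With noise-free matches the two rays intersect exactly; with real matches the SVD solution returns the least-squares intersection, which is why accurate conjugate matching in S30 matters.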
S50: the robot end effector is controlled according to the three-dimensional information to grasp the target object.
Those skilled in the art may make corresponding changes or adjustments according to the technical scheme and inventive concept of the present invention and actual production needs, and all such changes and adjustments fall within the protection scope of the claims of the present invention.
Claims (5)
1. A hand-eye stereo vision measurement method involving a shooting device and a robot, characterized in that the method comprises the following steps:
the shooting device acquires images of a target object in real time;
the images are preprocessed by filtering to obtain clear images of the target object;
feature points are extracted from the clear images of the target object and stereo matching is performed to obtain conjugate matching points;
three-dimensional reconstruction is performed from the conjugate matching points to obtain three-dimensional information of the target object;
the robot end effector is controlled according to the three-dimensional information to grasp the target object.
2. The hand-eye stereo vision measurement method of claim 1, characterized in that before the shooting device acquires images, the shooting device is first calibrated, the calibration adopting the pinhole imaging model.
3. The hand-eye stereo vision measurement method of claim 1, characterized in that extracting the feature points from the clear images of the target object and performing stereo matching to obtain the conjugate matching points specifically comprises: first using the Harris corner detection method to obtain the points with locally maximal interest values in the clear images, and then performing stereo matching of the feature points according to the difference of Gaussians.
4. The hand-eye stereo vision measurement method of claim 3, wherein using the Harris corner detection method to obtain the points with locally maximal interest values in the clear images specifically comprises:
calculating the interest value IV of each pixel;
finding the local extreme points;
according to the calculated interest values, extracting the points in the clear images whose interest values are local maxima.
5. The hand-eye stereo vision measurement method of claim 4, wherein the points with locally maximal interest values are stereo-matched using the difference of Gaussians, and the final conjugate matching feature points are determined by the following steps:
building a pyramid-structured scale space with Gaussian convolution;
extracting Harris corners on each middle layer of the pyramid;
computing, for each extracted Harris corner, the difference of Gaussians against the corresponding pixels in the layers above and below, and selecting as a final conjugate matching feature point any Harris corner whose difference of Gaussians reaches an extremum and exceeds a threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310451831.6A | 2013-09-28 | 2013-09-28 | Robot hand-eye stereo vision measurement method |

Publications (1)
Publication Number | Publication Date |
---|---|
CN104515502A | 2015-04-15 |

Family
- ID=52791168
- CN201310451831.6A (CN, Pending)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106504237A (en) * | 2016-09-30 | 2017-03-15 | 上海联影医疗科技有限公司 | Determine method and the image acquiring method of matching double points |
CN106767393A (en) * | 2015-11-20 | 2017-05-31 | 沈阳新松机器人自动化股份有限公司 | The hand and eye calibrating apparatus and method of robot |
CN107571246A (en) * | 2017-10-13 | 2018-01-12 | 上海神添实业有限公司 | A kind of component assembly system and method based on tow-armed robot |
CN108917721A (en) * | 2018-04-19 | 2018-11-30 | 北京控制工程研究所 | A kind of unstability satellite satellite and the rocket butt joint ring binocular measurement method |
CN110666805A (en) * | 2019-10-31 | 2020-01-10 | 重庆科技学院 | Industrial robot sorting method based on active vision |
US10580135B2 (en) | 2016-07-14 | 2020-03-03 | Shanghai United Imaging Healthcare Co., Ltd. | System and method for splicing images |
CN110864671A (en) * | 2018-08-28 | 2020-03-06 | 中国科学院沈阳自动化研究所 | Robot repeated positioning precision measuring method based on line structured light fitting plane |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060233423A1 (en) * | 2005-04-19 | 2006-10-19 | Hesam Najafi | Fast object detection for augmented reality systems |
CN101625768A (en) * | 2009-07-23 | 2010-01-13 | 东南大学 | Three-dimensional human face reconstruction method based on stereoscopic vision |
CN101877063A (en) * | 2009-11-25 | 2010-11-03 | 中国科学院自动化研究所 | Sub-pixel characteristic point detection-based image matching method |
CN102135776A (en) * | 2011-01-25 | 2011-07-27 | 解则晓 | Industrial robot control system based on visual positioning and control method thereof |
CN102902271A (en) * | 2012-10-23 | 2013-01-30 | 上海大学 | Binocular vision-based robot target identifying and gripping system and method |
Non-Patent Citations (2)
Title |
---|
Shao Wei et al., "A Fast Multi-scale Feature Point Matching Algorithm" (一种快速多尺度特征点匹配算法), Journal of Image and Graphics (中国图象图形学报) * |
Gao Yining, Han Xie, "Research and Comparison of Stereo Matching Algorithms in Binocular Vision" (双目视觉中立体匹配算法的研究与比较), Electronic Test (电子测试) * |
Legal Events
- C06 / PB01: Publication (application publication date: 2015-04-15)
- C10 / SE01: Entry into substantive examination / entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication