CN104331894A - Robot unstacking method based on binocular stereoscopic vision - Google Patents


Info

Publication number
CN104331894A
CN104331894A (application CN201410665285.0A)
Authority
CN
China
Prior art keywords
robot
camera
stacking
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410665285.0A
Other languages
Chinese (zh)
Inventor
范新建
侯宪伦
刘晓刚
刘广亮
王学林
肖永飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation Shandong Academy of Sciences
Original Assignee
Institute of Automation Shandong Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation Shandong Academy of Sciences filed Critical Institute of Automation Shandong Academy of Sciences
Priority to CN201410665285.0A priority Critical patent/CN104331894A/en
Publication of CN104331894A publication Critical patent/CN104331894A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data


Abstract

The invention discloses a robot unstacking method based on binocular stereoscopic vision. The method comprises the following steps: simultaneously acquiring images of the left and right fields of view with a binocular camera fixed directly above the working area, and locating the target by shape-template matching to obtain its three-dimensional coordinates in the camera coordinate system; transforming from the camera coordinate system to the robot base coordinate system; and sorting the detected target coordinates by depth and unstacking layer by layer. Because target identification and three-dimensional localization of the products in the unstacking area are achieved directly from the acquired binocular images, the unstacking task is automated, effectively overcoming the high labor intensity and low efficiency of manual unstacking.

Description

A robot unstacking method based on binocular stereo vision
Technical field
The present invention relates to a robot unstacking method based on binocular stereo vision.
Background technology
At present, automated palletizing systems are widely used in product storage both at home and abroad: products are stacked directly onto pallets and then moved into the packed warehouse by forklift. When products must leave the warehouse for outbound transport, however, they are usually still taken off the pallets by hand before loading. Manual unstacking is labor-intensive and inefficient. Moreover, with the rapid development of China's economy and the rising standard of living, labor shortages have gradually appeared in recent years, and heavy, monotonous manual labor is in especially short supply.
There is therefore an urgent need for automated, intelligent unstacking equipment and methods to resolve the conflict between production and transport.
Summary of the invention
To solve these problems, the present invention proposes a robot unstacking method based on binocular stereo vision. By acquiring binocular images, the method achieves target identification and three-dimensional localization of the products in the unstacking area and thereby automates the unstacking task, effectively overcoming the high labor intensity and low efficiency of manual unstacking.
To achieve these goals, the present invention adopts the following technical scheme:
A robot unstacking method based on binocular stereo vision, comprising the following steps:
(1) simultaneously acquire the images of the left and right fields of view with a binocular camera fixed directly above the working area, and locate the target by shape-template matching to obtain its three-dimensional coordinates in the camera coordinate system;
(2) transform from the camera coordinate system to the robot base coordinate system;
(3) sort the detected target coordinates by depth and unstack layer by layer.
The concrete steps of step (1) comprise:
(1-1) simultaneously acquire the left- and right-eye images with the binocular camera fixed directly above the working area, and apply noise reduction to them;
(1-2) perform shape-based template matching on the left- and right-eye images separately to obtain the target's pixel coordinates in each image;
(1-3) convert the left and right pixel coordinates into three-dimensional coordinates in the camera coordinate system by triangulation.
In step (1-2), the matching method for a single image is:
A) build the shape template of the product offline and register it; the template is defined as a point set $p_i=(x_i,y_i)^T$ together with a direction vector $d_i=(t_i,u_i)^T$ for each point, $i=1,2,\dots,n$, where $n$ is the number of points with nonzero direction vectors, and the direction vectors are computed with the Sobel gradient operator;
B) during online template matching, compute a direction vector $e_{x,y}=(v_{x,y},w_{x,y})^T$ for each point $q=(x,y)^T$ of the image to be searched;
C) at a given point of the search image, define the similarity measure between template and image as the normalized sum of dot products of their direction vectors:
$$s=\frac{1}{n}\sum_{i=1}^{n}\frac{{d'_i}^{T}e_{q+p'_i}}{\lVert d'_i\rVert\,\lVert e_{q+p'_i}\rVert}=\frac{1}{n}\sum_{i=1}^{n}\frac{t'_i v_{x+x'_i,\,y+y'_i}+u'_i w_{x+x'_i,\,y+y'_i}}{\sqrt{{t'_i}^{2}+{u'_i}^{2}}\sqrt{v_{x+x'_i,\,y+y'_i}^{2}+w_{x+x'_i,\,y+y'_i}^{2}}}\qquad(1)$$
where $p'_i=(x'_i,y'_i)^T$ and $d'_i=(t'_i,u'_i)^T$ are the template point coordinates and direction vectors after a rigid or similarity transformation; if $s$ exceeds a given threshold, the region around the point is considered a target image;
D) target search: apply an image-pyramid, coarse-to-fine matching strategy on the left- and right-eye images separately, discretizing the search space with a scale step $\Delta s$ and a rotation-angle step $\Delta\theta$;
E) if higher precision is required, perform sub-pixel localization of the edge points near the target detected in step D) using the facet model, and then refine the matching result with the ICP (Iterative Closest Point) algorithm to obtain high positioning accuracy.
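As an illustration, the similarity score of equation (1) can be sketched in Python with NumPy. The function name and array layout are assumptions for this sketch, not part of the patent:

```python
import numpy as np

def shape_similarity(tpl_pts, tpl_dirs, img_dirs, q):
    """Score equation (1): mean normalized dot product between the
    template direction vectors and the image direction vectors sampled
    at the template points shifted to candidate position q.

    tpl_pts  : (n, 2) int array of transformed template points p'_i
    tpl_dirs : (n, 2) array of template direction vectors d'_i
    img_dirs : (H, W, 2) array of image gradient vectors e_{x,y}
    q        : (x, y) candidate position in the search image
    """
    xs = q[0] + tpl_pts[:, 0]
    ys = q[1] + tpl_pts[:, 1]
    e = img_dirs[ys, xs]                      # (n, 2) sampled image gradients
    dot = (tpl_dirs * e).sum(axis=1)          # d'_i . e_{q+p'_i}
    norm = np.linalg.norm(tpl_dirs, axis=1) * np.linalg.norm(e, axis=1)
    valid = norm > 1e-12                      # skip zero-gradient points
    return float(np.mean(dot[valid] / norm[valid])) if valid.any() else 0.0
```

Points whose image gradient is near zero are skipped so the normalization never divides by zero; the full method would evaluate this score for every discretized pose and keep the candidates above the threshold.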
The concrete method of step (1-3) is:
given the pixel coordinates $(u_l,v_l)$ and $(u_r,v_r)$ of the target in the left- and right-eye images, its three-dimensional coordinates in the camera coordinate system follow from triangulation:
$$X=\frac{b}{d}C\qquad(2)$$
where $X=[X\ Y\ Z]^T$ are the target's three-dimensional coordinates in the camera coordinate system, $d=u_l-u_r$ is the disparity, $b$ is the baseline, and $C=[u_l\ v\ f]^T$. The camera used here is a parallel-optical-axis binocular camera whose left and right cameras have identical parameters, so $v_l=v_r=v$ and $f_l=f_r=f$.
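A minimal sketch of equation (2) in Python, assuming pixel coordinates already referenced to the principal point and a known baseline $b$ and focal length $f$ in consistent units:

```python
import numpy as np

def triangulate(ul, vl, ur, b, f):
    """Equation (2): X = (b/d) * C for a parallel-axis rig with
    identical left/right cameras (v_l = v_r = v, f_l = f_r = f).
    The disparity d = u_l - u_r must be nonzero (positive for a
    point in front of the rig)."""
    d = ul - ur                        # disparity
    C = np.array([ul, vl, f], dtype=float)
    return (b / d) * C                 # [X, Y, Z] in camera coordinates
```

Note the familiar special case Z = b·f/d falls out of the third component.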
The concrete steps of step (2) comprise:
(2-1) place a calibration reference at $n$ randomly chosen positions in the robot workspace, making sure each chosen position lies within the camera's field of view;
(2-2) at each position, compute the reference's three-dimensional coordinates $(X_c,Y_c,Z_c)$ in the camera coordinate system with the vision-localization module described above;
(2-3) have the robot gripper hold a pointed object (such as a pencil) and jog the robot until the tip touches the center of the reference; the reference's three-dimensional coordinates $(X_r,Y_r,Z_r)$ in the robot coordinate system are then read from the robot controller;
(2-4) since the camera is fixed above the robot's working area, $(X_c,Y_c,Z_c)$ and $(X_r,Y_r,Z_r)$ satisfy:
$$\begin{cases}X_r=m_{11}X_c+m_{12}Y_c+m_{13}Z_c+P_x\\Y_r=m_{21}X_c+m_{22}Y_c+m_{23}Z_c+P_y\\Z_r=m_{31}X_c+m_{32}Y_c+m_{33}Z_c+P_z\end{cases}\qquad(3)$$
Write $R=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}$ and $T=\begin{bmatrix}P_x\\P_y\\P_z\end{bmatrix}$; $R$ and $T$ are the rotation matrix and the translation vector, respectively. Substituting the $n$ data pairs obtained in steps (2-2) and (2-3) into equation (3) yields a system of $3n$ linear equations;
(2-5) solve the system of linear equations for $R$ and $T$ by least squares.
The concrete method of step (3) comprises:
(3-1) for the $n$ detected objects, first sort the targets by depth in ascending order, then judge which of them lie in the same layer;
(3-2) after layering, sort the objects in the top layer in a chosen order to determine the final unstacking sequence, and have the robot unstack them;
(3-3) after the robot finishes unstacking the current layer, return to step (1), trigger the binocular camera to acquire a new pair of images, and proceed layer by layer.
The detailed steps of step (3-1) are: for the $n$ detected objects $\{OBJ_1,OBJ_2,\dots,OBJ_n\}$ with coordinates $(X_1,Y_1,Z_1),(X_2,Y_2,Z_2),\dots,(X_n,Y_n,Z_n)$, first sort the targets by depth in ascending order and then judge: if $|Z_i-Z_j|\le\lambda H$ ($i=1,2,\dots,n$, $j>i$), objects $i$ and $j$ are in the same layer; otherwise the two objects are in different layers. Here $H$ is the object thickness and $\lambda$ is a coefficient.
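The layering rule of step (3-1) can be sketched as follows. This sketch applies a chained variant of the pairwise test $|Z_i-Z_j|\le\lambda H$ (comparing adjacent depths after sorting), and the default $\lambda = 0.5$ is an assumed value:

```python
def split_layers(objects, H, lam=0.5):
    """Group detected objects into layers: sort by depth Z ascending,
    then start a new layer whenever the depth gap to the previously
    grouped object exceeds lam * H (H = object thickness, lam is the
    tolerance coefficient).

    objects : list of (X, Y, Z) tuples
    returns : list of layers, each a list of (X, Y, Z), shallowest first
    """
    ordered = sorted(objects, key=lambda p: p[2])
    layers = []
    for obj in ordered:
        if layers and abs(obj[2] - layers[-1][-1][2]) <= lam * H:
            layers[-1].append(obj)   # close enough in depth: same layer
        else:
            layers.append([obj])     # depth jump: start a new layer
    return layers
```

The shallowest layer (`layers[0]`) is the top of the stack and would be unstacked first.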
The concrete steps of step (3-2) are: after layering, sort the top-layer objects in a chosen order to determine the final unstacking sequence $\{OBJ_1,OBJ_2,\dots,OBJ_m\}$ and pass their three-dimensional coordinates to the robot motion controller; the controller drives the end effector to each position to grasp the object and, once the release position is issued, the object is unstacked.
The beneficial effects of the present invention are:
(1) the proposed robot unstacking system based on binocular stereo vision performs target identification and three-dimensional localization of the products in the unstacking area directly from the acquired binocular images, automating the unstacking task and effectively overcoming the high labor intensity and low efficiency of manual unstacking;
(2) the proposed system uses shape-template matching both for target detection and for stereo matching of corresponding feature points in the left and right images; because the algorithm uses only the gradient information of the image, it is not easily disturbed by occlusion, clutter, or nonlinear illumination changes, and is therefore more practical and robust than traditional gray-level correlation matching;
(3) the proposed system needs no dedicated product-positioning fixture and can unstack stacks of arbitrary shape; unstacking new product types only requires adding new template data to the software module, making the operation flexible and intelligent, with good prospects for wide application.
Brief description of the drawings
Fig. 1 is a schematic diagram of the system hardware.
Fig. 2 is the flowchart of the unstacking system.
Fig. 3 is a schematic diagram of the unstacking algorithm.
Detailed description of the embodiments:
The invention is further described below with reference to the accompanying drawings and embodiments.
The present invention proposes a robot unstacking method based on binocular stereo vision. The system consists mainly of a binocular camera, an industrial robot with its motion controller, and a computer for image processing and robot control, structured as shown in Fig. 1. A product unstacking area and a collecting area are arranged around the industrial robot. The binocular camera is fixed directly above the unstacking area; it senses the external environment and streams the acquired scene images to the computer for processing in real time. The industrial robot, as the main actuator, grasps or picks up the products (depending on the end effector) and places them in the corresponding collecting area. The software comprises an image-processing module and a robot-motion-control module, which run on the same computer and communicate across threads via message queues. The image-processing module performs shape-based template matching between the acquired binocular images and the registered templates to decide whether the corresponding product appears; if and only if the registered product image appears in both binocular images, the three-dimensional coordinates of the product entering the camera's field of view are computed in the camera coordinate system by triangulation. These coordinates are then mapped into the robot coordinate system using the camera-to-robot-base transformation obtained offline and passed to the motion-control module via thread communication. On receiving the coordinates, the motion-control module issues commands according to the predetermined unstacking strategy, driving the end effector to grasp or pick up the products layer by layer and place them in the corresponding product-collecting area, completing the pick-and-place unstacking task.
As shown in Fig. 2, the workflow of the unstacking system can be divided into two stages: offline preparation and online implementation.
One, the offline preparation stage
1. Binocular camera calibration
First, the intrinsic and extrinsic parameters of each camera are calibrated with a method such as Zhang Zhengyou's method; then the relative spatial relationship between the two cameras is calibrated with an optimization algorithm.
2. Template registration
A shape-based target template must first be built for the template image of the product. The template is defined as a point set $p_i=(x_i,y_i)^T$ together with a direction vector $d_i=(t_i,u_i)^T$ for each point, $i=1,2,\dots,n$, where $n$ is the number of points with nonzero direction vectors; the direction vectors can be computed with the Sobel gradient operator.
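Template registration can be sketched as follows: Sobel gradients of a grayscale template image are computed, and every pixel with sufficient gradient magnitude becomes a template point $p_i$ with its gradient as the direction vector $d_i$. The function name and the magnitude threshold are assumptions:

```python
import numpy as np

def make_shape_template(img, mag_thresh=30.0):
    """Build the shape template {p_i, d_i} from a 2-D grayscale image:
    Sobel gradients are computed and every pixel whose gradient
    magnitude exceeds mag_thresh is kept as a template point.
    Returns (pts, dirs): (n, 2) point coordinates and direction vectors."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                   # Sobel y
    pad = np.pad(img.astype(float), 1, mode="edge")
    H, W = img.shape
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    for dy in range(3):                  # correlate with the 3x3 kernels
        for dx in range(3):
            win = pad[dy:dy + H, dx:dx + W]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    ys, xs = np.nonzero(np.hypot(gx, gy) > mag_thresh)
    pts = np.stack([xs, ys], axis=1)                    # p_i = (x_i, y_i)
    dirs = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)   # d_i = (t_i, u_i)
    return pts, dirs
```

In practice the template points would also be re-centered on the model's reference point before matching.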
3. Hand-eye calibration
A flat plate whose surface pattern is easy to identify and locate is used as the calibration reference. The calibration steps are as follows:
(1) place the reference at $n$ randomly chosen positions in the robot workspace;
(2) at each position, compute the reference's three-dimensional coordinates $(X_c,Y_c,Z_c)$ in the camera coordinate system with the vision-localization module;
(3) have the robot gripper hold a pointed object (such as a pencil) and jog the robot until the tip touches the center of the reference; the reference's three-dimensional coordinates $(X_r,Y_r,Z_r)$ in the robot coordinate system are then read from the robot controller;
(4) since the camera is fixed above the robot's working area, $(X_c,Y_c,Z_c)$ and $(X_r,Y_r,Z_r)$ satisfy:
$$\begin{cases}X_r=m_{11}X_c+m_{12}Y_c+m_{13}Z_c+P_x\\Y_r=m_{21}X_c+m_{22}Y_c+m_{23}Z_c+P_y\\Z_r=m_{31}X_c+m_{32}Y_c+m_{33}Z_c+P_z\end{cases}\qquad(1)$$
Write $R=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}$ and $T=\begin{bmatrix}P_x\\P_y\\P_z\end{bmatrix}$; $R$ and $T$ are the rotation and translation matrices, respectively. Substituting the data of the $n$ points obtained in steps (2) and (3) into equation (1) yields a system of $3n$ linear equations.
(5) solving the system of linear equations by least squares yields $R$ and $T$.
Two, the online implementation stage
1. Start the binocular stereo camera and synchronously acquire images of the working scene.
2. Image preprocessing: low-pass filter the input images to remove noise as far as possible; then, using the binocular calibration results, correct the radial distortion of the lenses and vertically align the left and right images.
3. Perform target detection on the left and right images separately, with the following concrete steps:
(1) compute a direction vector $e_{x,y}=(v_{x,y},w_{x,y})^T$ for each point $q=(x,y)^T$ of the search image.
(2) at a given point of the search image, compute the similarity $s$ between template and image:
$$s=\frac{1}{n}\sum_{i=1}^{n}\frac{{d'_i}^{T}e_{q+p'_i}}{\lVert d'_i\rVert\,\lVert e_{q+p'_i}\rVert}=\frac{1}{n}\sum_{i=1}^{n}\frac{t'_i v_{x+x'_i,\,y+y'_i}+u'_i w_{x+x'_i,\,y+y'_i}}{\sqrt{{t'_i}^{2}+{u'_i}^{2}}\sqrt{v_{x+x'_i,\,y+y'_i}^{2}+w_{x+x'_i,\,y+y'_i}^{2}}}\qquad(2)$$
where $p'_i=(x'_i,y'_i)^T$ and $d'_i=(t'_i,u'_i)^T$ are the template point coordinates and direction vectors after a rigid or similarity transformation. If $s$ exceeds a given threshold, the region around the point can be considered a target image.
(3) target search: apply an image-pyramid, coarse-to-fine matching strategy on the left- and right-eye images separately, discretizing the search space with a scale step $\Delta s$ and a rotation-angle step $\Delta\theta$.
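The discretization of the pose search space with steps $\Delta s$ and $\Delta\theta$ can be sketched as a generator of similarity-transformed templates (names are illustrative); a coarse-to-fine search would evaluate the similarity score for each yielded pose at each pyramid level:

```python
import numpy as np

def transformed_templates(pts, dirs, s_min, s_max, ds, dtheta):
    """Enumerate the discretized (scale, angle) search space and yield
    the similarity-transformed template points p'_i and direction
    vectors d'_i for each candidate pose."""
    scales = np.arange(s_min, s_max + 1e-9, ds)
    angles = np.arange(0.0, 2 * np.pi, dtheta)
    for s in scales:
        for a in angles:
            c, sn = np.cos(a), np.sin(a)
            A = s * np.array([[c, -sn], [sn, c]])  # rotation scaled by s
            yield s, a, pts @ A.T, dirs @ A.T
```

At coarse pyramid levels, larger steps keep the enumeration small; the steps are refined only around the surviving candidates.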
(4) if higher precision is required, sub-pixel localization of the edge points based on the facet model can be applied near the target detected in step (3), followed by ICP (Iterative Closest Point) refinement of the matching result, to obtain high positioning accuracy.
4. The above steps yield the feature-point coordinates $(u_l,v_l)$ and $(u_r,v_r)$ of the target in the left and right images; its spatial position then follows from triangulation as $X=\frac{b}{d}C$, where $X=[X\ Y\ Z]^T$ are the target's three-dimensional coordinates in the camera coordinate system, $d=u_l-u_r$ is the disparity, $b$ is the baseline, and $C=[u_l\ v\ f]^T$; the left and right cameras of the binocular rig used here have identical parameters and parallel optical axes, so $v_l=v_r=v$ and $f_l=f_r=f$.
5. Using equation (1), transform the target's coordinates from the camera coordinate system into the robot base coordinate system.
6. The robot performs the unstacking operation: first layer the targets by depth, then grasp or pick up the objects in the top layer and place them in the corresponding product-collecting area. The concrete steps are as follows:
(1) same-layer judgment: for the $n$ objects $OBJ_1,OBJ_2,\dots,OBJ_n$ detected by the vision module, with coordinates $(X_1,Y_1,Z_1),(X_2,Y_2,Z_2),\dots,(X_n,Y_n,Z_n)$, first sort the targets by depth in ascending order and then judge: if $|Z_i-Z_j|\le\lambda H$ ($i=1,2,\dots,n$, $j>i$), objects $i$ and $j$ are in the same layer; otherwise the two objects are in different layers. Here $H$ is the object thickness and $\lambda$ is a coefficient.
(2) after layering, sort the top-layer objects in a chosen order to determine the final unstacking sequence $\{OBJ_1,OBJ_2,\dots,OBJ_m\}$ and pass their three-dimensional coordinates to the robot motion controller; the controller drives the end effector to each position to grasp the object and, once the release position is issued, the object is unstacked.
(3) after the robot finishes unstacking the current layer, the binocular camera is triggered again to acquire images and obtain the three-dimensional coordinates of the targets in its field of view; the end effector is then driven to the specified positions to complete this round of unstacking, and so on until every target in the working area has been unstacked.
Thus, in this binocular-camera-based industrial robot unstacking system, the vision-processing module and the motion-control module work in coordination: the vision module identifies and localizes the products in the current unstacking area from the acquired binocular images, and the motion-control module issues the corresponding commands based on the localization result, driving the end effector to carry the products from the unstacking area to the corresponding collecting area; the cycle then repeats. Experiments show that the proposed robot unstacking method based on binocular stereo vision accomplishes the product sorting process in the unstacking area stably, accurately, and quickly.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of the invention. Those skilled in the art should understand that various modifications or variations that can be made on the basis of the technical scheme of the present invention without creative work still fall within the protection scope of the invention.

Claims (8)

1. A robot unstacking method based on binocular stereo vision, characterized by comprising the following steps:
(1) simultaneously acquire the images of the left and right fields of view with a binocular camera fixed directly above the working area, and locate the target by shape-template matching to obtain its three-dimensional coordinates in the camera coordinate system;
(2) transform from the camera coordinate system to the robot base coordinate system;
(3) sort the detected target coordinates by depth, order the objects in the top layer to determine the final unstacking sequence, and unstack layer by layer.
2. The robot unstacking method based on binocular stereo vision of claim 1, characterized in that the concrete steps of step (1) comprise:
(1-1) simultaneously acquire the left- and right-eye images with the binocular camera fixed directly above the working area, and apply noise reduction to them;
(1-2) perform shape-based template matching on the left- and right-eye images separately to obtain the target's pixel coordinates in each image;
(1-3) convert the left and right pixel coordinates into three-dimensional coordinates in the camera coordinate system by triangulation.
3. The robot unstacking method based on binocular stereo vision of claim 2, characterized in that in step (1-2) the matching method for a single image is:
A) build the shape template of the product offline and register it; the template is defined as a point set $p_i=(x_i,y_i)^T$ together with a direction vector $d_i=(t_i,u_i)^T$ for each point, $i=1,2,\dots,n$, where $n$ is the number of points with nonzero direction vectors, and the direction vectors are computed with the Sobel gradient operator;
B) during online template matching, compute a direction vector $e_{x,y}=(v_{x,y},w_{x,y})^T$ for each point $q=(x,y)^T$ of the image to be searched;
C) at a given point of the search image, define the similarity measure between template and image as the normalized sum of dot products of their direction vectors:
$$s=\frac{1}{n}\sum_{i=1}^{n}\frac{{d'_i}^{T}e_{q+p'_i}}{\lVert d'_i\rVert\,\lVert e_{q+p'_i}\rVert}=\frac{1}{n}\sum_{i=1}^{n}\frac{t'_i v_{x+x'_i,\,y+y'_i}+u'_i w_{x+x'_i,\,y+y'_i}}{\sqrt{{t'_i}^{2}+{u'_i}^{2}}\sqrt{v_{x+x'_i,\,y+y'_i}^{2}+w_{x+x'_i,\,y+y'_i}^{2}}}\qquad(1)$$
where $p'_i=(x'_i,y'_i)^T$ and $d'_i=(t'_i,u'_i)^T$ are the template point coordinates and direction vectors after a rigid or similarity transformation; if $s$ exceeds a given threshold, the region around the point is considered a target image;
D) target search: apply an image-pyramid, coarse-to-fine matching strategy on the left- and right-eye images separately, discretizing the search space with a scale step $\Delta s$ and a rotation-angle step $\Delta\theta$;
E) if higher precision is required, perform sub-pixel localization of the edge points near the target detected in step D) using the facet model, and then refine the matching result with the ICP (Iterative Closest Point) algorithm to obtain high positioning accuracy.
4. The robot unstacking method based on binocular stereo vision of claim 2, characterized in that the concrete method of step (1-3) is:
given the pixel coordinates $(u_l,v_l)$ and $(u_r,v_r)$ of the target in the left- and right-eye images, its three-dimensional coordinates in the camera coordinate system follow from triangulation:
$$X=\frac{b}{d}C\qquad(2)$$
where $X=[X\ Y\ Z]^T$ are the target's three-dimensional coordinates in the camera coordinate system, $d=u_l-u_r$ is the disparity, $b$ is the baseline, and $C=[u_l\ v\ f]^T$; the camera used here is a parallel-optical-axis binocular camera whose left and right cameras have identical parameters, so $v_l=v_r=v$ and $f_l=f_r=f$.
5. The robot unstacking method based on binocular stereo vision of claim 1, characterized in that the concrete steps of step (2) comprise:
(2-1) place a calibration reference at $n$ randomly chosen positions in the robot workspace, making sure each chosen position lies within the camera's field of view;
(2-2) at each position, compute the reference's three-dimensional coordinates $(X_c,Y_c,Z_c)$ in the camera coordinate system with the vision-localization module described above;
(2-3) have the robot gripper hold a pointed object and jog the robot until the tip touches the center of the reference; the reference's three-dimensional coordinates $(X_r,Y_r,Z_r)$ in the robot coordinate system are then read from the robot controller;
(2-4) since the camera is fixed above the robot's working area, $(X_c,Y_c,Z_c)$ and $(X_r,Y_r,Z_r)$ satisfy:
$$\begin{cases}X_r=m_{11}X_c+m_{12}Y_c+m_{13}Z_c+P_x\\Y_r=m_{21}X_c+m_{22}Y_c+m_{23}Z_c+P_y\\Z_r=m_{31}X_c+m_{32}Y_c+m_{33}Z_c+P_z\end{cases}\qquad(3)$$
Write $R=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}$ and $T=\begin{bmatrix}P_x\\P_y\\P_z\end{bmatrix}$; $R$ and $T$ are the rotation matrix and the translation vector, respectively. Substituting the $n$ data pairs obtained in steps (2-2) and (2-3) into equation (3) yields a system of $3n$ linear equations;
(2-5) solve the system of linear equations for $R$ and $T$ by least squares.
6. The robot unstacking method based on binocular stereo vision of claim 1, characterized in that the concrete method of step (3) comprises:
(3-1) for the $n$ detected objects, first sort the targets by depth in ascending order, then judge which of them lie in the same layer;
(3-2) after layering, sort the objects in the top layer to determine the final unstacking sequence, and have the robot unstack them;
(3-3) after the robot finishes unstacking the current layer, return to step (1), trigger the binocular camera to acquire a new pair of images, and proceed layer by layer.
7. The robot unstacking method based on binocular stereo vision of claim 6, characterized in that the detailed steps of step (3-1) are: for the $n$ detected objects $\{OBJ_1,OBJ_2,\dots,OBJ_n\}$ with coordinates $(X_1,Y_1,Z_1),(X_2,Y_2,Z_2),\dots,(X_n,Y_n,Z_n)$, first sort the targets by depth in ascending order and then judge: if $|Z_i-Z_j|\le\lambda H$ ($i=1,2,\dots,n$, $j>i$), objects $i$ and $j$ are in the same layer; otherwise the two objects are in different layers. Here $H$ is the object thickness and $\lambda$ is a coefficient.
8. The robot unstacking method based on binocular stereo vision of claim 6, characterized in that the concrete steps of step (3-2) are: after layering, sort the top-layer objects in a chosen order to determine the final unstacking sequence $\{OBJ_1,OBJ_2,\dots,OBJ_m\}$ and pass their three-dimensional coordinates to the robot motion controller; the controller drives the end effector to each position to grasp the object and, once the release position is issued, the object is unstacked.
CN201410665285.0A 2014-11-19 2014-11-19 Robot unstacking method based on binocular stereoscopic vision Pending CN104331894A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410665285.0A CN104331894A (en) 2014-11-19 2014-11-19 Robot unstacking method based on binocular stereoscopic vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410665285.0A CN104331894A (en) 2014-11-19 2014-11-19 Robot unstacking method based on binocular stereoscopic vision

Publications (1)

Publication Number Publication Date
CN104331894A (en) 2015-02-04

Family

ID=52406614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410665285.0A Pending CN104331894A (en) 2014-11-19 2014-11-19 Robot unstacking method based on binocular stereoscopic vision

Country Status (1)

Country Link
CN (1) CN104331894A (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105217324A (en) * 2015-10-20 2016-01-06 上海影火智能科技有限公司 A kind of novel de-stacking method and system
CN105261012A (en) * 2015-09-25 2016-01-20 上海瑞伯德智能系统科技有限公司 Template matching method based on Sobel vectors
CN105513074A (en) * 2015-06-17 2016-04-20 电子科技大学 Badminton robot camera calibration method
CN106041937A (en) * 2016-08-16 2016-10-26 河南埃尔森智能科技有限公司 Control method of manipulator grabbing control system based on binocular stereoscopic vision
CN107065861A (en) * 2017-02-24 2017-08-18 珠海金萝卜智动科技有限公司 Robot collection intelligence is carried, is loaded and unloaded on integral method and apparatus
CN107192331A (en) * 2017-06-20 2017-09-22 佛山市南海区广工大数控装备协同创新研究院 A kind of workpiece grabbing method based on binocular vision
CN107945192A (en) * 2017-12-14 2018-04-20 北京信息科技大学 A kind of pallet carton pile type real-time detection method
CN107973120A (en) * 2017-12-18 2018-05-01 广东美的智能机器人有限公司 Loading system
WO2018120210A1 (en) * 2016-12-30 2018-07-05 深圳配天智能技术研究院有限公司 Method and device for determining position information about stacking point, and robot
CN108313748A (en) * 2018-04-18 2018-07-24 上海发那科机器人有限公司 A kind of 3D visions carton de-stacking system
CN108480239A (en) * 2018-02-10 2018-09-04 浙江工业大学 Workpiece quick sorting method based on stereoscopic vision and device
CN108890636A (en) * 2018-07-06 2018-11-27 陕西大中科技发展有限公司 A kind of crawl localization method of industrial robot
CN109297433A (en) * 2018-11-15 2019-02-01 青岛星晖昌达智能自动化装备有限公司 3D vision guide de-stacking measuring system and its control method
CN109335711A (en) * 2018-11-23 2019-02-15 山东省科学院自动化研究所 A kind of heavy load unstacking robot and de-stacking method
CN109421043A (en) * 2017-08-24 2019-03-05 深圳市远望工业自动化设备有限公司 Automotive oil tank welding positioning method and system based on robot 3D vision
CN109436820A (en) * 2018-09-17 2019-03-08 武汉库柏特科技有限公司 A kind of the de-stacking method and de-stacking system of stacks of goods
CN109521398A (en) * 2018-12-05 2019-03-26 普达迪泰(天津)智能装备科技有限公司 A kind of positioning system and localization method based on multi-vision visual
CN109592433A (en) * 2018-11-29 2019-04-09 合肥泰禾光电科技股份有限公司 A kind of cargo de-stacking method, apparatus and de-stacking system
CN109702738A (en) * 2018-11-06 2019-05-03 深圳大学 A kind of mechanical arm hand and eye calibrating method and device based on Three-dimension object recognition
CN110039525A (en) * 2019-05-25 2019-07-23 塞伯睿机器人技术(长沙)有限公司 Robot is used in 6S management
CN110322457A (en) * 2019-07-09 2019-10-11 中国大恒(集团)有限公司北京图像视觉技术分公司 A kind of de-stacking method of 2D in conjunction with 3D vision
CN110355758A (en) * 2019-07-05 2019-10-22 北京史河科技有限公司 A kind of machine follower method, equipment and follow robot system
CN110533717A (en) * 2019-08-06 2019-12-03 武汉理工大学 A kind of target grasping means and device based on binocular vision
CN111232346A (en) * 2019-11-25 2020-06-05 太原科技大学 Pipe and bar bundling system based on binocular vision
CN111232347A (en) * 2019-11-25 2020-06-05 太原科技大学 Tube and bar bundling method based on binocular vision
CN111311691A (en) * 2020-03-05 2020-06-19 上海交通大学 Unstacking method and system of unstacking robot
CN111347411A (en) * 2018-12-20 2020-06-30 中国科学院沈阳自动化研究所 Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN111439594A (en) * 2020-03-09 2020-07-24 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance
CN111754515A (en) * 2019-12-17 2020-10-09 北京京东尚科信息技术有限公司 Method and device for sequential gripping of stacked articles
CN113012238A (en) * 2021-04-09 2021-06-22 南京星顿医疗科技有限公司 Method for rapid calibration and data fusion of multi-depth camera
CN114408597A (en) * 2022-02-17 2022-04-29 湖南视比特机器人有限公司 Truck loading and unloading method and system based on 3D visual guidance and truck loading and unloading robot
CN116051658A (en) * 2023-03-27 2023-05-02 北京科技大学 Camera hand-eye calibration method and device for target detection based on binocular vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1685367A (en) * 2002-10-02 2005-10-19 西门子共同研究公司 Fast two dimensional object localization based on oriented edges
CN202175438U (en) * 2011-06-27 2012-03-28 山东省科学院自动化研究所 Running control system of palletizing robot
CN103112008A (en) * 2013-01-29 2013-05-22 上海智周自动化工程有限公司 Method of automatic positioning and carrying of dual-vision robot used for floor cutting
CN103738902A (en) * 2014-01-02 2014-04-23 长春北方仪器设备有限公司 Vision positioning type filling robot system and vision positioning type filling method
US20140199142A1 (en) * 2013-01-15 2014-07-17 Wynright Corporation Automatic Tire Loader/Unloader for Stacking/Unstacking Tires in a Trailer
CN104058260A (en) * 2013-09-27 2014-09-24 沈阳工业大学 Robot automatic stacking method based on visual processing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1685367A (en) * 2002-10-02 2005-10-19 西门子共同研究公司 Fast two dimensional object localization based on oriented edges
CN202175438U (en) * 2011-06-27 2012-03-28 山东省科学院自动化研究所 Running control system of palletizing robot
US20140199142A1 (en) * 2013-01-15 2014-07-17 Wynright Corporation Automatic Tire Loader/Unloader for Stacking/Unstacking Tires in a Trailer
CN103112008A (en) * 2013-01-29 2013-05-22 上海智周自动化工程有限公司 Method of automatic positioning and carrying of dual-vision robot used for floor cutting
CN104058260A (en) * 2013-09-27 2014-09-24 沈阳工业大学 Robot automatic stacking method based on visual processing
CN103738902A (en) * 2014-01-02 2014-04-23 长春北方仪器设备有限公司 Vision positioning type filling robot system and vision positioning type filling method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CARSTEN STEGER: "Similarity Measures for Occlusion, Clutter, and Illumination Invariant Object Recognition", 《LECTURE NOTES IN COMPUTER SCIENCE》 *
XINJIAN FAN ET AL: "A Combined 2D-3D Vision System for Automatic Robot Picking", 《PROCEEDINGS OF THE 2014 INTERNATIONAL CONFERENCE ON ADVANCED MECHATRONIC SYSTEMS》 *
XINJIAN FAN ET AL: "An Automatic Robot Unstacking System Based on Binocular Stereo Vision", 《2014 IEEE INTERNATIONAL CONFERENCE ON SECURITY, PATTERN ANALYSIS, AND CYBERNETICS (SPAC)》 *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513074A (en) * 2015-06-17 2016-04-20 电子科技大学 Badminton robot camera calibration method
CN105513074B (en) * 2015-06-17 2018-12-11 电子科技大学 A kind of scaling method of shuttlecock robot camera and vehicle body to world coordinate system
CN105261012A (en) * 2015-09-25 2016-01-20 上海瑞伯德智能系统科技有限公司 Template matching method based on Sobel vectors
CN105217324A (en) * 2015-10-20 2016-01-06 上海影火智能科技有限公司 A kind of novel de-stacking method and system
CN106041937A (en) * 2016-08-16 2016-10-26 河南埃尔森智能科技有限公司 Control method of manipulator grabbing control system based on binocular stereoscopic vision
WO2018120210A1 (en) * 2016-12-30 2018-07-05 深圳配天智能技术研究院有限公司 Method and device for determining position information about stacking point, and robot
CN107065861A (en) * 2017-02-24 2017-08-18 珠海金萝卜智动科技有限公司 Robot collection intelligence is carried, is loaded and unloaded on integral method and apparatus
CN107192331A (en) * 2017-06-20 2017-09-22 佛山市南海区广工大数控装备协同创新研究院 A kind of workpiece grabbing method based on binocular vision
CN109421043A (en) * 2017-08-24 2019-03-05 深圳市远望工业自动化设备有限公司 Automotive oil tank welding positioning method and system based on robot 3D vision
CN107945192A (en) * 2017-12-14 2018-04-20 北京信息科技大学 A kind of pallet carton pile type real-time detection method
CN107945192B (en) * 2017-12-14 2021-10-22 北京信息科技大学 Tray carton pile type real-time detection method
CN107973120A (en) * 2017-12-18 2018-05-01 广东美的智能机器人有限公司 Loading system
CN108480239A (en) * 2018-02-10 2018-09-04 浙江工业大学 Workpiece quick sorting method based on stereoscopic vision and device
CN108480239B (en) * 2018-02-10 2019-10-18 浙江工业大学 Workpiece quick sorting method and device based on stereoscopic vision
CN108313748A (en) * 2018-04-18 2018-07-24 上海发那科机器人有限公司 A kind of 3D visions carton de-stacking system
CN108890636A (en) * 2018-07-06 2018-11-27 陕西大中科技发展有限公司 A kind of crawl localization method of industrial robot
CN109436820B (en) * 2018-09-17 2024-04-16 武汉库柏特科技有限公司 Destacking method and destacking system for goods stack
CN109436820A (en) * 2018-09-17 2019-03-08 武汉库柏特科技有限公司 A kind of the de-stacking method and de-stacking system of stacks of goods
CN109702738B (en) * 2018-11-06 2021-12-07 深圳大学 Mechanical arm hand-eye calibration method and device based on three-dimensional object recognition
CN109702738A (en) * 2018-11-06 2019-05-03 深圳大学 A kind of mechanical arm hand and eye calibrating method and device based on Three-dimension object recognition
CN109297433A (en) * 2018-11-15 2019-02-01 青岛星晖昌达智能自动化装备有限公司 3D vision guide de-stacking measuring system and its control method
CN109335711A (en) * 2018-11-23 2019-02-15 山东省科学院自动化研究所 A kind of heavy load unstacking robot and de-stacking method
CN109335711B (en) * 2018-11-23 2024-03-22 山东省科学院自动化研究所 Large-load unstacking robot and unstacking method
CN109592433A (en) * 2018-11-29 2019-04-09 合肥泰禾光电科技股份有限公司 A kind of cargo de-stacking method, apparatus and de-stacking system
CN109521398A (en) * 2018-12-05 2019-03-26 普达迪泰(天津)智能装备科技有限公司 A kind of positioning system and localization method based on multi-vision visual
CN111347411B (en) * 2018-12-20 2023-01-24 中国科学院沈阳自动化研究所 Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN111347411A (en) * 2018-12-20 2020-06-30 中国科学院沈阳自动化研究所 Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN110039525A (en) * 2019-05-25 2019-07-23 塞伯睿机器人技术(长沙)有限公司 Robot is used in 6S management
CN110355758A (en) * 2019-07-05 2019-10-22 北京史河科技有限公司 A kind of machine follower method, equipment and follow robot system
CN110322457A (en) * 2019-07-09 2019-10-11 中国大恒(集团)有限公司北京图像视觉技术分公司 A kind of de-stacking method of 2D in conjunction with 3D vision
CN110533717B (en) * 2019-08-06 2023-08-01 武汉理工大学 Target grabbing method and device based on binocular vision
CN110533717A (en) * 2019-08-06 2019-12-03 武汉理工大学 A kind of target grasping means and device based on binocular vision
CN111232347A (en) * 2019-11-25 2020-06-05 太原科技大学 Tube and bar bundling method based on binocular vision
CN111232346B (en) * 2019-11-25 2021-09-28 太原科技大学 Pipe and bar bundling system based on binocular vision
CN111232347B (en) * 2019-11-25 2021-09-28 太原科技大学 Tube and bar bundling method based on binocular vision
CN111232346A (en) * 2019-11-25 2020-06-05 太原科技大学 Pipe and bar bundling system based on binocular vision
CN111754515A (en) * 2019-12-17 2020-10-09 北京京东尚科信息技术有限公司 Method and device for sequential gripping of stacked articles
CN111754515B (en) * 2019-12-17 2024-03-01 北京京东乾石科技有限公司 Sequential gripping method and device for stacked articles
CN111311691A (en) * 2020-03-05 2020-06-19 上海交通大学 Unstacking method and system of unstacking robot
CN111439594B (en) * 2020-03-09 2022-02-18 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance
CN111439594A (en) * 2020-03-09 2020-07-24 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance
CN113012238A (en) * 2021-04-09 2021-06-22 南京星顿医疗科技有限公司 Method for rapid calibration and data fusion of multi-depth camera
CN113012238B (en) * 2021-04-09 2024-04-16 南京星顿医疗科技有限公司 Method for quick calibration and data fusion of multi-depth camera
CN114408597A (en) * 2022-02-17 2022-04-29 湖南视比特机器人有限公司 Truck loading and unloading method and system based on 3D visual guidance and truck loading and unloading robot
CN114408597B (en) * 2022-02-17 2023-08-01 湖南视比特机器人有限公司 Truck loading and unloading method and system based on 3D visual guidance and loading and unloading robot
CN116051658A (en) * 2023-03-27 2023-05-02 北京科技大学 Camera hand-eye calibration method and device for target detection based on binocular vision
CN116051658B (en) * 2023-03-27 2023-06-23 北京科技大学 Camera hand-eye calibration method and device for target detection based on binocular vision

Similar Documents

Publication Publication Date Title
CN104331894A (en) Robot unstacking method based on binocular stereoscopic vision
CN109801337B (en) 6D pose estimation method based on instance segmentation network and iterative optimization
CN110211180A (en) A kind of autonomous grasping means of mechanical arm based on deep learning
CN104842361B (en) Robotic system with 3d box location functionality
AU2015307191B2 (en) Combination of stereo and structured-light processing
CN105217324A (en) A kind of novel de-stacking method and system
CN109272523B (en) Random stacking piston pose estimation method based on improved CVFH (continuously variable frequency) and CRH (Crh) characteristics
US9630320B1 (en) Detection and reconstruction of an environment to facilitate robotic interaction with the environment
US9802317B1 (en) Methods and systems for remote perception assistance to facilitate robotic object manipulation
CN103049912B (en) Random trihedron-based radar-camera system external parameter calibration method
DE112019000177T5 (en) A ROBOTIC SYSTEM WITH AN AUTOMATED PACKAGE REGISTRATION MECHANISM AND METHOD TO OPERATE THIS SYSTEM
CN109465809A (en) A kind of Intelligent garbage classification robot based on binocular stereo vision fixation and recognition
CN108555908A (en) A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras
CN104463108A (en) Monocular real-time target recognition and pose measurement method
CN107610176A (en) A kind of pallet Dynamic Recognition based on Kinect and localization method, system and medium
CN103706568A (en) System and method for machine vision-based robot sorting
CN209289269U (en) A kind of Intelligent garbage classification robot based on binocular stereo vision fixation and recognition
CN104058260A (en) Robot automatic stacking method based on visual processing
CN104552341B (en) Mobile industrial robot single-point various visual angles pocket watch position and attitude error detection method
JPWO2009028489A1 (en) Object detection method, object detection apparatus, and robot system
Xia et al. Workpieces sorting system based on industrial robot of machine vision
CN111292376B (en) Visual target tracking method of bionic retina
CN109584216A (en) Object manipulator grabs deformable material bag visual identity and the localization method of operation
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN111311691A (en) Unstacking method and system of unstacking robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150204