CN109146939A - A kind of generation method and system of workpiece grabbing template - Google Patents
Generation method and system for a workpiece grasping template
- Publication number
- CN109146939A (Application CN201811041799.3A)
- Authority
- CN
- China
- Prior art keywords
- workpiece
- image
- robot
- grasp
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
Abstract
The invention discloses a method and system for generating a workpiece grasping template. The generation method comprises: acquiring images of a reference workpiece captured by a robot from multiple angles, wherein grasp point markers are affixed to the surface of the reference workpiece; and reconstructing a three-dimensional model of the reference workpiece from the images to generate a workpiece grasping template, wherein the template includes the graspable positions on the reference workpiece indicated by the grasp point markers. The invention solves the problems of the heavy workload of importing models and the difficulty of specifying grasp points when grasping workpieces, and improves the deployment speed of robotic three-dimensional vision grasping systems.
Description
Technical field
Embodiments of the present invention relate to the field of three-dimensional reconstruction, and in particular to a method and system for generating a workpiece grasping template.
Background art
With the development of artificial-intelligence technology, the demands on robot autonomy have gradually risen: robots are expected to grasp and carry objects autonomously according to human instructions.
At present, when a robot grasps workpieces on a production line, image data of the workpiece is acquired by a camera, and the workpiece is located and grasped through image processing. This prior-art vision-based grasping approach has several shortcomings. First, it requires a large amount of computation, since product dimensions and positions must be calculated from the camera data. Second, the complex calculation process leads to poor real-time performance. Moreover, the grasp point cannot be located precisely, so the grasp is only approximate and the workpiece is easily damaged.
Summary of the invention
In view of this, the purpose of the present invention is to propose a method and system for generating a workpiece grasping template, so as to improve the deployment speed of robotic three-dimensional vision grasping systems.
To achieve the above object, the present invention adopts the following technical scheme:
In one aspect, an embodiment of the invention provides a method for generating a workpiece grasping template, comprising:
Acquiring images of a reference workpiece captured by a robot from multiple angles, wherein grasp point markers are affixed to the surface of the reference workpiece;
Reconstructing a three-dimensional model of the reference workpiece from the images to generate a workpiece grasping template, wherein the template includes the graspable positions on the reference workpiece indicated by the grasp point markers.
Further, acquiring the images of the reference workpiece captured from multiple angles comprises:
Using force-controlled dragging of the robot, adjusting the pose of the depth camera mounted on the robot end by dragging the end, so as to change the camera's viewing angle and capture all grasp point markers;
At each viewing angle, capturing a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece with the depth camera.
Further, reconstructing the three-dimensional model of the reference workpiece from the images to generate the workpiece grasping template comprises:
Registering the three-dimensional images in the robot base coordinate system to obtain three-dimensional model data of the reference workpiece in that coordinate system;
Identifying the grasp point markers from the two-dimensional and three-dimensional images to obtain marker point coordinate data in the robot base coordinate system;
Merging the workpiece three-dimensional model data and the marker point coordinate data;
Re-establishing a workpiece coordinate system, and transforming the merged model data and marker coordinate data into it to obtain the workpiece grasping template.
Further, registering the three-dimensional images in the robot base coordinate system comprises:
Coarsely registering the three-dimensional images based on the pose of the depth camera in the robot base coordinate system;
Finely registering the coarsely registered images using the iterative closest point (ICP) algorithm.
In another aspect, an embodiment of the invention provides a system for generating a workpiece grasping template, comprising a robot, a reference workpiece and a processor;
The robot is configured to photograph the reference workpiece from multiple angles, grasp point markers being affixed to the surface of the reference workpiece;
The processor is configured to acquire the images captured by the robot, reconstruct a three-dimensional model of the reference workpiece from them, and generate a workpiece grasping template that includes the graspable positions on the reference workpiece indicated by the grasp point markers.
Further, the robot comprises a drag-teaching system, a robot end, and a depth camera mounted on the robot end;
The drag-teaching system performs teaching by dragging the robot end under force control;
The robot end adjusts the pose of the depth camera through force-controlled dragging, so as to change the camera's viewing angle and capture all grasp point markers;
The depth camera captures a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece at each viewing angle.
Further, the robot is also provided with a drag-teaching button and a camera acquisition button;
The robot end is specifically configured such that, when the drag-teaching button is pressed, the pose of the depth camera is adjusted by force-controlled dragging to change the viewing angle and capture all grasp point markers;
The depth camera is specifically configured such that, when the camera acquisition button is pressed, it captures the single-frame two-dimensional image and single-frame three-dimensional image of the reference workpiece at the current viewing angle.
Further, the processor comprises an image acquisition module, an image registration module, an image recognition module, an image merging module and a grasping template generation module;
The image acquisition module acquires the two-dimensional and three-dimensional images;
The image registration module registers the three-dimensional images in the robot base coordinate system to obtain three-dimensional model data of the reference workpiece in that coordinate system;
The image recognition module identifies the grasp point markers from the two-dimensional and three-dimensional images to obtain marker point coordinate data in the robot base coordinate system;
The image merging module merges the workpiece three-dimensional model data and the marker point coordinate data;
The grasping template generation module re-establishes a workpiece coordinate system and transforms the merged data into it, obtaining the workpiece grasping template.
Further, the grasp point markers comprise at least one pair of clamping point patterns indicating jaw clamping points, or at least one suction point pattern indicating a sucker suction point, the clamping point pattern and the suction point pattern being different patterns.
Further, each grasp point marker comprises an adhesive layer and a printed layer stacked in sequence;
The adhesive layer attaches the printed layer to the surface of the reference workpiece;
The printed layer bears the clamping point pattern or the suction point pattern.
The beneficial effects of the present invention are as follows. In the provided method and system, a reference workpiece whose surface bears grasp point markers is photographed from multiple angles, and a three-dimensional model of the reference workpiece is reconstructed from the captured images, so that a workpiece grasping template containing graspable-position information is generated in advance. Later, when grasping identical workpieces without markers on a normal production line, only the pose of the workpiece needs to be recognized; the grasp point positions can then be determined from the template, enabling accurate grasping of the workpiece. This solves the problems of the heavy workload of importing models and the difficulty of specifying grasp points when grasping workpieces, and improves the deployment speed of robotic three-dimensional vision grasping systems.
Brief description of the drawings
Exemplary embodiments of the present invention are described in detail below with reference to the drawings, so that the above and other features and advantages of the invention become clearer to those skilled in the art. In the drawings:
Fig. 1 is a flow diagram of the method for generating a workpiece grasping template provided by an embodiment of the present invention;
Fig. 2 is a detailed flow diagram of the method for generating a workpiece grasping template provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of the system for generating a workpiece grasping template provided by an embodiment of the present invention;
Fig. 4 is a diagram of dragging the robot end according to an embodiment of the present invention;
Fig. 5 is an enlarged diagram of region A in Fig. 3;
Fig. 6 is a diagram of suctioning the reference workpiece with a sucker according to an embodiment of the present invention;
Fig. 7 is a diagram of clamping the reference workpiece with a jaw according to an embodiment of the present invention;
Fig. 8 is a structural diagram of a grasp point marker provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical scheme of the present invention is further illustrated below with reference to the drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended only to explain the invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention rather than the entire structure.
Fig. 1 is a flow diagram of the method for generating a workpiece grasping template provided by an embodiment of the present invention. The method is applicable to three-dimensional reconstruction of graspable workpieces and can be executed by a system for generating workpiece grasping templates. As shown in Fig. 1, the method comprises:
Step 110: acquire images of a reference workpiece captured by the robot from multiple angles.
Here, grasp point markers are affixed to the surface of the reference workpiece. In general, grasping a workpiece may involve a clamping operation by a jaw end effector or a suction operation by a sucker end effector; accordingly, the grasp point markers may include clamping point markers (at least one pair) and suction point markers (at least one). The reference workpiece may bear one pair of clamping point markers or one suction point marker; if it has several graspable points, a corresponding marker may be affixed at each graspable position.
In this embodiment, the robot may be a six-degree-of-freedom articulated robot composed of links connected in series. The joints between the links are driven and controlled by actuators; the relative motion of the joints moves the links so that the robot reaches a specific position and realizes the required pose. By adjusting the pose of the robot, the viewing angle onto the reference workpiece is changed, so that the reference workpiece is photographed from multiple angles and the corresponding images are acquired. The images may be two-dimensional images and/or three-dimensional images.
In addition, for subsequent fast and accurate grasping, at least one pair of clamping point markers or one suction point marker should be captured once the multi-angle photography is complete.
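The acquisition workflow above can be sketched as a small data structure that accumulates one pose-tagged frame pair per viewpoint. This is a hypothetical illustration only: the robot and camera interface names (`end_effector_pose`, `grab_2d`, `grab_3d`) are assumptions, since the patent does not specify an API.

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    """One acquisition: camera pose in the robot base frame plus a 2-D and a 3-D frame."""
    camera_pose: list        # 4x4 homogeneous transform, row-major
    image_2d: object         # single-frame 2-D image (e.g. an RGB array)
    image_3d: object         # single-frame 3-D image (e.g. a depth map / point cloud)

@dataclass
class CaptureSession:
    """Accumulates shots while the operator drags the end effector between viewpoints."""
    shots: list = field(default_factory=list)

    def acquire(self, robot, camera):
        # Record the pose reached by drag teaching, then trigger both camera modes.
        pose = robot.end_effector_pose()            # hypothetical robot API
        self.shots.append(Shot(pose, camera.grab_2d(), camera.grab_3d()))
```

Each `Shot` keeps the recorded camera pose alongside the frames, which is what the later registration step needs.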
Step 120: reconstruct a three-dimensional model of the reference workpiece from the images and generate the workpiece grasping template.
The template includes the graspable positions on the reference workpiece indicated by the grasp point markers. The reference workpiece in this embodiment is a qualified sample workpiece. Unlike ordinary workpieces of the same kind, its surface bears grasp point markers, while ordinary workpieces do not. The markers only indicate the graspable positions on the reference workpiece; they make it easy to recognize those positions in the acquired images and thereby determine where the reference workpiece can be grasped.
In this embodiment, the three-dimensional model of the reference workpiece may be reconstructed by stitching multiple two-dimensional images taken from different viewing angles, or by registering multiple three-dimensional images taken from different viewing angles, so that the reconstructed model reflects the position of the workpiece contour in the base coordinate system.
With the generation method provided by this embodiment, a reference workpiece whose surface bears grasp point markers is photographed from multiple angles, and its three-dimensional model is reconstructed from the captured images, generating in advance a workpiece grasping template containing graspable-position information. Later, when grasping identical workpieces without markers on a normal production line, only the pose of the workpiece needs to be recognized; the grasp point positions can then be determined from the template, enabling accurate grasping. This solves the problems of the heavy workload of importing models and the difficulty of specifying grasp points, and improves the deployment speed of robotic three-dimensional vision grasping systems.
Optionally, in the above scheme, acquiring the images of the reference workpiece captured from multiple angles comprises:
Using force-controlled dragging, adjusting the pose of the depth camera on the robot end by dragging the end, so as to change the viewing angle and capture all grasp point markers;
Capturing a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece with the depth camera at each viewing angle.
In addition, reconstructing the three-dimensional model of the reference workpiece and generating the template comprises:
Registering the three-dimensional images in the robot base coordinate system to obtain three-dimensional model data of the reference workpiece in that coordinate system;
Identifying the grasp point markers from the two-dimensional and three-dimensional images to obtain marker point coordinate data in the robot base coordinate system;
Merging the model data and the marker coordinate data, and saving the result as the workpiece grasping template.
Correspondingly, as shown in Fig. 2, the method for generating a workpiece grasping template may specifically comprise:
Step 111: using force-controlled dragging, adjust the pose of the depth camera on the robot end by dragging the end, so as to change the viewing angle and capture all grasp point markers.
For example, the robot may be provided with a drag-teaching button. After the button is pressed, the drag-teaching system is started, and the robot end can be dragged under force control to change the robot's pose and hence that of the depth camera. Moreover, during drag teaching, the robot's motion trajectory and poses can be recorded in real time.
Because the camera pose is adjusted by dragging the robot end directly, the operator can easily align the viewing angle with the pose of the reference workpiece observed on site, making it convenient to capture all grasp point markers.
Step 112: capture a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece with the depth camera at each viewing angle.
For example, the depth camera may have switchable two-dimensional/three-dimensional shooting modes, and the robot may be provided with a camera acquisition button: a single click captures a single-frame three-dimensional image, and a double click captures both a single-frame two-dimensional image and a single-frame three-dimensional image. Thus, after dragging the robot end to bring the depth camera to a suitable pose, double-clicking the acquisition button captures both images of the reference workpiece at the same viewing angle.
Step 121: register the three-dimensional images in the robot base coordinate system to obtain three-dimensional model data of the reference workpiece in that coordinate system.
For example, the base of the robot may serve as the origin of the robot base coordinate system. The three-dimensional images can first be coarsely registered based on the pose of the depth camera in the base coordinate system, and then finely registered with the iterative closest point (ICP) algorithm. The reconstructed model must live in a single robot base coordinate system, while the point cloud corresponding to each three-dimensional frame is expressed in the camera's local coordinate system, and each camera pose (i.e. each frame) corresponds to a different local system. Therefore, the camera pose in the base coordinate system must first be determined from the robot poses recorded during drag teaching; the depth information of each frame is then converted into coordinates of the reference workpiece in the base coordinate system.
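The coarse-then-fine registration just described can be sketched with NumPy alone: the recorded camera pose moves each cloud into the base frame (coarse), and a minimal ICP loop refines the alignment (fine). This is an illustrative sketch, not the patent's implementation; the function names and the brute-force nearest-neighbour search are assumptions, and a production system would use an optimized ICP from a point-cloud library.

```python
import numpy as np

def to_base_frame(points_cam, T_base_cam):
    """Coarse registration: move a camera-frame point cloud (N x 3) into the
    robot base frame using the recorded camera pose (4 x 4 homogeneous)."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_base_cam @ homog.T).T[:, :3]

def icp_refine(source, target, iters=30):
    """Fine registration: a minimal iterative-closest-point loop that refines
    the alignment of `source` onto `target` after coarse registration."""
    src = source.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small demo clouds).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Best rigid transform for these correspondences (Kabsch / SVD method).
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (R @ (src - mu_s).T).T + mu_t
    return src
```

Given a good coarse alignment from the recorded camera pose, the ICP correspondences are mostly correct from the first iteration, which is why the patent applies coarse registration before fine registration.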
Step 122: identify the grasp point markers from the two-dimensional and three-dimensional images to obtain marker point coordinate data in the robot base coordinate system.
For example, a grasp point marker may first be recognized in the single-frame two-dimensional image in which it appears; the single-frame three-dimensional image taken at the same viewing angle then yields the coordinates of the corresponding position in the robot base coordinate system, giving the marker point coordinate data of the grasp point marker.
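Going from a 2-D detection to base-frame coordinates amounts to a pinhole back-projection using the depth at that pixel, followed by the camera-to-base transform. A minimal sketch, assuming a standard intrinsic matrix K (the patent does not specify the camera model, so K and the function name are illustrative):

```python
import numpy as np

def marker_to_base(pixel, depth, K, T_base_cam):
    """Back-project a marker detected at `pixel` (u, v) in the 2-D image into
    the robot base frame, using the depth value from the 3-D frame at that
    pixel, the camera intrinsics K (3 x 3) and the camera pose T_base_cam (4 x 4)."""
    u, v = pixel
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pinhole back-projection into the camera frame, then into the base frame.
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (T_base_cam @ p_cam)[:3]
```

Because the 2-D and 3-D frames are taken at the same viewing angle, the pixel found in the 2-D image indexes directly into the depth data of the 3-D image.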
Step 123: merge the workpiece three-dimensional model data and the marker point coordinate data.
In this embodiment, the acquired two-dimensional and three-dimensional images may be uploaded to a server, which computes the workpiece three-dimensional model data and the marker point coordinate data, thereby speeding up image data processing and reducing the load on the processor.
Step 124: re-establish a workpiece coordinate system and transform the merged model data and marker coordinate data into it, obtaining the workpiece grasping template.
For example, the workpiece coordinate system may take the geometric center of the workpiece, or any point on the workpiece carrier platform, as its origin; the merged workpiece three-dimensional model data and marker point coordinate data are transformed into it by a three-dimensional translation and a three-dimensional rotation.
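The translation-plus-rotation change of coordinates in this step can be illustrated as follows. Placing the origin at the geometric centre of the model and keeping the base axes are illustrative choices only; the patent allows any origin on the workpiece or carrier platform, and the function and key names are assumptions.

```python
import numpy as np

def to_workpiece_frame(points_base, origin, R):
    """Express base-frame points (N x 3) in a workpiece coordinate system with
    the given origin (in base coordinates) and rotation matrix R."""
    return (R @ (points_base - origin).T).T

def build_template(model_pts, marker_pts):
    """Merge model and marker data, then shift both into a workpiece frame
    centred on the model. Returns the template as a dict."""
    origin = model_pts.mean(axis=0)          # geometric centre as origin
    R = np.eye(3)                            # keep base axes for simplicity
    return {
        "model": to_workpiece_frame(model_pts, origin, R),
        "grasp_points": to_workpiece_frame(marker_pts, origin, R),
    }
```

Storing the template in a workpiece-centred frame is what later lets the grasp points be recovered from the recognized pose of an unmarked workpiece alone.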
With the generation method provided by this embodiment, adjusting the pose of the depth camera by dragging the robot end lets the operator flexibly align the viewing angle with the observed pose of the reference workpiece, which makes it convenient to capture all grasp point markers. From the two-dimensional and three-dimensional images taken at multiple angles, the workpiece three-dimensional model data and the marker point coordinate data are computed separately and then merged into the workpiece grasping template. Later, when grasping identical workpieces without markers on a normal production line, only the pose of the workpiece needs to be recognized; the grasp point positions can then be determined from the template, enabling accurate grasping and improving the deployment speed of robotic three-dimensional vision grasping systems.
An embodiment of the invention also provides a system for generating a workpiece grasping template, used to execute the generation method provided by the embodiments of the invention. Fig. 3 is a structural diagram of the system. As shown in Fig. 3, the system comprises a robot 10, a reference workpiece 20 and a processor (not shown in the figure; it may be integrated in the robot 10 or placed in a remote terminal).
The robot 10 photographs the reference workpiece 20 from multiple angles; grasp point markers 21 are affixed to the surface of the reference workpiece 20, which may be placed on a workpiece carrier platform 30.
The processor acquires the images of the reference workpiece 20 captured by the robot 10, reconstructs a three-dimensional model of the reference workpiece 20 from them, and generates the workpiece grasping template, which includes the graspable positions on the reference workpiece 20 indicated by the grasp point markers.
Optionally, the robot 10 may comprise a drag-teaching system, a robot end 11 and a depth camera 12 mounted on the robot end 11;
The drag-teaching system performs teaching by dragging the robot end 11 under force control;
The robot end 11 adjusts the pose of the depth camera 12 through force-controlled dragging, so as to change the camera's viewing angle and capture all grasp point markers 21 (see Fig. 3 and Fig. 4);
The depth camera 12 captures a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece 20 at each viewing angle.
Optionally, as shown in Fig. 5, the robot is also provided with a drag-teaching button 13 and a camera acquisition button 14; for example, both buttons are set on the robot end 11.
Correspondingly, when the drag-teaching button 13 is pressed, the robot end 11 adjusts the pose of the depth camera 12 through force-controlled dragging, changing the viewing angle and capturing all grasp point markers 21;
When the camera acquisition button 14 is pressed, the depth camera 12 captures the single-frame two-dimensional image and single-frame three-dimensional image of the reference workpiece 20 at the current viewing angle.
Optionally, the processor may comprise an image acquisition module, an image registration module, an image recognition module, an image merging module and a grasping template generation module.
The image acquisition module acquires the two-dimensional and three-dimensional images; the image registration module registers the three-dimensional images in the robot base coordinate system to obtain three-dimensional model data of the reference workpiece in that coordinate system; the image recognition module identifies the grasp point markers from the two-dimensional and three-dimensional images to obtain marker point coordinate data in the robot base coordinate system; the image merging module merges the model data and the marker coordinate data; and the grasping template generation module re-establishes a workpiece coordinate system, transforms the merged data into it, and obtains the workpiece grasping template.
Optionally, the grasp point markers comprise at least one pair of clamping point patterns indicating jaw clamping points, or at least one suction point pattern indicating a sucker suction point, the clamping point pattern and the suction point pattern being different patterns.
For example, the suction point pattern may be a circle and the clamping point pattern a diamond; they may also be any other regular figures that are easy to recognize, and this embodiment places no restriction on them.
The grasp point markers may thus include suction point markers and clamping point markers. As shown in Fig. 6, for a reference workpiece 20 that can be picked up by suction, a suction point marker 22 is affixed to its surface; the end effector of the robot 10 is provided with a sucker 15, and vertically aligning the sucker 15 with the suction point marker 22 achieves accurate suction of the reference workpiece 20. As shown in Fig. 7, for a reference workpiece 20 that can be clamped, clamping point markers 23 are affixed to its surface; the end effector of the robot 10 is provided with a jaw 16, and horizontally aligning the jaw 16 with the clamping point markers 23 achieves accurate clamping of the reference workpiece 20.
Correspondingly, after the different grasp point markers are recognized and their coordinates determined, the different marker types can be distinguished with different flags when the marker point coordinate data are saved.
In addition, the clamping point pattern and the suction point pattern may also be identical; in that case the grasp point markers only indicate the graspable positions on the reference workpiece, and the different marker types are again distinguished with different flags when the marker point coordinate data are subsequently saved.
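Saving marker coordinates with type flags, as described above, might look like the following sketch. The flag values and the pairing convention for clamping points are assumptions for illustration; the patent only states that different marker types are distinguished with different flags.

```python
from dataclasses import dataclass

# Hypothetical flags for the two marker types.
CLAMP, SUCTION = "clamp", "suction"

@dataclass
class GraspMarker:
    kind: str                 # CLAMP or SUCTION
    xyz: tuple                # coordinates in the workpiece coordinate system

def clamp_pairs(markers):
    """Clamping points come in pairs (one per jaw finger); group them two by
    two in the order they were saved."""
    pts = [m for m in markers if m.kind == CLAMP]
    return list(zip(pts[0::2], pts[1::2]))
```

Keeping the flag next to each coordinate lets the grasping stage choose the matching end effector (jaw or sucker) directly from the template.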
Optionally, as shown in Fig. 8, a grasp point marker may comprise an adhesive layer 201 and a printed layer 202 stacked in sequence. The adhesive layer 201 attaches the printed layer 202 to the surface of the reference workpiece, and the printed layer 202 bears the clamping point pattern or the suction point pattern. The adhesive layer 201 may be a self-adhesive layer, and the printed layer 202 a plastic printed layer.
The system for generating a workpiece grasping template provided by this embodiment belongs to the same inventive concept as the generation method provided by any embodiment of the invention, can execute that method, and has the corresponding functions and beneficial effects. For technical details not described in this embodiment, reference may be made to the generation method provided by any embodiment of the invention.
Note that the above are only preferred embodiments of the invention and the technical principles applied. Those skilled in the art will understand that the invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, combinations and substitutions can be made without departing from the protection scope of the invention. Therefore, although the invention has been described in further detail through the above embodiments, it is not limited to them and may include many other equivalent embodiments without departing from the inventive concept; the scope of the invention is determined by the appended claims.
Claims (10)
1. A method for generating a workpiece grasping template, characterized by comprising:
acquiring images of a reference workpiece captured by a robot from multiple angles, wherein grasp point markers are affixed to the surface of the reference workpiece;
reconstructing a three-dimensional model of the reference workpiece from the images to generate a workpiece grasping template, wherein the workpiece grasping template includes the graspable positions on the reference workpiece indicated by the grasp point markers.
2. The method for generating a workpiece grabbing template according to claim 1, wherein acquiring the images of the reference workpiece captured from multiple angles comprises:
using robot force-control drive technology, adjusting the pose of a depth camera mounted on the robot end by dragging the robot end, so as to change the shooting angle of the depth camera and capture all grab-point labels;
capturing, at each shooting angle, a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece with the depth camera.
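Later claims rely on the pose of the depth camera in the robot base coordinate system at each shooting angle. The patent does not prescribe how this pose is obtained; a common approach, shown here as a minimal numpy sketch with hypothetical example values, is to chain the robot flange pose (from forward kinematics) with a fixed flange-to-camera transform from a prior eye-in-hand calibration:

```python
import numpy as np

def pose_to_matrix(t, rpy):
    """Build a 4x4 homogeneous transform from translation t and roll/pitch/yaw (rad)."""
    cr, sr = np.cos(rpy[0]), np.sin(rpy[0])
    cp, sp = np.cos(rpy[1]), np.sin(rpy[1])
    cy, sy = np.cos(rpy[2]), np.sin(rpy[2])
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of the robot flange in the base frame at one shooting angle (example values).
T_base_flange = pose_to_matrix([0.4, 0.1, 0.5], [0.0, np.pi, 0.0])
# Fixed flange-to-camera transform from a prior hand-eye calibration (example values).
T_flange_cam = pose_to_matrix([0.0, 0.0, 0.08], [0.0, 0.0, 0.0])
# Camera pose in the base frame: chain the two transforms.
T_base_cam = T_base_flange @ T_flange_cam
```

With the flange pointing downward (pitch = pi) and the camera 8 cm along the flange z-axis, the camera origin lands at (0.4, 0.1, 0.42) in the base frame.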
3. The method for generating a workpiece grabbing template according to claim 2, wherein reconstructing the three-dimensional model of the reference workpiece from the images and generating the workpiece grabbing template comprises:
registering the three-dimensional images in the robot base coordinate system to obtain workpiece three-dimensional model data of the reference workpiece in the robot base coordinate system;
identifying the grab-point labels based on the two-dimensional images and the three-dimensional images to obtain label point coordinate data of the grab-point labels in the robot base coordinate system;
merging the workpiece three-dimensional model data and the label point coordinate data;
redefining a workpiece coordinate system, and converting the merged workpiece three-dimensional model data and label point coordinate data into the workpiece coordinate system to obtain the workpiece grabbing template.
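Claim 3 turns a label detected in the 2D image into a point in the robot base frame, then re-expresses everything in a redefined workpiece frame. The patent leaves the mechanics unspecified; a minimal sketch, assuming a pinhole depth camera with hypothetical intrinsics and a known camera-to-base transform, is:

```python
import numpy as np

# Pinhole intrinsics of the depth camera (example values; real ones come from calibration).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def backproject(u, v, depth):
    """Lift a labelled pixel (u, v) with its depth (m) into homogeneous camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth, 1.0])

# Camera pose in the robot base frame for this view (example: pure translation).
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.4, 0.1, 0.42]

# A grab-point label detected at the principal point (320, 240) with 0.30 m depth
# lies on the optical axis, 0.30 m in front of the camera.
p_cam = backproject(320, 240, 0.30)
p_base = T_base_cam @ p_cam          # label point in the robot base frame

# Redefine a workpiece frame at this label point and express the data in it:
# the merged model and label coordinates are multiplied by the inverse transform.
T_base_workpiece = np.eye(4)
T_base_workpiece[:3, 3] = p_base[:3]
p_workpiece = np.linalg.inv(T_base_workpiece) @ p_base
```

In this toy case the label point maps to the origin of the new workpiece frame; in practice every model point and every label point in the merged data would be transformed the same way.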
4. The method for generating a workpiece grabbing template according to claim 3, wherein registering the three-dimensional images in the robot base coordinate system comprises:
performing coarse registration of the three-dimensional images based on the pose of the depth camera in the robot base coordinate system;
performing fine registration of the coarsely registered three-dimensional images based on an iterative closest point algorithm.
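The fine-registration step names the iterative closest point (ICP) algorithm but gives no implementation. A minimal point-to-point ICP sketch in numpy (brute-force nearest neighbours plus a Kabsch rigid fit; a library such as Open3D would be used in practice) looks like this:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping point set src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: nearest-neighbour matching, then Kabsch, repeated."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for a small sketch; use a KD-tree in practice).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Synthetic check: a regular grid cloud with a small residual offset (as left over
# after coarse registration from the camera pose) converges back onto the target.
g = np.arange(5) * 0.1
dst = np.array([[x, y, z] for x in g for y in g for z in g])   # 125-point grid
src = dst + np.array([0.01, -0.02, 0.005])                     # residual misalignment
aligned = icp(src, dst)
```

Because the residual offset is well below half the 0.1 m grid spacing, every nearest-neighbour match is correct and the Kabsch step recovers the offset in a single iteration; coarse registration serves exactly this purpose of bringing ICP into its convergence basin.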
5. A system for generating a workpiece grabbing template, comprising a robot, a reference workpiece and a processor;
the robot is configured to capture the reference workpiece from multiple angles, grab-point labels being affixed to the surface of the reference workpiece;
the processor is configured to acquire the images of the reference workpiece captured by the robot from multiple angles, reconstruct a three-dimensional model of the reference workpiece from the images, and generate a workpiece grabbing template, wherein the workpiece grabbing template includes grab points indicating graspable positions on the reference workpiece.
6. The system for generating a workpiece grabbing template according to claim 5, wherein the robot comprises a drag-teaching system, a robot end, and a depth camera mounted on the robot end;
the drag-teaching system is configured to perform teaching by dragging the robot end using robot force-control drive technology;
the robot end is configured to adjust the pose of the depth camera through force-controlled dragging, so as to change the shooting angle of the depth camera and capture all grab-point labels;
the depth camera is configured to capture, at each shooting angle, a single-frame two-dimensional image and a single-frame three-dimensional image of the reference workpiece.
7. The system for generating a workpiece grabbing template according to claim 6, wherein the robot is further provided with a drag-teaching button and a camera-acquisition button;
the robot end is specifically configured, when the drag-teaching button is pressed, to adjust the pose of the depth camera through force-controlled dragging so as to change the shooting angle of the depth camera and capture all grab-point labels;
the depth camera is specifically configured, when the camera-acquisition button is pressed, to capture at each shooting angle the single-frame two-dimensional image and the single-frame three-dimensional image of the reference workpiece.
8. The system for generating a workpiece grabbing template according to claim 6, wherein the processor comprises an image acquisition module, an image registration module, an image recognition module, an image merging module and a grabbing-template generation module;
the image acquisition module is configured to acquire the two-dimensional images and the three-dimensional images;
the image registration module is configured to register the three-dimensional images in the robot base coordinate system to obtain workpiece three-dimensional model data of the reference workpiece in the robot base coordinate system;
the image recognition module is configured to identify the grab-point labels based on the two-dimensional images and the three-dimensional images to obtain label point coordinate data of the grab-point labels in the robot base coordinate system;
the image merging module is configured to merge the workpiece three-dimensional model data and the label point coordinate data;
the grabbing-template generation module is configured to redefine a workpiece coordinate system and convert the merged workpiece three-dimensional model data and label point coordinate data into the workpiece coordinate system to obtain the workpiece grabbing template.
9. The system for generating a workpiece grabbing template according to claim 5, wherein the grab-point labels comprise at least one pair of clamping-point patterns for marking gripper clamping points, or at least one suction-point pattern for marking a sucker suction point, the clamping-point pattern and the suction-point pattern being different patterns.
10. The system for generating a workpiece grabbing template according to claim 9, wherein the grab-point label comprises an adhesive layer and a printing layer stacked in sequence;
the adhesive layer is configured to attach the printing layer to the surface of the reference workpiece;
the printing layer is printed with the clamping-point pattern or the suction-point pattern.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811041799.3A CN109146939A (en) | 2018-09-07 | 2018-09-07 | A kind of generation method and system of workpiece grabbing template |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109146939A | 2019-01-04
Family
ID=64827544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811041799.3A Pending CN109146939A (en) | 2018-09-07 | 2018-09-07 | A kind of generation method and system of workpiece grabbing template |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109146939A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110098859A1 (en) * | 2009-10-26 | 2011-04-28 | Kabushiki Kaisha Yaskawa Denki | Robot system and workpiece picking method |
CN106041937A (en) * | 2016-08-16 | 2016-10-26 | 河南埃尔森智能科技有限公司 | Control method of manipulator grabbing control system based on binocular stereoscopic vision |
CN106829469A (en) * | 2017-03-30 | 2017-06-13 | 武汉库柏特科技有限公司 | A kind of unordered grabbing device of robot based on double camera and method |
CN106845354A (en) * | 2016-12-23 | 2017-06-13 | 中国科学院自动化研究所 | Partial view base construction method, part positioning grasping means and device |
CN107009358A (en) * | 2017-04-13 | 2017-08-04 | 武汉库柏特科技有限公司 | A kind of unordered grabbing device of robot based on one camera and method |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110281231A (en) * | 2019-03-01 | 2019-09-27 | 浙江大学 | The mobile robot 3D vision grasping means of unmanned FDM increasing material manufacturing |
CN110136211A (en) * | 2019-04-18 | 2019-08-16 | 中国地质大学(武汉) | A kind of workpiece localization method and system based on active binocular vision technology |
CN110232710A (en) * | 2019-05-31 | 2019-09-13 | 深圳市皕像科技有限公司 | Article localization method, system and equipment based on three-dimensional camera |
CN110509300A (en) * | 2019-09-30 | 2019-11-29 | 河南埃尔森智能科技有限公司 | Stirrup processing feeding control system and control method based on 3D vision guidance |
CN110509300B (en) * | 2019-09-30 | 2024-04-09 | 河南埃尔森智能科技有限公司 | Steel hoop processing and feeding control system and control method based on three-dimensional visual guidance |
CN111724444A (en) * | 2020-06-16 | 2020-09-29 | 中国联合网络通信集团有限公司 | Method and device for determining grabbing point of target object and grabbing system |
CN111724444B (en) * | 2020-06-16 | 2023-08-22 | 中国联合网络通信集团有限公司 | Method, device and system for determining grabbing point of target object |
GB2603931A (en) * | 2021-02-19 | 2022-08-24 | Additive Manufacturing Tech Ltd | Method for handling an additively manufactured part |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109146939A (en) | A kind of generation method and system of workpiece grabbing template | |
CN109087343A (en) | A kind of generation method and system of workpiece grabbing template | |
CN108399639A (en) | Fast automatic crawl based on deep learning and arrangement method | |
CN109291048B (en) | Real-time online programming system and method for grinding and polishing industrial robot | |
CN108177143A (en) | A kind of robot localization grasping means and system based on laser vision guiding | |
CN110815213A (en) | Part identification and assembly method and device based on multi-dimensional feature fusion | |
CN105843166B (en) | A kind of special type multiple degrees of freedom automatic butt jointing device and its working method | |
CN108827154A (en) | A kind of robot is without teaching grasping means, device and computer readable storage medium | |
CN108994832A (en) | A kind of robot eye system and its self-calibrating method based on RGB-D camera | |
CN112926503B (en) | Automatic generation method of grabbing data set based on rectangular fitting | |
CN114912287A (en) | Robot autonomous grabbing simulation system and method based on target 6D pose estimation | |
CN112509063A (en) | Mechanical arm grabbing system and method based on edge feature matching | |
CN107009357A (en) | A kind of method that object is captured based on NAO robots | |
CN114882109A (en) | Robot grabbing detection method and system for sheltering and disordered scenes | |
CN111702772A (en) | Automatic upper surface guiding and gluing method and system | |
CN115512042A (en) | Network training and scene reconstruction method, device, machine, system and equipment | |
CN106886758B (en) | Insect identification device and method based on 3 d pose estimation | |
CN107363834A (en) | A kind of mechanical arm grasping means based on cognitive map | |
JP2019158691A (en) | Controller, robot, robot system, and method for recognizing object | |
CN112372641B (en) | Household service robot character grabbing method based on visual feedforward and visual feedback | |
CN114511690A (en) | 3D target detection data set acquisition device and labeling method | |
CN114055501A (en) | Robot grabbing system and control method thereof | |
CN115861780B (en) | Robot arm detection grabbing method based on YOLO-GGCNN | |
CN116749198A (en) | Binocular stereoscopic vision-based mechanical arm grabbing method | |
CN117037062A (en) | Target object grabbing method, system, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190104