CN108537841A - Robot pickup implementation method, device, and electronic device
- Publication number: CN108537841A (application CN201710123315.9A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T1/00—General purpose image data processing
        - G06T1/0014—Image feed-back for automatic industrial control, e.g. robot with camera
- B—PERFORMING OPERATIONS; TRANSPORTING
  - B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    - B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
      - B25J9/00—Programme-controlled manipulators
        - B25J9/16—Programme controls
          - B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
            - B25J9/1697—Vision controlled systems
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10004—Still image; Photographic image
          - G06T2207/10012—Stereo images
Abstract
The present invention provides a robot pickup implementation method, device, and electronic device, belonging to the field of machine vision. The robot pickup implementation method includes: receiving an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled; receiving shape data of the objects to be picked up; calculating, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up; determining grasp information of the graspable target according to its position and posture, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information. The technical solution of the present invention enables a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
Description
Technical field
The present invention relates to the field of machine vision, and in particular to a robot pickup implementation method, device, and electronic device.
Background art
In recent years, machine vision has developed rapidly and is gradually becoming an indispensable part of automation technology. To improve efficiency and save time and human resources, engineers have developed many robots that integrate the automatic picking, sorting, and packing of goods, such as three-bar parallel delta robots.
However, robots in the prior art can only automatically pick up objects with a large flat surface, solid objects, or regularly stacked objects; hollow objects, shell-shaped objects, and objects liable to entanglement cannot be picked up effectively.
Summary of the invention
The technical problem to be solved by the present invention is to provide a robot pickup implementation method, device, and electronic device that enable a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
In one aspect, a robot pickup implementation method is provided, including:
receiving an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled;
receiving shape data of the objects to be picked up;
calculating, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up;
determining grasp information of the graspable target according to the position and posture of the graspable target, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
Further, calculating, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up includes:
processing the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up;
traversing viewing angles above the container, generating a three-dimensional image of the point set for each viewing angle, slicing the three-dimensional image, and identifying candidate targets on the sliced images according to the shape data;
calculating the graspability of each candidate target, and determining candidate targets whose graspability exceeds a first threshold as graspable targets;
identifying the position and posture of each graspable target.
Further, processing the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up includes:
removing, from the original three-dimensional image, the image data outside the container, the image data of the container itself, and noise;
converting the remaining three-dimensional image into a three-dimensional point set.
Further, traversing viewing angles above the container, generating a three-dimensional image of the point set for each viewing angle, slicing the three-dimensional image, and identifying candidate targets on the sliced images according to the shape data includes:
setting a first viewing-angle range and a number of traversal cycles;
in each traversal cycle, generating a three-dimensional image for each viewing angle of that cycle, slicing the three-dimensional image, and identifying candidate targets on the sliced images;
comparing the candidate targets obtained in all traversal cycles and removing duplicates.
Further, in each traversal cycle, generating a three-dimensional image for each viewing angle of that cycle, slicing the three-dimensional image, and identifying candidate targets on the sliced images includes:
setting a starting viewing angle and a first viewing-angle step, and traversing the viewing angles within the first viewing-angle range according to the starting viewing angle and the first viewing-angle step;
converting the three-dimensional point set into a three-dimensional image at each viewing angle;
slicing the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
identifying the skeleton of an object on each sliced image;
judging whether the skeleton of the object conforms to the shape data of the objects to be picked up;
if the skeleton of the object conforms to the shape data, calculating the reliability of the object;
when the reliability of the object exceeds a second threshold, saving the object as a candidate target.
Further, calculating the graspability of each candidate target includes:
adding to the three-dimensional point set the container data and the data of objects to be picked up that could become entangled with the candidate target, thereby rebuilding the three-dimensional point set;
generating a three-dimensional image from the rebuilt point set under the viewing angle corresponding to the candidate target;
projecting the gripper of the robot into the three-dimensional image;
calculating the graspability of the candidate target according to the projected collision points, the number of collision points in the adjacent region, and the space margin.
Further, identifying the position and posture of each graspable target includes:
determining a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, where the second viewing-angle range is smaller than the first viewing-angle range and the second viewing-angle step is smaller than the first viewing-angle step;
traversing the viewing angles within the second viewing-angle range according to the second viewing-angle step, and converting the three-dimensional point set into a three-dimensional image at each viewing angle;
slicing the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
identifying the skeleton of the graspable target on the sliced images;
judging whether the skeleton of the graspable target conforms to the shape data of the objects to be picked up;
when the conformity between the skeleton of the graspable target and the shape data is highest, outputting the position and posture of the graspable target according to the corresponding sliced image.
Further, determining the grasp information of the graspable target according to its position and posture and sending the grasp information to the robot includes:
calculating six-degree-of-freedom information of the graspable target according to its position and posture, and sending the six-degree-of-freedom information of the graspable target to the robot.
An embodiment of the present invention further provides a robot pickup realization device, including:
a first receiving module, configured to receive an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled;
a second receiving module, configured to receive shape data of the objects to be picked up;
a processing module, configured to calculate, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up;
a grasp information calculation module, configured to determine grasp information of the graspable target according to its position and posture and send the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
Further, the processing module includes:
an image processing submodule, configured to process the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up;
a slicing submodule, configured to traverse viewing angles above the container, generate a three-dimensional image of the point set for each viewing angle, slice the three-dimensional image, and identify candidate targets on the sliced images according to the shape data;
a calculation submodule, configured to calculate the graspability of each candidate target and determine candidate targets whose graspability exceeds a first threshold as graspable targets;
an identification submodule, configured to identify the position and posture of each graspable target.
Further, the image processing submodule includes:
a removal unit, configured to remove, from the original three-dimensional image, the image data outside the container, the image data of the container itself, and noise;
a conversion unit, configured to convert the remaining three-dimensional image into a three-dimensional point set.
Further, the slicing submodule includes:
a setting unit, configured to set a first viewing-angle range and a number of traversal cycles;
a slicing unit, configured to, in each traversal cycle, generate a three-dimensional image for each viewing angle of that cycle, slice the three-dimensional image, and identify candidate targets on the sliced images;
a screening unit, configured to compare the candidate targets obtained in all traversal cycles and remove duplicates.
Further, the slicing unit includes:
an initialization subunit, configured to set a starting viewing angle and a first viewing-angle step;
a traversal subunit, configured to traverse the viewing angles within the first viewing-angle range according to the starting viewing angle and the first viewing-angle step;
a conversion subunit, configured to convert the three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing subunit, configured to slice the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
an identification subunit, configured to identify the skeleton of an object on each sliced image;
a judgment subunit, configured to judge whether the skeleton of the object conforms to the shape data of the objects to be picked up;
a calculation subunit, configured to calculate the reliability of the object if its skeleton conforms to the shape data;
a saving subunit, configured to save the object as a candidate target when its reliability exceeds a second threshold.
Further, the calculation submodule includes:
a rebuilding unit, configured to add to the three-dimensional point set the container data and the data of objects to be picked up that could become entangled with the candidate target, thereby rebuilding the three-dimensional point set;
a generation unit, configured to generate a three-dimensional image from the rebuilt point set under the viewing angle corresponding to the candidate target;
a projection unit, configured to project the gripper of the robot into the three-dimensional image;
a calculation unit, configured to calculate the graspability of the candidate target according to the projected collision points, the number of collision points in the adjacent region, and the space margin.
Further, the identification submodule includes:
a determination unit, configured to determine a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, where the second viewing-angle range is smaller than the first viewing-angle range and the second viewing-angle step is smaller than the first viewing-angle step;
a traversal unit, configured to traverse the viewing angles within the second viewing-angle range according to the second viewing-angle step;
a conversion unit, configured to convert the three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing unit, configured to slice the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
a recognition unit, configured to identify the skeleton of the graspable target on the sliced images;
a judgment unit, configured to judge whether the skeleton of the graspable target conforms to the shape data of the objects to be picked up;
a calculation unit, configured to output the position and posture of the graspable target according to the corresponding sliced image when the conformity between the skeleton of the graspable target and the shape data is highest.
Further, the grasp information calculation module is specifically configured to calculate six-degree-of-freedom information of the graspable target according to its position and posture, and send the six-degree-of-freedom information of the graspable target to the robot.
An embodiment of the present invention further provides an electronic device for realizing robot pickup, including:
a processor; and
a memory storing computer program instructions,
wherein, when the computer program instructions are run by the processor, the processor performs the following steps:
receiving an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled;
receiving shape data of the objects to be picked up;
calculating, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up;
determining grasp information of the graspable target according to its position and posture, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
Embodiments of the present invention have the following beneficial effects:
In the above solutions, the position and posture of objects piled at random are estimated from the acquired three-dimensional image and the shape data of the objects to be picked up. The position and posture of an object to be picked up can thus be estimated more accurately, enabling a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
Description of the drawings
Fig. 1 is a schematic flowchart of a robot pickup implementation method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of calculating the position and posture of a graspable target among multiple objects to be picked up according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of obtaining a three-dimensional point set of multiple objects to be picked up according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of identifying candidate targets on sliced images according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of identifying candidate targets on the sliced images within one traversal cycle according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of calculating the graspability of each candidate target according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of identifying the position and posture of each graspable target according to an embodiment of the present invention;
Fig. 8 is a structural block diagram of a robot pickup realization device according to an embodiment of the present invention;
Fig. 9 is a structural block diagram of the processing module according to an embodiment of the present invention;
Fig. 10 is a structural block diagram of the image processing submodule according to an embodiment of the present invention;
Fig. 11 is a structural block diagram of the slicing submodule according to an embodiment of the present invention;
Fig. 12 is a structural block diagram of the slicing unit according to an embodiment of the present invention;
Fig. 13 is a structural block diagram of the calculation submodule according to an embodiment of the present invention;
Fig. 14 is a structural block diagram of the identification submodule according to an embodiment of the present invention;
Fig. 15 is a structural block diagram of an electronic device for realizing robot pickup according to an embodiment of the present invention;
Fig. 16 is a schematic flowchart of a robot pickup implementation method according to a specific embodiment of the present invention;
Fig. 17 is a schematic diagram of an application scenario according to an embodiment of the present invention;
Fig. 18 and Fig. 19 are schematic diagrams of viewing angles according to an embodiment of the present invention;
Fig. 20 and Fig. 21 are schematic diagrams of grippers at the robot end according to an embodiment of the present invention;
Fig. 22 is a schematic diagram of a grasp without entanglement according to an embodiment of the present invention;
Fig. 23 is a schematic diagram of a grasp with entanglement according to an embodiment of the present invention;
Fig. 24 and Fig. 25 are schematic diagrams of building partial data of objects to be picked up for graspability calculation according to an embodiment of the present invention.
Detailed description of the embodiments
To make the technical problems to be solved, the technical solutions, and the advantages of the embodiments of the present invention clearer, a detailed description is given below with reference to the drawings and specific embodiments.
Embodiments of the present invention provide a robot pickup implementation method, device, and electronic device that enable a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
Embodiment one
This embodiment provides a robot pickup implementation method. As shown in Fig. 1, the method includes:
Step 101: receiving an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled;
Step 102: receiving shape data of the objects to be picked up;
Step 103: calculating, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up;
Step 104: determining grasp information of the graspable target according to its position and posture, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
In this embodiment, the position and posture of objects piled at random are estimated from the acquired three-dimensional image and the shape data of the objects to be picked up. The position and posture of an object to be picked up can thus be estimated more accurately, enabling a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
Here, the position of a graspable target refers to its specific location in the container, which can be expressed as its three-dimensional coordinates in the container. The posture of a graspable target includes its orientation; for example, if the graspable target is a ring, its posture can be expressed as the angles between the plane of the ring and the x-, y-, and z-axes of the spatial coordinate system.
Once the robot knows the grasp information of a graspable target, it can pick up the target. Specifically, the grasp information may be six-degree-of-freedom information obtained from the position and posture of the graspable target. An object in space has six degrees of freedom: translation along the three orthogonal coordinate axes x, y, and z, and rotation about these three axes. Given the six-degree-of-freedom information of an object, the robot can grasp it accurately.
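For illustration only (the patent discloses no code), a minimal Python sketch of how such six-degree-of-freedom grasp information could be represented and handed to a robot controller, assuming the posture is expressed as roll, pitch, and yaw rotations about the x-, y-, and z-axes; all names and units here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GraspInfo:
    """Six-degree-of-freedom grasp information: translation plus rotation."""
    x: float      # translation along x (mm)
    y: float      # translation along y (mm)
    z: float      # translation along z (mm)
    roll: float   # rotation about x (degrees)
    pitch: float  # rotation about y (degrees)
    yaw: float    # rotation about z (degrees)

def to_message(g: GraspInfo) -> list:
    # Flatten to the 6-tuple a robot controller would typically expect.
    return [g.x, g.y, g.z, g.roll, g.pitch, g.yaw]

# Example: a ring lying flat near the container center, 120 mm deep.
grasp = GraspInfo(x=0.0, y=0.0, z=120.0, roll=0.0, pitch=0.0, yaw=45.0)
print(to_message(grasp))
```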
As an example, as shown in Fig. 2, step 103 includes:
Step 1031: processing the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up;
Step 1032: traversing viewing angles above the container, generating a three-dimensional image of the point set for each viewing angle, slicing the three-dimensional image, and identifying candidate targets on the sliced images according to the shape data;
Step 1033: calculating the graspability of each candidate target, and determining candidate targets whose graspability exceeds a first threshold as graspable targets;
Step 1034: identifying the position and posture of each graspable target.
As an example, as shown in Fig. 3, step 1031 includes:
Step 10311: removing, from the original three-dimensional image, the image data outside the container, the image data of the container itself, and noise;
Step 10312: converting the remaining three-dimensional image into a three-dimensional point set.
As an example, as shown in Fig. 4, step 1032 includes:
Step 10321: setting a first viewing-angle range and a number of traversal cycles;
Step 10322: in each traversal cycle, generating a three-dimensional image for each viewing angle of that cycle, slicing the three-dimensional image, and identifying candidate targets on the sliced images;
Step 10323: comparing the candidate targets obtained in all traversal cycles and removing duplicates.
As an example, as shown in Fig. 5, step 10322 includes:
Step 103221: setting a starting viewing angle and a first viewing-angle step, and traversing the viewing angles within the first viewing-angle range according to the starting viewing angle and the first viewing-angle step;
Step 103222: converting the three-dimensional point set into a three-dimensional image at each viewing angle;
Step 103223: slicing the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
Step 103224: identifying the skeleton of an object on each sliced image;
Step 103225: judging whether the skeleton of the object conforms to the shape data of the objects to be picked up;
Step 103226: if the skeleton of the object conforms to the shape data, calculating the reliability of the object;
Step 103227: when the reliability of the object exceeds a second threshold, saving the object as a candidate target.
As an example, as shown in Fig. 6, step 1033 includes:
Step 10331: adding to the three-dimensional point set the container data and the data of objects to be picked up that could become entangled with the candidate target, thereby rebuilding the three-dimensional point set;
Step 10332: generating a three-dimensional image from the rebuilt point set under the viewing angle corresponding to the candidate target;
Step 10333: projecting the gripper of the robot into the three-dimensional image;
Step 10334: calculating the graspability of the candidate target according to the projected collision points, the number of collision points in the adjacent region, and the space margin.
As an example, as shown in Fig. 7, step 1034 includes:
Step 10341: determining a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, where the second viewing-angle range is smaller than the first viewing-angle range and the second viewing-angle step is smaller than the first viewing-angle step;
Step 10342: traversing the viewing angles within the second viewing-angle range according to the second viewing-angle step, and converting the three-dimensional point set into a three-dimensional image at each viewing angle;
Step 10343: slicing the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
Step 10344: identifying the skeleton of the graspable target on the sliced images;
Step 10345: judging whether the skeleton of the graspable target conforms to the shape data of the objects to be picked up;
Step 10346: when the conformity between the skeleton of the graspable target and the shape data is highest, outputting the position and posture of the graspable target according to the corresponding sliced image.
Further, determining the grasp information of the graspable target according to its position and posture and sending the grasp information to the robot includes:
calculating six-degree-of-freedom information of the graspable target according to its position and posture, and sending the six-degree-of-freedom information of the graspable target to the robot.
Embodiment two
This embodiment provides a robot pickup realization device 20. As shown in Fig. 8, the device includes:
a first receiving module 21, configured to receive an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled;
a second receiving module 22, configured to receive shape data of the objects to be picked up;
a processing module 23, configured to calculate, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up;
a grasp information calculation module 24, configured to determine grasp information of the graspable target according to its position and posture and send the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
In this embodiment, the position and posture of objects piled at random are estimated from the acquired three-dimensional image and the shape data of the objects to be picked up. The position and posture of an object to be picked up can thus be estimated more accurately, enabling a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
As an example, as shown in Fig. 9, the processing module 23 includes:
an image processing submodule 231, configured to process the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up;
a slicing submodule 232, configured to traverse viewing angles above the container, generate a three-dimensional image of the point set for each viewing angle, slice the three-dimensional image, and identify candidate targets on the sliced images according to the shape data;
a calculation submodule 233, configured to calculate the graspability of each candidate target and determine candidate targets whose graspability exceeds a first threshold as graspable targets;
an identification submodule 234, configured to identify the position and posture of each graspable target.
As an example, as shown in Fig. 10, the image processing submodule 231 includes:
a removal unit 2311, configured to remove, from the original three-dimensional image, the image data outside the container, the image data of the container itself, and noise;
a conversion unit 2312, configured to convert the remaining three-dimensional image into a three-dimensional point set.
As an example, as shown in Fig. 11, the slicing submodule 232 includes:
a setting unit 2321, configured to set a first viewing-angle range and a number of traversal cycles;
a slicing unit 2322, configured to, in each traversal cycle, generate a three-dimensional image for each viewing angle of that cycle, slice the three-dimensional image, and identify candidate targets on the sliced images;
a screening unit 2323, configured to compare the candidate targets obtained in all traversal cycles and remove duplicates.
As an example, as shown in Fig. 12, the slicing unit 2322 includes:
an initialization subunit 23221, configured to set a starting viewing angle and a first viewing-angle step;
a traversal subunit 23222, configured to traverse the viewing angles within the first viewing-angle range according to the starting viewing angle and the first viewing-angle step;
a conversion subunit 23223, configured to convert the three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing subunit 23224, configured to slice the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
an identification subunit 23225, configured to identify the skeleton of an object on each sliced image;
a judgment subunit 23226, configured to judge whether the skeleton of the object conforms to the shape data of the objects to be picked up;
a calculation subunit 23227, configured to calculate the reliability of the object if its skeleton conforms to the shape data;
a saving subunit 23228, configured to save the object as a candidate target when its reliability exceeds a second threshold.
As an example, as shown in Fig. 13, the calculation submodule 233 includes:
a rebuilding unit 2331, configured to add to the three-dimensional point set the container data and the data of objects to be picked up that could become entangled with the candidate target, thereby rebuilding the three-dimensional point set;
a generation unit 2332, configured to generate a three-dimensional image from the rebuilt point set under the viewing angle corresponding to the candidate target;
a projection unit 2333, configured to project the gripper of the robot into the three-dimensional image;
a calculation unit 2334, configured to calculate the graspability of the candidate target according to the projected collision points, the number of collision points in the adjacent region, and the space margin.
As an example, as shown in Fig. 14, the identification submodule 234 includes:
a determination unit 2341, configured to determine a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, where the second viewing-angle range is smaller than the first viewing-angle range and the second viewing-angle step is smaller than the first viewing-angle step;
a traversal unit 2342, configured to traverse the viewing angles within the second viewing-angle range according to the second viewing-angle step;
a conversion unit 2343, configured to convert the three-dimensional point set into a three-dimensional image at each viewing angle;
a slicing unit 2344, configured to slice the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
a recognition unit 2345, configured to identify the skeleton of the graspable target on the sliced images;
a judgment unit 2346, configured to judge whether the skeleton of the graspable target conforms to the shape data of the objects to be picked up;
a calculation unit 2347, configured to output the position and posture of the graspable target according to the corresponding sliced image when the conformity between the skeleton of the graspable target and the shape data is highest.
Further, the grasp information calculation module 24 is specifically configured to calculate six-degree-of-freedom information of the graspable target according to its position and posture, and send the six-degree-of-freedom information of the graspable target to the robot.
Embodiment three
This embodiment provides an electronic device 30 for realizing robot pickup. As shown in Fig. 15, the device includes:
a processor 32; and
a memory 34 storing computer program instructions,
wherein, when the computer program instructions are run by the processor, the processor 32 performs the following steps:
receiving an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled;
receiving shape data of the objects to be picked up;
calculating, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up;
determining grasp information of the graspable target according to its position and posture, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
Further, as shown in Fig. 15, the electronic device for realizing robot pickup further includes a network interface 31, an input device 33, a hard disk 35, and a display device 36.
The above interfaces and devices can be interconnected by a bus architecture. The bus architecture may comprise any number of interconnected buses and bridges, linking together one or more central processing units (CPUs) represented by the processor 32 and one or more memories represented by the memory 34. The bus architecture may also link together various other circuits, such as peripheral devices, voltage regulators, and power management circuits. It will be appreciated that the bus architecture is used to realize connection and communication among these components. In addition to a data bus, the bus architecture includes a power bus, a control bus, and a status signal bus, all of which are well known in the art and therefore are not described in further detail herein.
The network interface 31 can be connected to a network (such as the Internet or a local area network) to obtain relevant data, for example the original three-dimensional image and the shape data of the objects to be picked up, which can be stored in the hard disk 35.
The input device 33 can receive various instructions entered by an operator and send them to the processor 32 for execution. The input device 33 may include a keyboard or a pointing device (for example a mouse, a trackball, a touch pad, or a touch screen).
The display device 36 can display the results obtained when the processor 32 executes instructions.
The memory 34 stores the programs and data necessary for running the operating system, as well as data such as the intermediate results of the processor 32's calculations.
It will be appreciated that the memory 34 in the embodiments of the present invention can be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory can be a random access memory (RAM), which serves as an external cache. The memory 34 of the devices and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 34 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 341 and application programs 342.
The operating system 341 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 342 include various application programs, such as a browser, for realizing various application services. A program implementing the method of an embodiment of the present invention may be included in the application programs 342.
The above processor 32, when calling and executing the application programs and data stored in the memory 34, specifically the programs or instructions stored in the application programs 342, can receive an original three-dimensional image, where the original three-dimensional image is obtained after a camera photographs a container in which multiple objects to be picked up are piled; receive shape data of the objects to be picked up; calculate, according to the original three-dimensional image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up; and determine grasp information of the graspable target according to its position and posture and send the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
The methods disclosed in the above embodiments of the present invention can be applied to the processor 32 or realized by the processor 32. The processor 32 may be an integrated circuit chip with signal processing capability. In the course of realization, each step of the above methods can be completed by an integrated logic circuit of hardware in the processor 32 or by instructions in the form of software. The above processor 32 can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can realize or execute each method, step, and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention can be embodied directly as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 34; the processor 32 reads the information in the memory 34 and completes the steps of the above methods in combination with its hardware.
It is understood that the embodiments described herein can be realized with hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware realization, the processing unit can be realized in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or a combination thereof.
For software realization, the techniques described herein can be realized by modules (such as processes or functions) that execute the functions described herein. The software code can be stored in a memory and executed by a processor. The memory can be realized in the processor or outside the processor.
Specifically, the processor 32 processes the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up; traverses viewing angles above the container, generates a three-dimensional image of the point set for each viewing angle, slices the three-dimensional image, and identifies candidate targets on the sliced images according to the shape data; calculates the graspability of each candidate target, and determines candidate targets whose graspability exceeds a first threshold as graspable targets; and identifies the position and posture of each graspable target.
Specifically, the processor 32 removes, from the original three-dimensional image, the image data outside the container, the image data of the container itself, and noise, and converts the remaining three-dimensional image into a three-dimensional point set.
Specifically, the processor 32 sets a first viewing-angle range and a number of traversal cycles; in each traversal cycle, generates a three-dimensional image for each viewing angle of that cycle, slices the three-dimensional image, and identifies candidate targets on the sliced images; and compares the candidate targets obtained in all traversal cycles, removing duplicates.
Specifically, the processor 32 sets a starting viewing angle and a first viewing-angle step, and traverses the viewing angles within the first viewing-angle range according to them; converts the three-dimensional point set into a three-dimensional image at each viewing angle; slices the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up; identifies the skeleton of an object on each sliced image; judges whether the skeleton conforms to the shape data of the objects to be picked up; if it does, calculates the reliability of the object; and when the reliability exceeds a second threshold, saves the object as a candidate target.
Specifically, the processor 32 adds to the three-dimensional point set the container data and the data of objects to be picked up that could become entangled with the candidate target, thereby rebuilding the three-dimensional point set; generates a three-dimensional image from the rebuilt point set under the viewing angle corresponding to the candidate target; projects the gripper of the robot into the three-dimensional image; and calculates the graspability of the candidate target according to the projected collision points, the number of collision points in the adjacent region, and the space margin.
Specifically, the processor 32 determines a second viewing-angle range and a second viewing-angle step corresponding to the graspable target, where the second viewing-angle range is smaller than the first viewing-angle range and the second viewing-angle step is smaller than the first viewing-angle step; traverses the viewing angles within the second viewing-angle range according to the second viewing-angle step, converting the three-dimensional point set into a three-dimensional image at each viewing angle; slices the three-dimensional image at a preset spacing along the direction from the viewpoint toward the objects to be picked up; identifies the skeleton of the graspable target on the sliced images; judges whether the skeleton conforms to the shape data of the objects to be picked up; and when the conformity between the skeleton and the shape data is highest, outputs the position and posture of the graspable target according to the corresponding sliced image.
Specifically, the processor 32 calculates six-degree-of-freedom information of the graspable target according to its position and posture, and sends the six-degree-of-freedom information of the graspable target to the robot.
In the above solutions, the position and posture of objects piled at random are estimated from the acquired three-dimensional image and the shape data of the objects to be picked up. The position and posture of an object to be picked up can thus be estimated more accurately, enabling a robot to automatically pick up hollow objects, shell-shaped objects, and objects liable to entanglement.
Embodiment four
In manufacturing and other fields, there is extensive demand for robots that sort arbitrarily piled objects, and the prevalence of machine vision provides an effective means for this. The robot pickup implementation method of this embodiment only needs to know a partial characteristic shape of the object, for example a circle in the case of a ring, to find graspable targets among arbitrarily piled objects together with their positions and postures, and to send this information to the robot, so that the robot can pick up the graspable targets according to the grasp information.
As shown in Fig. 17, multiple objects 5 of the same size are piled in a container 4; in this embodiment, the object 5 is a ring, taken as an example. A camera 3 can capture a three-dimensional image of the container 4. From the three-dimensional image captured by the camera 3, the technical solution of this embodiment can identify graspable targets in the container 4, so that a robot 1 grasps graspable targets in the container 4 with a gripper 2.
As shown in Fig. 16, the robot pickup implementation method of this embodiment specifically includes the following steps:
Step 401: receiving the original three-dimensional image captured by the camera.
The original three-dimensional image is obtained after the camera photographs the container in which multiple objects to be picked up are piled, and contains distance information.
Step 402: receiving the shape data of the objects to be picked up.
The shape data of the objects to be picked up includes their size and shape; according to the shape data, an object to be picked up can be identified in a three-dimensional or two-dimensional image.
Step 403: processing the original three-dimensional image to obtain a three-dimensional point set of the multiple objects to be picked up.
This step removes the data unrelated to the objects to be picked up from the original three-dimensional image and converts the points of the three-dimensional image into a sequence of three-dimensional points.
First, the three-dimensional data of the work surface supporting the container is identified and removed, which can be realized with a plane-modelling method. The three-dimensional data of the container itself is then identified and removed: the container's three-dimensional data can be identified through a known three-dimensional model of the container, or obtained through a container registration step. The noise in the three-dimensional image is also removed. Finally, the points remaining in the three-dimensional image are converted into a three-dimensional point set, i.e. a clean point set related only to the objects to be picked up.
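A minimal sketch of this preprocessing under stated assumptions: the 3D image is already an N x 3 NumPy point array, the work-surface height is known from calibration (standing in for the plane-modelling step), the container is removed with a known inner bounding box, and all thresholds are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def preprocess(points, surface_z, container_min, container_max,
               noise_radius=5.0, min_neighbors=5):
    """Return a clean point set containing only the objects to be picked up.

    points            -- (N, 3) array taken from the original 3D image
    surface_z         -- height of the work surface (known from calibration)
    container_min/max -- inner bounding box of the container model
    """
    # 1. Remove the work surface supporting the container.
    points = points[points[:, 2] > surface_z + 2.0]

    # 2. Keep only points strictly inside the container walls; this drops
    #    both the data outside the container and the container itself.
    inside = np.all((points > container_min) & (points < container_max), axis=1)
    points = points[inside]

    # 3. Remove noise: drop isolated points with too few neighbours.
    counts = cKDTree(points).query_ball_point(points, r=noise_radius,
                                              return_length=True)
    return points[np.asarray(counts) >= min_neighbors]
```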
Step 404: traversing viewing angles above the container, generating a three-dimensional image of the point set for each viewing angle, slicing the three-dimensional image, and identifying candidate targets on the sliced images according to the shape data.
As shown in Fig. 18, suppose a hemispherical grid covers the container, with its center at the intersection of the camera's center line and the work surface on which the container stands. A viewing angle is the direction of the line from an arbitrary point on the sphere to the center. A viewing angle can therefore be determined by the latitude α and longitude θ of a point on the sphere, with ranges 0 ≤ α ≤ 90 and -180 < θ ≤ 180.
As shown in Fig. 19, when identifying candidate targets, a first viewing-angle range and a number of traversal cycles must be set first. In each traversal cycle, multiple viewing angles are searched: a three-dimensional image of the point set is generated for each viewing angle, the three-dimensional image is sliced, and candidate targets are identified on the sliced images according to the shape data of the objects to be picked up. For example, a three-dimensional image is generated for viewing angle (0, 0), sliced, and candidate targets are identified on the sliced images according to the shape data of the objects to be picked up; the same is done for viewing angles (10, 20), (20, 30), (30, 60), and (40, 110).
The purpose of running multiple traversal cycles is to obtain enough candidate targets. If not enough candidate targets have been obtained when one cycle ends, the next cycle is started. Alternatively, a time limit can be set: traversal cycles are run repeatedly within the set time, and when the set time is reached, the traversal ends.
Specifically, in each traversal cycle, a starting viewing angle and a first viewing-angle step are set, and the viewing angles within the first viewing-angle range are traversed according to them. The first viewing-angle step includes sampling steps for longitude and latitude: the longitude sampling step equals the longitude span of the first viewing-angle range divided by the number of sampled viewing angles, and the latitude sampling step equals the latitude span of the first viewing-angle range divided by the number of sampled viewing angles.
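A sketch of this sampling rule, with the per-axis counts of sampled viewing angles as hypothetical parameters:

```python
def viewing_angles(lat_range, lon_range, n_lat, n_lon, start=(0.0, -180.0)):
    """Yield (alpha, theta) pairs covering the first viewing-angle range.

    lat_range, lon_range -- spans in degrees of the first viewing-angle range
    n_lat, n_lon         -- number of sampled viewing angles per axis
    start                -- starting viewing angle (latitude, longitude)
    """
    lat_step = lat_range / n_lat   # latitude sampling step
    lon_step = lon_range / n_lon   # longitude sampling step
    for i in range(n_lat):
        for j in range(n_lon):
            yield (start[0] + i * lat_step, start[1] + j * lon_step)

# Example: 5 x 8 viewing angles over the full hemisphere.
for alpha, theta in viewing_angles(90, 360, 5, 8):
    pass  # generate the 3D image under (alpha, theta), slice, identify...
```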
At each viewing angle the three-dimensional point set is converted into a three-dimensional image under that viewing angle, and the three-dimensional image is sliced at a preset spacing along the direction from the viewpoint toward the objects to be picked up, where the preset spacing can be set as needed. The characteristic shape is searched for and identified in the sliced images: if one or more instances of the shape appear in a sliced image, one or more objects exist under that viewing angle. The skeleton of an object is identified on the sliced image, and it is judged whether the skeleton conforms to the shape data of the objects to be picked up. If it does, the reliability of the object must still be calculated to verify whether the identified shape points really belong to one object: in some cases, partial shape points of several objects may together form the skeleton of one apparent object, which is obviously not a really existing object. The verification considers the tidiness and completeness of the shape point set, from which the reliability of the object's existence is calculated. If the reliability of the object exceeds a preset second threshold, the identified object is stored in an intermediate result list.
After multiple traversal cycles, multiple intermediate result lists are obtained. The results in the lists are compared, duplicates are removed, and the objects with the highest reliability are chosen as candidate targets.
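A sketch of this duplicate removal across the intermediate result lists, assuming each candidate is a (position, reliability) pair and that two candidates closer than a hypothetical merge distance denote the same object:

```python
import numpy as np

def merge_candidates(lists, merge_dist=10.0):
    """Merge intermediate result lists, keeping the most reliable member of
    any group of candidates that lie within merge_dist of each other."""
    merged = []
    for position, reliability in (c for lst in lists for c in lst):
        for k, (mp, mr) in enumerate(merged):
            if np.linalg.norm(np.asarray(position) - np.asarray(mp)) < merge_dist:
                if reliability > mr:            # keep the better duplicate
                    merged[k] = (position, reliability)
                break
        else:
            merged.append((position, reliability))
    return merged
```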
By observing the three-dimensional image from multiple viewing angles, slicing it into layers, and identifying the characteristic shape of the object on the sliced images, this embodiment can estimate the posture of an object more accurately.
Step 405: calculating the graspability of each candidate target, and determining candidate targets whose graspability exceeds a first threshold as graspable targets.
The gripper at the robot end may be of the two types shown in Fig. 20 and Fig. 21. The two-finger gripper shown in Fig. 20 picks up an object, such as a ring, by closing its two fingers. The ring-shaped gripper shown in Fig. 21 picks up a ring-shaped object by contracting, inserting, and then expanding to support the object from the inside. Compared with the two-finger gripper, the ring-shaped gripper applies a soft, even force to the object and is more suitable for delicate objects that need gentle handling.
When calculating graspability, the 3-D data of the container must be restored again; otherwise, when grasping a target to be judged, the robot may be blocked by the container or collide with it, causing the grasp to fail. Specifically, the 3-D data of the container can be rebuilt from a container model, and these data are then used in calculating the graspability of the target to be judged.
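For example, a simple box model might be sampled into wall points like this (a sketch under the assumption that the container is an open rectangular bin with its floor at z = 0; the patent does not specify the container model):

```python
import numpy as np

def container_points(length, width, wall_height, step=0.01):
    """Rebuild the container's 3-D data from a simple open-box model so that
    the graspability calculation can detect collisions with the walls.
    Points are sampled every `step` metres over the four walls."""
    xs = np.arange(0.0, length, step)
    ys = np.arange(0.0, width, step)
    pts = []
    for z in np.arange(0.0, wall_height, step):
        pts += [(x, 0.0, z) for x in xs] + [(x, width, z) for x in xs]   # long walls
        pts += [(0.0, y, z) for y in ys] + [(length, y, z) for y in ys]  # short walls
    return np.asarray(pts)
```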
In addition, to prevent entanglement, partial 3-D data of the objects to be picked up can also be constructed. As shown in Figures 22 and 23, object A lies below object B, and the dashed boxes mark the grasp positions: the grasp in Figure 22 will not cause entanglement, while the grasp in Figure 23 will. To avoid entanglement, as shown in Figures 24 and 25, the hollow areas of all detected objects above the target to be judged are filled in as solid when the 3-D data of those objects are built, so that a grasp at the computed grasp position cannot become entangled.
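In an occupancy-map representation, the hole filling can be as simple as the following (a 2-D sketch using SciPy's binary_fill_holes; the mask representation is an assumption):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def solidify(occupancy):
    """Fill the hollow areas of an object's occupancy mask so that no grasp
    can be planned through a hole in an object lying above the target and
    become entangled with it."""
    return binary_fill_holes(np.asarray(occupancy, bool))
```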
In summary, the container data and the data of the constructed objects are added to the spatial 3-D point set to rebuild it. Under the view angle corresponding to the target to be judged, a 3-D image is generated from the rebuilt point set, and the gripper of the robot is projected into this 3-D image. The graspability of the target to be judged is then calculated from the collision points of the projection, the number of collision points in the adjacent region, and the amount of spare space.
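One way to obtain the two point counts that feed the formula below is sketched here, as a 2-D simplification of the patent's 3-D projection; the band width, the occupancy-map representation, and the requirement that the footprint lie fully inside the map are all assumptions:

```python
import numpy as np

def projection_counts(scene, gripper, row, col, band=3):
    """Project the gripper footprint onto the occupancy map of the rebuilt
    scene at one candidate grasp location.

    scene: 2-D boolean occupancy map of the rebuilt scene.
    gripper: 2-D boolean footprint of the gripper (assumed inside the map).
    Returns (occupied cells under the footprint, free cells in a band of
    `band` cells around the footprint).
    """
    h, w = gripper.shape
    window = scene[row:row + h, col:col + w]
    collision = int((window & gripper).sum())        # conflict points
    r0, c0 = max(row - band, 0), max(col - band, 0)
    around = scene[r0:row + h + band, c0:col + w + band]
    margin = int((~around).sum() - (~window).sum())  # blank space near the projection
    return collision, margin
```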
Specifically, the graspability G of a target to be judged, with G lying between 0 and 1, can be calculated with the following formula:

G = 1 - g1 * collisionPoints + g2 * marginPoints;

where g1 and g2 are preset coefficients, collisionPoints is the number of collision points of the projection together with the collision points in the adjacent region, and marginPoints is the number of points in the blank space near the projection.
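Transcribed directly (the coefficient values are placeholders, since the patent only says g1 and g2 are set coefficients; the clipping is added because the patent states that G lies between 0 and 1):

```python
import numpy as np

def graspability(collision_points, margin_points, g1=0.01, g2=0.001):
    """G = 1 - g1*collisionPoints + g2*marginPoints, clipped to [0, 1]."""
    g = 1.0 - g1 * collision_points + g2 * margin_points
    return float(np.clip(g, 0.0, 1.0))
```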
When the calculated graspability exceeds the first threshold, the target to be judged is determined to be a graspable target.
Step 406: identify the position and posture of each graspable target.
In the preceding steps the traversed view angles are coarse, since their only purpose is to determine the graspable targets roughly. After a graspable target has been determined, and before it is picked up, its position and posture must be obtained more accurately. Specifically, a second view-angle range and a second view-angle step corresponding to the graspable target are determined, where the second view-angle range is smaller than the first view-angle range and the second view-angle step is smaller than the first view-angle step. Each view angle within the second view-angle range is traversed according to the second view-angle step, so that the view angle corresponding exactly to the graspable target can be found and the position and posture of the graspable target under that view angle obtained.
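This coarse-to-fine refinement might reuse the earlier candidate search (a sketch: the half-range and step values stand in for the second view-angle range and second view-angle step, and the spherical parameterisation of the view direction is an assumption):

```python
import numpy as np

def refine_pose(points, coarse_view, shape_extent, spacing,
                half_range=0.1, step=0.01):
    """Coarse-to-fine search around the coarse view angle of a graspable
    target: re-run find_candidates (defined earlier) over a second, narrower
    view-angle range with a finer step, and return the view and candidate
    with the highest conformity."""
    (lon0, lat0), best, best_rel = coarse_view, None, -1.0
    for lon in np.arange(lon0 - half_range, lon0 + half_range, step):
        for lat in np.arange(lat0 - half_range, lat0 + half_range, step):
            view_dir = np.array([np.cos(lat) * np.cos(lon),
                                 np.cos(lat) * np.sin(lon),
                                 -np.sin(lat)])      # camera looks down into the bin
            for cand in find_candidates(points, view_dir, shape_extent,
                                        spacing, rel_threshold=0.0):
                if cand["reliability"] > best_rel:
                    best_rel, best = cand["reliability"], ((lon, lat), cand)
    return best
```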
Specifically, the spatial 3-D point set is converted into a 3-D image at each view angle, and the 3-D image is sliced at the preset spacing along the direction from the viewpoint toward the objects to be picked up. The skeleton of the graspable target is identified on the sliced images, and it is judged whether that skeleton conforms to the shape data of the objects to be picked up. At the view angle where the conformity between the skeleton of the graspable target and the shape data of the objects to be picked up is highest, the position and posture of the graspable target are obtained from the corresponding sliced image.
Afterwards, the six-degree-of-freedom information of the graspable target is calculated from its position and posture and sent to the robot, which can then pick up the graspable target accurately according to the received six-degree-of-freedom information.
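The six degrees of freedom are the position plus an orientation; one conventional packing is shown below (the ZYX Euler-angle convention is an assumption, as the patent does not fix one):

```python
import numpy as np

def six_dof(position, rotation):
    """Pack position and posture into the six-degree-of-freedom message
    sent to the robot: x, y, z plus roll/pitch/yaw extracted from a 3x3
    rotation matrix under the ZYX (yaw-pitch-roll) convention."""
    yaw = np.arctan2(rotation[1, 0], rotation[0, 0])
    pitch = np.arcsin(-rotation[2, 0])
    roll = np.arctan2(rotation[2, 1], rotation[2, 2])
    x, y, z = position
    return np.array([x, y, z, roll, pitch, yaw])
```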
By identifying the positions and postures of objects from multi-view 3-D images, the present embodiment can output accurate six-degree-of-freedom information for an object. This not only helps the robot grasp more suitable objects, but also solves the entanglement problem, so that the robot can automatically pick up hollow objects, shell-shaped objects, and objects that may become entangled.
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (17)
1. A method for implementing robot pickup, characterized by comprising:
receiving an original 3-D image, the original 3-D image being obtained after a camera photographs a container in which multiple objects to be picked up are piled;
receiving shape data of the objects to be picked up;
calculating, according to the original 3-D image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up; and
determining grasp information of the graspable target according to the position and posture of the graspable target, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
2. The method for implementing robot pickup according to claim 1, characterized in that calculating, according to the original 3-D image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up comprises:
processing the original 3-D image to obtain a spatial 3-D point set of the multiple objects to be picked up;
traversing each view angle above the container, generating a 3-D image from the spatial 3-D point set for each view angle, slicing the 3-D image, and identifying targets to be judged on the sliced images according to the shape data;
calculating the graspability of each target to be judged, and determining the targets to be judged whose graspability exceeds a first threshold as graspable targets; and
identifying the position and posture of each graspable target.
3. The method for implementing robot pickup according to claim 2, characterized in that processing the original 3-D image to obtain a spatial 3-D point set of the multiple objects to be picked up comprises:
removing the image data outside the container, the image data of the container, and the noise from the original 3-D image; and
converting the original 3-D image, with the image data outside the container, the image data of the container, and the noise removed, into the spatial 3-D point set.
4. The method for implementing robot pickup according to claim 3, characterized in that traversing each view angle above the container, generating a 3-D image from the spatial 3-D point set for each view angle, slicing the 3-D image, and identifying targets to be judged on the sliced images according to the shape data comprises:
setting a first view-angle range and a number of view cycles;
in each view cycle stage, generating a 3-D image for each view angle under that view cycle stage, slicing the 3-D image, and identifying the targets to be judged on the sliced images; and
comparing the targets to be judged obtained in all view cycle stages, and removing the duplicates among them.
5. The method for implementing robot pickup according to claim 4, characterized in that, in each view cycle stage, generating a 3-D image for each view angle under that view cycle stage, slicing the 3-D image, and identifying the targets to be judged on the sliced images comprises:
setting a view starting point and a first view-angle step, and traversing each view angle within the first view-angle range according to the set view starting point and first view-angle step;
converting the spatial 3-D point set into a 3-D image at each view angle;
slicing the 3-D image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
identifying the skeleton of an object on the sliced images;
judging whether the skeleton of the object conforms to the shape data of the objects to be picked up;
if the skeleton of the object conforms to the shape data of the objects to be picked up, calculating the reliability of the object; and
when the reliability of the object exceeds a second threshold, saving the object as a target to be judged.
6. The method for implementing robot pickup according to claim 2, characterized in that calculating the graspability of each target to be judged comprises:
adding, to the spatial 3-D point set, the container data and the data of the objects to be picked up that could become entangled with the target to be judged, to rebuild the spatial 3-D point set;
generating a 3-D image from the rebuilt spatial 3-D point set under the view angle corresponding to the target to be judged;
projecting the gripper of the robot into the 3-D image; and
calculating the graspability of the target to be judged according to the collision points of the projection, the number of collision points in the adjacent region, and the amount of spare space.
7. The method for implementing robot pickup according to claim 5, characterized in that identifying the position and posture of each graspable target comprises:
determining a second view-angle range and a second view-angle step corresponding to the graspable target, the second view-angle range being smaller than the first view-angle range and the second view-angle step being smaller than the first view-angle step;
traversing each view angle within the second view-angle range according to the second view-angle step, and converting the spatial 3-D point set into a 3-D image at each view angle;
slicing the 3-D image at the preset spacing along the direction from the viewpoint toward the objects to be picked up;
identifying the skeleton of the graspable target on the sliced images;
judging whether the skeleton of the graspable target conforms to the shape data of the objects to be picked up; and
when the conformity between the skeleton of the graspable target and the shape data of the objects to be picked up is highest, outputting the position and posture of the graspable target according to the corresponding sliced image.
8. The method for implementing robot pickup according to claim 1, characterized in that determining grasp information of the graspable target according to the position and posture of the graspable target, and sending the grasp information to a robot, comprises:
calculating the six-degree-of-freedom information of the graspable target from the position and posture of the graspable target, and sending the six-degree-of-freedom information of the graspable target to the robot.
9. An apparatus for implementing robot pickup, characterized by comprising:
a first receiving module, configured to receive an original 3-D image, the original 3-D image being obtained after a camera photographs a container in which multiple objects to be picked up are piled;
a second receiving module, configured to receive shape data of the objects to be picked up;
a processing module, configured to calculate, according to the original 3-D image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up; and
a grasp information calculation module, configured to determine grasp information of the graspable target according to the position and posture of the graspable target, and to send the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
10. The apparatus for implementing robot pickup according to claim 9, characterized in that the processing module comprises:
an image processing submodule, configured to process the original 3-D image to obtain a spatial 3-D point set of the multiple objects to be picked up;
a slicing submodule, configured to traverse each view angle above the container, generate a 3-D image from the spatial 3-D point set for each view angle, slice the 3-D image, and identify targets to be judged on the sliced images according to the shape data;
a calculation submodule, configured to calculate the graspability of each target to be judged, and determine the targets to be judged whose graspability exceeds a first threshold as graspable targets; and
an identification submodule, configured to identify the position and posture of each graspable target.
11. The apparatus for implementing robot pickup according to claim 10, characterized in that the image processing submodule comprises:
a removal unit, configured to remove the image data outside the container, the image data of the container, and the noise from the original 3-D image; and
a conversion unit, configured to convert the original 3-D image, with the image data outside the container, the image data of the container, and the noise removed, into the spatial 3-D point set.
12. The apparatus for implementing robot pickup according to claim 11, characterized in that the slicing submodule comprises:
a setting unit, configured to set a first view-angle range and a number of view cycles;
a slicing unit, configured to, in each view cycle stage, generate a 3-D image for each view angle under that view cycle stage, slice the 3-D image, and identify the targets to be judged on the sliced images; and
a screening unit, configured to compare the targets to be judged obtained in all view cycle stages and remove the duplicates among them.
13. The apparatus for implementing robot pickup according to claim 12, characterized in that the slicing unit comprises:
an initialization subunit, configured to set a view starting point and a first view-angle step;
a traversal subunit, configured to traverse each view angle within the first view-angle range according to the set view starting point and first view-angle step;
a conversion subunit, configured to convert the spatial 3-D point set into a 3-D image at each view angle;
a slicing subunit, configured to slice the 3-D image at a preset spacing along the direction from the viewpoint toward the objects to be picked up;
an identification subunit, configured to identify the skeleton of an object on the sliced images;
a judgment subunit, configured to judge whether the skeleton of the object conforms to the shape data of the objects to be picked up;
a calculation subunit, configured to calculate the reliability of the object if the skeleton of the object conforms to the shape data of the objects to be picked up; and
a saving subunit, configured to save the object as a target to be judged when the reliability of the object exceeds a second threshold.
14. The apparatus for implementing robot pickup according to claim 10, characterized in that the calculation submodule comprises:
a reconstruction unit, configured to add, to the spatial 3-D point set, the container data and the data of the objects to be picked up that could become entangled with the target to be judged, to rebuild the spatial 3-D point set;
a generation unit, configured to generate a 3-D image from the rebuilt spatial 3-D point set under the view angle corresponding to the target to be judged;
a projection unit, configured to project the gripper of the robot into the 3-D image; and
a calculation unit, configured to calculate the graspability of the target to be judged according to the collision points of the projection, the number of collision points in the adjacent region, and the amount of spare space.
15. The apparatus for implementing robot pickup according to claim 13, characterized in that the identification submodule comprises:
a determination unit, configured to determine a second view-angle range and a second view-angle step corresponding to the graspable target, the second view-angle range being smaller than the first view-angle range and the second view-angle step being smaller than the first view-angle step;
a traversal unit, configured to traverse each view angle within the second view-angle range according to the second view-angle step;
a conversion unit, configured to convert the spatial 3-D point set into a 3-D image at each view angle;
a slicing unit, configured to slice the 3-D image at the preset spacing along the direction from the viewpoint toward the objects to be picked up;
a recognition unit, configured to identify the skeleton of the graspable target on the sliced images;
a judgment unit, configured to judge whether the skeleton of the graspable target conforms to the shape data of the objects to be picked up; and
a calculation unit, configured to output the position and posture of the graspable target according to the corresponding sliced image when the conformity between the skeleton of the graspable target and the shape data of the objects to be picked up is highest.
16. The apparatus for implementing robot pickup according to claim 9, characterized in that
the grasp information calculation module is specifically configured to calculate the six-degree-of-freedom information of the graspable target from the position and posture of the graspable target, and to send the six-degree-of-freedom information of the graspable target to the robot.
17. An electronic device for implementing robot pickup, characterized by comprising:
a processor; and
a memory in which computer program instructions are stored,
wherein, when the computer program instructions are run by the processor, the processor is caused to execute the following steps:
receiving an original 3-D image, the original 3-D image being obtained after a camera photographs a container in which multiple objects to be picked up are piled;
receiving shape data of the objects to be picked up;
calculating, according to the original 3-D image and the shape data, the position and posture of a graspable target among the multiple objects to be picked up; and
determining grasp information of the graspable target according to the position and posture of the graspable target, and sending the grasp information to a robot, so that the robot picks up the graspable target according to the grasp information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710123315.9A CN108537841B (en) | 2017-03-03 | 2017-03-03 | Robot picking method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537841A (en) | 2018-09-14 |
CN108537841B CN108537841B (en) | 2021-10-08 |
Family
ID=63488685
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710123315.9A Active CN108537841B (en) | 2017-03-03 | 2017-03-03 | Robot picking method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537841B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1293752A (en) * | 1999-03-19 | 2001-05-02 | Matsushita Electric Works, Ltd. | Three-D object recognition method and pin picking system using the method |
US20030128207A1 (en) * | 2002-01-07 | 2003-07-10 | Canon Kabushiki Kaisha | 3-Dimensional image processing method, 3-dimensional image processing device, and 3-dimensional image processing system |
US9207773B1 (en) * | 2011-05-13 | 2015-12-08 | Aquifi, Inc. | Two-dimensional method and system enabling three-dimensional user interaction with a device |
CN104271322A (en) * | 2012-03-08 | 2015-01-07 | 品质制造有限公司 | Touch sensitive robotic gripper |
CN106269548A (en) * | 2016-09-27 | 2017-01-04 | 深圳市创科智能技术有限公司 | A kind of object automatic sorting method and device thereof |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11241795B2 (en) * | 2018-09-21 | 2022-02-08 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Soft package, robot system for processing the same, and method thereof |
CN109333536A (en) * | 2018-10-26 | 2019-02-15 | 北京因时机器人科技有限公司 | A kind of robot and its grasping body method and apparatus |
CN109353778A (en) * | 2018-11-06 | 2019-02-19 | 深圳蓝胖子机器人有限公司 | Caching and feeding method, device and computer-readable storage media |
CN109697730A (en) * | 2018-11-26 | 2019-04-30 | 深圳市德富莱智能科技股份有限公司 | IC chip processing method, system and storage medium based on optical identification |
CN109697730B (en) * | 2018-11-26 | 2021-02-09 | 深圳市德富莱智能科技股份有限公司 | IC chip processing method, system and storage medium based on optical identification |
CN109816730A (en) * | 2018-12-20 | 2019-05-28 | 先临三维科技股份有限公司 | Workpiece grabbing method, apparatus, computer equipment and storage medium |
CN113811426A (en) * | 2019-03-06 | 2021-12-17 | 右手机器人股份有限公司 | Article feature adaptation techniques |
CN114161394A (en) * | 2019-03-14 | 2022-03-11 | 牧今科技 | Robot system with steering mechanism and method of operating the same |
CN110428465A (en) * | 2019-07-12 | 2019-11-08 | 中国科学院自动化研究所 | View-based access control model and the mechanical arm grasping means of tactile, system, device |
CN110395515A (en) * | 2019-07-29 | 2019-11-01 | 深圳蓝胖子机器人有限公司 | A kind of cargo identification grasping means, equipment and storage medium |
CN110395515B (en) * | 2019-07-29 | 2021-06-11 | 深圳蓝胖子机器智能有限公司 | Cargo identification and grabbing method and equipment and storage medium |
CN111145257A (en) * | 2019-12-27 | 2020-05-12 | 深圳市越疆科技有限公司 | Article grabbing method and system and article grabbing robot |
CN111145257B (en) * | 2019-12-27 | 2024-01-05 | 深圳市越疆科技有限公司 | Article grabbing method and system and article grabbing robot |
CN112775968A (en) * | 2020-12-30 | 2021-05-11 | 深兰人工智能芯片研究院(江苏)有限公司 | Control method and device for manipulator, pickup device and storage medium |
CN112720495A (en) * | 2020-12-30 | 2021-04-30 | 深兰人工智能芯片研究院(江苏)有限公司 | Control method and device for manipulator, pickup device and storage medium |
CN113146636A (en) * | 2021-04-27 | 2021-07-23 | 深圳市一诺维奇教育科技有限公司 | Object grabbing method and device and flexible robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||