CN109436820B - Destacking method and destacking system for goods stack - Google Patents
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G61/00—Use of pick-up or transfer devices or of manipulators for stacking or de-stacking articles not otherwise provided for
Abstract
The invention relates to a method and a system for unstacking a cargo stack. The method comprises the following steps: acquiring a 2D image of the cargo stack taken by a 2D camera above the stack and a 3D image of the cargo stack taken by a 3D camera above the stack; determining, from the 2D image and the 3D image, the pose information of each top-layer cargo on the top layer of the stack based on the robot base coordinate system; planning the motion trajectory of the robot toward the stack and the unstacking sequence from the pose information of all top-layer cargoes; controlling the robot to take all top-layer cargoes off the top layer of the stack according to the motion trajectory and the unstacking sequence; and, whenever the current top layer is judged not to be the last layer of the stack, repeating the process for the next layer, thereby unstacking the whole stack. The invention automatically plans the motion trajectory and the unstacking sequence of the robot from the 2D and 3D images and controls the robot to unstack the stack layer by layer accordingly, achieving a high degree of automation and high unstacking efficiency.
Description
Technical Field
The invention relates to the technical field of cargo storage and transportation, in particular to a cargo stack unstacking method and unstacking system.
Background
At present, automatic stacking systems are commonly adopted in cargo storage and transportation at home and abroad: goods are stacked directly onto a pallet, and the pallet is then transported to the warehouse by a forklift or an AGV trolley. When goods leave the warehouse, however, they are usually taken off the pallet manually, and such manual unstacking is difficult and inefficient.
Disclosure of Invention
To address the difficulty and low efficiency of manual unstacking in existing methods, the invention provides a cargo stack unstacking method and a cargo stack unstacking system.
In one aspect, the invention provides a method for unstacking a stack of goods, comprising the following specific steps:
step 1, acquiring a 2D image obtained by shooting the goods stack by a 2D camera above the goods stack and a 3D image obtained by shooting the goods stack by a 3D camera above the goods stack;
step 2, determining pose information of each top-level cargo on the top level of the cargo stack based on a robot-based coordinate system according to the 2D image and the 3D image;
step 3, planning a motion track and an unstacking sequence of the robot to the goods stack according to pose information of all the top-level goods based on a robot base coordinate system;
step 4, controlling a robot to take out all the top-layer cargoes from the top layer of the cargo stack according to the motion trail and the unstacking sequence;
and 5, judging whether the top layer of the goods stack is the last layer of the goods stack, if not, jumping to execute the step 1, and if so, stopping unstacking.
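The layer-by-layer loop of steps 1 to 5 can be sketched as follows. This is a minimal simulation only: the camera, vision, planning and robot calls are hypothetical stand-ins for the hardware interfaces, which the patent does not specify.

```python
def unstack(stack_layers, capture, locate, plan, execute):
    """Unstack a pile layer by layer (steps 1-5 of the method).

    stack_layers: list of layers, bottom layer first; each layer is a
    list of cargoes. capture/locate/plan/execute are stand-ins for the
    2D/3D cameras, pose estimation, trajectory planning and robot
    control described in the patent.
    """
    removed = []
    while stack_layers:
        image_2d, image_3d = capture()                        # step 1
        poses = locate(image_2d, image_3d, stack_layers[-1])  # step 2
        trajectory, order = plan(poses)                       # step 3
        execute(trajectory, order)                            # step 4
        removed.extend(stack_layers.pop())  # top layer fully taken out
        # step 5: if layers remain, the loop returns to step 1;
        # otherwise unstacking stops.
    return removed
```

For example, with dummy callbacks and a two-layer stack, the top layer ["c"] is removed before the bottom layer ["a", "b"], and the stack ends up empty.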
The unstacking method of the goods stack has the following beneficial effects: the 2D camera and the 3D camera photograph the cargo stack from above to obtain a 2D image and a 3D image, and the pose information of each top-layer cargo on the top layer of the stack in the robot base coordinate system is identified from these images, so the pose information can be calibrated quickly and accurately. The motion trajectory and unstacking sequence planned from the pose information yield an optimal motion path for different stacks: the robot moves to the top layer of the stack along the trajectory in the unstacking sequence and takes out each top-layer cargo in turn until all are removed; then, whenever the top layer is not the last layer of the stack, the process repeats for the next layer. The robot thus unstacks the stack layer by layer, with a high degree of automation and high unstacking efficiency.
In another aspect, the present invention provides an unstacking system for a stack of goods, the unstacking system comprising: conveyor belt, 2D camera, 3D camera, processor and robot;
the conveyor belt is used for conveying the goods stack to a working area of the robot;
the 2D camera is used for shooting a 2D image of the cargo stack above the cargo stack;
the 3D camera is used for shooting a 3D image of the cargo stack above the cargo stack;
the processor is used for acquiring the 2D image and the 3D image, determining pose information of each top-layer cargo on the top layer of the cargo stack based on a robot base coordinate system according to the 2D image and the 3D image, and planning a motion track and a unstacking sequence of a robot moving towards the cargo stack according to the pose information of all the top-layer cargoes based on the robot base coordinate system;
the robot is used for taking out all the top-layer cargoes from the top layer of the cargoes according to the movement track and the unstacking sequence;
the processor is further configured to determine whether the top layer of the cargo stack is a last layer of the cargo stack, if not, continue unstacking, and if so, stop unstacking.
The unstacking system of the goods stack has the following beneficial effects: the top layer of the stack is the layer of goods closest to the 2D camera and the 3D camera, and the two cameras, whose heights above the top layer are equal, can photograph the stack from above at the same time to obtain a 2D image and a 3D image (the method also applies when the two cameras are at the same or different heights above the stack). The pose information of each top-layer cargo on the top layer of the stack in the robot base coordinate system is identified from these images, so the pose information can be calibrated quickly and accurately. The processor plans the motion trajectory of the robot toward the stack and the unstacking sequence from the pose information, yielding an optimal motion path for different stacks: the robot moves to the top layer along the trajectory in the unstacking sequence and takes out each top-layer cargo in turn until all are removed; then, whenever the top layer is not the last layer, the process repeats for the next layer. The robot thus unstacks the stack layer by layer, with a high degree of automation, high unstacking efficiency and practical engineering value.
Drawings
Fig. 1 is a schematic flow chart of a method for unstacking a cargo stack according to an embodiment of the present invention;
fig. 2 is a schematic structural view of an unstacking system for a cargo stack according to an embodiment of the present invention;
fig. 3 is a schematic diagram of circuit connections in an unstacking system of the stack of goods corresponding to fig. 2.
In the drawings, the list of components represented by the various numbers is as follows:
11-base, 12-robot, 13-conveyer belt, 14-lift, 15-processor, 121-robotic arm, 122-unstacking fixture, 141-2D camera, 142-3D camera.
Detailed Description
The principles and features of the present invention are described below in connection with examples, which are set forth only to illustrate the present invention and not to limit the scope of the invention.
Example 1
As shown in fig. 1, the method for unstacking a cargo stack according to the embodiment of the present invention includes the following steps:
step 1, acquiring a 2D image obtained by shooting a cargo stack by a 2D camera above the cargo stack and a 3D image obtained by shooting the cargo stack by a 3D camera above the cargo stack;
step 2, determining pose information of each top-level cargo on the top level of the cargo stack based on a robot-based coordinate system according to the 2D image and the 3D image;
step 3, planning a motion track and an unstacking sequence of the robot to the goods stack according to pose information of all top-level goods based on a robot base coordinate system;
step 4, controlling the robot to take out all top-layer cargoes from the top layer of the cargoes according to the motion track and the unstacking sequence;
and 5, judging whether the top layer of the goods stack is the last layer of the goods stack, if not, jumping to execute the step 1, and if so, stopping unstacking.
In this embodiment, the top layer of the stack is the layer of goods closest to the 2D camera and the 3D camera, and the two cameras, whose heights above the top layer are equal, can photograph the stack from above at the same time to obtain a 2D image and a 3D image (the method also applies when the two cameras are at the same or different heights above the stack). The pose information of each top-layer cargo on the top layer of the stack in the robot base coordinate system is identified from these images, so the pose information can be calibrated quickly and accurately. The motion trajectory and unstacking sequence planned from the pose information yield an optimal motion path for different stacks: the robot moves to the top layer along the trajectory in the unstacking sequence and takes out each top-layer cargo in turn until all are removed; then, whenever the top layer is not the last layer, the process repeats for the next layer. The robot thus unstacks the stack layer by layer, with a high degree of automation, high unstacking efficiency and practical engineering value.
Preferably, step 2 specifically includes:
step 2.1, let the first camera coordinate system of the 2D camera be {o1-x1y1z1}, the second camera coordinate system of the 3D camera be {o2-x2y2z2}, and the robot base coordinate system be {ob-xbybzb}, and acquire a first homogeneous transformation matrix of the first camera coordinate system in the second camera coordinate system and a second homogeneous transformation matrix of the second camera coordinate system in the robot base coordinate system.
And 2.2, detecting the 2D image to obtain the two-dimensional coordinates of each top-layer cargo based on the first camera coordinate system, and detecting the 3D image to obtain the depth coordinate and attitude information of each cargo in the 3D image based on the second camera coordinate system.
And 2.3, transforming the two-dimensional coordinates of each top-layer cargo based on the first camera coordinate system by applying the first homogeneous transformation matrix to obtain the two-dimensional coordinates of each top-layer cargo based on the second camera coordinate system.
And 2.4, ordering the depth coordinates of each cargo in the 3D image based on the second camera coordinate system to obtain a depth coordinate sequence, determining the depth coordinates of each top cargo based on the second camera coordinate system according to the distance between adjacent depth coordinates in the depth coordinate sequence, and correspondingly forming the three-dimensional coordinates of each top cargo based on the second camera coordinate system by the two-dimensional coordinates of each top cargo based on the second camera coordinate system and the depth coordinates of each top cargo based on the second camera coordinate system.
And 2.5, determining the position information of each top-level cargo in the second camera coordinate system according to each three-dimensional coordinate, and transforming the position information and the posture information of each top-level cargo in the second camera coordinate system by applying a second homogeneous transformation matrix to obtain the position and posture information of each top-level cargo based on the robot base coordinate system.
After filtering, correction, enhancement, depth detection and other processing of the 2D image and the 3D image, the number of top-layer cargoes on the top layer of the stack and the pixel coordinates of each top-layer cargo in the 2D image are identified from the 2D image, and the two-dimensional coordinates of each top-layer cargo are calculated from the pixel coordinates; these two-dimensional coordinates represent the planar position of the top-layer cargo on the top layer of the stack. In addition, the depth coordinate and attitude information of each cargo based on the second camera coordinate system can be identified quickly from the 3D image: the depth coordinate represents the distance of the cargo in the 3D image from the 3D camera, and the attitude information represents the orientation, angle and similar properties of the cargo in the second camera coordinate system. This simplifies identification and improves the efficiency and accuracy of the two-dimensional coordinates, depth coordinates and attitude information.
The depth coordinates of the cargoes in the 3D image, based on the second camera coordinate system, are sorted in ascending order to obtain the depth coordinate sequence, and the difference between each pair of adjacent depth coordinates in the sequence is calculated. Each difference is compared with a preset cargo height: if the difference is smaller than the preset cargo height, the two cargoes corresponding to the adjacent depth coordinates are on the same layer, namely the top layer of the stack; if the difference is greater than or equal to the preset cargo height, the two cargoes are not on the same layer. The cargoes on the topmost layer are determined as the top-layer cargoes, their depth coordinates in the sequence are taken as the depth coordinates of the top-layer cargoes based on the second camera coordinate system, and cargoes not on the top layer of the stack are eliminated from the 3D image, ensuring the accuracy of the top-layer cargoes and their depth coordinates.
For example: n top-level cargoes (e.g., boxes) box_1, box_2, …, box_n have three-dimensional coordinates (x1, y1, z1), (x2, y2, z2), …, (xn, yn, zn), where z1, z2, …, zn increase in order and |zi − zj| < D, with i running from 1 to n−1, j from 2 to n, and D being the preset cargo height.
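The top-layer selection described above can be sketched as follows. The sketch assumes, as the ascending sort implies, that a smaller depth coordinate means the cargo is closer to the downward-looking 3D camera:

```python
def top_layer_indices(depths, cargo_height):
    """Return the indices of cargoes lying on the top layer of the stack.

    depths: depth coordinate (distance from the 3D camera) of each cargo
    detected in the 3D image. cargo_height: the preset cargo height D.
    Cargoes are sorted by depth; consecutive depths differing by less
    than D are taken to lie on the same (topmost) layer.
    """
    order = sorted(range(len(depths)), key=lambda i: depths[i])
    top = [order[0]]                  # the closest cargo is on the top layer
    for prev, cur in zip(order, order[1:]):
        if depths[cur] - depths[prev] >= cargo_height:
            break                     # the next cargo is one layer down
        top.append(cur)
    return top
```

With depths [1.00, 1.02, 1.31, 0.99, 1.30] and D = 0.25, the cargoes at indices 0, 1 and 3 cluster within D of the closest cargo and form the top layer, while the two deeper cargoes are rejected.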
The three-dimensional coordinates are represented as a one-dimensional vector and the attitude information as a 3×3 matrix; the position information and the attitude information are each transformed into the robot base coordinate system by the second homogeneous transformation matrix to obtain the pose information, the transformation being based on robot hand-eye calibration.
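The two-stage transformation of steps 2.3 to 2.5 can be sketched with 4×4 homogeneous matrices. The matrices and coordinates below are illustrative only; in practice the first and second homogeneous transformation matrices come from camera-to-camera and hand-eye calibration, and the lifting of the 2D position into the second camera frame is simplified here:

```python
import numpy as np

def to_base(p_cam1_xy, depth_cam2, H_1_in_2, H_2_in_b, R_cam2):
    """Transform one top-layer cargo pose into the robot base frame.

    p_cam1_xy: 2D position from the 2D image (first camera frame).
    depth_cam2: depth coordinate from the 3D image (second camera frame).
    H_1_in_2: first homogeneous matrix (frame 1 expressed in frame 2).
    H_2_in_b: second homogeneous matrix (frame 2 expressed in the base).
    R_cam2: 3x3 attitude of the cargo in the second camera frame.
    Returns (P_b, R_b): position and attitude in the base frame.
    """
    # step 2.3: map the 2D position into the second camera frame
    p2 = H_1_in_2 @ np.array([p_cam1_xy[0], p_cam1_xy[1], 0.0, 1.0])
    # step 2.4: combine with the depth coordinate -> 3D point in frame 2
    p2[2] = depth_cam2
    # step 2.5: build the cargo pose in frame 2, then map it to the base
    pose2 = np.eye(4)
    pose2[:3, :3] = R_cam2
    pose2[:3, 3] = p2[:3]
    pose_b = H_2_in_b @ pose2
    return pose_b[:3, 3], pose_b[:3, :3]
```

With identity calibration matrices the cargo pose passes through unchanged, which makes the chaining of the two homogeneous transforms easy to verify.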
Preferably, step 3 specifically includes:
step 3.1, for each top-level cargo, let the position information in its pose information based on the robot base coordinate system be P b and the attitude information be R b , and establish a motion equation;
the motion equation is expressed as

    b H f · f H ee = [ R b   P b ]
                     [ 0     1  ]

wherein b H f is the homogeneous transformation matrix of the flange coordinate system of the robot in the base coordinate system of the robot, and f H ee is the homogeneous transformation matrix of the unstacking tool coordinate system of the robot in the flange coordinate system;
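The motion equation, with b H f the flange-in-base transform and f H ee the fixed tool-in-flange transform, can be sketched numerically: given the target pose (R b, P b) of a cargo, the required flange pose follows by right-multiplying the target homogeneous matrix with the inverse of f H ee. The matrices below are illustrative, not calibration data:

```python
import numpy as np

def required_flange_pose(R_b, P_b, f_H_ee):
    """Solve b_H_f from the motion equation b_H_f @ f_H_ee = [R_b P_b; 0 1]."""
    target = np.eye(4)
    target[:3, :3] = R_b       # cargo attitude in the robot base frame
    target[:3, 3] = P_b        # cargo position in the robot base frame
    return target @ np.linalg.inv(f_H_ee)
```

For a tool offset 0.1 m along the flange z axis, the flange must stop 0.1 m short of the cargo along that axis, and substituting the result back reproduces the target pose.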
step 3.2, planning, by means of the motion equation, the motion trajectory along which the robot arm corresponding to the flange coordinate system moves to each top-layer cargo;
and 3.3, sorting the position information in the pose information of all top-level cargoes based on the robot base coordinate system to obtain a position sequence, and taking the position sequence in ascending order as the unstacking sequence.
The homogeneous transformation matrix of the flange coordinate system in the robot base coordinate system is obtained by reading the robot's joint angles through the teach pendant, and the parameters of the homogeneous transformation matrix of the unstacking tool coordinate system in the flange coordinate system are fixed values.
The motion equation is constructed from the position and attitude information of each top-layer cargo based on the robot base coordinate system, and the optimal motion trajectory from the robot arm to each top-layer cargo is planned from it. The three-dimensional coordinates of the position information are sorted in ascending order: the smaller the coordinates, the closer the top-layer cargo is to the robot, and the larger the coordinates, the farther away it is, which determines an unstacking sequence from top to bottom and from front to back.
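The ordering into an unstacking sequence can be sketched as follows. The patent does not fix how the three coordinate components are combined when sorting "in ascending order", so a plain lexicographic sort is used here as one possible choice:

```python
def unstacking_order(positions):
    """Sort top-layer cargo positions into an unstacking sequence.

    positions: list of (x, y, z) tuples in the robot base frame.
    Returns cargo indices ordered so that cargoes with smaller
    coordinates (closer to the robot) are taken out first; tuples are
    compared lexicographically, an assumption on top of the patent text.
    """
    return sorted(range(len(positions)), key=lambda i: positions[i])
```

For example, two cargoes sharing the same x are ordered by y, and the cargo farthest from the robot comes last.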
Preferably, step 4 specifically includes: according to the unstacking sequence, driving the robot arm in turn to move to each top-layer cargo along each motion trajectory, so that the unstacking tool in the unstacking tool coordinate system moves to each top-layer cargo, and driving the unstacking tool to take each top-layer cargo off the top layer of the stack until all top-layer cargoes are taken out.
The robot arm starts from an initial position, moves to a top-layer cargo along a motion trajectory, and returns to the initial position after the unstacking tool takes the cargo off the top layer of the stack; this cycle is repeated in the unstacking sequence until all top-layer cargoes on the top layer of the stack are taken out, which improves the accuracy of cargo removal.
Preferably, step 5 specifically includes: when the top layer of the stack is not the last layer, synchronously lowering the 2D camera and the 3D camera by the height of the top-layer goods so that their height above the new top layer is unchanged, and jumping back to step 1; and stopping unstacking when the top layer of the stack is the last layer.
If the top layer of the stack is not the last layer, at least one layer of goods remains below the layer that has just been unstacked, and that layer becomes the next top layer to be unstacked. To ensure that the new top layer is within the shooting range of the 2D camera and the 3D camera, the two cameras are lowered synchronously by the height of the goods on the layer just removed, so that the height difference between the cameras and the top layer of the stack remains unchanged.
Optionally, if the top layer of the stack is the last layer, the stack has been completely unstacked, and the 2D camera and the 3D camera are raised synchronously to their initial height to await the next stack.
Example two
As shown in fig. 2 and fig. 3, an unstacking system for a cargo stack according to an embodiment of the present invention includes a robot 12, a conveyor belt 13, a lifter 14 and a processor 15; the robot 12 includes a robot arm 121 and an unstacking jig 122 mounted at the end of the robot arm 121 so as to move with the robot arm 121, and the lifter 14 carries a 2D camera 141 and a 3D camera 142.
The robot 12 is fixed to the table of the base 11, one side surface of the base 11 faces the conveyor belt 13, the conveyor belt 13 is on the same side as one side surface of the base 11, and the 2D camera 141, the 3D camera 142, and the robot 12 are electrically connected to the processor 15, respectively.
The conveyor belt 13 is used for conveying the stack of goods to the working area of the robot arm 121, the 2D camera 141 is used for taking 2D images of the stack of goods above the stack of goods, and the 3D camera 142 is used for taking 3D images of the stack of goods above the stack of goods.
The processor 15 is used for acquiring a 2D image and a 3D image, determining pose information of each top-level cargo on the top layer of the cargo stack based on the robot base coordinate system according to the 2D image and the 3D image, and planning a motion track and an unstacking sequence of the robot to the cargo stack according to the pose information of all the top-level cargoes based on the robot base coordinate system; and the device is also used for judging whether the top layer of the goods stack is the last layer of the goods stack, if not, continuing to unstacking, and if so, stopping unstacking.
The robot 12 is used to take all top-layer cargoes off the top layer of the stack according to the motion trajectory and the unstacking sequence, and the lifter 14 is used, when the top layer of the stack is not the last layer, to lower the 2D camera and the 3D camera synchronously so that their height above the new top layer of the stack remains unchanged.
In this embodiment, the top layer of the stack is the layer of goods closest to the 2D camera and the 3D camera, and the two cameras, whose heights above the top layer are equal, can photograph the stack from above at the same time to obtain a 2D image and a 3D image (the method also applies when the two cameras are at the same or different heights above the stack). The pose information of each top-layer cargo on the top layer of the stack in the robot base coordinate system is identified from these images, so the pose information can be calibrated quickly and accurately. The processor plans the motion trajectory of the robot toward the stack and the unstacking sequence from the pose information, yielding an optimal motion path for different stacks: the robot moves to the top layer along the trajectory in the unstacking sequence and takes out each top-layer cargo in turn until all are removed; then, whenever the top layer is not the last layer, the process repeats for the next layer. The robot thus unstacks the stack layer by layer, with a high degree of automation, high unstacking efficiency and practical engineering value.
Preferably, the processor 15 is specifically configured to:
let the first camera coordinate system of the 2D camera be { o } 1 -x 1 y 1 z 1 The second camera coordinate system of the 3D camera is { o } 2 -x 2 y 2 z 2 The robot base coordinate system is { o } b -x b y b z b And acquiring a first homogeneous transformation matrix of the first camera coordinate system in the second camera coordinate system and a second homogeneous transformation matrix of the second camera coordinate system in the robot base coordinate system.
And detecting the 2D image to obtain the two-dimensional coordinates of each top-layer cargo based on the first camera coordinate system, and detecting the 3D image to obtain the depth coordinate and attitude information of each cargo in the 3D image based on the second camera coordinate system.
And transforming the two-dimensional coordinates of each top-level cargo based on the first camera coordinate system by applying the first homogeneous transformation matrix to obtain the two-dimensional coordinates of each top-level cargo based on the second camera coordinate system.
And ordering the depth coordinates of each cargo in the 3D image based on the second camera coordinate system to obtain a depth coordinate sequence, and determining the depth coordinates of each top cargo based on the second camera coordinate system according to the distance between adjacent depth coordinates in the depth coordinate sequence, wherein the two-dimensional coordinates of each top cargo based on the second camera coordinate system and the depth coordinates of each top cargo based on the second camera coordinate system correspondingly form the three-dimensional coordinates of each top cargo based on the second camera coordinate system.
And determining the position information of each top-level cargo in the second camera coordinate system according to each three-dimensional coordinate, and transforming the position information and the attitude information of each top-level cargo in the second camera coordinate system by applying the second homogeneous transformation matrix to obtain the pose information of each top-level cargo based on the robot base coordinate system.
Preferably, the processor 15 is specifically further configured to:
for each top-level cargo, the pose information based on the robot-based coordinate system is set as P b The gesture information is R b Establishing a motion equation,
equation of motion is expressed as
Wherein, b H f is a homogeneous transformation matrix of a flange coordinate system of the robot in a base coordinate system of the robot, f H ee the matrix is a homogeneous transformation matrix of a unstacking tool coordinate system of the robot in a flange coordinate system.
And planning a motion track of a robot arm corresponding to the flange coordinate system to each top-level cargo by using a motion equation, sequencing position information in pose information of all the top-level cargoes based on the robot base coordinate system to obtain a position sequence, and planning the position sequence into an unstacking sequence according to the sequence from small to large.
Preferably, the processor 15 is specifically configured to: according to the unstacking sequence, the robot arm is sequentially driven to move to each top-layer cargo along each movement track, so that the unstacking tool random robot arm arranged at the tail end of the robot arm in the unstacking tool coordinate system moves to each top-layer cargo, and the unstacking tool is driven to take out each top-layer cargo from the top layer of the cargo stack until all top-layer cargoes are taken out.
Preferably, the elevator 14 is also used to raise the 2D camera 141 and the 3D camera 142 to an initial height simultaneously when the top layer of the stack is the last layer of the stack.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit it to the precise forms disclosed; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (6)
1. A method of destacking a stack of goods comprising the steps of:
step 1, acquiring a 2D image obtained by shooting the goods stack by a 2D camera above the goods stack and a 3D image obtained by shooting the goods stack by a 3D camera above the goods stack;
step 2, determining pose information of each top-level cargo on the top level of the cargo stack based on a robot-based coordinate system according to the 2D image and the 3D image;
step 3, planning a motion track and an unstacking sequence of the robot to the goods stack according to pose information of all the top-level goods based on a robot base coordinate system;
step 4, controlling a robot to take out all the top-layer cargoes from the top layer of the cargo stack according to the motion trail and the unstacking sequence;
step 5, judging whether the top layer of the goods stack is the last layer of the goods stack, if not, jumping to execute the step 1, and if so, stopping unstacking;
the step 2 specifically includes:
step 2.1, letting the first camera coordinate system of the 2D camera be {o1-x1y1z1}, the second camera coordinate system of the 3D camera be {o2-x2y2z2}, and the robot base coordinate system be {ob-xbybzb}, and acquiring a first homogeneous transformation matrix of the first camera coordinate system in the second camera coordinate system and a second homogeneous transformation matrix of the second camera coordinate system in the robot base coordinate system;
step 2.2, detecting the 2D image to obtain the two-dimensional coordinates of each top-layer cargo based on the first camera coordinate system, and detecting the 3D image to obtain the depth coordinate and attitude information of each cargo in the 3D image based on the second camera coordinate system;
2.3, transforming the two-dimensional coordinates of each top-level cargo based on the first camera coordinate system by applying the first homogeneous transformation matrix to obtain the two-dimensional coordinates of each top-level cargo based on the second camera coordinate system;
step 2.4, sorting the depth coordinates of each cargo in the 3D image based on the second camera coordinate system to obtain a depth coordinate sequence, and determining the depth coordinate of each top-level cargo based on the second camera coordinate system according to the distance between adjacent depth coordinates in the depth coordinate sequence, wherein the two-dimensional coordinates of each top-level cargo based on the second camera coordinate system and the depth coordinate of each top-level cargo based on the second camera coordinate system correspondingly form the three-dimensional coordinates of each top-level cargo based on the second camera coordinate system;
step 2.5, determining the position information of each top-level cargo in the second camera coordinate system according to each three-dimensional coordinate, and transforming the position information and the posture information of each top-level cargo in the second camera coordinate system by applying the second homogeneous transformation matrix to obtain the pose information of each top-level cargo based on the robot base coordinate system;
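A minimal numpy sketch of the coordinate chain in steps 2.1 to 2.5. The two calibration matrices are made-up placeholders for the first and second homogeneous transformation matrices, and the 2D detection is treated as metric coordinates in the first camera frame, which is a simplifying assumption:

```python
import numpy as np

# Hypothetical calibration results (4x4 homogeneous matrices):
# H_12: first camera frame expressed in the second camera frame
# H_2b: second camera frame expressed in the robot base frame
H_12 = np.eye(4); H_12[:3, 3] = [0.10, 0.00, 0.00]   # 2D cam offset 10 cm in x
H_2b = np.eye(4); H_2b[:3, 3] = [0.50, 0.20, 1.50]   # 3D cam pose in base frame

def to_base(p_cam1_xy, depth_cam2):
    """Combine a 2D detection with its measured depth and map it to the base frame."""
    # lift the 2D point into homogeneous form in the first camera frame
    p1 = np.array([p_cam1_xy[0], p_cam1_xy[1], 0.0, 1.0])
    p2 = H_12 @ p1                       # into the second (3D) camera frame
    p2[2] = depth_cam2                   # attach the depth from the 3D camera
    return (H_2b @ p2)[:3]               # position in the robot base frame

pos_b = to_base((0.30, -0.10), 1.20)
```

The design point of the two-matrix chain is that the 2D camera only ever needs to be calibrated against the 3D camera, and the 3D camera against the robot base; no direct 2D-camera-to-robot calibration is required.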
the step 3 specifically includes:
step 3.1, letting the position information in the pose information of each top-level cargo based on the robot base coordinate system be P_b, letting the posture information in the pose information of each top-level cargo based on the robot base coordinate system be R_b, and establishing a motion equation;
the motion equation is expressed as
bH_f · fH_ee = [ R_b  P_b ; 0  1 ]
wherein bH_f is the homogeneous transformation matrix of the flange coordinate system of the robot in the robot base coordinate system, and fH_ee is the homogeneous transformation matrix of the unstacking tool coordinate system of the robot in the flange coordinate system;
step 3.2, planning, by using the motion equation, the motion track along which the robot arm corresponding to the flange coordinate system moves to each top-level cargo;
step 3.3, sorting the position information in the pose information of all the top-level cargoes based on the robot base coordinate system to obtain a position sequence, and planning the unstacking sequence from the position sequence in ascending order.
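The target flange pose in step 3 follows from the motion equation by right-multiplying the desired tool pose with the inverse of the tool transform. A sketch with a placeholder tool transform (the 15 cm offset is hypothetical, not taken from the patent):

```python
import numpy as np

def flange_target(R_b, P_b, H_f_ee):
    """Solve bH_f from the motion equation bH_f @ fH_ee = [R_b P_b; 0 1]."""
    T = np.eye(4)
    T[:3, :3] = R_b          # desired tool orientation in the base frame
    T[:3, 3] = P_b           # desired tool position in the base frame
    return T @ np.linalg.inv(H_f_ee)

# placeholder tool transform: gripper 15 cm below the flange along its z axis
H_f_ee = np.eye(4); H_f_ee[2, 3] = 0.15

# cargo pose: identity orientation, position (0.8, 0.1, 0.6) m in the base frame
target = flange_target(np.eye(3), np.array([0.8, 0.1, 0.6]), H_f_ee)
```

With the flange target known, the robot controller interpolates a track toward it; sorting the cargo positions then fixes the order in which those tracks are executed.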
2. A method of unstacking a stack of goods according to claim 1, wherein said step 4 comprises:
according to the unstacking sequence, sequentially driving the robot arm to move to each top-level cargo along each motion track, so that the unstacking tool in the unstacking tool coordinate system moves with the robot arm to each top-level cargo, and driving the unstacking tool to take out each top-level cargo from the top layer of the cargo stack until all the top-level cargoes are taken out.
3. A method of unstacking a stack of goods according to claim 1 or 2, wherein said step 5 comprises:
when the top layer of the goods stack is not the last layer of the goods stack, synchronously lowering the 2D camera and the 3D camera to the same height as the top layer of the goods stack, and jumping to execute the step 1;
and stopping unstacking when the top layer of the goods stack is the last layer of the goods stack.
4. A destacking system for a stack of goods, the destacking system comprising: conveyor belt, 2D camera, 3D camera, processor and robot;
the conveyor belt is used for conveying the goods stack to a working area of the robot;
the 2D camera is used for shooting a 2D image of the cargo stack from above the cargo stack;
the 3D camera is used for shooting a 3D image of the cargo stack from above the cargo stack;
the processor is used for acquiring the 2D image and the 3D image, determining pose information of each top-level cargo on the top layer of the cargo stack based on a robot base coordinate system according to the 2D image and the 3D image, and planning a motion track and an unstacking sequence of the robot moving toward the cargo stack according to the pose information of all the top-level cargoes based on the robot base coordinate system;
the robot is used for taking out all the top-level cargoes from the top layer of the cargo stack according to the motion track and the unstacking sequence;
the processor is further used for judging whether the top layer of the goods stack is the last layer of the goods stack, if not, continuing to unstacking, and if yes, stopping unstacking;
the processor is specifically configured to:
let the first camera coordinate system of the 2D camera be {o1-x1y1z1}, the second camera coordinate system of the 3D camera be {o2-x2y2z2}, and the robot base coordinate system be {ob-xbybzb}; acquiring a first homogeneous transformation matrix of the first camera coordinate system in the second camera coordinate system and a second homogeneous transformation matrix of the second camera coordinate system in the robot base coordinate system;
detecting the 2D image to obtain two-dimensional coordinates of each top-level cargo based on the first camera coordinate system, and detecting the 3D image to obtain the depth coordinates and posture information of each cargo in the 3D image based on the second camera coordinate system;
transforming the two-dimensional coordinates of each top-level cargo based on the first camera coordinate system by applying the first homogeneous transformation matrix to obtain the two-dimensional coordinates of each top-level cargo based on the second camera coordinate system;
sorting the depth coordinates of each cargo in the 3D image based on the second camera coordinate system to obtain a depth coordinate sequence, and determining the depth coordinates of each top-level cargo based on the second camera coordinate system according to the distances between adjacent depth coordinates in the depth coordinate sequence, wherein the two-dimensional coordinates and the depth coordinates of each top-level cargo based on the second camera coordinate system correspondingly form the three-dimensional coordinates of each top-level cargo based on the second camera coordinate system;
determining the position information of each top-level cargo in the second camera coordinate system according to each three-dimensional coordinate, and applying the second homogeneous transformation matrix to transform the position information and the posture information of each top-level cargo in the second camera coordinate system to obtain the pose information of each top-level cargo based on the robot base coordinate system;
the processor is specifically further configured to:
for each top-level cargo, letting the position information in the pose information based on the robot base coordinate system be P_b and the posture information be R_b, and establishing a motion equation;
the motion equation is expressed as
bH_f · fH_ee = [ R_b  P_b ; 0  1 ]
wherein bH_f is the homogeneous transformation matrix of the flange coordinate system of the robot in the robot base coordinate system, and fH_ee is the homogeneous transformation matrix of the unstacking tool coordinate system of the robot in the flange coordinate system;
planning a motion trail of the robot arm corresponding to the flange coordinate system to each top cargo by using the motion equation;
and sorting the position information in the pose information of all the top-level cargoes based on the robot base coordinate system to obtain a position sequence, and planning the unstacking sequence from the position sequence in ascending order.
5. The unstacking system of a stack of goods according to claim 4, wherein said processor is specifically configured to:
according to the unstacking sequence, sequentially driving the robot arm to move to each top-level cargo along each motion track, so that the unstacking tool in the unstacking tool coordinate system moves with the robot arm to each top-level cargo, and driving the unstacking tool to take out each top-level cargo from the top layer of the cargo stack until all the top-level cargoes are taken out.
6. A destacking system as in claim 4 or 5, further comprising an elevator on which the 2D camera and the 3D camera are mounted, the elevator being used for synchronously lowering the 2D camera and the 3D camera to the same height as the top layer of the stack when the top layer of the stack is not the last layer of the stack.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811082406.3A CN109436820B (en) | 2018-09-17 | 2018-09-17 | Destacking method and destacking system for goods stack |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109436820A CN109436820A (en) | 2019-03-08 |
CN109436820B true CN109436820B (en) | 2024-04-16 |
Family
ID=65530527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811082406.3A Active CN109436820B (en) | 2018-09-17 | 2018-09-17 | Destacking method and destacking system for goods stack |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109436820B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110422521B (en) * | 2019-07-17 | 2021-06-01 | 上海新时达机器人有限公司 | Robot side unstacking method and device for irregular random materials |
CN110642025B (en) * | 2019-09-26 | 2020-07-24 | 华中科技大学 | Stacking and unstacking device for automatic transfer of box body structure |
CN112978392B (en) * | 2019-12-13 | 2023-04-21 | 上海佳万智能科技有限公司 | Paperboard stack disassembling method |
CN112975943B (en) * | 2019-12-13 | 2022-06-28 | 广东弓叶科技有限公司 | Processing method and system for judging optimal grabbing height of robot clamping jaw |
CN111754515B (en) * | 2019-12-17 | 2024-03-01 | 北京京东乾石科技有限公司 | Sequential gripping method and device for stacked articles |
CN112077843B (en) * | 2020-08-24 | 2022-08-16 | 北京配天技术有限公司 | Robot graphical stacking method, computer storage medium and robot |
CN112520431A (en) * | 2020-11-23 | 2021-03-19 | 配天机器人技术有限公司 | Stacking calibration method and related device for stacking robot |
US11911801B2 (en) * | 2020-12-11 | 2024-02-27 | Intelligrated Headquarters, Llc | Methods, apparatuses, and systems for automatically performing sorting operations |
TWI746333B (en) * | 2020-12-30 | 2021-11-11 | 所羅門股份有限公司 | Destacking method and destacking system |
CN112509024B (en) * | 2021-02-08 | 2021-05-18 | 杭州灵西机器人智能科技有限公司 | Lifting device based mixed unstacking control method, device, equipment and medium |
CN113688704A (en) * | 2021-08-13 | 2021-11-23 | 北京京东乾石科技有限公司 | Item sorting method, item sorting device, electronic device, and computer-readable medium |
CN114030843B (en) * | 2021-10-27 | 2022-11-18 | 因格(苏州)智能技术有限公司 | Article circulation method and system |
CN114012720B (en) * | 2021-10-27 | 2022-12-16 | 因格(苏州)智能技术有限公司 | Robot |
CN114029250B (en) * | 2021-10-27 | 2022-11-18 | 因格(苏州)智能技术有限公司 | Article sorting method and system |
CN115159402B (en) * | 2022-06-17 | 2024-06-25 | 杭州海康机器人股份有限公司 | Goods placing and taking method and device, electronic equipment and machine-readable storage medium |
CN117485929B (en) * | 2023-12-29 | 2024-03-19 | 中国电力工程顾问集团西南电力设计院有限公司 | Unmanned material stacking and taking control system and method based on intelligent control |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11333770A (en) * | 1998-03-20 | 1999-12-07 | Kobe Steel Ltd | Loading position and attitude recognizing device |
CN1293752A (en) * | 1999-03-19 | 2001-05-02 | 松下电工株式会社 | Three-D object recognition method and pin picking system using the method |
CN104331894A (en) * | 2014-11-19 | 2015-02-04 | 山东省科学院自动化研究所 | Robot unstacking method based on binocular stereoscopic vision |
US9102055B1 (en) * | 2013-03-15 | 2015-08-11 | Industrial Perception, Inc. | Detection and reconstruction of an environment to facilitate robotic interaction with the environment |
CN105217324A (en) * | 2015-10-20 | 2016-01-06 | 上海影火智能科技有限公司 | A kind of novel de-stacking method and system |
CN106276325A (en) * | 2016-08-31 | 2017-01-04 | 长沙长泰机器人有限公司 | Van automatic loading system |
CN108313748A (en) * | 2018-04-18 | 2018-07-24 | 上海发那科机器人有限公司 | A kind of 3D visions carton de-stacking system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109436820B (en) | Destacking method and destacking system for goods stack | |
US11491654B2 (en) | Robotic system with dynamic pack adjustment mechanism and methods of operating same | |
US20240109199A1 (en) | Robotic system with error detection and dynamic packing mechanism | |
US20230016733A1 (en) | Robotic system for palletizing packages using real-time placement simulation | |
US9707682B1 (en) | Methods and systems for recognizing machine-readable information on three-dimensional objects | |
EP3169489B1 (en) | Real-time determination of object metrics for trajectory planning | |
JP6305213B2 (en) | Extraction device and method | |
JP6704157B1 (en) | Robot system with dynamic packing mechanism | |
JP6683333B1 (en) | Robot system for handling out-of-order arriving packages | |
JP2016222377A (en) | Cargo handling device and operation method thereof | |
CN115582827A (en) | Unloading robot grabbing method based on 2D and 3D visual positioning | |
JP2023525524A (en) | Identification of elements in the environment | |
CN113307042B (en) | Object unstacking method and device based on conveyor belt, computing equipment and storage medium | |
CN113800270B (en) | Robot control method and system for logistics unstacking | |
US11459221B2 (en) | Robot for stacking elements | |
US20220355474A1 (en) | Method and computing system for performing robot motion planning and repository detection | |
JP2018108896A (en) | Extracting device and method | |
US20210347617A1 (en) | Engaging an element | |
US20230025647A1 (en) | Robotic system with object update mechanism and methods for operating the same | |
US20240173866A1 (en) | Robotic system with multi-location placement control mechanism | |
CN115703238A (en) | System and method for robotic body placement | |
CN116835334A (en) | Disordered stacking method, disordered stacking device, disordered stacking medium and disordered stacking equipment based on 3D vision | |
CN115609569A (en) | Robot system with image-based sizing mechanism and method of operating the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||