CN216511500U - Object unstacking system - Google Patents

Object unstacking system

Info

Publication number
CN216511500U
CN216511500U (application CN202121317290.4U)
Authority
CN
China
Prior art keywords
camera
clamp
stack
upper computer
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202121317290.4U
Other languages
Chinese (zh)
Inventor
司林林
班宇
郑书强
丁有爽
邵天兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mech Mind Robotics Technologies Co Ltd
Original Assignee
Mech Mind Robotics Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mech Mind Robotics Technologies Co Ltd filed Critical Mech Mind Robotics Technologies Co Ltd
Priority to CN202121317290.4U
Application granted
Publication of CN216511500U
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The utility model discloses an object unstacking system. The object unstacking system comprises a first 3D camera, a conveying mechanism, and a first clamp arranged at the tail end of the conveying mechanism, wherein the conveying mechanism comprises a conveyor belt. The first 3D camera is used for acquiring a stack type image of a stack, and the shooting direction of the first 3D camera faces the side face of the stack. The first clamp is used for grabbing the corresponding object in the stack according to first object pose information sent by the upper computer and placing the grabbed object on the conveyor belt, wherein the first object pose information is determined by the upper computer through analysis of the stack type image. The conveyor belt, which is kept in a running state, conveys the objects grabbed by the first clamp. Objects in the stack are thereby grabbed accurately, the unstacking efficiency for objects in a transport box is effectively improved, and conveying the grabbed objects on the running conveyor belt effectively improves object conveying efficiency, saves time, and reduces cost.

Description

Object unstacking system
Technical Field
The utility model relates to the technical field of intelligent logistics, in particular to an object unstacking system.
Background
With the rapid development of society, the demand for logistics is increasing day by day, and the transport box, as the carrier for small and medium-sized goods, plays a very important role. Boxes in the logistics process need to be unstacked repeatedly at different logistics links. However, the existing unstacking method is mainly manual: workers carry the objects by hand, which is time-consuming, labor-intensive, and costly. An efficient and accurate unstacking scheme is therefore needed.
SUMMARY OF THE UTILITY MODEL
In view of the above, the present invention has been developed to provide an object unstacking system that overcomes, or at least partially solves, the above-mentioned problems.
According to an aspect of the present invention, there is provided an object unstacking system comprising: a first 3D camera, a conveying mechanism, and a first clamp arranged at the tail end of the conveying mechanism, wherein the conveying mechanism comprises a conveyor belt;
the stacking type image acquisition device comprises a first 3D camera, a second 3D camera and a third 3D camera, wherein the first 3D camera is used for acquiring a stacking type image of a stack, and the shooting direction of the first 3D camera faces the side face of the stack;
the first clamp is used for grabbing corresponding objects in the stack according to first object pose information sent by the upper computer and placing the grabbed objects on the conveyor belt, wherein the first object pose information is determined by analyzing and processing the stack type image by the upper computer;
and the conveyor belt is used for conveying the object grabbed by the first clamp, wherein the conveyor belt is in a running state.
Further, the object unstacking system further comprises: a second 3D camera and a second clamp arranged at the other end of the conveying mechanism;
the second 3D camera is used for acquiring an object image of the object transmitted by the conveyor belt;
the second clamp is used for: and grabbing objects on the conveyor belt according to second object pose information sent by the upper computer and placing the objects in the object frame, wherein the second object pose information is determined by analyzing and processing the object images by the upper computer.
Further, the object unstacking system further comprises: a second 3D camera and a palletizing robot having a tray, wherein the palletizing robot is located at the other end of the transport mechanism;
the second 3D camera is used for acquiring an object image of the object transmitted by the conveyor belt;
and the palletizing robot is used for controlling the tray to bear the objects transmitted by the conveyor belt and placing the objects in the object frame according to second object position and attitude information sent by the upper computer, wherein the second object position and attitude information is determined by analyzing and processing the object images by the upper computer.
Further, the first object pose information includes: first object position information and first object attitude information;
the object unstacking system further comprises: the plurality of direction adjusting mechanisms are used for adjusting the conveying mechanism according to the adjusting direction and the adjusting distance sent by the upper computer so that the conveying mechanism drives the first clamp to move to the grabbing position corresponding to the first object position information, wherein the adjusting direction and the adjusting distance are determined by the upper computer according to the first object position information;
the first clamp is further configured to: and forming a grabbing posture corresponding to the first object posture information to grab the corresponding object in the stack.
Further, the first clamp is a telescopic structure clamp.
Further, the first clamp includes: a suction cup or jaw; the second clamp includes: suction cups or clamping jaws.
Further, the sucker is a sponge sucker.
Further, the stack is formed by stacking the objects in the transport box.
Further, the object unstacking system further comprises: and the light source is used for carrying out illumination compensation on the first 3D camera.
Further, the conveying mechanism also comprises a shell, and the conveying belt is arranged on the shell;
the first 3D camera is disposed on a lower surface of the housing of the transport mechanism.
According to the scheme provided by the utility model, the stack type image of the stack is acquired by the first 3D camera whose shooting direction faces the side face of the stack, and the first clamp grabs the corresponding object in the stack according to the first object pose information sent by the upper computer and places the grabbed object on the conveyor belt, wherein the first object pose information is determined by the upper computer through analysis of the stack type image. Objects in the stack can thus be grabbed accurately, and the unstacking efficiency for objects in the transport box is effectively improved; transporting the grabbed objects on the conveyor belt, which is kept in a running state, effectively improves object conveying efficiency, saves time, and reduces cost.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the utility model. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a block diagram of an object unstacking system according to one embodiment of the utility model;
FIG. 2 shows a schematic of the physical structure of an object unstacking system.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the utility model are shown in the drawings, it should be understood that the utility model can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the utility model to those skilled in the art.
FIG. 1 shows a block diagram of an object unstacking system according to one embodiment of the utility model. As shown in fig. 1, the system includes: a first 3D camera 101, a transport mechanism 102 and a first gripper 103 arranged at the end of the transport mechanism, wherein the transport mechanism 102 comprises a conveyor belt 104.
A first 3D camera 101 for acquiring a stack-type image of the stack, wherein the shooting direction of the first 3D camera is towards the side of the stack.
Specifically, when there is a need for unstacking an object, a stack-type image of a stack may be acquired using a first 3D camera, the stack-type image reflecting a stacking state of the object in the current stack, wherein the stack-type image is an RGB image.
In this embodiment, the first 3D camera is disposed at a side position, with its shooting direction facing the side of the stack; that is, the first 3D camera captures a stack type image of the side of the stack. Optionally, the stack is formed by objects stacked in a transport container, for example a container or a van. Since the door of a container or van is generally on one side, a side door is usually opened for unstacking; disposing the first 3D camera at the side position therefore better serves the unstacking.
And the first clamp 103 is used for grabbing corresponding objects in the stack according to first object pose information sent by the upper computer and placing the grabbed objects on the conveyor belt, wherein the first object pose information is determined by the upper computer through stack image analysis processing.
The first clamp is arranged at the tail end of the conveying mechanism and is used for grabbing the objects in the stack. Optionally, the first clamp is a telescopic-structure clamp that can extend forward to grab the corresponding object in the stack when an object needs to be grabbed, and can retract while moving downward toward the conveyor belt when the grabbed object needs to be placed on the conveyor belt, so that the conveyor belt carries the object away. When the first clamp places the object on the conveyor belt, it retracts and moves downward so that it lies in the same plane as the conveyor belt and does not block the first 3D camera from acquiring the stack type image.
Optionally, the first clamp comprises a suction cup or clamping jaws. When the first clamp is a suction cup, it grabs by suction: after the clamp is activated, the inside of the suction cup is in a vacuum state, so the corresponding object can be sucked up to complete the grabbing operation. When the first clamp is a clamping jaw, it grabs by clamping: after the clamp is activated, the jaws close and grip the object. More specifically, the suction cup may be a sponge suction cup internally provided with a vacuum generator, a check valve, and a control valve, so that the grabbing effect is not affected even if the object is not completely adsorbed, and objects of various shapes can be sucked effectively. Preferably, the first clamp is a suction cup.
After determining the first object pose information of each object in the stack through analysis, the upper computer determines which object is to be grabbed this time and sends the corresponding first object pose information to the first clamp; after receiving it, the first clamp grabs the corresponding object in the stack according to the first object pose information. When analyzing and determining the first object pose information, the upper computer may use prior-art methods: the stack type image and the depth image are processed to obtain the point cloud corresponding to the stack type image, the point cloud corresponding to each object in the stack is determined by a point cloud segmentation technique, wherein the point cloud contains the pose information of each 3D point, and the first object pose information of an object is determined by matching the point cloud corresponding to the object with a preset template point cloud. In addition, the point cloud can be generated from sensors such as a laser detector, a visible-light detector such as an LED, an infrared detector, or a radar detector.
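The point-cloud-to-pose step above is left to prior art. As a hedged illustration only, the sketch below estimates a grasp position (the centroid of a segmented box face) and a dominant orientation axis using plain Python; the function names and the power-iteration approach are assumptions for illustration, not the method the upper computer necessarily uses.

```python
# Illustrative sketch: turn one segmented point cloud (a box face) into a
# grasp position (centroid) and orientation (axis of greatest spread).
# All names here are hypothetical; the patent does not specify this algorithm.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def covariance(points, c):
    # 3x3 covariance matrix of the centered points
    cov = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[i] - c[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                cov[i][j] += d[i] * d[j] / len(points)
    return cov

def principal_axis(cov, iters=50):
    # Power iteration converges to the eigenvector of the largest eigenvalue,
    # i.e. the direction along which the face is most elongated.
    v = [1.0, 0.0, 0.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# A flat box face in the x-y plane, elongated along x:
face = [(x * 0.1, y * 0.05, 0.0) for x in range(10) for y in range(4)]
c = centroid(face)
axis = principal_axis(covariance(face, c))
```

In a real deployment a library such as Open3D would supply the segmentation and template matching; the sketch only shows the geometric core of converting a segmented point cloud into pose information.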
This embodiment uses the first clamp arranged at the tail end of the conveying mechanism to grab the corresponding objects in the stack; by grabbing the objects in the stack one after another, unstacking is achieved.
And a conveyor belt 104 for transporting the object gripped by the first gripper, wherein the conveyor belt is in operation.
In this embodiment, after grabbing an object, the first clamp places it on the conveyor belt, which is kept in motion at all times so that the grabbed object is transported promptly. Grabbing the objects with the first clamp arranged at the tail end of the conveying mechanism and transporting them with the conveyor belt of the conveying mechanism realizes accurate unstacking and object conveying.
In an alternative embodiment of the utility model, in order to achieve accurate palletization, the object unstacking system further comprises: a second 3D camera and a second gripper (not shown) arranged at the other end of the transport mechanism;
the second 3D camera is used for acquiring an object image of the object transmitted by the conveyor belt; the second 3D camera can be arranged on the upper surface of the other end of the conveying mechanism, the second 3D camera is arranged at a certain distance threshold from the other end of the conveying belt, and the shooting direction of the second 3D camera faces downwards; the object image collected by the second 3D camera is an RGB image.
The second clamp is used for: and grabbing objects on the conveyor belt according to second object pose information sent by the upper computer and placing the objects in the object frame, wherein the second object pose information is determined by analyzing and processing the object images by the upper computer.
A second clamp is arranged at the other end of the conveying mechanism and is used for grabbing the object on the conveyor belt. Optionally, the second clamp comprises a suction cup or clamping jaws. When the second clamp is a suction cup, it grabs by suction: after the clamp is activated, the inside of the suction cup is in a vacuum state, so the corresponding object can be sucked up to complete the grabbing operation. When the second clamp is a clamping jaw, it grabs by clamping: after the clamp is activated, the jaws close and grip the object. More specifically, the suction cup may be a sponge suction cup internally provided with a vacuum generator, a check valve, and a control valve, so that the grabbing effect is not affected even if the object is not completely adsorbed, and objects of various shapes can be sucked effectively. Preferably, the second clamp is a suction cup.
The upper computer analyzes the object image to determine the second object pose information of the object on the conveyor belt and sends it to the second clamp; after receiving the second object pose information, the second clamp grabs the object on the conveyor belt accordingly and places the grabbed object in the material frame, for example stacking it at the corresponding position of a cage trolley or a tray. It should be noted that the upper computer may use a prior-art analysis method to determine the second object pose information of the object on the conveyor belt; this embodiment does not claim the specific process by which the upper computer analyzes the object image.
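The patent does not specify how the upper computer transmits pose information to a clamp. Purely as an illustrative assumption, second object pose information could be packaged as a small serializable record like the sketch below; the field names and the JSON transport are hypothetical, not part of the utility model.

```python
# Hypothetical pose message from the upper computer to a clamp controller.
# Field names and the JSON wire format are illustrative assumptions only.
import json
from dataclasses import asdict, dataclass

@dataclass
class ObjectPose:
    x: float      # grasp position on the conveyor belt, metres
    y: float
    z: float
    roll: float   # grasp orientation, radians
    pitch: float
    yaw: float

def encode(pose: ObjectPose) -> str:
    """Serialize a pose for transmission (e.g. over TCP) to the clamp."""
    return json.dumps(asdict(pose))

def decode(message: str) -> ObjectPose:
    """Rebuild the pose on the clamp side."""
    return ObjectPose(**json.loads(message))

sent = ObjectPose(0.45, 0.10, 1.20, 0.0, 0.0, 1.57)
received = decode(encode(sent))
```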
In an alternative embodiment of the utility model, the object unstacking system further comprises: a second 3D camera and a palletizing robot having a tray, wherein the palletizing robot is located at the other end of the transport mechanism;
the second 3D camera is used for acquiring an object image of the object transmitted by the conveyor belt;
and the palletizing robot is used for controlling the tray to bear the objects transmitted by the conveyor belt and placing the objects in the object frame according to second object position and attitude information sent by the upper computer, wherein the second object position and attitude information is determined by analyzing and processing the object images by the upper computer.
Specifically, the palletizing robot and the conveying mechanism are two independent parts, and the palletizing robot, which has a tray, is located at the other end of the conveying mechanism. After determining the second object pose information of the objects conveyed by the conveyor belt, the upper computer sends it to the palletizing robot, which controls the tray to move to the corresponding position of the conveyor belt according to the second object pose information so as to receive the objects conveyed by the conveyor belt and place them in the material frame, for example stacking them at the corresponding position of a cage trolley or a tray.
In an optional embodiment of the present invention, the first object pose information comprises: first object position information and first object attitude information;
the object unstacking system further comprises: the plurality of direction adjusting mechanisms are used for adjusting the conveying mechanism according to the adjusting direction and the adjusting distance sent by the upper computer so that the conveying mechanism drives the first clamp to move to the grabbing position corresponding to the first object position information, wherein the adjusting direction and the adjusting distance are determined by the upper computer according to the first object position information;
the first clamp is further configured to: and forming a grabbing posture corresponding to the first object posture information to grab the corresponding object in the stack.
Specifically, each object has a certain size and occupies a certain volume, and the stack changes as objects are grabbed; therefore, after each object is grabbed, the conveying mechanism needs to be moved so that the objects in the stack can be grabbed continuously. Since the first object pose information differs from object to object, it comprises first object position information, which reflects the position of the object, and first object posture information, which reflects the orientation of the object. Accordingly, the upper computer can determine the adjustment directions and adjustment distances of the plurality of direction adjusting mechanisms according to the first object position information and send them to the direction adjusting mechanisms, which then adjust the conveying mechanism so that it drives the first clamp to the grabbing position corresponding to the first object position information. The adjustment direction represents the moving direction of the conveying mechanism, such as upward or downward, and/or leftward or rightward. The upper computer may use a prior-art calculation method to determine the adjustment directions and adjustment distances; this embodiment does not claim the specific determination process.
The first object posture information reflects the orientation of the object; to grab effectively and stably, the first clamp needs to form a grabbing posture corresponding to the first object posture information so as to grab the corresponding object in the stack.
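The computation of adjustment directions and distances is likewise left to prior art. A minimal sketch, under assumed axis and direction names, of decomposing the difference between the clamp's current position and the grabbing position into per-mechanism commands:

```python
# Illustrative sketch only: split the clamp-to-target offset into per-axis
# (axis, direction, distance) commands for the direction adjusting mechanisms.
# The axis/direction names and the deadband tolerance are assumptions.

def adjustment_commands(clamp_pos, target_pos, deadband=0.005):
    """Return (axis, direction, distance) commands in metres."""
    names = {"x": ("right", "left"),
             "y": ("up", "down"),
             "z": ("forward", "backward")}
    commands = []
    for axis, cur, tgt in zip("xyz", clamp_pos, target_pos):
        delta = tgt - cur
        if abs(delta) < deadband:  # already close enough on this axis
            continue
        pos_name, neg_name = names[axis]
        direction = pos_name if delta > 0 else neg_name
        commands.append((axis, direction, abs(delta)))
    return commands

# Clamp at (0, 0.5, 1.0) m, grabbing position at (0.2, 0.5, 0.4) m:
cmds = adjustment_commands((0.0, 0.5, 1.0), (0.2, 0.5, 0.4))
```

The deadband keeps the mechanisms from chasing sub-millimetre noise in the pose estimate; its value here is an arbitrary assumption.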
In an optional embodiment of the utility model, as objects are grabbed from the stack, the objects that remain sit ever deeper inside the transport box, where the light is poor. To enable the first 3D camera to acquire a clear stack type image, illumination compensation must be performed for the first 3D camera while it acquires the stack type image. Therefore, the object unstacking system further comprises: a light source for performing illumination compensation on the first 3D camera, wherein the light source may be disposed below the first 3D camera.
In an optional embodiment of the present invention, the conveying mechanism further comprises a housing, the conveyor belt is disposed on the housing, and a distance is provided between a lower surface of the conveyor belt and an upper surface of a bottom of the housing to ensure a normal operation of the conveyor belt; the first 3D camera is disposed on a lower surface of a housing of the transport mechanism. When the conveyor belt is in the operating state, the first 3D camera does not move with the conveyor belt.
Fig. 2 shows a schematic physical structure of an object unstacking system. This is one implementation of the object unstacking system, and it should be emphasized that the system is not limited to the physical structure shown in fig. 2. As shown in fig. 2, the first 3D camera 101 is fixedly disposed on the lower surface of the housing of the conveying mechanism 102. Alternatively, the first 3D camera and the conveying mechanism may be two independent components, with the first 3D camera disposed below the conveying mechanism so that its position can be adjusted according to the shooting requirement; preferably, however, the first 3D camera is fixedly disposed on the lower surface of the housing of the conveying mechanism 102. If the first 3D camera is fixedly arranged on the lower surface of the conveying mechanism, its position must ensure that its field of view covers the whole stack. Because the conveying mechanism is long, the first 3D camera should remain within a distance threshold of the stack to guarantee the imaging quality of the collected stack type image; if the shooting position is too far away, the imaging effect is poor. For example, the first 3D camera may be placed 4 m to 4.5 m from the stack: when fixed on the lower surface of the conveying mechanism, it is disposed at any position within 4 m to 4.5 m, for example 4 m, from the tail end of the conveying mechanism where the first clamp is mounted. The tail end of the conveying mechanism can advance as objects are grabbed from the stack, keeping the first 3D camera about 4 m from the stack; the conveying mechanism may even extend deep into the transport box, for example 3 m deep, which is merely illustrative and not limiting. In fig. 2, the conveying mechanism is mounted on an AGV, which can move according to actual needs.
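The 4 m to 4.5 m working distance in the example can be expressed as a simple placement check; the function names and the single target distance below are illustrative assumptions, not values claimed by the utility model.

```python
# Illustrative placement check for the first 3D camera: good imaging requires
# the camera-to-stack distance to stay inside a window (4.0-4.5 m in the
# patent's example). Names and the target distance are assumptions.

def camera_distance_ok(camera_to_stack_m, min_m=4.0, max_m=4.5):
    """True if the camera-to-stack distance lies in the usable imaging window."""
    return min_m <= camera_to_stack_m <= max_m

def advance_needed(camera_to_stack_m, target_m=4.0):
    """Distance (m) the tail end should advance toward the stack (positive)
    or retreat (negative) to restore the target working distance."""
    return camera_to_stack_m - target_m

# After a layer of boxes is removed the stack face recedes, so the
# mechanism advances to restore the 4 m working distance:
ok = camera_distance_ok(4.2)
step = advance_needed(4.6)
```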
A first clamp 103 arranged at the tail end of the conveying mechanism grabs corresponding objects in the stack according to first object pose information sent by an upper computer and places the grabbed objects on a conveyor belt 104, and the conveyor belt 104 conveys the objects grabbed by the first clamp, wherein the conveyor belt is in a running state.
According to the system provided by the embodiment of the utility model, the stack type image of the stack is acquired by the first 3D camera whose shooting direction faces the side face of the stack, and the first clamp grabs the corresponding object in the stack according to the first object pose information sent by the upper computer and places the grabbed object on the conveyor belt, wherein the first object pose information is determined by the upper computer through analysis of the stack type image. Objects in the stack can thus be grabbed accurately, and the unstacking efficiency for objects in the transport box is effectively improved; transporting the grabbed objects on the conveyor belt, which is kept in a running state, effectively improves object conveying efficiency, saves time, and reduces cost.
The modules and circuits mentioned in the utility model are all implemented by hardware circuits. Although some of the modules and circuits integrate software, what the utility model protects is the hardware circuit integrating the corresponding functions of that software, not the software itself.
It will be appreciated by those skilled in the art that the arrangement of devices shown in the figures or embodiments is merely schematic and representative of a logical arrangement. Where modules shown as separate components may or may not be physically separate, components shown as modules may or may not be physical modules.
Finally, it should be noted that the above embodiments are only examples of the utility model. Those skilled in the art can of course make modifications and variations to the utility model, and such modifications and variations are considered protected by the utility model provided they fall within the scope of the claims of the utility model and their equivalents.

Claims (10)

1. An object unstacking system comprising: the device comprises a first 3D camera, a conveying mechanism and a first clamp arranged at the tail end of the conveying mechanism, wherein the conveying mechanism comprises a conveying belt;
the stacking type image acquisition device comprises a first 3D camera, a second 3D camera and a third 3D camera, wherein the first 3D camera is used for acquiring a stacking type image of a stack, and the shooting direction of the first 3D camera faces to the side face of the stack;
the first clamp is used for grabbing corresponding objects in the stack according to first object pose information sent by the upper computer and placing the grabbed objects on the conveyor belt, wherein the first object pose information is determined by the upper computer through analysis processing of the stack type images;
and the conveyor belt is used for conveying the object grabbed by the first clamp, wherein the conveyor belt is in a running state.
2. The system of claim 1, wherein the object unstacking system further comprises: the second 3D camera and a second clamp are arranged at the other end of the conveying mechanism;
the second 3D camera is used for acquiring an object image of the object transmitted by the conveyor belt;
the second clamp is used for: and grabbing objects on the conveyor belt according to second object pose information sent by the upper computer and placing the objects in the object frame, wherein the second object pose information is determined by analyzing and processing the object images by the upper computer.
3. The system of claim 1, wherein the object unstacking system further comprises: a second 3D camera and a palletizing robot having a tray, wherein the palletizing robot is located at the other end of the transfer mechanism;
the second 3D camera is used for acquiring an object image of the object transmitted by the conveyor belt;
and the palletizing robot is used for controlling the tray to bear the objects transmitted by the conveyor belt and placing the objects in the object frame according to second object position and attitude information sent by the upper computer, wherein the second object position and attitude information is determined by the upper computer through the analysis and the processing of the object images.
4. The system according to any one of claims 1-3, wherein the first object pose information includes: first object position information and first object attitude information;
the object unstacking system further comprises: the plurality of direction adjusting mechanisms are used for adjusting the conveying mechanism according to an adjusting direction and an adjusting distance sent by the upper computer so that the conveying mechanism drives the first clamp to move to a grabbing position corresponding to the first object position information, wherein the adjusting direction and the adjusting distance are determined by the upper computer according to the first object position information;
the first clamp is further to: and forming a grabbing posture corresponding to the first object posture information to grab the corresponding object in the stack.
5. The system of claim 2, wherein the first clamp is a telescoping structure clamp.
6. The system of claim 5, wherein the first clamp comprises: a suction cup or jaw; the second clamp includes: suction cups or clamping jaws.
7. The system of claim 6, wherein the suction cup is a sponge suction cup.
8. The system of any of claims 1-3, wherein the stack is formed from stacked objects within a shipping box.
9. The system of any of claims 1-3, wherein the object unstacking system further comprises: a light source for performing illumination compensation on the first 3D camera.
10. The system of any of claims 1-3, wherein the conveyor mechanism further comprises a housing, the conveyor belt being disposed on the housing;
the first 3D camera is disposed on a lower surface of a housing of the transport mechanism.
CN202121317290.4U 2021-06-11 2021-06-11 Object unstacking system Active CN216511500U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202121317290.4U CN216511500U (en) 2021-06-11 2021-06-11 Object unstacking system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202121317290.4U CN216511500U (en) 2021-06-11 2021-06-11 Object unstacking system

Publications (1)

Publication Number Publication Date
CN216511500U true CN216511500U (en) 2022-05-13

Family

ID=81460769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202121317290.4U Active CN216511500U (en) 2021-06-11 2021-06-11 Object unstacking system

Country Status (1)

Country Link
CN (1) CN216511500U (en)

Similar Documents

Publication Publication Date Title
CN109178960B (en) Intelligent stacking system and method suitable for different specification carton packages
CN109279373B (en) Flexible unstacking and stacking robot system and method based on machine vision
US11840411B2 (en) Vision-assisted robotized depalletizer
JP7317112B2 (en) A robotic system for picking up, sorting, and arranging multiple random, novel objects
CN109230580B (en) Unstacking robot system and unstacking robot method based on mixed material information acquisition
CN112047113B (en) 3D visual stacking system and method based on artificial intelligence technology
US9492924B2 (en) Moveable apparatuses having robotic manipulators and conveyors to facilitate object movement
CN109969781B (en) Taking-out device, transfer device, taking-out method, and control device
US10239210B2 (en) Vision-assisted system and method for picking of rubber bales in a bin
US11240392B2 (en) Systems and methods relating to document and fastener identification
CN113351522B (en) Article sorting method, device and system
CN109513630B (en) Package sorting system, control method thereof and storage medium
CN113307042B (en) Object unstacking method and device based on conveyor belt, computing equipment and storage medium
CN113602799B (en) Airport luggage case carrying system and control method thereof
CN216511500U (en) Object unstacking system
CN115848715A (en) Disordered sorting robot, system and method
CN211359675U (en) Sheet-like article sorting system
CN112004616A (en) Mechanical device for obtaining sheet-like article
JP7264387B2 (en) Robotic gripper assembly for openable objects and method for picking objects
CN218743278U (en) Logistics automatic sorting equipment based on three-dimensional vision
CN211444118U (en) Mechanical device for obtaining sheet-like article
US20230071488A1 (en) Robotic system with overlap processing mechanism and methods for operating the same
CN115485216A (en) Robot multi-surface gripper assembly and method of operating the same
CN117104870A (en) Equipment for grabbing unordered stacked soles by mechanical arm
CN117696445A (en) Efficient express sorting method based on mechanical arm

Legal Events

Date Code Title Description
GR01 Patent grant