CN117864806A - Autonomous unloading method of trolley and autonomous unloading trolley - Google Patents



Publication number
CN117864806A
Authority
CN
China
Prior art keywords
grabbing
trolley
axis
carriage
cart
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410181189.2A
Other languages
Chinese (zh)
Inventor
李华
王义山
李海滨
刘向
姚超捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sainade Technology Co ltd
Original Assignee
Sainade Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sainade Technology Co ltd filed Critical Sainade Technology Co ltd
Priority to CN202410181189.2A priority Critical patent/CN117864806A/en
Publication of CN117864806A publication Critical patent/CN117864806A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The invention provides an autonomous unloading method for a trolley and an autonomous unloading trolley. The autonomous unloading method comprises the following steps: the trolley navigates automatically to the carriage door position according to the positional relationship between the trolley and the carriage; the depth direction of the carriage is defined as the X axis, the width direction as the Y axis, and the height direction as the Z axis; images inside the carriage are acquired by a 3D vision system, and image recognition and point cloud segmentation are performed on the acquired images to obtain the size information and position information of each packing box; the center coordinates of the grabbing surfaces are calculated from the size information and position information of each packing box; the center coordinates of all grabbing surfaces are sorted to define the grabbing order of the packing boxes, the sorting comprising: sorting by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction; and the mechanical arm is controlled to grab the packing boxes in sequence according to the sorting result and the distance between the trolley and the packing boxes.

Description

Autonomous unloading method of trolley and autonomous unloading trolley
Technical Field
The invention relates to the technical field of logistics, in particular to an autonomous unloading method of a trolley and an autonomous unloading trolley.
Background
In large logistics centers, wharfs, cold-chain transport, and other box-type container loading and unloading scenarios, the work is at present done mainly by hand, and the goods must be stacked neatly. Manual loading and unloading, however, copes poorly with the wide variety of goods. Moreover, loading sites and carriages are usually not air-conditioned: inside a narrow carriage that is cold in winter and hot in summer, workers must keep carrying goods while odor, temperature, and dust make the work unpleasant; and because the goods are numerous and heavy and the task must be completed quickly and without pause, almost every worker develops ailments from the long-term labor. On the enterprise side, the annual labor cost of loading and unloading at a manufacturing enterprise above a certain scale runs to tens of millions of yuan and rises year by year.
Therefore, how to realize intelligent loading and unloading is a focus of attention for those skilled in the art.
Disclosure of Invention
The invention aims to provide an autonomous unloading method of a trolley and an autonomous unloading trolley, which can realize intelligent autonomous unloading.
In order to achieve the above object, the present invention provides an autonomous unloading method of a trolley, comprising:
the trolley automatically navigates and moves to the position of a carriage door according to the position relation between the trolley and the carriage;
defining the depth direction of a carriage as an X axis, the width direction of the carriage as a Y axis and the height direction of the carriage as a Z axis; acquiring images in a carriage through a 3D vision system, and carrying out image recognition and point cloud segmentation on the acquired images so as to obtain size information and position information of each packing box;
calculating the center coordinates of the grabbing surfaces according to the size information and the position information of each packing box;
sorting the center coordinates of all the grabbing surfaces to define the grabbing order of the packing boxes;
wherein the sorting comprises: sorting by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction;
and controlling the mechanical arm to sequentially grasp the packaging boxes according to the sorting result and the distance between the trolley and the packaging boxes.
In an alternative scheme, the method for obtaining the size information and the position information of each packing box comprises the following steps: and carrying out image recognition and point cloud segmentation on the acquired images so as to obtain the size information and the position information of each packing box.
In an alternative scheme, the method for acquiring images in the carriage comprises: each time a picture of the current field of view is taken, automatically recording the position information of each packing box, and adjusting the field-of-view angle for the next shot according to the position information.
In an alternative scheme, the method for grabbing the packaging box by the mechanical arm comprises the following steps: when the height of the packaging box to be grasped is greater than a preset threshold value, controlling the mechanical arm to grasp from the side face of the packaging box; when the height of the packaging box to be grabbed is not greater than a preset threshold value, the mechanical arm is controlled to grab from the upper surface of the packaging box.
In an alternative solution, before the sorting the central coordinates of all the grabbing surfaces, the method further includes: when the height of the packaging box to be grabbed is larger than the preset threshold value, filtering out the point cloud of the packaging box with the height below the preset threshold value through point cloud filtering;
and when the height of the packaging box to be grabbed is not greater than the preset threshold value, filtering out the point cloud of the packaging box with the height above the preset threshold value through point cloud filtering.
In an alternative scheme, when the number of the packing boxes which are grabbed at one time is one, the center coordinates of the grabbing surfaces are the center coordinates of the grabbing surfaces of the single packing boxes; when the number of the packing boxes which are grabbed at one time is larger than one, the center coordinates of the grabbing surfaces are the center coordinates of the grabbing surfaces of the whole packing boxes.
In an alternative, after the trolley moves to the carriage door position, the method further includes: and obtaining the distance between the trolley and the outermost packaging box, and adjusting the position of the trolley according to the grabbing distance of the mechanical arm.
In an alternative scheme, the method for obtaining the distance between the trolley and the outermost packaging box comprises the following steps: and acquiring images of all the packing boxes at the outermost side, processing to obtain point cloud data, and obtaining the distance between the trolley and the packing boxes at the outermost side according to the point cloud data.
The invention also provides an autonomous unloading trolley, comprising:
a trolley body;
the mechanical arm is rotatably arranged on the trolley main body;
the grabbing unit is arranged at the tail end of the mechanical arm and used for grabbing the packaging box;
the 3D vision module is arranged on the trolley main body and used for acquiring images in a carriage and carrying out point cloud processing on the acquired images to obtain the size and position information of each packing box;
the calculating and sorting module is in communication connection with the 3D vision module and calculates the center coordinates of the grabbing surfaces according to the size and position information of each packing box; it sorts the center coordinates of all grabbing surfaces to define the grabbing order of the packing boxes; wherein the sorting comprises: sorting by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction;
and the control module is used for controlling the mechanical arm to sequentially grasp the packaging boxes according to the sorting result and the distance between the trolley and the packaging boxes.
In an alternative scheme, the grabbing unit is a vacuum chuck, and the vacuum chuck comprises a plurality of suction nozzles;
there may be one vacuum chuck or a plurality of vacuum chucks working independently, and the control module controls the number of working vacuum chucks according to the grabbing size.
In an alternative scheme, the autonomous unloading trolley further comprises a support, the support rotates coaxially with the mechanical arm, the 3D vision module comprises a first binocular camera and a second binocular camera, and the first binocular camera and the second binocular camera are arranged on the support up and down.
The invention has the beneficial effects that:
according to the intelligent unloading method, the size information and the position information of each packing box are obtained through obtaining the images in the carriage, the packing boxes are grabbed according to the preset grabbing rules, and intelligent unloading of the packing boxes is achieved.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the invention.
Fig. 1 is a flowchart of an autonomous unloading method of a trolley according to an embodiment of the invention.
Fig. 2 is a schematic structural view of an autonomous unloading trolley according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a discharge sequence of a packing box according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific examples. The advantages and features of the present invention will become more apparent from the following description and drawings; it should be understood, however, that the inventive concept may be embodied in many different forms and is not limited to the specific embodiments set forth herein. The drawings are in a very simplified form and not to precise scale, merely for convenience and clarity in aiding the description of embodiments of the invention.
It will be understood that when an element or layer is referred to as being "on," "adjacent," "connected to," or "coupled to" another element or layer, it can be directly on, adjacent, connected, or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being "directly on," "directly adjacent to," "directly connected to," or "directly coupled to" another element or layer, there are no intervening elements or layers present. It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present invention.
Spatially relative terms, such as "under," "below," "beneath," "above," "over," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use and operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements or features described as "under" or "beneath" other elements would then be oriented "on" the other elements or features. Thus, the exemplary terms "below" and "under" may include both an upper and a lower orientation. The device may be otherwise oriented (rotated 90 degrees or other orientations) and the spatially relative descriptors used herein interpreted accordingly.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
Example 1
Referring to fig. 1, the embodiment provides an autonomous unloading method of a trolley, which includes:
the trolley automatically navigates and moves to the position of a carriage door according to the position relation between the trolley and the carriage;
defining the depth direction of a carriage as an X axis, the width direction of the carriage as a Y axis and the height direction of the carriage as a Z axis; acquiring images in a carriage through a 3D vision system, and carrying out image recognition and point cloud segmentation on the acquired images so as to obtain size information and position information of each packing box;
calculating the center coordinates of the grabbing surfaces according to the size information and the position information of each packing box;
sorting the center coordinates of all the grabbing surfaces to define the grabbing order of the packing boxes;
wherein the sorting comprises: sorting by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction;
and controlling the mechanical arm to grasp the packaging box according to the sorting result and the distance between the trolley and the packaging box.
Specifically, the trolley is provided with a laser radar navigation device: it navigates by the laser radar, monitors changes in the surrounding environment in real time, autonomously plans an optimal path, and moves to the carriage door position. The trolley is also provided with an infrared sensing system that detects whether anyone is nearby, so that no one is injured during operation. After the specified position is reached, an image of each outermost packing box in the carriage is acquired and processed into point cloud data, the distance between the trolley and the outermost packing boxes is obtained from the point cloud data, and the position of the trolley is then finely adjusted according to the grabbing distance of the mechanical arm, so that the grabbing range of the mechanical arm covers the carriage width without the trolley having to move left or right, and all the packing boxes can be grabbed.
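The distance measurement described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the segmented point cloud is assumed to be expressed in the trolley's coordinate frame, with X pointing into the carriage (the depth axis defined earlier).

```python
def distance_to_outermost_boxes(points):
    """Smallest X (depth) value among the segmented box points, i.e. the
    distance from the trolley to the nearest face of the outermost boxes.

    points: iterable of (x, y, z) tuples in the trolley frame.
    """
    return min(p[0] for p in points)
```

The trolley's fine position adjustment then reduces to comparing this distance with the mechanical arm's grabbing distance.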
After the position is adjusted, an image inside the carriage is acquired by the 3D vision system, and image recognition and point cloud segmentation are performed on the acquired image to obtain the size information and position information of each packing box. Because the field of view is limited, an omnidirectional image of the carriage interior cannot be acquired at once; therefore, each time a picture of the current field of view is taken, the position information of each packing box is automatically recorded, and the field-of-view angle for the next shot is adjusted according to that information. Shooting and image processing are carried out while grabbing: when a grab is completed, the image processing has also finished and the next grabbing coordinate is available, so the next grab can start immediately. This parallel processing improves working efficiency.
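The capture-while-grabbing pipeline can be sketched as below. This is a hedged illustration of the parallel scheme, not the patent's code; `capture_image`, `process_image`, and `execute_grasp` are hypothetical stand-ins for the vision and arm-control routines.

```python
from concurrent.futures import ThreadPoolExecutor

def unload(capture_image, process_image, execute_grasp, n_boxes):
    """While the arm executes the current grab, the next photo is taken and
    processed, so the following target coordinate is ready immediately."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        # the first scan happens before any grabbing
        future = pool.submit(lambda: process_image(capture_image()))
        for _ in range(n_boxes):
            target = future.result()          # coordinates from the last scan
            # kick off the next capture/processing in parallel with this grab
            future = pool.submit(lambda: process_image(capture_image()))
            execute_grasp(target)
```

Because processing of frame n+1 overlaps with grab n, the arm never idles waiting for the vision system once the first frame is processed.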
Depending on the setting, the mechanical arm can grab either one packing box at a time or several packing boxes simultaneously. When one packing box is grabbed at a time, the center coordinates of the grabbing surface are those of the single packing box's grabbing surface; when more than one packing box is grabbed at a time, the center coordinates of the grabbing surface are those of the grabbing surface of the packing boxes as a whole.
After the center coordinates of all grabbing surfaces are obtained, they are sorted according to a preset grabbing rule. The depth direction of the carriage is defined as the X axis, the width direction as the Y axis, and the height direction as the Z axis. The grabbing rule is: sort by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction. Specifically, taking the trolley's position as the main viewing angle, the packing box at the upper left corner of the carriage is grabbed first. Packing boxes in the same row have the same X-axis coordinate and the same Z-axis coordinate, where "the same" allows a certain error, because boxes of different sizes, or boxes placed irregularly, mean the surfaces of one row are never perfectly flush. If the error is designed to be 10 millimeters, then when two coordinate values along an axis differ by less than this during sorting, the two packing boxes are considered to have the same coordinate on that axis (i.e., to be in the same row). The boxes in a row are then grabbed in turn from left to right, and when the row is finished, the next row is grabbed from left to right. Grabbing thus proceeds from near to far, from left to right, and from top to bottom.
When the image of the packing boxes is acquired, the center coordinates of the grabbing surface of the leftmost first box are automatically recorded; for a second box to be grabbed in the same row (arranged along the Y axis), its X-axis and Z-axis coordinates must match those of the first box, and so on until the row is finished. Of course, the grabbing order within a row may equally be from right to left.
Referring to fig. 3, the sorting method is described with a specific example. Fig. 3 is a schematic diagram of the unloading sequence of the packing boxes in the carriage, seen from the main viewing angle. The center coordinates of the grabbing surfaces of the packing boxes labeled 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 are, in order, (X1, Y2, Z1), (X1, Y3, Z1), (X1, Y1, Z2), (X1, Y2, Z2), (X1, Y3, Z2), (X1, Y1, Z3), (X1, Y2, Z3), (X1, Y3, Z3), (X2, Y1, Z1), (X2, Y2, Z1), and (X2, Y3, Z1). The packing box at the upper left corner of fig. 3 has already been removed; the center coordinates of its grabbing surface are (X1, Y1, Z1). Assume that after the 3D vision system acquires the image in the carriage, the above 12 packing boxes (spanning different and identical rows and columns) are processed to obtain the center coordinates of their grabbing surfaces. In this embodiment, the center coordinates of the grabbing surfaces are first sorted along the X axis: the X-axis coordinates of the boxes numbered 2 to 9 are identical (as was that of the removed first box), so these take precedence during grabbing over the boxes numbered 10, 11, and 12. The boxes numbered 2 to 9 are then sorted along the Z axis: the first group, numbered 2 and 3, takes precedence over the second group, numbered 4, 5, and 6, which in turn takes precedence over the third group, numbered 7, 8, and 9. The three groups are then sorted along the Y axis, so the box numbered 2 takes precedence over the box numbered 3.
The box numbered 3 takes precedence over the box numbered 4, and the box numbered 4 over the box numbered 5. After sorting by the center coordinates of the grabbing surfaces, the grabbing order is the packing boxes numbered 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 in sequence.
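The ordering rule and the 10 mm tolerance can be sketched in code. This is an illustrative reconstruction, not the patent's implementation; concrete millimeter coordinates stand in for the symbolic values X1, Y1, Z1, etc., with Z assumed to increase upward so that the top rows have larger Z values, and with small jitters added to show the tolerance at work.

```python
def grasp_order(centers, tol=10.0):
    """Sort box labels by grabbing priority: X first (near to far), then Z
    (top to bottom), then Y (left to right).  Coordinates within `tol` mm
    on the X and Z axes are treated as identical (same row/column).

    centers: dict mapping a box label to its grabbing-surface center (x, y, z) in mm.
    """
    def q(v):                      # collapse values that differ by < tol
        return round(v / tol)
    return sorted(centers, key=lambda k: (q(centers[k][0]),   # near first
                                          -q(centers[k][2]),  # top first
                                          centers[k][1]))     # left first

# Boxes 2-12 of the fig. 3 example (box 1, at the upper left, already removed);
# the jitters on boxes 2, 3, 5, and 9 stay within the 10 mm tolerance.
centers = {
    2: (1004, 500, 1500), 3: (998, 900, 1497),
    4: (1000, 100, 1000), 5: (1003, 500, 1002), 6: (1000, 900, 1000),
    7: (1000, 100, 500),  8: (1000, 500, 500),  9: (997, 900, 503),
    10: (1500, 100, 1500), 11: (1500, 500, 1500), 12: (1500, 900, 1500),
}
```

Sorting this dictionary reproduces the grabbing order 2 through 12 described in the example.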
In this embodiment, a preset threshold is designed (for example, half the carriage height). When the height of the packing box to be grabbed is greater than the preset threshold, the mechanical arm is controlled to grab from the side face of the packing box; when the height is not greater than the preset threshold, the mechanical arm is controlled to grab from the upper surface. That is, for high packing boxes the side facing the trolley serves as the grabbing surface, while for lower packing boxes the upper surface serves as the grabbing surface. This improves the efficiency with which the mechanical arm grabs the packing boxes.
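The face-selection rule reduces to a single comparison; a minimal sketch follows, where the 1300 mm threshold in the test is an assumed figure (half of an assumed 2600 mm carriage), not a value from the patent.

```python
def choose_grasp_face(box_height, threshold):
    """Pick the grabbing surface from the box height: boxes above the
    threshold are grabbed on the side facing the trolley, others from above."""
    return "side" if box_height > threshold else "top"
```

Note that a box exactly at the threshold is "not greater than" it and is therefore grabbed from above.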
In this embodiment, before sorting the center coordinates of all the grabbing surfaces, the method further includes: when the height of the packing box to be grabbed is greater than the preset threshold, filtering out the point cloud of packing boxes below the preset threshold by point cloud filtering; and when the height of the packing box to be grabbed is not greater than the preset threshold, filtering out the point cloud of packing boxes above the preset threshold. This reduces unnecessary sorting work and improves processing speed.
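The pre-sort filter can be sketched as a simple half-space split on the Z (height) coordinate; this is an illustration under the axis convention defined earlier, not the patent's implementation.

```python
def filter_by_height(points, threshold, above):
    """Keep only the points in the height band currently being worked on:
    points above the threshold when grabbing high boxes (above=True),
    otherwise points at or below it.

    points: iterable of (x, y, z) tuples.
    """
    if above:
        return [p for p in points if p[2] > threshold]
    return [p for p in points if p[2] <= threshold]
```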
After the packing boxes near the outside are grabbed, the mechanical arm may no longer reach the packing boxes deeper in the carriage. At that point, whether the mechanical arm can grab a packing box is judged from the box's X-axis coordinate and the grabbing distance of the mechanical arm; if it cannot, the trolley is controlled to move into the carriage.
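The reach check amounts to comparing the box's depth coordinate with the arm's grabbing distance; a minimal sketch (units are assumed to be meters, not stated in the patent):

```python
def advance_needed(box_x, arm_reach):
    """How far the trolley must drive into the carriage before the box at
    depth `box_x` falls within the arm's grabbing distance (0.0 if the
    box is already reachable)."""
    return max(0.0, box_x - arm_reach)
```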
Example 2
Referring to fig. 2, the present embodiment provides an autonomous unloading trolley comprising:
a trolley body 1;
the mechanical arm 2 is rotatably arranged on the trolley main body 1;
the grabbing unit 3 is arranged at the tail end of the mechanical arm 2 and used for grabbing the packaging box;
the 3D vision module is arranged on the trolley main body 1 and is used for acquiring images in a carriage and carrying out point cloud processing on the acquired images to obtain the size and position information of each packing box;
the calculating and sorting module is in communication connection with the 3D vision module and calculates the center coordinates of the grabbing surfaces according to the size and position information of each packing box; it sorts the center coordinates of all grabbing surfaces to define the grabbing order of the packing boxes; wherein the sorting comprises: sorting by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction;
and the control module is used for controlling the mechanical arm to sequentially grasp the packaging boxes according to the sorting result and the distance between the trolley and the packaging boxes.
Specifically, the chassis of the unloading trolley adopts a crawler-type design, which handles various complex working conditions such as climbing, parking on slopes, crossing gaps, and climbing over vertical obstacles, and also allows the trolley to rotate and turn around in place. A laser radar navigation device on the trolley monitors changes in the surrounding environment in real time, autonomously plans an optimal path, avoids collisions with obstacles, and improves movement efficiency. The unloading trolley may further be provided with an infrared sensing system that detects whether anyone is nearby, so that no one is injured during operation. After the unloading trolley reaches the designated position, it collects 3D information of the packing boxes through the 3D vision module and segments the packing boxes by a point cloud segmentation technique, so that objects are grabbed and placed accurately.
In order that the grabbing radius of the mechanical arm can cover all objects in box-type containers of various heights, a lifting mechanism is arranged at the joint between the lower end of the mechanical arm 2 and the trolley main body 1. The grabbing height of the mechanical arm can thus be adjusted, so the situation where the mechanical arm 2 cannot reach or grab a box does not occur. The mechanical arm is a 7-axis mechanical arm, balancing flexibility and accuracy.
In this embodiment, the grabbing unit is a vacuum chuck, and each vacuum chuck comprises a plurality of suction nozzles; there may be one vacuum chuck or a plurality of vacuum chucks working independently, and the control module controls the number of working vacuum chucks according to the grabbing size (for example, the number of packing boxes).
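One plausible way to scale the number of active chucks with the grabbing size is to cover the grabbing surface's width; this is an assumed scheme for illustration, as the patent only states that the count follows the grabbing size.

```python
import math

def working_chucks(grasp_width, chuck_width, total_chucks):
    """Number of vacuum chucks to activate so their combined width covers
    the grabbing surface, capped by the number of chucks installed."""
    return min(total_chucks, max(1, math.ceil(grasp_width / chuck_width)))
```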
In this embodiment, the autonomous unloading trolley further includes a support 4 that rotates coaxially with the mechanical arm 2, and the 3D vision module is mounted on the support 4. As the support 4 rotates, the 3D vision module acquires images of the packing boxes in the box-type container and performs image segmentation with an image segmentation algorithm, preparing for the mechanical arm's next grab. Each time the mechanical arm grabs a box and rotates 90 degrees to place it, the 3D vision module, rotating coaxially with the arm on the support 4, directly scans all the packing boxes inside once and performs the image processing in preparation for the next grab; this cycle repeats until grabbing is complete. In another embodiment, the 3D vision module does not need to collect an omnidirectional image of the carriage: each time a picture of the current field of view is taken, the position information of each packing box is automatically recorded, and the field-of-view angle for the next shot is adjusted according to that information.
In this embodiment, the 3D vision module includes a first binocular camera and a second binocular camera arranged one above the other on the support 4. The first binocular camera is angled downward to capture images of the lower half of the carriage, and the second binocular camera is angled upward to capture images of the upper half of the carriage.
In this embodiment, the stand 4 is further provided with an illumination lamp, so as to provide required illumination for the 3D vision module.
According to the two embodiments above, by acquiring images inside the carriage, processing them to obtain the size information and position information of each packing box, and grabbing the packing boxes according to preset grabbing rules, intelligent unloading of the packing boxes is achieved.
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (10)

1. An autonomous unloading method of a trolley, comprising:
the trolley automatically navigates and moves to the position of a carriage door according to the position relation between the trolley and the carriage;
defining the depth direction of a carriage as an X axis, the width direction of the carriage as a Y axis and the height direction of the carriage as a Z axis; acquiring images in a carriage through a 3D vision system, and carrying out image recognition and point cloud segmentation on the acquired images so as to obtain size information and position information of each packing box;
calculating the center coordinates of the grabbing surfaces according to the size information and the position information of each packing box;
sorting the center coordinates of all the grabbing surfaces to define the grabbing order of the packing boxes;
wherein the sorting comprises: sorting by the X axis, then by the Z axis, and then by the Y axis, so that grabbing proceeds from near to far, from top to bottom, and along the Y-axis direction;
and controlling the mechanical arm to sequentially grasp the packaging boxes according to the sorting result and the distance between the trolley and the packaging boxes.
2. The autonomous unloading method of claim 1, wherein acquiring images of the carriage interior comprises:
each time a photograph of the current view is taken, automatically recording the position of each package, and adjusting the viewing angle for the next shot according to that position information.
3. The autonomous unloading method of claim 1, wherein grasping a package with the robotic arm comprises:
when the height of the package to be grasped is greater than a preset threshold, controlling the robotic arm to grasp it from the side; when the height of the package to be grasped is not greater than the preset threshold, controlling the robotic arm to grasp it from the top surface.
4. The autonomous unloading method of claim 3, wherein, before sorting the center coordinates of all grasping surfaces, the method further comprises:
when the height of the package to be grasped is greater than the preset threshold, filtering out the point clouds of packages below the threshold by point-cloud filtering;
and when the height of the package to be grasped is not greater than the preset threshold, filtering out the point clouds of packages above the threshold by point-cloud filtering.
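The height-based point-cloud filtering of claims 3 and 4 can be sketched as a simple pass-through filter on the Z coordinate. The function name, the (x, y, z) tuple layout, and the inclusive boundary handling are assumptions for illustration, not details stated in the patent:

```python
def filter_points_by_height(points, target_height, threshold):
    """Pass-through height filter: when the package to grasp is taller
    than the threshold (side grasp), drop points below the threshold;
    otherwise (top grasp), drop points above it."""
    if target_height > threshold:
        return [p for p in points if p[2] >= threshold]
    return [p for p in points if p[2] <= threshold]

# Example: a tall target keeps only the upper points.
cloud = [(0.0, 0.0, 0.2), (0.0, 0.0, 0.9)]
upper = filter_points_by_height(cloud, target_height=1.2, threshold=0.5)
```

In a real pipeline, a dedicated point-cloud library's pass-through filter would replace these list comprehensions.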
5. The autonomous unloading method of claim 1, wherein, when a single package is grasped at a time, the grasping-surface center coordinates are those of that individual package's grasping surface; when more than one package is grasped at a time, they are the center coordinates of the combined grasping surface of the packages as a whole.
6. The autonomous unloading method of claim 1, wherein, after the trolley moves to the carriage-door position, the method further comprises:
obtaining the distance between the trolley and the outermost package, and adjusting the position of the trolley according to the grasping reach of the robotic arm.
7. The autonomous unloading method of claim 6, wherein obtaining the distance between the trolley and the outermost package comprises:
acquiring images of all outermost packages, processing them into point-cloud data, and deriving the distance between the trolley and the outermost packages from the point-cloud data.
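One plausible reading of claim 7 is to take the nearest depth (X) coordinate among the segmented points of the outermost packages, relative to the trolley's own position. A hedged sketch; the minimum-over-X choice and the `cart_x` parameter are illustrative assumptions, not the patent's stated method:

```python
def cart_to_box_distance(outer_points, cart_x=0.0):
    """Estimate trolley-to-outermost-package distance from segmented
    point-cloud data: the smallest depth coordinate among the outermost
    packages' points, measured from the trolley's X position."""
    nearest_x = min(p[0] for p in outer_points)
    return nearest_x - cart_x

# Example: the nearest point sits 1.5 m into the carriage.
points = [(1.5, 0.0, 1.0), (2.0, 0.3, 0.5)]
distance = cart_to_box_distance(points)
```

A robust low percentile of X could replace the plain minimum to resist sensor-noise outliers.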
8. An autonomous unloading trolley, comprising:
a trolley body;
a robotic arm rotatably mounted on the trolley body;
a grasping unit arranged at the end of the robotic arm for grasping packages;
a 3D vision module arranged on the trolley body for acquiring images of the carriage interior and performing point-cloud processing on the acquired images to obtain the size and position of each package;
a calculation and sorting module, communicatively connected with the 3D vision module, which calculates the center coordinates of the grasping surfaces from the size and position of each package and sorts the center coordinates of all grasping surfaces to define the grasping order of the packages; wherein the sorting comprises: sorting by the X axis, then by the Z axis, then by the Y axis, so that grasping proceeds along the Y direction, from near to far and from top to bottom;
and a control module that controls the robotic arm to grasp the packages in sequence according to the sorting result and the distance between the trolley and the packages.
9. The autonomous unloading trolley of claim 8, wherein the grasping unit is a vacuum chuck comprising a plurality of suction nozzles;
the vacuum chucks number one or more and operate independently, and the control module controls the number of working vacuum chucks according to the size of the surface being grasped.
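Claim 9's idea of matching the number of active suction cups to the grasped size might be sketched as follows; the pitch-based count and all parameter names and values are illustrative assumptions, not from the patent:

```python
import math

def cups_to_activate(grasp_width_mm, cup_pitch_mm, total_cups):
    """Pick how many independently switched vacuum cups to energise
    from the width of the surface being grasped: at least one, at most
    the number of cups fitted, one per pitch of covered width."""
    needed = max(1, math.ceil(grasp_width_mm / cup_pitch_mm))
    return min(needed, total_cups)

# Example: a 250 mm face on a 4-cup, 100 mm-pitch chuck uses 3 cups.
active = cups_to_activate(250, 100, 4)
```

The control module would then open only the vacuum valves for the selected cups.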
10. The autonomous unloading trolley of claim 8, further comprising a stand rotatable coaxially with the robotic arm, the 3D vision module comprising a first binocular camera and a second binocular camera arranged one above the other on the stand.
CN202410181189.2A 2024-02-18 2024-02-18 Autonomous unloading method of trolley and autonomous unloading trolley Pending CN117864806A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410181189.2A CN117864806A (en) 2024-02-18 2024-02-18 Autonomous unloading method of trolley and autonomous unloading trolley


Publications (1)

Publication Number Publication Date
CN117864806A true CN117864806A (en) 2024-04-12

Family

ID=90579396


Country Status (1)

Country Link
CN (1) CN117864806A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004033437A1 (en) * 2004-04-01 2005-10-20 Christine Farrenkopf Mechanical handling device for loading/unloading containers or warehouse racks has a jacking mast that inclines in the X-plane
CN114212564A (en) * 2021-12-31 2022-03-22 武汉理工大学 Telescopic cargo handling equipment and cargo handling method
CN115159149A (en) * 2022-07-28 2022-10-11 深圳市罗宾汉智能装备有限公司 Material taking and unloading method and device based on visual positioning
CN115582827A (en) * 2022-10-20 2023-01-10 大连理工大学 Unloading robot grabbing method based on 2D and 3D visual positioning
CN117104864A (en) * 2023-08-25 2023-11-24 中储恒科物联网系统有限公司 Intelligent unloader suitable for box freight bagged materials
CN117509220A (en) * 2023-11-20 2024-02-06 合肥井松智能科技股份有限公司 Container loading and unloading robot and loading and unloading method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination