CN115123839A - AGV-based bin stacking method, device, equipment and storage medium - Google Patents

AGV-based bin stacking method, device, equipment and storage medium

Info

Publication number
CN115123839A
Authority
CN
China
Prior art keywords
bin
dimensional model
fork
material box
agv
Prior art date
Legal status
Granted
Application number
CN202211068469.XA
Other languages
Chinese (zh)
Other versions
CN115123839B (en)
Inventor
王志杰
刘斌
李逐原
董翀
Current Assignee
Zhejiang Hangcha Intelligent Technology Co ltd
Hangcha Group Co Ltd
Original Assignee
Zhejiang Hangcha Intelligent Technology Co ltd
Hangcha Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Hangcha Intelligent Technology Co ltd, Hangcha Group Co Ltd filed Critical Zhejiang Hangcha Intelligent Technology Co ltd
Priority to CN202211068469.XA priority Critical patent/CN115123839B/en
Publication of CN115123839A publication Critical patent/CN115123839A/en
Application granted granted Critical
Publication of CN115123839B publication Critical patent/CN115123839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G57/00Stacking of articles
    • B65G57/02Stacking of articles by adding to the top of the stack
    • B65G57/16Stacking of articles of particular shape
    • B65G57/20Stacking of articles of particular shape three-dimensional, e.g. cubiform, cylindrical
    • B65G57/22Stacking of articles of particular shape three-dimensional, e.g. cubiform, cylindrical in layers each of predetermined arrangement
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66FHOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/063Automatically guided
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66FHOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075Constructional features or details
    • B66F9/0755Position control; Position detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G2201/00Indexing codes relating to handling devices, e.g. conveyors, characterised by the type of product or load being conveyed or handled
    • B65G2201/02Articles
    • B65G2201/0235Containers
    • B65G2201/0258Trays, totes or bins
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The application discloses an AGV-based bin stacking method, device, equipment and storage medium, relating to the technical field of automatic stacking, and comprising the following steps: when the AGV travels to the unloading preparation point, acquiring image data and depth data of the bin above the fork and of the unloading-layer bin through a visual recognition system, and constructing a first bin three-dimensional model of the bin above the fork and a second bin three-dimensional model of the unloading-layer bin; judging whether the models meet the stacking requirement; and if not, determining offset data between the models so that the vehicle control system adjusts the pose of the box body above the fork according to the offset data until the stacking requirement is met, whereupon the bin is stacked on the unloading-layer bin. In this way, whether the three-dimensional models built from the image data and depth data acquired by the visual recognition system meet the stacking requirement is judged; if not, the pose of the box body above the fork is adjusted according to the determined offset data, and when the requirement is met, the bin is stacked on the unloading-layer bin, which improves bin stacking accuracy.

Description

AGV-based bin stacking method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of automatic stacking, in particular to a method, a device, equipment and a storage medium for stacking a material box based on an AGV.
Background
At present, the lid and the fork-bottom area of a black bin are black, and black surfaces absorb infrared laser. As a result, laser-emitting cameras such as TOF (Time of Flight) cameras (i.e., devices that image using TOF technology), binocular structured-light cameras and laser radars suffer data loss at normal power and cannot support high-precision bin stacking. In addition, the bin is a split, combined assembly whose middle box is dark blue while the remaining parts are black, which introduces error between the parts, so a stacking precision within 10 mm cannot be achieved. In other words, the prior-art method for automatically stacking black bins with an AGV (Automated Guided Vehicle) lacks flexibility and ease of use and suffers from poor stacking precision, so that in practical applications, where black bins are stacked at heights above 6 meters, it cannot meet the requirements of industrial application scenarios for real-time performance, flexibility, working efficiency, safety and reliability.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device and a storage medium for stacking bins based on AGVs, which can solve the problem of lack of flexibility and usability when stacking black bins above a certain height in practical applications, and can improve the stacking accuracy of the bins. The specific scheme is as follows:
in a first aspect, the present application discloses an AGV-based bin stacking method, comprising:
when the AGV travels to a discharge preparation point, acquiring image data and depth data of a material box above a fork and a discharge layer material box through a visual recognition system, and constructing a first material box three-dimensional model of the material box above the fork and a second material box three-dimensional model of the discharge layer material box by using the image data and the depth data;
judging whether the first bin three-dimensional model and the second bin three-dimensional model meet the current stacking requirement or not;
if not, determining offset data between the first bin three-dimensional model and the second bin three-dimensional model, and sending the offset data to a vehicle control system so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data;
and after the pose of the box body above the fork is adjusted, the step of obtaining the image data and the depth data of the material box above the fork and the unloading layer material box is executed again until the built first material box three-dimensional model and the built second material box three-dimensional model meet the current stacking requirement, and the material box above the fork is stacked on the unloading layer material box.
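A minimal Python sketch of this acquire-check-adjust loop is given below. The objects `vision` and `vcs`, and all of their methods, are hypothetical stand-ins for the visual recognition system and the vehicle control system described above; they are illustrative assumptions, not an interface defined by the application.

```python
def stack_bin_at_unload_point(vision, vcs, max_attempts=10):
    """Iteratively align the bin on the forks with the unloading-layer bin.

    `vision` and `vcs` are hypothetical interfaces standing in for the
    visual recognition system and the vehicle control system.
    """
    for _ in range(max_attempts):
        # Acquire RGB image data and TOF depth data for both bins.
        rgb, depth = vision.capture()
        # Build the first (fork) and second (unloading-layer) bin models.
        fork_model, layer_model = vision.build_models(rgb, depth)
        if vision.meets_stacking_requirement(fork_model, layer_model):
            vcs.lower_forks_and_stack()      # place the bin and withdraw the forks
            return True
        # Otherwise compute front-back / left-right offsets and re-position.
        dx, dy = vision.offsets(fork_model, layer_model)
        vcs.adjust_pose(forward=dx, sideshift=dy)
    return False                              # alignment not reached within budget
```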
Optionally, the visual recognition system includes RGB cameras, a TOF camera, a limit switch, an industrial personal computer and a power supply; the RGB cameras are mounted horizontally and facing forward on both sides of the AGV, the TOF camera is mounted horizontally and facing forward in the middle of the AGV, the limit switch is mounted at the root of the fork, the industrial personal computer is used for fusing the image data and the depth data to build the three-dimensional models, and the limit switch is used for constraining the pose of the pallet on the fork.
Optionally, the obtaining of the image data and the depth data of the material box above the fork and the material box of the unloading layer by the visual recognition system includes:
the image data of the material box above the fork and the unloading layer material box are obtained through the RGB camera in the visual recognition system, and the depth data of the material box above the fork and the unloading layer material box are obtained through the TOF camera in the visual recognition system.
Optionally, the constructing a first bin three-dimensional model of the bin above the fork and a second bin three-dimensional model of the discharge layer bin by using the image data and the depth data includes:
extracting detail features in the image data, and matching the detail features with three-dimensional features output by a pre-constructed target neural network to obtain corresponding matching results;
judging whether the matching result meets a preset matching requirement or not;
and when the matching result meets the preset matching requirement, constructing a first bin three-dimensional model of the bin above the fork and a second bin three-dimensional model of the unloading layer bin based on the detail feature, the three-dimensional feature and the depth data.
Optionally, the matching the detail feature with a three-dimensional feature output by a pre-constructed target neural network to obtain a corresponding matching result includes:
and matching the detail features with three-dimensional features output by a target neural network model constructed in advance based on a Yolo5 algorithm to obtain corresponding matching results.
Optionally, the determining offset data between the first bin three-dimensional model and the second bin three-dimensional model includes:
and determining the front-back offset and the left-right offset between the first bin three-dimensional model and the second bin three-dimensional model.
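One way to realize such a front-back and left-right comparison between the two models is sketched below in Python. Comparing point-cloud centroids in the AGV body frame is an illustrative assumption, not the comparison method prescribed by the application.

```python
import numpy as np

def bin_offsets(fork_bin_pts: np.ndarray, layer_bin_pts: np.ndarray):
    """Front-back (x) and left-right (y) offsets between two bin models.

    Both inputs are (N, 3) point clouds expressed in the AGV body frame
    (x forward, y left). Centroid comparison is an illustrative choice.
    """
    dx, dy, _ = layer_bin_pts.mean(axis=0) - fork_bin_pts.mean(axis=0)
    return float(dx), float(dy)
```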
Optionally, the sending the offset data to a vehicle control system so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data includes:
and sending the offset data to a vehicle control system so that the vehicle control system can call a gantry forward-moving function and a fork side-shifting function according to the offset data to control the AGV to adjust the pose of the box body above the fork.
In a second aspect, the present application discloses an AGV-based bin stacking apparatus, comprising:
the data acquisition module is used for acquiring image data and depth data of a material box above the fork and a material box on a discharging layer through a visual recognition system when the AGV advances to a discharging preparation point;
the three-dimensional model building module is used for building a first bin three-dimensional model of a bin above the fork and a second bin three-dimensional model of a discharge layer bin by using the image data and the depth data;
the stacking requirement judging module is used for judging whether the first bin three-dimensional model and the second bin three-dimensional model meet the current stacking requirement;
the offset determining module is used for determining offset data between the first bin three-dimensional model and the second bin three-dimensional model when the first bin three-dimensional model and the second bin three-dimensional model do not meet the current stacking requirement;
the position and orientation adjusting module is used for sending the offset data to a vehicle control system so that the vehicle control system can adjust the position and orientation of the box body above the fork according to the offset data;
and the bin stacking module is used for re-executing the step of acquiring the image data and the depth data of the bin above the fork and the unloading layer bin after adjusting the pose of the box above the fork until the constructed three-dimensional model of the first bin and the constructed three-dimensional model of the second bin meet the stacking requirement, and stacking the bin above the fork on the unloading layer bin.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing said computer program to implement the steps of the AGV-based bin stacking method disclosed in the foregoing.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the steps of the AGV-based bin stacking method disclosed above.
In summary, the present application provides an AGV-based bin stacking method, comprising: when the AGV travels to the unloading preparation point, acquiring image data and depth data of the bin above the fork and of the unloading-layer bin through a visual recognition system, and constructing a first bin three-dimensional model of the bin above the fork and a second bin three-dimensional model of the unloading-layer bin from the image data and the depth data; judging whether the first bin three-dimensional model and the second bin three-dimensional model meet the current stacking requirement; if not, determining offset data between the first bin three-dimensional model and the second bin three-dimensional model and sending the offset data to a vehicle control system so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data; and, after the pose of the box body above the fork has been adjusted, re-executing the step of obtaining the image data and the depth data of the bin above the fork and the unloading-layer bin until the constructed first and second bin three-dimensional models meet the current stacking requirement, and then stacking the bin above the fork on the unloading-layer bin. In this way, the image data and depth data of the two bins are obtained through the visual recognition system, the first and second bin three-dimensional models are constructed, and it is judged whether the constructed models meet the stacking requirement; if not, the pose of the box body above the fork is adjusted according to the determined offset data, and once the models meet the current stacking requirement, the bin above the fork is stacked on the unloading-layer bin, which improves the bin stacking accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of an AGV based bin stacking method disclosed herein;
FIG. 2 is a schematic view of a vision recognition system disclosed herein;
FIG. 3 is a flow chart of a method for determining an external travel path of the vehicle body as disclosed herein;
FIG. 4 is a schematic illustration of an AGV based bin stacking apparatus according to the present disclosure;
fig. 5 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, the prior-art method for automatically stacking black bins with an AGV lacks flexibility and ease of use and suffers from poor stacking precision, so that in practical applications, where black bins are stacked above a certain height, it cannot meet the requirements of industrial application scenarios for real-time performance, flexibility, working efficiency, safety and reliability. Therefore, the present application provides an AGV-based bin stacking scheme that can accomplish black-bin stacking at such heights in practical applications and meet the requirements of industrial application scenarios for real-time performance, flexibility, working efficiency, safety and reliability.
The embodiment of the invention discloses an AGV-based bin stacking method, which comprises the following steps of:
step S11: when the AGV advances to the preparation point of unloading, obtain the image data and the depth data of fork top workbin and unloading layer workbin through visual identification system, and utilize image data and depth data to establish the first workbin three-dimensional model of fork top workbin with the second workbin three-dimensional model of unloading layer workbin.
It should be noted that the visual recognition system includes RGB (Red-Green-Blue) cameras, a TOF camera, a limit switch, an industrial personal computer and a power supply. The RGB cameras are mounted horizontally and facing forward on both sides of the AGV, the TOF camera is mounted horizontally and facing forward in the middle of the AGV, the limit switch is mounted at the root of the fork, and the industrial personal computer fuses the image data and the depth data to build the three-dimensional models. As shown in fig. 2, the RGB cameras on the two sides provide bin image data and the TOF camera provides bin depth data; after the corresponding instruction is received, the image data and the depth data are transmitted to the industrial personal computer through a network cable. The limit switch at the root of the fork constrains the pose of the pallet on the fork: during a goods-taking task, contact with the limit switch corrects any change of the bin position while the AGV is travelling, ensuring that the bin remains at a 90-degree right angle to the travelling direction. The RGB cameras and the TOF camera are fixed to the AGV fork carriage with under-hung, slide-rail telescopic brackets, so the photographing height rises together with the fork carriage. The TOF camera works in two modes. In the normal mode its infrared laser wavelength is 940 nm, and it provides a stable point cloud with an accuracy within +/-2 mm for recognizing conventional objects. In the high-energy mode its infrared laser wavelength is 850 nm; at rated power, shortening the wavelength raises the energy of the infrared laser, so that after part of the light pulse is absorbed by the surface of the black bin, enough of the pulse is still reflected back to the receiver of the TOF camera to obtain depth information of the box. This avoids the problem that laser-emitting cameras lose data at normal power because black bins absorb infrared laser, which would make high-precision bin stacking impossible. In the high-energy mode the TOF camera also lowers the resolving threshold of the depth map: when the pixel normal vectors of the depth map are resolved, the initial real-part and imaginary-part data of each pixel are obtained, an edge confidence map is re-established with the lowered threshold, feature data are formed by combining the obtained real-part and imaginary-part data, and more basic image data are reconstructed, thereby overcoming the data loss caused by the black surface absorbing the light pulses. The industrial personal computer fuses the RGB image and the depth image through a pre-designed deep learning framework; for example, with a feature-layer fusion method, it first extracts texture features of the bin from the RGB image and the depth image, fuses the extracted texture features into a single feature vector, and then processes the vector with a pattern recognition method.
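A short Python sketch of such feature-layer fusion is given below. The texture descriptor used here (a histogram of Laplacian responses) is only a stand-in, chosen for illustration; the patent does not specify which texture features its framework extracts.

```python
import cv2
import numpy as np

def feature_layer_fusion(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Fuse texture features from an RGB image and a depth image into one vector.

    A histogram of Laplacian responses serves as a simple texture descriptor;
    the two per-image descriptors are concatenated into a single feature vector.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    feats = []
    for img in (gray, depth):
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
        hist, _ = np.histogram(lap, bins=32)
        feats.append(hist / max(hist.sum(), 1))   # normalised texture histogram
    return np.concatenate(feats)                  # single fused feature vector
```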
Moreover, the deep learning framework may be built on a neural network, which may be constructed in advance based on the Yolo5 algorithm and divided into three parts: an encoder, a fusion layer and a decoder. The encoder is a convolutional recursive auto-encoder consisting of a convolutional neural network and a three-dimensional structure reconstruction network. After the RGB image and the depth image are input, they are enhanced by a Laplacian operator following Gaussian filtering to improve contrast; the convolutional neural network then extracts contour features and structural features such as foot piers, grooves, tongue-and-groove joints, round holes and straight lines from the RGB image and the depth image; finally, the three-dimensional structure reconstruction network decodes these features and reconstructs the interrelations among the parts of the bin from cuboids, a tree structure and the known external-parameter information of the bin. These interrelations may include, but are not limited to, connection relationships, rotational-symmetry relationships and parallel-symmetry relationships. The three-dimensional structure information of the bin in each image is then combined in the fusion layer into an overall input function, and the mapping of this input function is defined as the mapping function of the relevant features; through the interaction of the neural network with its environment, the known external-parameter information of the bin is reflected in the topological structure of the network, so that the three-dimensional structure information of the bin is converted into digital form, which facilitates management and the building of a knowledge base. High-level features in the RGB image are used to measure semantic similarity, while low-level features in the depth image are used to measure fine-grained similarity. When fine-grained similarity and shared semantics between the image and its nearest neighbours are queried, the low-level features refine the ranking produced by the high-level features, and through the mapping function the low-level features of the depth image can be used to measure and query the fine-grained similarity between the image and neighbours with the same semantics. The decoder reconstructs the fused image from the feature vector output by the fusion layer, which contains the three-dimensional structure information of the bin. By learning the contour and structural features in the RGB image and the depth image, the distribution of the weights is determined, the input three-dimensional structure information of the bin is interpreted, and the input feature vector is converted into high-level logical concepts: the decoder up-samples the feature vector, applies convolution to the up-sampled result, refines the geometric shape of the bin, and finally reconstructs the feature vector into a fused image.
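The pre-processing step mentioned above (Gaussian filtering followed by Laplacian-based enhancement) could look like the sketch below for an 8-bit image. The kernel size and sharpening weight are illustrative values assumed here, not parameters given in the application.

```python
import cv2
import numpy as np

def enhance(image: np.ndarray, ksize: int = 5, weight: float = 1.0) -> np.ndarray:
    """Gaussian filtering followed by Laplacian-based sharpening.

    Subtracting a weighted Laplacian from the smoothed image boosts edges and
    contrast before contour and structural features are extracted.
    """
    smoothed = cv2.GaussianBlur(image, (ksize, ksize), 0)
    lap = cv2.Laplacian(smoothed, cv2.CV_32F)
    sharpened = smoothed.astype(np.float32) - weight * lap
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```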
For example, the RGB camera and the TOF camera acquire an RGB image and a depth image at the same timestamp; detail features are then extracted by the trained encoder, and the decoder reconstructs an image from those detail features. Feature-point coordinates and bin external-parameter information are calculated from the depth-image data and are used in the fusion layer to enhance and fuse the detail features; the fused detail features are fed into the decoder's training set for training and, after training, are tested on the test set; finally, the bin three-dimensional model trained through the Yolo5 neural network is output.
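A common way to compute such feature-point coordinates from a depth image is standard pinhole back-projection, sketched below; the intrinsics and the millimetre depth unit are assumptions for illustration rather than values stated in the application.

```python
import numpy as np

def feature_point_3d(u: int, v: int, depth: np.ndarray,
                     fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a feature point (u, v) into camera coordinates using depth.

    fx, fy, cx, cy are the TOF camera intrinsics; depth is assumed to be in
    millimetres, so the returned point is also in millimetres.
    """
    z = float(depth[v, u])
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```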
In this embodiment, during goods taking, the AGV travels from its current position to the goods-taking preparation point along a calculated external goods-taking path and then adjusts the fork height to pick up the goods. That is, when the AGV is at its current position and the vehicle control system receives the corresponding goods-taking task information, the vehicle control system sends a goods-taking recognition instruction to the visual recognition system, so that the visual recognition system recognizes the bin at the current goods-taking preparation point according to the received instruction, obtains the corresponding bin image information, and reconstructs a three-dimensional bin model from the bin image information obtained during goods taking. As shown in fig. 3, after the industrial personal computer in the visual recognition system calls the corresponding camera interfaces to obtain the image data and depth data of the bin collected by the current cameras, it extracts detail features from the local image data output by the RGB cameras on the left and right sides, and, on the basis of those detail features, matches the feature-point coordinates calculated from the depth-image data together with information such as the bin external parameters against the bin three-dimensional model trained by the Yolo5 neural network, thereby completing recognition of the bin and reconstructing a three-dimensional model of the current bin. The actually measured bin external parameters are then input to perform a size verification of the reconstructed model. When the model passes the size verification, a position calculation is performed for the AGV based on the input camera external parameters, i.e., the camera coordinate system is converted into the world coordinate system through a rotation matrix and a translation matrix, giving a result that contains the relative position data of the AGV body and the bin to be taken. To obtain more stable relative pose data, the result produced by each recognition-and-matching pass is pushed into a vector container; once the amount of data in the container reaches a preset number of recognitions N, the data currently in the container are averaged to obtain more stable relative pose data. After the relative pose data are obtained, the current position of the AGV body is calculated from the navigation coordinate data of the AGV body and the corresponding complementary value data and is output to an upper computer; the upper computer calculates the external goods-taking path from the current position to the goods-taking preparation point according to the coordinate data corresponding to the current vehicle body pose and issues that path to the vehicle control system of the AGV, which then controls the AGV to travel along the path to the goods-taking site and complete the goods-taking task. When the model fails the size verification, the relative pose data in the vector container are cleared and the bin at the current goods-taking preparation point is recognized again to acquire new bin image information.
Similarly, after the AGV finishes taking the goods at the goods-taking preparation point, an external path from the goods-taking preparation point to the unloading preparation point is calculated in the same way, so that the vehicle control system controls the AGV to travel along this path to the unloading preparation point. When the AGV arrives at the unloading preparation point, the vehicle control system again sends a recognition instruction to the visual recognition system, which obtains the image data and depth data of the bin above the fork and of the unloading-layer bin: the image data are obtained through the RGB cameras in the visual recognition system and the depth data through the TOF camera, and both are uploaded to the industrial personal computer, which constructs the first bin three-dimensional model of the bin above the fork and the second bin three-dimensional model of the unloading-layer bin from them. The reconstruction of these two models specifically includes: extracting detail features from the image data and matching them with three-dimensional features output by the pre-constructed target neural network to obtain a corresponding matching result; judging whether the matching result meets the preset matching requirement; and, when it does, constructing the first bin three-dimensional model of the bin above the fork and the second bin three-dimensional model of the unloading-layer bin based on the detail features, the three-dimensional features and the depth data. The target neural network model may use the Yolo5 algorithm, i.e., the detail features are matched with the three-dimensional features output by the target neural network model pre-constructed based on the Yolo5 algorithm to obtain the corresponding matching results.
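The "vector container" averaging and the camera-to-world conversion described above could be sketched as follows in Python. The pose layout (x, y, yaw), the default value of N and the class and function names are assumptions made for illustration.

```python
import numpy as np

class PoseAverager:
    """Accumulate per-recognition relative-pose estimates and average them.

    Mirrors the vector-container idea: collect N results, then take the mean
    to obtain a more stable relative pose.
    """
    def __init__(self, n_required: int = 5):
        self.n_required = n_required
        self.samples = []                     # collected (x, y, yaw) estimates

    def add(self, pose_xyyaw):
        self.samples.append(np.asarray(pose_xyyaw, dtype=float))

    def stable_pose(self):
        if len(self.samples) < self.n_required:
            return None                       # not enough recognitions yet
        return np.mean(self.samples, axis=0)  # averaged relative pose

    def reset(self):
        self.samples.clear()                  # e.g. after a failed size check


def camera_to_world(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert a camera-frame point to world coordinates with extrinsics R, t."""
    return R @ p_cam + t
```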
It can be understood that when the AGV travels to the unloading preparation point, the vehicle control system sends a second recognition instruction to the visual recognition system, so that the cameras in the system acquire the corresponding image information; the industrial personal computer in the system then extracts detail features containing contour features and structural features from that image information and matches them with the corresponding three-dimensional features in the bin three-dimensional model trained through the Yolo5 neural network to obtain a corresponding matching result. When the matching result meets the preset matching requirement, the first bin three-dimensional model of the bin above the fork and the second bin three-dimensional model of the unloading-layer bin are reconstructed based on the trained bin three-dimensional model and the image information.
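A minimal sketch of such feature matching is shown below. Cosine similarity against the network's stored feature vectors with a fixed threshold is an illustrative stand-in for the "preset matching requirement"; the actual matching criterion is not specified by the application.

```python
import numpy as np

def match_features(detail_feat: np.ndarray, model_feats: np.ndarray,
                   threshold: float = 0.8):
    """Match a detail-feature vector against the network's 3-D feature vectors.

    model_feats has shape (M, D); returns whether the best match clears the
    threshold, its index, and its similarity score.
    """
    a = detail_feat / (np.linalg.norm(detail_feat) + 1e-9)
    b = model_feats / (np.linalg.norm(model_feats, axis=1, keepdims=True) + 1e-9)
    scores = b @ a                      # cosine similarity to every stored feature
    best = int(np.argmax(scores))
    ok = scores[best] >= threshold      # does the match meet the requirement?
    return ok, best, float(scores[best])
```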
Step S12: and judging whether the first bin three-dimensional model and the second bin three-dimensional model meet the current stacking requirement.
In this embodiment, after the first bin three-dimensional model of the bin above the fork and the second bin three-dimensional model of the unloading-layer bin are reconstructed, the two models are calibrated against each other to determine whether they meet the current stacking requirement.
Step S13: if the stacking requirement is not met, offset data between the first bin three-dimensional model and the second bin three-dimensional model are determined and sent to the vehicle control system, so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data.
In this embodiment, when the first bin three-dimensional model and the second bin three-dimensional model are calibrated and do not meet the current stacking requirement, the offset data between them are determined through model comparison; the offsets may include, but are not limited to, front-back offset, left-right offset and angular offset. The offset data are then sent to the vehicle control system so that it can adjust the pose of the box body above the fork: for example, the front-back offset and the left-right offset between the two models are determined and sent to the vehicle control system, which calls the gantry forward-moving function and the fork side-shifting function according to the offset data to control the AGV to adjust the pose of the box body above the fork. If the first bin three-dimensional model and the second bin three-dimensional model do meet the current stacking requirement, the vehicle control system controls the AGV to adjust the fork height, stacks the bin to be unloaded on the unloading-layer bin, completes the fork withdrawal, adjusts the fork back to the carrying height, and finally completes the stacking operation.
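The sketch below illustrates how the two offsets might drive those two motions. The `vcs` handle and the method names `gantry_advance` and `fork_sideshift` are hypothetical placeholders for the vehicle control system's gantry forward-moving and fork side-shifting functions, and the 10 mm tolerance merely echoes the precision target mentioned in the background.

```python
def correct_fork_pose(vcs, dx_mm: float, dy_mm: float, tol_mm: float = 10.0):
    """Drive the gantry-advance and fork side-shift functions from the offsets.

    dx_mm is the front-back offset, dy_mm the left-right offset, both between
    the fork-bin model and the unloading-layer bin model.
    """
    if abs(dx_mm) > tol_mm:
        vcs.gantry_advance(dx_mm)       # correct the front-back offset
    if abs(dy_mm) > tol_mm:
        vcs.fork_sideshift(dy_mm)       # correct the left-right offset
```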
Step S14: and after the pose of the box body above the fork is adjusted, the step of obtaining the image data and the depth data of the material box above the fork and the unloading layer material box is executed again until the built first material box three-dimensional model and the built second material box three-dimensional model meet the current stacking requirement, and the material box above the fork is stacked on the unloading layer material box.
In this embodiment, when the first bin three-dimensional model and the second bin three-dimensional model do not meet the current stacking requirement and the pose of the box body above the fork is adjusted according to the determined offset, the step of obtaining the image data and the depth data of the bin above the fork and the unloading layer bin is executed again until the constructed first bin three-dimensional model and the constructed second bin three-dimensional model meet the current stacking requirement, and the bin above the fork is stacked on the unloading layer bin.
It can be seen that, in the embodiment of the present application, the image data and depth data of the bin above the fork and of the unloading-layer bin are obtained through the visual recognition system, the first bin three-dimensional model of the bin above the fork and the second bin three-dimensional model of the unloading-layer bin are constructed, and it is judged whether the constructed models meet the stacking requirement; if not, the pose of the box body above the fork is adjusted according to the determined offset data, and when the constructed models meet the current stacking requirement, the bin above the fork is stacked on the unloading-layer bin, which improves the bin stacking accuracy.
Correspondingly, the embodiment of the application also discloses an AGV-based bin stacking device, which is shown in FIG. 4 and comprises:
the data acquisition module 11 is used for acquiring image data and depth data of a material box above a fork and a material box on a discharging layer through a visual recognition system when the AGV advances to a discharging preparation point;
the three-dimensional model building module 12 is used for building a first bin three-dimensional model of the bin above the pallet fork and a second bin three-dimensional model of the unloading layer bin by using the image data and the depth data;
the stacking requirement judging module 13 is configured to judge whether the first bin three-dimensional model and the second bin three-dimensional model meet a current stacking requirement;
an offset determination module 14, configured to determine offset data between the first bin three-dimensional model and the second bin three-dimensional model when the first bin three-dimensional model and the second bin three-dimensional model do not meet the current stacking requirement;
the pose adjusting module 15 is used for sending the offset data to a vehicle control system so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data;
and the bin stacking module 16 is configured to, after the pose of the box body above the fork is adjusted, re-execute the step of obtaining image data and depth data of the bin above the fork and the unloading layer bin until the built first bin three-dimensional model and the built second bin three-dimensional model meet the current stacking requirement, and stack the bin above the fork on the unloading layer bin.
In summary, in the embodiment of the present application, the image data and depth data of the bin above the fork and of the unloading-layer bin are obtained through the visual recognition system, the first bin three-dimensional model of the bin above the fork and the second bin three-dimensional model of the unloading-layer bin are constructed, and it is judged whether the constructed models meet the stacking requirement; if not, the pose of the box body above the fork is adjusted according to the determined offset data, and when the constructed models meet the current stacking requirement, the bin above the fork is stacked on the unloading-layer bin.
In some specific embodiments, the data obtaining module 11 may specifically include:
the first data acquisition unit is used for acquiring image data of the material box above the fork and the unloading layer material box through the RGB camera in the visual recognition system;
and the second data acquisition unit is used for acquiring the depth data of the material box above the fork and the unloading layer material box through the TOF camera in the visual recognition system.
In some specific embodiments, the three-dimensional model building module 12 may specifically include:
a feature extraction unit configured to extract a detail feature in the image data;
the feature matching unit is used for matching the detail features with three-dimensional features output by a pre-constructed target neural network to obtain corresponding matching results;
the matching requirement judging unit is used for judging whether the matching result meets the preset matching requirement or not;
and the three-dimensional model building unit is used for building a first bin three-dimensional model of the bin above the fork and a second bin three-dimensional model of the unloading layer bin based on the detail features, the three-dimensional features and the depth data when the matching result meets the preset matching requirement.
In some specific embodiments, the feature matching unit may specifically include:
and the feature matching subunit is used for matching the detail features with three-dimensional features output by a target neural network model which is pre-constructed based on a Yolo5 algorithm to obtain corresponding matching results.
In some specific embodiments, the offset determining module 14 may specifically include:
and the offset determining unit is used for determining the front-back offset and the left-right offset between the first bin three-dimensional model and the second bin three-dimensional model.
In some specific embodiments, the pose adjustment module 15 may specifically include:
and the pose adjusting unit is used for sending the offset data to a vehicle control system so that the vehicle control system can call a gantry forward moving function and a fork side moving function according to the offset data to control the AGV to adjust the pose of the box body above the fork.
Further, the embodiment of the application also provides electronic equipment. FIG. 5 is a block diagram illustrating an electronic device 20 according to an exemplary embodiment, and the contents of the diagram should not be construed as limiting the scope of use of the present application in any way.
Fig. 5 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein the memory 22 is used for storing a computer program which is loaded and executed by the processor 21 to implement the relevant steps of the AGV-based bin stacking method disclosed in any of the previous embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk or the like; the resources stored thereon may include an operating system 221, a computer program 222 and so on, and the storage may be transient or permanent.
The operating system 221 is used for managing and controlling each hardware device of the electronic device 20 and the computer program 222, and may be Windows Server, NetWare, Unix, Linux or the like. The computer programs 222 may include, in addition to the computer program that can be loaded and executed by the electronic device 20 to perform the AGV-based bin stacking method disclosed in any of the foregoing embodiments, computer programs that can be used to perform other specific tasks.
Further, the present application discloses a computer readable storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the steps of the AGV-based bin stacking method disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The AGV-based bin stacking method, apparatus, device and storage medium provided by the present invention are described in detail above, and the principle and implementation of the present invention are explained herein by using specific examples, and the description of the above examples is only used to help understand the method and core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An AGV based bin stacking method comprising:
when the AGV travels to a discharge preparation point, acquiring image data and depth data of a material box above a fork and a discharge layer material box through a visual recognition system, and constructing a first material box three-dimensional model of the material box above the fork and a second material box three-dimensional model of the discharge layer material box by using the image data and the depth data;
judging whether the first bin three-dimensional model and the second bin three-dimensional model meet the current stacking requirement;
if not, determining offset data between the first bin three-dimensional model and the second bin three-dimensional model, and sending the offset data to a vehicle control system so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data;
and after the pose of the box body above the fork is adjusted, the step of obtaining the image data and the depth data of the material box above the fork and the unloading layer material box is executed again until the built first material box three-dimensional model and the built second material box three-dimensional model meet the current stacking requirement, and the material box above the fork is stacked on the unloading layer material box.
2. The AGV-based bin stacking method according to claim 1, wherein said visual recognition system comprises RGB cameras, a TOF camera, a limit switch, an industrial personal computer and a power supply; wherein the RGB cameras are mounted horizontally and facing forward on both sides of the AGV, the TOF camera is mounted horizontally and facing forward in the middle of the AGV, the limit switch is mounted at the root of the fork, the industrial personal computer is used for fusing the image data and the depth data to build the three-dimensional models, and the limit switch is used for constraining the pose of the pallet on the fork.
3. The AGV-based bin stacking method of claim 2 wherein said obtaining image data and depth data of the bin above the fork and the unloading-layer bin by a visual recognition system comprises:
the image data of the material box above the fork and the unloading layer material box are obtained through the RGB camera in the visual recognition system, and the depth data of the material box above the fork and the unloading layer material box are obtained through the TOF camera in the visual recognition system.
4. The AGV-based bin stacking method of claim 1 wherein said using said image data and depth data to construct a first bin three-dimensional model of said bin above said forks and a second bin three-dimensional model of said unload layer bin comprises:
extracting detail features in the image data, and matching the detail features with three-dimensional features output by a pre-constructed target neural network to obtain corresponding matching results;
judging whether the matching result meets a preset matching requirement or not;
and when the matching result meets the preset matching requirement, constructing a first bin three-dimensional model of the bin above the fork and a second bin three-dimensional model of the unloading layer bin based on the detail feature, the three-dimensional feature and the depth data.
5. The AGV-based bin stacking method of claim 4, wherein said matching the detail features with three-dimensional features output by a pre-constructed target neural network to obtain corresponding matching results comprises:
and matching the detail features with three-dimensional features output by a target neural network model constructed in advance based on a Yolo5 algorithm to obtain corresponding matching results.
6. The AGV-based bin stacking method of claim 1 wherein said determining offset data between said first bin three-dimensional model and said second bin three-dimensional model comprises:
and determining the front-back offset and the left-right offset between the first bin three-dimensional model and the second bin three-dimensional model.
7. The AGV-based bin stacking method of any one of claims 1 to 6, wherein said sending the offset data to a vehicle control system so that the vehicle control system can adjust the pose of the box body above the fork according to the offset data comprises:
and sending the offset data to a vehicle control system so that the vehicle control system can call a gantry forward moving function and a fork side moving function according to the offset data to control the AGV to adjust the pose of the box body above the fork.
8. A AGV based bin stacking apparatus comprising:
the data acquisition module is used for acquiring image data and depth data of a material box above the fork and a material box on a discharging layer through a visual recognition system when the AGV advances to a discharging preparation point;
the three-dimensional model building module is used for building a first bin three-dimensional model of the bin above the pallet fork and a second bin three-dimensional model of the unloading layer bin by using the image data and the depth data;
the stacking requirement judging module is used for judging whether the first bin three-dimensional model and the second bin three-dimensional model meet the current stacking requirement;
the offset determining module is used for determining offset data between the first bin three-dimensional model and the second bin three-dimensional model when the first bin three-dimensional model and the second bin three-dimensional model do not meet the current stacking requirement;
the position and orientation adjusting module sends the offset data to a vehicle control system so that the vehicle control system can adjust the position and orientation of the box body above the fork according to the offset data;
and the bin stacking module is used for re-executing the step of acquiring the image data and the depth data of the bin above the fork and the unloading layer bin after the pose of the box above the fork is adjusted until the constructed three-dimensional model of the first bin and the constructed three-dimensional model of the second bin meet the current stacking requirement, and then stacking the bin above the fork on the unloading layer bin.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing said computer program to implement the steps of the AGV-based bin stacking method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the steps of the AGV-based bin stacking method of any one of claims 1 to 7.
CN202211068469.XA 2022-09-02 2022-09-02 AGV-based bin stacking method, device, equipment and storage medium Active CN115123839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211068469.XA CN115123839B (en) 2022-09-02 2022-09-02 AGV-based bin stacking method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211068469.XA CN115123839B (en) 2022-09-02 2022-09-02 AGV-based bin stacking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115123839A true CN115123839A (en) 2022-09-30
CN115123839B CN115123839B (en) 2022-12-09

Family

ID=83387910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211068469.XA Active CN115123839B (en) 2022-09-02 2022-09-02 AGV-based bin stacking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115123839B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018068024A1 (en) * 2016-10-06 2018-04-12 Doerfer Corporation Automated warehouse fulfillment operations and system
CN112429535A (en) * 2020-11-09 2021-03-02 苏州罗伯特木牛流马物流技术有限公司 Control system and method for multi-layer accurate stacking of ground piled cargos
CN112660686A (en) * 2021-03-17 2021-04-16 杭州蓝芯科技有限公司 Depth camera-based material cage stacking method and device, electronic equipment and system
CN114524209A (en) * 2021-12-21 2022-05-24 杭叉集团股份有限公司 AGV high-position stacking method and detection device based on double TOF cameras
CN114275712A (en) * 2021-12-30 2022-04-05 中钞长城金融设备控股有限公司 Stacking device and stacking method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115848878A (en) * 2023-02-28 2023-03-28 云南烟叶复烤有限责任公司 AGV-based cigarette frame identification and stacking method and system

Also Published As

Publication number Publication date
CN115123839B (en) 2022-12-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant