CN114559431A - Article distribution method and device, robot and storage medium - Google Patents

Article distribution method and device, robot and storage medium

Info

Publication number
CN114559431A
CN114559431A (application CN202210198847.XA)
Authority
CN
China
Prior art keywords
distance
target
article
target object
area
Prior art date
Legal status
Pending
Application number
CN202210198847.XA
Other languages
Chinese (zh)
Inventor
徐卓立
姚昀
杨亚运
何林
Current Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202210198847.XA
Publication of CN114559431A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • B25J 13/00 Controls for manipulators
    • B25J 13/006 Controls for manipulators by means of a wireless system for controlling one or several manipulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Abstract

The embodiments of the application disclose an article distribution method and device, a robot, and a storage medium. The method includes: acquiring an original image of a target object placing area through the camera; determining distance data between the target object placing area and the camera according to the original image; and determining the object placing condition of the target object placing area according to the distance data and a preset distance. In this technical scheme, the distance data between the camera and the object placing area is determined from the acquired original image of the area, and the object placing condition of the area is determined by comparing the distance data with the preset distance. The presence of articles to be delivered inside the article delivery robot can thus be identified automatically through the camera, which improves the robot's efficiency during autonomous delivery and provides better service to users.

Description

Article distribution method and device, robot and storage medium
Technical Field
The present disclosure relates to robotics, and more particularly, to a method and an apparatus for dispensing articles, a robot, and a storage medium.
Background
With the rapid development of robot technology, robots are used in place of manual labor in more and more areas of production and daily life; for example, a service robot in a restaurant can deliver articles such as food and drinks to customers.
In the prior art, after a service robot in a restaurant arrives at a designated dining table, the user takes out the articles and then confirms through a manual interaction with the robot that the articles have been taken out; only after receiving this delivery signal does the robot proceed with its follow-up work. Because a manual operation by the user is required to determine the status of the articles to be delivered, the robot's delivery efficiency is reduced.
Disclosure of Invention
The application provides an article distribution method, an article distribution device, a robot and a storage medium, so that the distribution efficiency of the robot is improved.
In a first aspect, an embodiment of the present application provides an article distribution method, including:
acquiring an original image of a target object area through the camera;
determining distance data between the target object area and the camera according to the original image;
and determining the object placing condition of the target object placing area according to the distance data and the preset distance.
In a second aspect, embodiments of the present application further provide an article dispensing device, including:
the original image acquisition module is used for acquiring an original image of the target object area through the camera;
the distance data determining module is used for determining distance data between the target object area and the camera according to the original image;
and the object placing condition determining module is used for determining the object placing condition of the target object placing area according to the distance data and the preset distance.
In a third aspect, an embodiment of the present application further provides a robot, including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the article distribution methods provided in the embodiments of the first aspect of the present application.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements any one of the article distribution methods provided in the embodiments of the first aspect of the present application.
According to the technical scheme of the application, the distance between the target object placing area and the camera is calculated from the original image of the target object placing area, and the object placing condition of the area is judged against the preset distance. This gives the article distribution robot a way to identify the object placing condition automatically: the robot can trigger other work tasks on its own according to that condition, no manual interaction is required, manual operation and interference are reduced, the user experience is improved, and the working efficiency of the article distribution robot is increased.
Drawings
Fig. 1 is a flowchart of an article distribution method according to an embodiment of the present application;
fig. 2 is a flowchart of an article distribution method according to a second embodiment of the present application;
fig. 3 is a flowchart of an article distribution method according to a third embodiment of the present application;
fig. 4 is a structural view of an article dispensing device according to a fourth embodiment of the present application;
fig. 5 is a structural diagram of a robot according to a fifth embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are intended only to illustrate the application, not to limit it. It should be further noted that, for convenience of description, the drawings show only the structures related to the present application rather than all of them.
Example one
Fig. 1 is a flowchart of an article distribution method according to an embodiment of the present application. The method can be executed by an article distribution device; the device can be implemented in software and/or hardware, and can be configured in an article distribution robot in which a camera is arranged above each layer of object placing area, or in a background server.
The article distribution method shown in fig. 1 specifically includes the following steps:
and S110, acquiring an original image of the target object area through the camera.
The target object area may be an object placing area built into the article distribution robot, for example a compartment shelf of a food delivery robot; the area on each shelf where articles can be placed may be referred to as a target object area. Specifically, each object placing area is photographed by the camera arranged above it, and the resulting image of that area serves as the original image. It will be appreciated that the original image should, at a minimum, cover the entire target object area.
S120, determining distance data between the target object area and the camera according to the original image.
The distance data may be the distance between the target object area and the corresponding camera. It is understood that the distance data should reach its maximum value when there is no article to be delivered in the target object area. Specifically, the distance data may be determined from the original image in two ways: with a camera capable of capturing depth information (e.g., a 3D camera or an infrared camera), the distance information contained in the acquired original image can be read directly from the image; with an ordinary 2D camera, the distance data corresponding to the acquired original image can be computed by a pre-trained distance estimation model.
In an alternative embodiment, the determining distance data between the target object area and the camera from the raw image may include: acquiring depth information of the original image through a pre-trained monocular depth estimation model; and obtaining distance data between the target object area and the camera according to the depth information.
The monocular depth estimation model analyzes the original image to predict depth: from the original image it produces a picture containing depth information, and this depth picture is then analyzed to obtain the distance data between the target object area and the camera. In particular, when the monocular depth estimation model is trained, images of transparent cups, beverages, soup and the like can be added to the training set; enlarging the data set in this way optimizes the model and improves the accuracy of the distance calculation.
In the above embodiment, the depth information of the original image is obtained through a pre-trained monocular depth estimation model, and the distance data between the target object area and the camera is derived from it. The advantage of doing so is that the distance data between the target object area and the camera can be obtained quickly and accurately, which provides an effective basis for judging the object placing condition of the area and helps to improve the working efficiency of the article distribution robot.
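As a rough illustration of the step above, the following Python sketch derives the distance data for one object placing area from a depth map. The depth-model interface (`depth_model.predict`), the region-of-interest coordinates, and the use of the median are assumptions made for illustration; the application does not prescribe a particular model or post-processing.

```python
# Illustrative sketch only: `depth_model` and its `predict` method are hypothetical
# stand-ins for any pre-trained monocular depth estimation model.
import numpy as np

def estimate_area_distance(original_image: np.ndarray, depth_model, roi: tuple) -> float:
    """Estimate the camera-to-area distance (same unit as the depth map, e.g. cm).

    original_image: frame captured by the camera above the object placing area.
    roi: (x, y, w, h) bounding box of the target object area in the image.
    """
    depth_map = depth_model.predict(original_image)  # per-pixel distance map
    x, y, w, h = roi
    area_depth = depth_map[y:y + h, x:x + w]
    # The median keeps a few noisy pixels from dominating the estimate.
    return float(np.median(area_depth))
```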
In an alternative embodiment, the acquiring, by the camera, an original image of the target object area may include: acquiring continuous frame original images of the target object area through the camera.
The continuous frame original images can be understood as original images shot by a camera in real time, and the original images of each frame are recorded and used for calculating distance information in a pre-trained monocular depth estimation model.
Accordingly, determining distance data between the target object area and the camera according to the original image may include: respectively determining the initial distance between the target object area and the camera according to each frame of original image; and smoothing the initial distance corresponding to each frame of original image to obtain the distance data.
The initial distance may be the distance information calculated for each frame of original image. Specifically, the continuous frame original images are input into the pre-trained monocular depth estimation model to obtain the initial distance corresponding to each frame, and the sequence of initial distances is smoothed to obtain the distance data. The smoothing may use any smoothing algorithm in the prior art, such as a filtering algorithm, which is not limited in the embodiments of the present application.
according to the technical scheme of the embodiment, the continuous frame original image is obtained and is subjected to smoothing processing, so that the influence of environmental factors is reduced by effectively smoothing distance data, the change condition of the distance data can be more accurately found, and a reliable basis is provided for subsequently judging the object placing condition of the target object placing area.
In an alternative embodiment, the initial distance is a distance between a target placement in the target placement area and the camera.
If the current distribution mode is a tray-free distribution mode, the target placement object is an original placement tray arranged in the target placement object area;
and if the current distribution mode is a tray distribution mode, the target placement object is an additional tray placed in the original placement tray set in the target placement object area.
It should be noted that in some specific scenarios, such as the restaurant delivery example, the article delivery robot itself has a hierarchy of compartments, and some articles may be placed directly in a compartment for delivery; such a compartment can be understood as the original placement tray, which is inherent to the article delivery robot. Other articles are not easily placed directly on the original placement tray and need an additional tray underneath them for delivery.
Therefore, according to the delivery mode, the article delivery robot distinguishes a tray-free delivery mode, in which no additional tray is involved, from a tray delivery mode, in which an additional tray is involved, and the target placement object is chosen differently in the two modes. It can be understood that in the tray-free distribution mode the target placement object is the original placement tray, and the calculated distance data is based on the distance from the camera to that tray; similarly, in the tray distribution mode the target placement object is the additional tray, and the calculated distance data is based on the distance from the camera to the additional tray.
According to the technical scheme of this embodiment, the distance data are calculated differently depending on whether an additional tray is present, so even when the camera sees the additional tray, the judgment of the object placing condition of the target object placing area is not disturbed by the resulting change in distance; mistaken recognition is reduced and the accuracy of the distance data is improved.
S130, determining the object placing condition of the target object placing area according to the distance data and the preset distance.
The preset distance is a distance parameter used as a comparison standard; it may be set manually or changed according to the target placement object in the target placement area. For example, in the tray-free distribution mode the distance data is based on the distance from the camera to the original placement tray, and the preset distance may be 23 cm; in the distribution mode with trays, if the tray is 1 cm thick, the preset distance may be 22 cm. The object placing condition is the presence of an article to be delivered in the target placement area, i.e., whether an article to be delivered is within the area. Specifically, the distance data calculated by the pre-trained monocular depth estimation model is compared with the preset distance to judge whether articles to be delivered are present. A sketch of this mode-dependent preset distance is given after this paragraph.
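The mode-dependent preset distance described above could be derived as in the sketch below. The 23 cm camera-to-tray distance and the 1 cm tray thickness are taken from the example in this paragraph and are not fixed by the application; a real robot would use its own calibration values.

```python
# Illustrative sketch only: calibration values are taken from the example in the text.
from enum import Enum

class DeliveryMode(Enum):
    TRAY_FREE = "tray_free"   # articles placed directly on the original placement tray
    WITH_TRAY = "with_tray"   # articles carried on an additional tray

def preset_distance_cm(mode: DeliveryMode,
                       camera_to_original_tray_cm: float = 23.0,
                       additional_tray_thickness_cm: float = 1.0) -> float:
    """Reference distance against which the measured distance data is compared."""
    if mode is DeliveryMode.TRAY_FREE:
        return camera_to_original_tray_cm
    return camera_to_original_tray_cm - additional_tray_thickness_cm
```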
In an optional implementation, determining the object placing condition of the target object area according to the distance data and the preset distance may include: if the distance data changes from the preset distance to a value smaller than the preset distance, determining that an article to be delivered has been placed in the target object area.
Continuing with the previous example, in the tray-free distribution mode, when the currently detected distance data between the camera and the target object area changes from 23 centimeters to less than 23 centimeters, it is determined that an article to be delivered has been placed in the target object area.
In another alternative embodiment, determining the object placing condition of the target object area according to the distance data and the preset distance may include: if the distance data changes from a value smaller than the preset distance back to the preset distance, determining that the article to be delivered placed in the target object placing area has been taken out.
Continuing with the previous example, in the tray-less dispensing mode, when the detected distance data between the camera and the target object area changes from less than 23 cm to 23 cm, it is determined that the object to be dispensed in the target object area is taken out.
In the technical scheme of this embodiment, comparing the distance data with the preset distance judges the presence of articles to be delivered in the target object area simply and accurately; the judgment is computationally cheap and fast, which saves a large amount of computation and improves the working efficiency of the article delivery robot. A minimal sketch of this comparison logic follows.
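The two transition rules above can be combined into a small state machine, sketched below. The tolerance value is an assumption added to absorb measurement noise and is not part of the application.

```python
# Illustrative sketch only: the tolerance is an assumed noise margin.
class PlacementMonitor:
    """Track whether an article to be delivered is present in one object placing area."""

    def __init__(self, preset_distance_cm: float, tolerance_cm: float = 0.5):
        self.preset = preset_distance_cm
        self.tol = tolerance_cm
        self.occupied = False

    def observe(self, distance_cm: float):
        """Feed the smoothed distance data; return 'placed', 'removed' or None."""
        at_preset = abs(distance_cm - self.preset) <= self.tol
        below_preset = distance_cm < self.preset - self.tol
        if not self.occupied and below_preset:
            self.occupied = True
            return "placed"    # distance dropped below the preset: article put in
        if self.occupied and at_preset:
            self.occupied = False
            return "removed"   # distance returned to the preset: article taken out
        return None
```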
In an alternative embodiment, after determining that the object to be dispensed placed in the target object region is removed, the method may further include: judging whether the taken-out address of the article to be delivered is consistent with a target delivery address associated with the article to be delivered; and controlling to send out a wrong-taking prompt according to the consistent judgment result.
The taken-out address may be the positioning information recorded at the moment the article to be delivered is taken out, for example the position of a dining table in a restaurant or of a shelf in a warehouse. The target delivery address is the position to which the article to be delivered needs to be delivered. Specifically, whether the article to be delivered has been delivered correctly to the target delivery address is determined by checking whether the taken-out address matches the target delivery address. If the article is taken out before the target delivery address is reached, the article delivery robot issues a mis-take alarm prompting the user to put the article back. If the delivery succeeded, the article delivery robot can carry out other follow-up work according to the preset task conditions.
Taking restaurant delivery as an example, suppose the target delivery address associated with article A is dining table B. When article A is taken out, the article delivery robot reads its current position and compares it with the position of dining table B. If the two are inconsistent, article A has evidently been taken by mistake: the prompt lamp is switched on immediately and an audible alarm asks the user to put the article back. If the current position is consistent with the position of dining table B, the delivery has succeeded, and the article delivery robot continues working according to its task conditions, for example serving other dining tables or returning to load new articles to be delivered.
According to the technical scheme of this embodiment, whether delivery has succeeded is determined by judging whether the taken-out address of the article to be delivered matches the target delivery address. The delivery condition can thus be judged accurately, and the user is reminded in time when an article is taken out by mistake, which greatly reduces the probability of wrong delivery, cuts down repeated delivery work, and improves the working efficiency of the article delivery robot. A sketch of the address check follows.
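A sketch of the address check is given below. The position interface, the matching radius, and the alert hooks (`flash_prompt_lamp`, `play_voice`) are hypothetical names used only for illustration; the application only requires comparing the taken-out address with the target delivery address and issuing a mis-take prompt on mismatch.

```python
# Illustrative sketch only: the robot interface names below are hypothetical.
from dataclasses import dataclass
import math

@dataclass
class Position:
    x: float
    y: float

def addresses_match(taken_out: Position, target: Position, radius_m: float = 1.0) -> bool:
    """Treat the two addresses as consistent when they lie within a small radius."""
    return math.hypot(taken_out.x - target.x, taken_out.y - target.y) <= radius_m

def on_article_removed(robot, target_delivery_address: Position):
    taken_out_address = robot.get_position()   # position at the moment of removal
    if addresses_match(taken_out_address, target_delivery_address):
        robot.start_next_task()                # delivery succeeded, continue working
    else:
        robot.flash_prompt_lamp()              # mis-take: light prompt
        robot.play_voice("Article taken out by mistake, please put it back")
```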
According to the technical scheme of the application, the distance between the target object area and the camera is calculated from the original image of the target object area, and the object placing condition of the area is judged against the preset distance. This gives the article distribution robot a way to identify the object placing condition automatically: the robot can trigger other work tasks on its own according to that condition, no manual interaction is required, manual operation and interference are reduced, the user experience is improved, and the working efficiency of the article distribution robot is increased.
In an optional embodiment, at least one layer of object placing area is provided with at least two object placing partitions; correspondingly, the target object area is one object placing partition within the object placing area.
An object placing partition is one of the areas into which an object placing area is divided; it may hold articles to be delivered that are associated with the same target delivery address or with different target delivery addresses. It should be noted that the article distribution robot includes at least one layer of object placing area, each layer is provided with at least two object placing partitions, and each partition can serve as an independent object placing area. It will be appreciated that, when images are acquired, an image is acquired for each object placing partition, and the distance data for each partition can then be calculated from its image by the processing method described above.
According to the technical scheme of this embodiment, by distinguishing the object placing partitions, different articles in the same layer can be delivered to different target delivery addresses. With the robot's capacity fixed, the space inside the cabin is used to the greatest extent, so the robot can execute more delivery tasks per trip, which greatly improves its working efficiency.
Example two
Fig. 2 is a flowchart of an article distribution method according to a second embodiment of the present application. This embodiment applies to delivering the different articles placed in the same original placement tray to their respective destinations; it supplements the delivery operations of the article distribution robot and improves article delivery efficiency.
The article distribution method shown in fig. 2 specifically includes the following steps:
S210, acquiring the current task to be delivered in an associated delivery event, where the articles to be delivered corresponding to different tasks in the associated delivery event are placed in different object placing partitions of the current robot's original placement tray.
The associated delivery event is a set of tasks to be delivered that the current robot needs to execute one after another, and it comprises at least two such tasks. At least one original placement tray of the current robot can be divided into at least two object placing partitions, and the same robot can be provided with at least two original placement trays. To avoid mixing up articles, the articles corresponding to different tasks to be delivered can be placed in different object placing partitions and delivered separately. In practice, when one task involves too many articles to fit in a single partition, its articles can also be spread across several partitions.
Specifically, taking a restaurant that uses the robot to deliver food as an example, the whole run in which the robot leaves the food pick-up point, completes all tasks to be delivered, and returns to the pick-up point is called an associated delivery event. Suppose the current robot has 4 layers of original placement trays and each layer can be divided into 2 object placing partitions, so the robot has 8 partitions in total. The meals in different partitions may be delivered to different target delivery addresses or to the same one. If the meals in the 8 partitions need to be delivered to 6 different target delivery addresses, the associated delivery event contains 6 tasks to be delivered that must be executed in succession.
S220, selecting a target object placing partition corresponding to the current task to be delivered from all the object placing partitions.
Since the articles corresponding to the current task to be delivered are placed in at least one object placing partition, before delivery each task is matched, according to the actual situation, with the partition or partitions it needs. When the current task to be delivered is executed, the partition matched with it (namely, the target object placing partition) is selected.
In an alternative embodiment, the article distribution method may further include: presetting a binding relation between a target delivery address and a storage partition in an associated delivery event; and determining a target object-placing partition corresponding to the task to be distributed currently according to the binding relationship.
Before all distribution works start, different target distribution addresses and different object-placing partitions can be bound in advance, and objects to be distributed corresponding to all tasks to be distributed in the associated distribution events are respectively placed in the corresponding target object-placing partitions according to the binding relation.
Taking a restaurant food delivery scene as an example, the delivery addresses corresponding to the different object placing partitions can be set before the food delivery robot is put into use. For example, partition No. 1 can be bound to table No. 1, partition No. 2 to table No. 2, and so on, and articles that need to go to table No. 1 are placed in partition No. 1 according to the binding relation. The advantage is that each partition is bound to a fixed target delivery address, which makes it easier for the delivery side to manage the tasks to be delivered. However, this delivery mode is rigid: articles in different partitions cannot be sent to the same target delivery address, which reduces delivery flexibility and efficiency.
In another optional embodiment, before acquiring the current task to be delivered in the associated delivery event, the method may further include: setting corresponding relations between different tasks to be distributed and object-placing partitions in the associated distribution events; and selecting a target object-placing partition corresponding to the current task to be distributed from all the object-placing partitions according to the corresponding relation.
The correspondence between tasks to be delivered and object placing partitions specifies in which partition the articles of each task are placed; the target object placing partition of the current task can therefore be determined from this correspondence. In practice, the correspondence may be set manually, or the partitions may be allocated automatically by the current robot's background server according to the specific tasks to be delivered.
For example, before acquiring the current task to be delivered in the associated delivery event, the corresponding relationship between different tasks to be delivered and the placement partition may be preset.
In one specific example, assume that the associated delivery event contains 6 tasks to be delivered that need to be performed consecutively. When the current robot is loaded with meals, each task to be delivered is matched with at least one object placing partition in which its meals are placed (that is, the correspondence between tasks to be delivered and object placing partitions is set).
According to the technical scheme of this embodiment, the corresponding target object placing partition is determined before each task to be delivered is executed, and partitions can be allocated according to the actual situation; this allows dynamic management of the partitions used by different tasks and improves delivery flexibility and efficiency. A sketch of such a task-to-partition correspondence is given below.
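The correspondence could be represented by a simple mapping, as in the sketch below. The task identifiers, partition names, and table numbers are invented for illustration; the application only requires that each task to be delivered is associated with the partitions holding its articles.

```python
# Illustrative sketch only: identifiers are made up for the example.
from typing import Dict, List

# One associated delivery event: task id -> target delivery address (e.g. a table).
delivery_tasks: Dict[str, str] = {"task_1": "table_3", "task_2": "table_7"}

# Correspondence set when the robot is loaded: task id -> partitions holding its articles.
task_to_partitions: Dict[str, List[str]] = {
    "task_1": ["layer1_left"],
    "task_2": ["layer1_right", "layer2_left"],  # one task may use several partitions
}

def select_target_partitions(current_task: str) -> List[str]:
    """Return the target object placing partitions for the current task to be delivered."""
    return task_to_partitions[current_task]
```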
S230, controlling the current robot to go to the target delivery address of the current task to be delivered, so as to deliver the articles to be delivered placed in the target object placing partition.
After the current task to be distributed and the corresponding target object-placing partition are determined, the current robot is controlled to travel to the target distribution address corresponding to the current task to be distributed, and therefore the objects to be distributed in the target object-placing partition are distributed to the target distribution address.
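Putting S210 to S230 together, the delivery loop for one associated delivery event might look like the sketch below, reusing the `delivery_tasks` and `task_to_partitions` structures from the previous sketch; `navigate_to` and `wait_until_removed` are hypothetical robot interfaces standing in for the navigation and the camera-based removal detection described elsewhere in this application.

```python
# Illustrative sketch only: robot interface names are hypothetical.
def run_delivery_event(robot, delivery_tasks, task_to_partitions):
    """Execute the tasks of one associated delivery event in sequence (S210-S230)."""
    for task_id, target_address in delivery_tasks.items():
        partitions = task_to_partitions[task_id]  # target object placing partitions (S220)
        robot.navigate_to(target_address)         # go to the target delivery address (S230)
        robot.wait_until_removed(partitions)      # removal detected by the cameras above the partitions
```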
According to the technical scheme of this application, at least two object placing partitions are arranged on the original placement tray to hold the articles of different tasks to be delivered. Different articles in the same original placement tray can therefore be delivered to different target delivery addresses, overcoming the prior-art limitation that the articles in one tray could only go to a single address. Because all partitions of the same tray can carry articles, the object placing area is used more efficiently, the robot makes fewer round trips, its energy consumption drops, and the article delivery efficiency is greatly improved.
Example three
Fig. 3 is a flowchart of an article distribution method according to a third embodiment of the present application. The embodiment of the application is based on the technical solutions of the foregoing embodiments, and the operation of judging whether the article to be delivered is correctly delivered is supplemented, so as to improve the accuracy of the article delivery process.
Referring to fig. 3, a method for distributing articles specifically includes the following steps:
s310, acquiring a current task to be distributed in the associated distribution event; and different tasks to be distributed in the related distribution events correspond to the articles to be distributed, and the articles are placed in different article-placing partitions in the original article-placing tray of the current robot.
S320, selecting a target object placing partition corresponding to the current task to be delivered from all the object placing partitions.
S330, controlling the current robot to go to the target distribution address of the current task to be distributed so as to distribute the objects to be distributed placed in the target object placing partition.
S340, acquiring an original image of the target object placing partition collected by the current robot through the image acquisition device.
In the process of delivering the articles, the image acquisition device can collect image information of the articles to be delivered, which is used later to judge their state. For the current task to be delivered, the image acquisition device needs to collect the original image of the target object placing partition. At least one image acquisition device can be arranged in the current robot to collect original images of the different object placing partitions; to improve the accuracy of the captured original images, and therefore of the state recognition result for the articles to be delivered, a dedicated image acquisition device can be arranged for each partition. In an alternative embodiment, the image acquisition device may be installed above the original placement tray and photograph the corresponding partition downwards in real time or at regular intervals. The image acquisition device may be a camera or another device, which is not limited in this application.
It should be noted that, in practical situations, the current robot generally includes at least two original placement trays, and therefore a corresponding image capturing device should be installed above each original placement tray.
S350, identifying, according to the original image, whether the articles to be delivered placed in the target object placing partition have been taken out.
According to the original image of the target object-placing partition acquired by the image acquisition device, the existence state of the object to be distributed in the target object-placing partition is identified, so that whether the object to be distributed in the target object-placing partition is taken out or not is judged. For example, a preset image processing algorithm may be used to compare the continuously shot original images, so as to identify whether the object storage partition has the articles left to be delivered. The distance from the image acquisition device to the article to be dispensed in the original image can be acquired through a preset image processing algorithm, and whether the article to be dispensed is taken out or not is judged according to the distance change.
In an alternative embodiment, the identifying whether the to-be-dispensed item placed in the target placement partition is taken out according to the original image may include: processing the original image to obtain a depth image of the original image; according to the depth image, determining distance data between the target object-placing partition and the image acquisition device; and determining whether the to-be-dispensed articles placed in the target placement partition are taken out or not according to the distance data.
The original image can be converted into a depth image containing depth information through a preset image processing algorithm, which yields the distance information from the image acquisition device to each article to be delivered in the target object placing partition. Whether the articles to be delivered have been taken out is then judged from the changes in this distance information. Judging the state of the articles through the distance data of the depth image makes it possible to recognize all the articles in the target object placing partition at the same time, improving the accuracy and efficiency of article recognition.
In an alternative embodiment, the determining whether the to-be-dispensed item placed in the target placement partition is removed according to the distance data may include: if the distance data meet the preset taking-out distance condition, determining that the articles to be delivered arranged in the target object-placing partition are taken out; and if the distance data does not meet the preset taking-out distance condition, determining that the articles to be dispensed arranged in the target object placing partition are not taken out.
The taking-out distance condition can be preset according to the actual situation: when the distance data obtained from the depth image meets the preset taking-out distance condition, the articles to be delivered corresponding to that depth image are judged to have been taken out; when it does not, they are judged not to have been taken out. The advantage is that whether the articles to be delivered have been removed can be recognized accurately.
Optionally, the preset taking-out distance condition may be: the distance data is not less than the standard distance between the image acquisition device corresponding to the target object placing partition and the target placement object in that partition.
The distance between the image acquisition device mounted above the target object placing partition and the target placement object may be defined as the standard distance. When the distance data between the image acquisition device and the target object placing partition is recognized to be not smaller than the standard distance, it is determined that the articles to be delivered in the partition have been taken out.
The target placement object can be an object placed on the current robot's original placement tray to carry the articles to be delivered. Taking a restaurant as an example, to make delivery and collection easier, the restaurant uses an additional tray to carry a meal and places it in the robot's original placement tray, and the customer does not take the additional tray when taking the meal. This additional tray can therefore be understood as a target placement object, and the standard distance is then the distance between the image acquisition device and the additional tray.
The target placement may also be the original placement tray itself of the current robot. In actual conditions, the articles to be delivered can be delivered by directly placing the articles in the original article placing tray without lifting any article, and the standard distance is the distance between the image acquisition device and the original article placing tray of the current robot.
It should be noted that articles of different heights may be placed in the same target object placing partition, for example stacked articles of different heights, or articles whose height changes during delivery, such as dishes or soup (dishes may shift and soup may slosh). It can therefore be specified that the articles in the partition are judged to be completely taken out only when all of the collected distance data is not less than the standard distance.
The advantage of this setting is that, no matter how the height of the transported articles in the object placing partition changes, the taken-out state can be recognized accurately, which improves the accuracy of article state recognition. A sketch of this whole-partition check follows.
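A sketch of this whole-partition check is given below; judging the partition empty only when the minimum distance in its region is not less than the standard distance is an illustrative reading of the rule above, and the small margin is an assumption.

```python
# Illustrative sketch only: the margin is an assumed calibration tolerance.
import numpy as np

def partition_emptied(depth_map_cm: np.ndarray,
                      partition_roi: tuple,
                      standard_distance_cm: float,
                      margin_cm: float = 0.5) -> bool:
    """True only when every pixel of the partition is at least the standard distance away.

    Using the minimum distance means the tallest remaining article (a stacked box,
    a bowl of soup) keeps the partition reported as occupied.
    """
    x, y, w, h = partition_roi
    roi = depth_map_cm[y:y + h, x:x + w]
    return float(roi.min()) >= standard_distance_cm - margin_cm
```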
Therefore, optionally, if the associated delivery event is set to be a delivery mode with an additional tray, the target placement object is an additional tray; and if the associated delivery event is set to be a delivery mode without an additional tray, the target placement object is the original placement tray of the target placement partition.
In practice, when the articles are placed directly in the target object placing partition of the current robot without an additional tray, the associated delivery event belongs to the delivery mode without an additional tray, and the standard distance in this mode is the distance between the target object placing partition and the corresponding image acquisition device. For example, if the distance between the robot's original placement tray and the camera mounted above it is 23 cm, the standard distance is set to 23 cm.
If an additional tray is needed for delivery, the articles to be delivered are carried on the additional tray and placed into the target object placing partition of the current robot, and the standard distance becomes the distance between the additional tray in the partition and the corresponding image acquisition device. For example, if the restaurant's additional tray is 1 cm thick, the standard distance is set to 22 cm.
The advantage of this setting is that different delivery modes can be defined according to whether an additional tray is used, and the taken-out state of the articles can be judged accurately against the corresponding standard distance.
S360, controlling, according to the recognition result, the acquisition of the next current task to be delivered in the associated delivery event.
The recognition result is either "article taken out" or "article not taken out", and the "article taken out" state can be further divided into "taken out correctly" and "taken out by mistake". Whether the next current task to be delivered in the associated delivery event should be carried out is judged according to these different recognition results. For example, if the articles to be delivered are recognized as taken out correctly, the current task is completed and the next current task to be delivered can be obtained; if the articles are recognized as taken out by mistake or not taken out, the current robot can be controlled to report the actual situation.
In an optional implementation manner, the controlling, according to the recognition result, to execute the next task to be currently delivered may include: if the to-be-distributed articles placed in the target article placing partition are taken out, judging whether the taken-out address is consistent with the target distribution address or not; and controlling to execute the next current task to be distributed in the associated distribution event according to the consistent judgment result.
If the distance data is not smaller than the currently set standard distance, the article to be delivered is judged to be taken out, the address where the article to be delivered is taken out is compared with the target delivery address, whether the two addresses are consistent or not is judged, and whether the next current task to be delivered needs to be obtained or not is determined according to the judgment result.
According to the technical scheme of the embodiment, whether the current task to be distributed is completed or not can be judged according to the consistency of the addresses, so that the next task to be distributed is started, and a judgment basis is provided for judging whether the subsequent task to be distributed in the associated distribution event can be started or not.
In an optional implementation manner, the controlling, according to a result of the consistency determination, to execute a next current task to be delivered in the associated delivery event may include: if the extracted address is consistent with the target delivery address, controlling to execute the next current task to be delivered in the associated delivery event; and if the extracted address is inconsistent with the target distribution address, controlling to send alarm information.
When the taken-out address is consistent with the target delivery address, the article is considered to have been taken out correctly, the current task to be delivered is completed, and the next task can be started. When the taken-out address is inconsistent with the target delivery address, the article has been taken out by mistake, and the current robot is controlled to issue the preset alarm information.
For example, when the current robot delivers food to a customer's table and the customer takes all the food out, the robot recognizes that the address at which the articles were taken out is the target delivery address, the food delivery task is completed, and the information of the next task to be delivered (which partition's articles go where) is obtained. If, instead, the robot recognizes that the address at which an article was taken out does not match the target delivery address associated with that partition, the article has evidently been taken away by mistake, and the robot is immediately controlled to issue the preset alarm information, for example flashing the prompt lamp while the speaker plays the message "article taken out by mistake, please put it back".
The advantage is that the robot can cope with articles being taken out by mistake, a problem that occurs frequently in daily use; its handling capability during delivery, the accuracy of delivery, and the delivery efficiency are all improved.
According to the technical scheme of this application, whether the articles to be delivered have been taken away at the target delivery address is judged from the image information collected by the image acquisition device, and delivery then continues with the next task to be delivered. This improves the capacity to process tasks to be delivered continuously and the efficiency of the whole delivery service process.
Example four
Fig. 4 is a structural diagram of an article dispensing device according to a fourth embodiment of the present application, where the article dispensing device according to the fourth embodiment of the present application is applicable to identify a placement condition of a target placement area, and the article dispensing device may be implemented by software and/or hardware, and may be configured in a current robot or a background server of the current robot. As shown in fig. 4, the article dispensing device 400 may include: an original image acquisition module 410, a distance data determination module 420, and a placement determination module 430, wherein,
an original image obtaining module 410, configured to obtain an original image of the target object area through the camera;
a distance data determining module 420, configured to determine distance data between the target object area and the camera according to the original image;
and an object placement condition determining module 430, configured to determine an object placement condition of the target object placement area according to the distance data and a preset distance.
According to the technical scheme of the application, the distance between the target object placing area and the camera is calculated from the original image of the target object placing area, and the object placing condition of the area is judged against the preset distance. This gives the article distribution robot a way to identify the object placing condition automatically: the robot can trigger other work tasks on its own according to that condition, no manual interaction is required, manual operation and interference are reduced, the user experience is improved, and the working efficiency of the article distribution robot is increased.
In an alternative embodiment, the distance data determining module 420 may include:
the depth information acquisition unit is used for acquiring the depth information of the original image through a pre-trained monocular depth estimation model;
and the distance data determining unit is used for obtaining the distance data between the target object area and the camera according to the depth information.
In an alternative embodiment, the placement determination module 430 may include:
and the object placing condition judgment unit is used for determining that the object to be delivered is placed in the target object placing area if the distance data is changed from the preset distance to be smaller than the preset distance.
In an alternative embodiment, the placement determination module 430 may include:
and the object placing condition judging unit is used for determining that the object to be dispensed placed in the target object placing area is taken out if the distance data is changed from the distance smaller than the preset distance to the preset distance.
In an alternative embodiment, the apparatus may further comprise:
the address judgment module is used for judging whether the taken-out address of the article to be delivered is consistent with the target delivery address associated with the article to be delivered;
and the error-taking reminding module is used for controlling to send error-taking reminding according to the consistent judgment result.
In an alternative embodiment, the raw image acquiring module 410 may include:
the continuous frame image acquisition unit is used for acquiring continuous frame original images of the target object area through the camera;
accordingly, the distance data determining module 420 may include:
the initial distance determining unit is used for respectively determining the initial distance between the target object area and the camera according to each frame of original image;
and the smoothing unit is used for smoothing the initial distance corresponding to each frame of original image to obtain the distance data.
In an alternative embodiment, the initial distance is a distance between a target placement in the target placement area and the camera.
In an alternative embodiment, if the current distribution mode is the tray-less distribution mode, the target placement object is an original placement tray set in the target placement area;
and if the current distribution mode is a tray distribution mode, the target placement object is an additional tray placed in the original placement tray set in the target placement object area.
In an alternative embodiment, at least one layer of object placing area is provided with at least two object placing partitions; correspondingly, the target object area is one object placing partition within the object placing area. The article distribution device provided in the embodiments of the present application can execute the article distribution method provided in any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing that method.
Example five
Fig. 5 is a structural diagram of a robot according to a fifth embodiment of the present application. FIG. 5 illustrates a block diagram of an exemplary robot 512 suitable for use in implementing embodiments of the present application. The robot 512 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, the robot 512 is in the form of a general purpose computing device. The components of the robot 512 may include, but are not limited to: one or more processors or processing units 516, a system memory 528, and a bus 518 that couples the various system components including the system memory 528 and the processing unit 516.
Bus 518 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
The robot 512 typically includes a variety of computer system readable media. These media may be any available media that can be accessed by the robot 512 and include both volatile and nonvolatile media, removable and non-removable media.
The system memory 528 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 530 and/or cache memory 532. The robot 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 540 having a set (at least one) of program modules 542, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored in, for example, the memory 528; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 542 generally carry out the functions and/or methods of the embodiments described herein.
The robot 512 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), with one or more devices that enable a user to interact with the robot 512, and/or with any devices (e.g., network card, modem, etc.) that enable the robot 512 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 522. Also, the robot 512 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 520. As shown, the network adapter 520 communicates with the other modules of the robot 512 via a bus 518. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the robot 512, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 516 executes various functional applications and data processing by running at least one of the programs stored in the system memory 528, for example, to implement the article distribution method provided in the embodiments of the present application.
EXAMPLE six
The sixth embodiment of the present application further provides a computer-readable storage medium on which a computer program (or computer-executable instructions) is stored, and the program, when executed by a processor, implements the article distribution method provided in the embodiments of the present application: acquiring an original image of a target object area through the camera; determining distance data between the target object area and the camera according to the original image; and determining the object placing condition of the target object placing area according to the distance data and the preset distance.
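As a non-authoritative sketch of the three steps recited above, the pipeline could look roughly like the following; the camera interface, the pre-trained monocular depth estimation model, the region-of-interest bounds, and all identifiers are assumptions made for illustration rather than the claimed implementation.

import numpy as np

def estimate_distance(camera, depth_model, roi):
    # camera.capture() and depth_model(image) are hypothetical interfaces that
    # stand in for the robot's camera driver and a pre-trained monocular depth
    # estimation model returning a per-pixel depth map in meters.
    image = camera.capture()                  # step 1: acquire the original image
    depth_map = depth_model(image)            # step 2: per-pixel depth estimation
    top, bottom, left, right = roi            # assumed bounds of the target area
    region = depth_map[top:bottom, left:right]
    return float(np.median(region))           # robust distance for the placing area

def object_placing_condition(distance, preset_distance, tolerance=0.005):
    # Step 3: compare the measured distance with the preset distance.
    if distance < preset_distance - tolerance:
        return "occupied"                     # something sits in the placing area
    return "empty"                            # area is at its preset distance

In such a sketch, the smoothing over consecutive frames described in the embodiments could be realized, for example, by averaging several per-frame distances before calling object_placing_condition.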
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the embodiments of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, or a conventional procedural programming language such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (12)

1. An article distribution method, applied to an article distribution robot in which a camera is provided above each layer of object placing area, the method comprising:
acquiring an original image of a target object area through the camera;
determining distance data between the target object area and the camera according to the original image;
and determining the object placing condition of the target object placing area according to the distance data and the preset distance.
2. The method of claim 1, wherein determining the distance data between the target object area and the camera according to the original image comprises:
acquiring depth information of the original image through a pre-trained monocular depth estimation model;
and obtaining distance data between the target object area and the camera according to the depth information.
3. The method of claim 1, wherein determining the object placing condition of the target object placing area according to the distance data and the preset distance comprises:
if the distance data changes from the preset distance to a distance smaller than the preset distance, determining that an article to be dispensed is placed in the target object placing area.
4. The method of claim 1, wherein determining the object placing condition of the target object placing area according to the distance data and the preset distance comprises:
if the distance data changes from a distance smaller than the preset distance back to the preset distance, determining that the article to be dispensed placed in the target object placing area has been taken out.
5. The method according to claim 4, wherein after determining that the article to be dispensed placed in the target object placing area has been taken out, the method further comprises:
judging whether the address at which the article to be dispensed is taken out is consistent with a target delivery address associated with the article to be dispensed;
and controlling to issue a wrong-taking prompt according to the consistency judgment result.
6. The method of claim 1, wherein acquiring the original image of the target object area through the camera comprises:
acquiring consecutive frames of original images of the target object area through the camera;
correspondingly, determining the distance data between the target object area and the camera according to the original image comprises:
determining an initial distance between the target object area and the camera according to each frame of original image respectively;
and smoothing the initial distances corresponding to the frames of original images to obtain the distance data.
7. The method of claim 6, wherein the initial distance is a distance between a target placement object in the target placement area and the camera.
8. The method of claim 7, wherein if the current distribution mode is a tray-less distribution mode, the target placement object is the original placement tray provided in the target placement area;
and if the current distribution mode is a tray distribution mode, the target placement object is an additional tray placed on the original placement tray provided in the target placement area.
9. The method of any one of claims 1-8, wherein at least one layer of the object placing area is provided with at least two object placing sub-areas; correspondingly, the target object placing area is an object placing sub-area in the object placing area.
10. An article distribution apparatus, applied to an article distribution robot in which a camera is provided above each layer of object placing area, the apparatus comprising:
the original image acquisition module is used for acquiring an original image of the target object area through the camera;
the distance data determining module is used for determining distance data between the target object area and the camera according to the original image;
and the object placing condition determining module is used for determining the object placing condition of the target object placing area according to the distance data and the preset distance.
11. A robot, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the article distribution method according to any one of claims 1-9.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the article distribution method according to any one of claims 1-9.
CN202210198847.XA 2022-03-02 2022-03-02 Article distribution method and device, robot and storage medium Pending CN114559431A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210198847.XA CN114559431A (en) 2022-03-02 2022-03-02 Article distribution method and device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210198847.XA CN114559431A (en) 2022-03-02 2022-03-02 Article distribution method and device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN114559431A true CN114559431A (en) 2022-05-31

Family

ID=81716113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210198847.XA Pending CN114559431A (en) 2022-03-02 2022-03-02 Article distribution method and device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN114559431A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520194A (en) * 2017-12-18 2018-09-11 上海云拿智能科技有限公司 Kinds of goods sensory perceptual system based on imaging monitor and kinds of goods cognitive method
US20210402610A1 (en) * 2019-06-17 2021-12-30 Lg Electronics Inc. Mobile robot and method of controlling the same
US20210154856A1 (en) * 2019-11-25 2021-05-27 Toyota Jidosha Kabushiki Kaisha Conveyance system, trained model generation method, trained model, control method, and program
CN113264313A (en) * 2020-06-12 2021-08-17 深圳市海柔创新科技有限公司 Shooting method for picking up/putting down goods, shooting module and transfer robot
CN111899131A (en) * 2020-06-30 2020-11-06 上海擎朗智能科技有限公司 Article distribution method, apparatus, robot and medium
CN111906780A (en) * 2020-06-30 2020-11-10 上海擎朗智能科技有限公司 Article distribution method, robot and medium
CN113159669A (en) * 2021-03-23 2021-07-23 苏州银翼智能科技有限公司 Tray adjusting method and device, storage medium and electronic device
CN113246148A (en) * 2021-04-30 2021-08-13 上海擎朗智能科技有限公司 Distribution robot and positioning method thereof

Similar Documents

Publication Publication Date Title
JP6744430B2 (en) How to automatically generate a shelf allocation table that assigns products to a shelf structure in a store
CN108382783B (en) Article pickup method, delivering method, access part method and storage medium
WO2019165894A1 (en) Article identification method, device and system, and storage medium
US11461753B2 (en) Automatic vending method and apparatus, and computer-readable storage medium
EP3816919A1 (en) Order processing method and device, server, and storage medium
WO2019184646A1 (en) Method and device for identifying merchandise, merchandise container
JP2020184356A (en) Method for tracking placement of product on shelf in store
CN109118137A (en) A kind of order processing method, apparatus, server and storage medium
CN109117824B (en) Commodity management method and device, electronic equipment and storage medium
WO2018196526A1 (en) Method and apparatus for automatically associating bin and confirming sequential casting
US11328250B2 (en) Inventory management server, inventory management system, inventory management program, and inventory management method
WO2019222246A1 (en) Systems and methods for automated storage and retrieval
CN111589730B (en) Goods picking method, device, equipment and storage medium
CN111325499A (en) Article delivery method and device, robot and storage medium
WO2022052810A1 (en) Method for guiding robot to transport cargo in warehouse, and apparatus
CN111661548B (en) Article sorting method, apparatus, device and storage medium
CN112232726A (en) Goods picking method, device, server and storage medium
US20230245476A1 (en) Location discovery
WO2022222801A1 (en) Warehousing management method and apparatus, warehousing robot, warehousing system, and medium
CN116415862A (en) Freight information processing method and system
CN109118150B (en) Commodity volume estimation method and device, computer equipment and storage medium
US20210082031A1 (en) Order processing method and device, and goods volume estimation method and device
US20180285708A1 (en) Intelligent Fixture System
CN114559431A (en) Article distribution method and device, robot and storage medium
CN112950658A (en) Optical disk evaluation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination