CN114559431B - Article distribution method, device, robot and storage medium - Google Patents
Article distribution method, device, robot and storage medium
- Publication number
- CN114559431B (application CN202210198847.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- area
- distance data
- placement
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/006—Controls for manipulators by means of a wireless system for controlling one or several manipulators
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The embodiment of the application discloses an article distribution method and device, a robot, and a storage medium. The method comprises: acquiring an original image of a target placement area through a camera; determining distance data between the target placement area and the camera according to the original image; and determining the placement condition of the target placement area by comparing the distance data with a preset distance. In this way, the camera can automatically identify whether articles to be distributed are present in the article distribution robot, improving the efficiency of the robot's autonomous service during article distribution and providing better service for users.
Description
Technical Field
The embodiments of the application relate to robot technology, and in particular to an article distribution method and device, a robot, and a storage medium.
Background
With the rapid development of robot technology, more and more fields of production and daily life use robots in place of manual labor; for example, service robots in restaurants can deliver food and other articles to customers.
In the prior art, after a restaurant service robot delivers articles to a designated table, the user takes the articles out and then confirms through a manual interaction with the robot that the articles have been removed; only after receiving this confirmation signal does the robot proceed with subsequent work. However, this approach requires a manual user operation to determine the state of the articles to be distributed, which reduces the robot's distribution efficiency.
Disclosure of Invention
The application provides an article distribution method, an article distribution device, a robot and a storage medium, so as to improve the distribution efficiency of the robot.
In a first aspect, an embodiment of the present application provides an article distribution method, including:
acquiring an original image of a target object placing area through a camera;
determining distance data between the target object placing area and the camera according to the original image;
and determining the placement condition of the target placement area according to the distance data and the preset distance.
In a second aspect, embodiments of the present application also provide an article dispensing apparatus, including:
the original image acquisition module is used for acquiring an original image of the target object placing area through the camera;
The distance data determining module is used for determining distance data between the target object placing area and the camera according to the original image;
And the object placement condition determining module is used for determining the object placement condition of the target object placement area according to the distance data and the preset distance.
In a third aspect, an embodiment of the present application further provides a robot, including:
One or more processors;
A memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the article distribution methods provided by the embodiments of the first aspect of the present application.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the article distribution methods provided by the embodiments of the first aspect of the present application.
According to the technical scheme of the application, the distance between the target placement area and the camera is calculated from the original image of that area, and the placement condition of the area is judged in combination with a preset distance. This provides the article distribution robot with a way to identify the placement condition automatically and to trigger its other work tasks accordingly, without manual interaction; manual operations and their influence are reduced, user experience is improved, and the working efficiency of the article distribution robot is increased.
Drawings
FIG. 1 is a flow chart of a method for distributing articles according to a first embodiment of the present application;
FIG. 2 is a flow chart of a method for delivering articles according to a second embodiment of the present application;
FIG. 3 is a flow chart of a method for delivering articles according to a third embodiment of the present application;
FIG. 4 is a block diagram of an article dispensing apparatus according to a fourth embodiment of the present application;
FIG. 5 is a structural diagram of a robot according to a fifth embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings.
Example 1
Fig. 1 is a flowchart of an article distribution method according to the first embodiment of the present application. This embodiment is applicable to identifying the placement condition of a placement area. The method may be executed by an article distribution device, which can be implemented in software and/or hardware; the device may be configured on an article distribution robot that has a camera mounted above each placement area, or in a background server.
Referring to the article distribution method shown in Fig. 1, the method specifically comprises the following steps:
s110, acquiring an original image of the target object placing area through the camera.
The target storage area may be a storage area built in the article dispensing robot, for example, an in-cabin interlayer of the meal delivery robot, and an area on each interlayer where articles can be placed may be referred to as a target storage area. Specifically, each object placing area is shot through a camera arranged above each object placing area, and an image corresponding to the object placing area is obtained and used as an original image. It will be appreciated that the dimensions of the original image should at a minimum cover the entire target placement area.
S120, determining distance data between the target object placing area and the camera according to the original image.
The distance data may be the distance between the target placement area and the corresponding camera. It will be appreciated that the distance data reaches its maximum when no articles to be distributed are present in the target placement area. Specifically, the distance data may be determined from the original image in two ways: a camera that captures depth information (e.g. a 3D camera or an infrared camera) can provide the distance information directly in the collected original image; alternatively, with an ordinary 2D camera, the distance data corresponding to the acquired original image can be computed by a pre-trained distance estimation model.
In an optional embodiment, the determining the distance data between the target object-placing area and the camera according to the original image may include: obtaining depth information of the original image through a pre-trained monocular depth estimation model; and obtaining distance data between the target object placing area and the camera according to the depth information.
The monocular depth estimation model analyses the original image to predict data related to the distance, produces a picture containing depth information based on the original image, and then analyses that depth picture to obtain the distance data between the target placement area and the camera. In particular, when training the monocular depth estimation model, pictures of transparent cups, beverages, soup and the like can be added to the training set; enlarging the data set in this way optimizes the model and improves the accuracy of the distance calculation.
In the above embodiment, the depth information of the original image is obtained by a pre-trained monocular depth estimation model, from which the distance data between the target placement area and the camera is derived. The advantage is that the distance data can be acquired quickly and accurately, providing an effective basis for judging the placement condition of the placement area and improving the working efficiency of the article distribution robot.
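As a minimal sketch of the last part of this step, assume the depth model has already produced a per-pixel depth map; the scalar distance for the placement area can then be obtained by aggregating the depths inside that area. The patent does not fix the aggregation, so the median (robust to a few noisy pixels) is used here, and `area_distance` and its region format are hypothetical:

```python
def area_distance(depth_map, region):
    """Aggregate a per-pixel depth map (values in cm) into one distance.

    depth_map: 2D list of depths; region: (row0, row1, col0, col1) bounds
    of the target placement area within the image (hypothetical format).
    """
    r0, r1, c0, c1 = region
    values = sorted(
        depth_map[r][c]
        for r in range(r0, r1)
        for c in range(c0, c1)
    )
    n = len(values)
    mid = n // 2
    # Median rather than mean: a few bad depth pixels (e.g. on a
    # transparent cup) should not dominate the result.
    return values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2
```

In practice the region bounds would come from the camera calibration for each placement area; any robust statistic (median, trimmed mean) serves the same purpose.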
In an optional implementation manner, the acquiring, by the camera, the original image of the target placement area may include: and acquiring continuous frame original images of the target object placing area through the camera.
The continuous-frame original images can be understood as the images captured by the camera in real time; each frame is recorded so that its distance information can be calculated by the pre-trained monocular depth estimation model.
Correspondingly, determining the distance data between the target object placement area and the camera according to the original image may include: according to each frame of original image, determining initial distance between the target object placing area and the camera; and carrying out smoothing processing on the initial distance corresponding to each frame of original image to obtain the distance data.
The initial distance is the distance computed for each frame of the original images. Specifically, the consecutive original frames are input into the pre-trained monocular depth estimation model to obtain an initial distance for each frame, and the sequence of initial distances is smoothed to obtain the distance data. Any existing smoothing algorithm, for example a mean or median filter, may be used; this is not limited in the embodiments of the present application.
According to this technical scheme, continuous original frames are obtained and smoothed, which reduces the influence of environmental factors, allows changes in the distance data to be detected more accurately, and provides a reliable basis for the subsequent judgment of the placement condition of the target placement area.
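The per-frame smoothing described above can be sketched as a simple moving average; the patent allows any smoothing method, so the window size and the choice of a mean filter here are assumptions:

```python
def smooth_distances(initial_distances, window=5):
    """Moving-average smoothing of per-frame initial distances.

    initial_distances: one distance value per frame, in order.
    window: number of recent frames to average (assumed parameter).
    """
    smoothed = []
    for i in range(len(initial_distances)):
        lo = max(0, i - window + 1)          # frames seen so far, up to `window`
        chunk = initial_distances[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

A single-frame glitch (e.g. a hand briefly passing under the camera) is damped rather than reported as a placement change.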
In an alternative embodiment, the initial distance is a distance between a target placement in the target placement area and the camera.
If the current delivery mode is the tray-free delivery mode, the target placement object is the original storage tray arranged in the target placement area;
And if the current delivery mode is the tray delivery mode, the target placement object is an additional tray placed on the original storage tray arranged in the target placement area.
In some specific scenarios, for example restaurant delivery, the article distribution robot has its own in-cabin shelving, and some articles can be placed directly on a shelf for delivery; such a shelf can be understood as the original storage tray, which is inherent to the robot. Of course, some articles are not easily placed directly on the original storage tray and need an additional tray underneath them for delivery.
Therefore, according to the delivery mode, the article distribution robot distinguishes a tray-free delivery mode, in which no additional tray is involved, from a tray delivery mode, in which an additional tray is used, and the target placement object is selected differently in the two modes. It can be understood that in the tray-free delivery mode the target placement object is the original storage tray, and the distance data is the distance from the camera to that tray; similarly, in the tray delivery mode the target placement object is the additional tray, and the distance data is the distance from the camera to the additional tray.
According to the technical scheme of this embodiment, the distance data is calculated with the additional tray taken into account, so that even when the camera detects the additional tray, the resulting change in distance does not cause the placement condition of the target placement area to be misidentified; misidentification is reduced and the accuracy of the distance determination is improved.
S130, determining the placement condition of the target placement area according to the distance data and the preset distance.
The preset distance may be a distance parameter used as the comparison standard; it may be set manually or changed according to the target placement object in the target placement area. For example, in the tray-free delivery mode the distance data is the distance from the camera to the original storage tray, and the preset distance may be 23 centimeters; in the tray delivery mode, if the tray is 1 cm thick, the preset distance may be 22 centimeters. The placement condition is the presence of articles to be distributed in the target placement area, i.e. whether such articles are within the area. Specifically, the distance data calculated by the pre-trained monocular depth estimation model is compared with the preset distance to judge whether articles to be distributed are present.
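The mode-dependent preset distance from the example above can be captured in a small helper; the 23 cm shelf height and 1 cm tray thickness follow the text, while the mode names and the function itself are hypothetical:

```python
SHELF_HEIGHT_CM = 23.0   # camera to original storage tray (example value)
TRAY_THICKNESS_CM = 1.0  # additional tray thickness (example value)

def preset_distance(mode):
    """Comparison threshold for the current delivery mode."""
    if mode == "tray_free":
        return SHELF_HEIGHT_CM                       # camera -> original tray
    if mode == "tray":
        return SHELF_HEIGHT_CM - TRAY_THICKNESS_CM   # camera -> additional tray
    raise ValueError(f"unknown delivery mode: {mode}")
```

Real values would come from calibration of each camera and shelf rather than constants.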
In an optional implementation, determining the placement condition of the target placement area according to the distance data and the preset distance may include: if the distance data changes from being equal to the preset distance to being smaller than it, determining that an article to be distributed has been placed in the target placement area.
Continuing the previous example, in the tray-free delivery mode, when the currently detected distance data between the camera and the target placement area changes from 23 centimeters to less than 23 centimeters, it is judged that an article to be distributed has been placed in the target placement area.
In another optional implementation, determining the placement condition of the target placement area according to the distance data and the preset distance may include: if the distance data changes from being smaller than the preset distance to being equal to it, determining that the articles to be distributed placed in the target placement area have been taken out.
Continuing the previous example, in the tray-free delivery mode, when the detected distance data between the camera and the target placement area changes from less than 23 centimeters to equal to 23 centimeters, it is judged that the articles to be distributed in the target placement area have been taken out.
According to this technical scheme, comparing the distance data with the preset distance makes it possible to judge simply and accurately whether articles to be distributed are present in the target placement area. The judgment is computationally cheap and fast, saving a large amount of calculation and improving the working efficiency of the article distribution robot.
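The two judgments above (article placed / article taken out) amount to detecting a crossing of the preset distance between consecutive smoothed readings; the tolerance band `tol` is an assumption added here to cope with measurement noise:

```python
def placement_event(prev_distance, curr_distance, preset, tol=0.5):
    """Classify what happened between two consecutive smoothed readings.

    preset: the mode-dependent comparison distance; tol: hypothetical
    tolerance band treating readings near the preset as "area empty".
    """
    prev_empty = abs(prev_distance - preset) <= tol
    curr_empty = abs(curr_distance - preset) <= tol
    if prev_empty and curr_distance < preset - tol:
        return "item_placed"   # distance dropped below the preset
    if curr_empty and prev_distance < preset - tol:
        return "item_taken"    # distance returned to the preset
    return "no_change"
```

The robot would call this once per smoothed frame and trigger its follow-up tasks on `"item_taken"`.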
In an alternative embodiment, after determining that the articles to be distributed placed in the target placement area have been taken out, the method may further include: judging whether the address at which the articles were taken out is consistent with the target delivery address associated with them; and controlling an erroneous-removal prompt to be issued according to the consistency judgment.
The taken-out address may be the positioning information recorded when the article to be distributed is taken out, for example the position of a table in a restaurant or of a shelf in a warehouse. The target delivery address is the position to which the article needs to be delivered. Specifically, whether the article was correctly delivered to the target delivery address is determined by checking whether the taken-out address is consistent with the target delivery address. If the article is taken out before the target delivery address is reached, the robot issues a wrong-removal alarm to prompt the user to put it back. If delivery succeeded, the robot can carry out subsequent work according to its preset tasks.
Taking restaurant delivery as an example, suppose the target delivery address associated with article A is table B. When article A is taken out, the robot acquires its current position information and compares it with the position of table B; if they are inconsistent, article A was taken by a user by mistake, and the robot immediately turns on a prompt lamp and sounds an audible alarm to ask the user to put the article back. If the current position is consistent with the position of table B, the delivery is successful, and the robot continues working according to its tasks, for example delivering to other tables or returning to the pickup point to load new articles to be delivered.
According to this technical scheme, whether delivery succeeded is determined by judging whether the taken-out address of the article is consistent with its target delivery address. The delivery situation can thus be judged accurately, and a prompt is issued in time when an article is taken by mistake, which greatly reduces the probability of wrong delivery, reduces repeated delivery work, and improves the working efficiency of the article distribution robot.
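A sketch of the consistency check, assuming the robot's position and the table positions are available as planar coordinates; `check_removal` and the 0.5 m radius are hypothetical:

```python
def check_removal(robot_position, target_address, radius_m=0.5):
    """Return True if the item was taken out at its delivery address.

    Positions are (x, y) in metres; radius_m is a hypothetical tolerance
    for "the robot is at the target table".
    """
    dx = robot_position[0] - target_address[0]
    dy = robot_position[1] - target_address[1]
    at_target = dx * dx + dy * dy <= radius_m * radius_m
    if not at_target:
        # In the patent, the robot lights a prompt lamp and sounds an
        # alarm here; a print stands in for that signalling.
        print("warning: item taken at the wrong location, please put it back")
    return at_target
```

The positioning source (odometry, map localization) is left open, matching the text.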
According to the technical scheme of the application, the distance between the target placement area and the camera is calculated from the original image of that area, and the placement condition of the area is judged in combination with a preset distance. This provides the article distribution robot with a way to identify the placement condition automatically and to trigger its other work tasks accordingly, without manual interaction; manual operations and their influence are reduced, user experience is improved, and the working efficiency of the article distribution robot is increased.
In an alternative embodiment, at least one storage tray is provided with at least two placement areas; correspondingly, the target placement area is one of the placement areas on that tray.
The placement areas are the different sub-areas into which a storage tray is divided; they can hold articles to be distributed that are associated with the same target delivery address or with different ones. Note that the article distribution robot includes at least one storage tray, each provided with at least two placement areas, and each placement area can serve as an independent target placement area. It will be appreciated that when an image of the tray is acquired, images of all its placement areas are acquired at the same time; from the image region of each placement area, the distance data of that area can be calculated by the processing described in the previous steps.
According to this technical scheme, by distinguishing different placement areas, different articles on the same tray can be delivered to different target delivery addresses. With the robot's capacity fixed, the cabin space is used to the greatest extent, so the robot can execute more delivery tasks per trip, greatly improving its working efficiency.
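One way to realize several placement areas under a single camera, assumed here rather than specified by the text, is to split the image into fixed column ranges, one per partition, and run the distance estimation on each crop independently:

```python
def split_partitions(image_width, n_partitions):
    """Return (col_start, col_end) column ranges, one per placement area.

    Equal-width vertical strips are an assumption; real bounds would come
    from the tray geometry and camera calibration.
    """
    step = image_width // n_partitions
    return [
        (i * step, image_width if i == n_partitions - 1 else (i + 1) * step)
        for i in range(n_partitions)
    ]
```

Each returned range would feed the per-area distance computation, so one frame yields one placement judgment per partition.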
Example two
Fig. 2 is a flowchart of an article distribution method according to the second embodiment of the present application. This embodiment is applicable to delivering different articles placed on the same original storage tray to their respective destinations, and supplements the delivery operation of the article distribution robot to improve delivery efficiency.
Referring to the article distribution method shown in Fig. 2, the method specifically comprises the following steps:
S210, acquiring a current task to be delivered in an associated delivery event, wherein the articles corresponding to different tasks in the associated delivery event are placed in different storage partitions of the current robot's original storage trays.
The associated delivery event is a set of tasks to be delivered that the current robot needs to execute in succession, and it comprises at least two such tasks. The current robot can divide at least one original storage tray into at least two storage partitions, and the same robot can be provided with at least two original storage trays. To avoid confusion of articles, the articles corresponding to different tasks are placed in different storage partitions so that they can be delivered separately. Note that, because one task may in practice involve more articles than fit into a single partition, the articles of the same task may also be spread over several partitions.
Specifically, taking a restaurant where a robot delivers meals as an example, the process from leaving the meal pickup point, completing all tasks to be delivered, to returning to the pickup point for the next load is called an associated delivery event. Assuming the current robot has 4 layers of original storage trays and each layer can be divided into 2 storage partitions, the robot has 8 storage partitions in total. Meals in different partitions can be delivered to different target delivery addresses or to the same one. If the meals in the 8 partitions need to be delivered to 6 different target delivery addresses, the associated delivery event contains 6 tasks to be delivered that need to be executed in succession.
S220, selecting a target object storage partition corresponding to the current task to be distributed from all the object storage partitions.
Because the articles corresponding to the current task are placed in at least one storage partition, before delivery the partitions to be used are matched to their corresponding tasks according to the actual situation. When the current task is executed, the storage partition matched with it (i.e. the target storage partition) is selected.
In an alternative embodiment, the method for delivering the article may further include: presetting a binding relation between a target delivery address and a storage partition in an associated delivery event; and determining a target object placement partition corresponding to the current task to be distributed according to the binding relation.
Before all the delivery work starts, different target delivery addresses and different object placement partitions can be bound in advance, and objects to be delivered corresponding to all the tasks to be delivered in the associated delivery event are respectively placed in the corresponding target object placement partitions according to the binding relation.
Taking a restaurant meal delivery scene as an example, the delivery addresses corresponding to different storage partitions can be set before the meal delivery robot is put into use. For example, storage partition 1 can be bound with table 1 and storage partition 2 with table 2, and the articles to be sent to table 1 are placed in storage partition 1 according to the binding relation. The advantage of this is that each storage partition is bound to a fixed target delivery address, so the delivery party can better allocate and manage the tasks to be delivered. However, this delivery mode is rigid: articles in different storage partitions cannot be delivered to the same target delivery address, which reduces delivery flexibility and efficiency.
In another alternative embodiment, before acquiring the current task to be delivered in the associated delivery event, the method may further include: setting corresponding relations between different tasks to be distributed and object placement partitions in the associated distribution event; and selecting a target object storage partition corresponding to the current task to be distributed from the object storage partitions according to the corresponding relation.
The correspondence between different tasks to be delivered and storage partitions can be understood as specifying which storage partition the articles of each task should be placed in. Therefore, the correspondence determines the target placement partition for the articles of the current task to be delivered. In practice, the correspondence may be set manually, or the background server of the current robot may automatically assign the corresponding partition according to the specific task to be delivered.
For example, before the current task to be delivered in the associated delivery event is acquired, the correspondence between different tasks to be delivered and the storage partition may be preset.
In a specific example, assume that the associated delivery event contains 6 tasks to be delivered that need to be continuously performed. When the current robot takes meals, each task to be distributed is matched with at least one storage partition for placing catering (namely, the corresponding relation between the task to be distributed and the storage partition is set).
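The matching of tasks to storage partitions at meal pick-up can be sketched as follows. This is a minimal, hypothetical illustration: the function and field names are assumptions, not taken from the patent, and a real system would set the correspondence manually or via the background server as described above.

```python
# Hypothetical sketch: setting the correspondence between tasks to be
# delivered and storage partitions before meal pick-up.

def assign_partitions(tasks, partitions):
    """Match each task to the number of storage partitions it needs, in order."""
    correspondence = {}
    free = list(partitions)
    for task in tasks:
        needed = task["num_partitions"]
        correspondence[task["id"]], free = free[:needed], free[needed:]
    return correspondence

# 6 tasks sharing 8 partitions, as in the example above:
tasks = [{"id": f"task{i}", "num_partitions": n}
         for i, n in enumerate([2, 2, 1, 1, 1, 1])]
mapping = assign_partitions(tasks, [f"partition{j}" for j in range(8)])
```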
According to the technical scheme, in the distribution process, the corresponding target object placement partition is determined before each task to be distributed is executed, and the object placement partition can be distributed according to different actual conditions, so that dynamic management of object placement partitions used by different tasks to be distributed is realized, and the distribution flexibility and distribution efficiency are improved.
S230, controlling the current robot to go to the target delivery address of the current task to be delivered so as to deliver the objects to be delivered placed in the target object placement partition.
After the current task to be distributed and the corresponding target object placement partition are determined, the current robot is controlled to travel to the target distribution address corresponding to the current task to be distributed, and therefore the objects to be distributed in the target object placement partition are distributed to the target distribution address.
According to the technical scheme, at least two storage areas are arranged on one original storage tray and used for storing the to-be-distributed objects corresponding to different to-be-distributed tasks, so that the technical effect that different objects in the same original storage tray can be distributed to different target distribution addresses respectively is achieved, and the defect that the objects in the original storage tray can only be distributed to a single target distribution address in the prior art is overcome. Different storage areas of the same original storage tray can store articles, so that the utilization efficiency of storage areas is improved, the round trip times of a robot are reduced, the working energy consumption of the robot is reduced, meanwhile, more efficient distribution is achieved, and the article distribution efficiency is greatly improved.
Example III
Fig. 3 is a flowchart of an article distribution method according to a third embodiment of the present application. The embodiment of the application supplements the judging operation of whether the objects to be distributed are correctly delivered on the basis of the technical schemes of the previous embodiments so as to improve the accuracy of the object distribution process.
Referring to fig. 3, an article distribution method specifically includes the following steps:
s310, acquiring a current task to be distributed in an associated distribution event; and different tasks to be distributed in the associated distribution event correspond to the objects to be distributed and are placed in different storage areas in the original storage tray of the current robot.
S320, selecting a target object storage partition corresponding to the current task to be distributed from the object storage partitions.
S330, controlling the current robot to go to the target delivery address of the current task to be delivered so as to deliver the objects to be delivered placed in the target object placement partition.
S340, acquiring an original image obtained by the image acquisition device in the current robot for acquiring the target object placement partition.
In the process of delivering the articles, the image acquisition device of the current robot can acquire image information of the articles to be delivered, which is used subsequently to judge their state. For the current task to be delivered, the image acquisition device needs to acquire the original image of the target placement partition. At least one image acquisition device can be arranged in the current robot for acquiring original images of different placement areas. To improve the accuracy of the captured original image, and thereby the accuracy of the state recognition result for the articles to be delivered, a corresponding image acquisition device can be arranged for each placement partition. In an alternative embodiment, the image acquisition device may be mounted above the original storage tray and capture the original image of the corresponding placement partition downward, in real time or at fixed intervals. The image acquisition device may be a camera or another device, which is not limited in the present application.
It should be noted that in practical situations, the current robot generally includes at least two original storage trays, so a corresponding image capturing device should be installed above each original storage tray.
S350, identifying whether the objects to be distributed placed in the target object placement partition are taken out or not according to the original image.
And identifying the existence state of the objects to be distributed in the target object placement partition according to the original image of the target object placement partition acquired by the image acquisition device, so as to judge whether the objects to be distributed in the target object placement partition are taken out. For example, a preset image processing algorithm may be used to compare the continuously captured original images, so as to identify whether the object to be dispensed remains in the target object placement area. The distance from the image acquisition device to the object to be distributed in the original image can be acquired through a preset image processing algorithm, and whether the object to be distributed is taken out is judged according to the change of the distance.
In an alternative embodiment, the identifying whether the to-be-dispensed object placed in the target placement area is taken out according to the original image may include: processing the original image to obtain a depth image of the original image; determining distance data between the target object placement partition and the image acquisition device according to the depth image; and determining whether the objects to be distributed placed in the target object placement partition are taken out or not according to the distance data.
The original image can be converted into a depth image containing depth information through a preset image processing algorithm, so that all distance information from the image acquisition device to different objects to be distributed in the target object placement partition is obtained. And judging whether the to-be-dispensed objects are taken out according to the change condition of the distance information. According to the method, the state of the objects to be distributed is judged through the distance data of the depth image, all the objects to be distributed in the target object placement partition can be identified at the same time, and the accuracy and the efficiency of object identification are improved.
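The reduction from a depth image to a single distance value per partition can be sketched as follows. The depth values (in cm) and the median reduction are assumptions for illustration; a real system would obtain the depth map from the trained estimation model described above.

```python
from statistics import median

# Illustrative sketch: reducing the depth image of one placement
# partition to a single camera-to-partition distance value.

def partition_distance(depth_rows, region):
    """Median depth over the pixel region covering one partition."""
    top, bottom, left, right = region
    patch = [v for row in depth_rows[top:bottom] for v in row[left:right]]
    return median(patch)

# A 4x4 toy depth map: the tray surface sits 23 cm from the camera,
# and an article in the upper-left region raises the surface to 15 cm.
depth = [[15.0, 15.0, 23.0, 23.0],
         [15.0, 15.0, 23.0, 23.0],
         [23.0, 23.0, 23.0, 23.0],
         [23.0, 23.0, 23.0, 23.0]]
d_item = partition_distance(depth, (0, 2, 0, 2))   # region holding an article
d_empty = partition_distance(depth, (2, 4, 2, 4))  # empty region of the tray
```

Comparing `d_item` with `d_empty` against the standard distance then yields the take-out judgment described below.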
In an alternative embodiment, the determining whether the to-be-dispensed object placed in the target placement area is taken out according to the distance data may include: if the distance data meets a preset taking-out distance condition, determining that the objects to be distributed arranged in the target object placing partition are taken out; and if the distance data does not meet the preset taking-out distance condition, determining that the objects to be distributed arranged in the target object placing partition are not taken out.
The method comprises the steps that a taking-out distance condition can be preset according to actual conditions, and when distance data acquired from a depth image meet the preset taking-out distance condition, the fact that an article to be distributed corresponding to the depth image is taken out is judged; if the distance data acquired from the depth image does not meet the preset distance taking-out condition, judging that the object to be distributed corresponding to the depth image is not taken out. This has the advantage that it is possible to accurately identify whether the article to be dispensed has been removed.
Optionally, the preset take-out distance condition may be: the distance data is not smaller than the standard distance between the image acquisition device corresponding to the target placement partition and the target placement object holding the articles to be delivered in the target placement partition.
The distance between the image capturing device mounted above the target placement section and the target placement may be defined as a standard distance. And when the distance data between the image acquisition device and the target object placement partition is not smaller than the standard distance, determining that the objects to be distributed in the target object placement partition are taken out.
The target placement object can be placed on an original storage tray of the current robot and is used to hold the articles to be delivered. For example, in restaurant meal delivery, for convenience in delivering and taking meals, the restaurant uses an additional tray to carry the meal into the original storage tray of the robot, and the customer does not take the additional tray when taking the meal. This additional tray can thus be understood as a target placement object, and the standard distance is the distance between the image acquisition device and the additional tray.
The target placement may also be the original placement tray itself of the current robot. In practical cases, the objects to be distributed can be directly placed in the original object placing tray for distribution without any object lifting, and the standard distance is the distance between the image acquisition device and the original object placing tray of the current robot.
It should be noted that articles of different heights may be placed in the same target placement partition, for example because the articles are stacked to different heights; articles whose height may change during delivery, such as dishes or soup (whose level may drop if some is spilled during delivery), may also be placed. It may therefore be provided that only when the collected distance data is not smaller than the standard distance is it determined that the articles to be delivered in the target placement partition have been completely taken out.
The advantage of this arrangement is that whether or not the height of the articles carried in the target placement partition changes, the taken-out state can be accurately identified, improving the accuracy of article state identification.
Thus, optionally, if the associated delivery event is set to the additional-tray delivery mode, the target placement object is the additional tray; and if the associated delivery event is set to the no-additional-tray delivery mode, the target placement object is the original storage tray to which the target placement partition belongs.
According to the actual situation, when the articles are delivered, the articles are directly placed in the target object placement zone of the current robot without using an additional tray, and the associated delivery event belongs to an additional tray-free delivery mode, and the standard distance in the mode is the distance between the target object placement zone and the corresponding image acquisition device. For example, when the distance between the original storage tray of the current robot and the camera installed above the original storage tray is 23 cm, the standard distance is set to be 23 cm.
If an additional tray is needed when the objects are distributed, the objects to be distributed are lifted and placed in the target object placement area of the current robot by the additional tray, and then the standard distance is changed into the distance between the additional tray in the target object placement area and the corresponding image acquisition device. For example, if the thickness of the additional tray of the restaurant is 1cm, the standard distance is set to 22 cm.
The advantage of this arrangement is that different delivery modes can be defined according to whether an additional tray is used, and the taken-out state of the article can be accurately determined against the corresponding standard distance.
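The mode-dependent standard distance and the removal check above can be sketched as follows, using the 23 cm tray distance and 1 cm tray thickness from the examples; the function names are illustrative assumptions.

```python
# Hedged sketch of the mode-dependent standard distance and the
# preset take-out distance condition described above.

TRAY_TO_CAMERA_CM = 23.0   # camera mounted above the original storage tray
ADDITIONAL_TRAY_CM = 1.0   # thickness of the restaurant's additional tray

def standard_distance(with_additional_tray):
    """Standard distance under the tray / no-tray delivery modes."""
    if with_additional_tray:
        return TRAY_TO_CAMERA_CM - ADDITIONAL_TRAY_CM  # 22 cm
    return TRAY_TO_CAMERA_CM                           # 23 cm

def item_taken_out(distance_cm, with_additional_tray):
    """Taken out only when the measured distance is not smaller than standard."""
    return distance_cm >= standard_distance(with_additional_tray)
```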
S360, controlling the acquisition operation of the next current task to be distributed in the associated distribution event according to the identification result.
The result of the identification is "article taken out" or "article not taken out". The state of "article taken out" can be also classified into "article taken out correctly" and "article taken out by mistake". And judging whether the next current task to be distributed in the associated distribution event is to be performed according to the different identification results. For example, if it is identified that the to-be-dispensed object is correctly taken out, the current to-be-dispensed task is completed, and then the next current to-be-dispensed task can be obtained; if the fact that the to-be-dispensed objects are taken out by mistake or the to-be-dispensed objects are not taken out is identified, the current robot can be controlled to feed back the actual situation.
In an optional implementation manner, the controlling to execute the next current task to be dispatched according to the identification result may include: if the objects to be distributed placed in the target object placement partition are taken out, judging whether the taken out addresses are consistent with the target distribution addresses or not; and controlling to execute the next current task to be distributed in the associated distribution event according to the consistency judging result.
If the distance data is not smaller than the currently set standard distance, judging that the to-be-dispatched objects are taken out, comparing the taken out addresses of the to-be-dispatched objects with the target dispatching addresses, judging whether the taken out addresses are consistent with the target dispatching addresses, and determining whether to acquire the next current to-be-dispatched task according to the judging result.
According to the technical scheme of the embodiment, whether the current task to be distributed is completed or not can be judged according to the consistency of the addresses, so that the next task to be distributed is started, and a judgment basis is provided for whether the subsequent task to be distributed in the related distribution event can be started or not.
In an optional implementation manner, the controlling to execute the next current task to be dispatched in the associated dispatch event according to the consistency determination result may include: if the fetched address is consistent with the target delivery address, controlling to execute a next current task to be delivered in the associated delivery event; and if the fetched address is inconsistent with the target delivery address, controlling to send out alarm information.
When the fetched address is consistent with the target delivery address, the article is considered to be correctly fetched, namely the current task to be delivered is completed, and the next task to be delivered can be started. And when the fetched address is inconsistent with the target delivery address, namely 'the article is fetched by mistake', controlling the current robot to send out preset alarm information.
For example, if the current robot delivers a meal to the dining table of the corresponding customer and the customer takes the meal out completely, the current robot recognizes that the take-out address of the articles to be delivered is the target delivery address; the current delivery task is then complete, and the relevant information of the next task to be delivered (i.e., which storage partition's articles to deliver and where) is acquired. If, when the articles are taken out, the current robot recognizes that the take-out address is inconsistent with the target delivery address corresponding to the placement partition, the articles have been taken away by mistake, and the current robot is immediately controlled to issue preset alarm information, for example flashing a prompt lamp while playing a voice prompt such as: "The article has been taken by mistake; please put it back."
The advantage of this is that the situation in which articles are taken out by mistake can be handled, solving the problem that articles are often taken away by mistake in daily use, improving the robot's responsiveness during delivery, raising the delivery accuracy rate, and helping to improve delivery efficiency.
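The control flow of S360 described above can be sketched as a small decision function. The function and return-value names are assumptions for illustration, not from the patent.

```python
# Sketch of the control flow in S360: after the removal check, compare
# the take-out address with the target delivery address and decide the
# next action.

def next_action(taken_out, takeout_address, target_address):
    if not taken_out:
        return "wait"                # articles not yet removed
    if takeout_address == target_address:
        return "fetch_next_task"     # correct take-out: continue the event
    return "raise_alarm"             # wrong take-out: flash lamp, play voice
```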
According to the technical scheme, whether the to-be-delivered objects are taken away at the target delivery address is judged through the image information acquired by the image acquisition device, so that continuous delivery of the next to-be-delivered task is performed. The advantage of this is that the continuous processing capacity of the tasks to be distributed is improved, and the efficiency of the whole distribution service process is further improved.
Example IV
Fig. 4 is a block diagram of an article dispensing device according to a fourth embodiment of the present application, where the embodiment of the present application is applicable to identifying an article placement situation of a target article placement area, and the device may be implemented in software and/or hardware, and may be configured in a current robot or a background server of the current robot. As shown in fig. 4, the article dispensing apparatus 400 may include: an original image acquisition module 410, a distance data determination module 420, and a placement situation determination module 430, wherein,
An original image obtaining module 410, configured to obtain an original image of the target object placement area through the camera;
A distance data determining module 420, configured to determine distance data between the target object placement area and the camera according to the original image;
And the placement condition determining module 430 is configured to determine a placement condition of the target placement area according to the distance data and a preset distance.
According to the technical scheme, the distance between the target object placing area and the camera is calculated according to the original image of the target object placing area, and the object placing condition of the target object placing area is judged by combining the preset distance. The method has the advantages that the method capable of automatically identifying the object placing condition is provided for the object dispensing robot, the object dispensing robot is helped to automatically trigger other work tasks according to the object placing condition, manual interaction is not needed, manual operation and influence are reduced, user experience is optimized, and work efficiency of the object dispensing robot is improved.
In an alternative embodiment, the distance data determining module 420 may include:
the depth information acquisition unit is used for acquiring the depth information of the original image through a pre-trained monocular depth estimation model;
and the distance data determining unit is used for obtaining the distance data between the target object placing area and the camera according to the depth information.
In an alternative embodiment, the placement determination module 430 may include:
and the object placement condition judging first unit is used for determining that the object to be distributed is placed in the target object placement area if the distance data is changed from the preset distance to be smaller than the preset distance.
In an alternative embodiment, the placement determination module 430 may include:
And the object placement condition judging unit is used for determining that the objects to be distributed placed in the target object placement area are taken out if the distance data is changed from being smaller than the preset distance to the preset distance.
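The two judging units above implement complementary rules: a drop below the preset distance indicates an article was placed, and a return to the preset distance indicates it was taken out. A minimal sketch, assuming the 23 cm preset used as an example elsewhere in this document:

```python
# Sketch of the two placement-condition judging rules: compare distance
# data across measurements against the preset distance.

PRESET_CM = 23.0

def placement_event(prev_cm, curr_cm):
    if prev_cm >= PRESET_CM and curr_cm < PRESET_CM:
        return "item_placed"       # distance fell below the preset distance
    if prev_cm < PRESET_CM and curr_cm >= PRESET_CM:
        return "item_taken_out"    # distance returned to the preset distance
    return "no_change"
```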
In an alternative embodiment, the apparatus may further include:
the address judging module is used for judging whether the taken out address of the article to be delivered is consistent with the target delivery address associated with the article to be delivered;
and the error taking reminding module is used for controlling to send out error taking reminding according to the consistency judging result.
In an alternative embodiment, the raw image acquisition module 410 may include:
the continuous frame image acquisition unit is used for acquiring continuous frame original images of the target object placing area through the camera;
accordingly, the distance data determining module 420 may include:
the initial distance determining unit is used for respectively determining initial distances between the target object placing area and the camera according to each frame of original image;
and the smoothing processing unit is used for carrying out smoothing processing on the initial distance corresponding to each frame of original image to obtain the distance data.
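The smoothing step above is specified without fixing a method; a trailing moving average is one plausible choice, sketched here with illustrative names.

```python
# Illustrative smoothing of the per-frame initial distances: a trailing
# moving average damps transient readings (e.g. a hand passing under
# the camera) before the distance data is compared with the preset.

def smooth_distances(initial_distances, window=3):
    """Trailing moving average over consecutive frame distances."""
    smoothed = []
    for i in range(len(initial_distances)):
        chunk = initial_distances[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A one-frame dip at frame 2 is damped in the smoothed sequence:
distances = smooth_distances([23.0, 23.0, 18.0, 23.0, 23.0])
```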
In an alternative embodiment, the initial distance is a distance between a target placement in the target placement area and the camera.
In an optional implementation manner, if the current delivery mode is a palletless delivery mode, the target placement object is an original placement tray set in the target placement area;
And if the current delivery mode is a tray delivery mode, the target placement object is an additional tray placed in an original placement tray arranged in the target placement area.
In an alternative embodiment, at least one original placement tray is provided with at least two placement areas; correspondingly, the target placement area is one of the placement areas in that original placement tray. The article distribution device provided by the embodiment of the application can execute the article distribution method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects for executing the article distribution methods.
Example five
Fig. 5 is a structural diagram of a robot according to a fifth embodiment of the present application. Fig. 5 shows a block diagram of an exemplary robot 512 suitable for use in implementing embodiments of the present application. The robot 512 shown in fig. 5 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 5, robot 512 is in the form of a general purpose computing device. Components of robot 512 may include, but are not limited to: one or more processors or processing units 516, a system memory 528, a bus 518 that connects the various system components (including the system memory 528 and processing units 516).
Bus 518 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Robot 512 typically includes a variety of computer system readable media. Such media can be any available media that can be accessed by robot 512 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 528 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 530 and/or cache memory 532. The robot 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
A program/utility 540 having a set (at least one) of program modules 542 may be stored in, for example, memory 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 542 generally perform the functions and/or methods in the described embodiments of the application.
The robot 512 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), one or more devices that enable a user to interact with the robot 512, and/or any devices (e.g., network card, modem, etc.) that enable the robot 512 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 522. Also, the robot 512 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet, through a network adapter 520. As shown, the network adapter 520 communicates with other modules of the robot 512 via the bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with robot 512, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 516 performs various functional applications and data processing by executing at least one of the other programs among the plurality of programs stored in the system memory 528, for example, to implement an article dispensing method according to an embodiment of the present application.
Example six
A sixth embodiment of the present application also provides a computer-readable storage medium having stored thereon a computer program (or referred to as computer-executable instructions) which, when executed by a processor, is configured to perform an article dispensing method provided by the embodiments of the present application: acquiring an original image of a target object placing area through the camera; determining distance data between the target object placing area and the camera according to the original image; and determining the placement condition of the target placement area according to the distance data and the preset distance.
The computer storage media of embodiments of the application may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present application may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above describes only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of protection of the application. Therefore, while the application has been described in connection with the above embodiments, it is not limited to those embodiments and may be embodied in many other equivalent forms without departing from its concept, the scope of which is set forth in the following claims.
Claims (8)
1. An article distribution method, applied to an article distribution robot in which a camera is disposed above each layer of placement area, the method comprising:
acquiring, through the camera, an original image of a target placement area;
determining, according to the original image, distance data between the target placement area and the camera; and
determining, according to the distance data and a preset distance, the placement condition of the target placement area;
wherein the placement condition of the target placement area indicates whether an article to be delivered is present in the target placement area, and when no article to be delivered is present in the target placement area, the distance data reaches its maximum value; correspondingly, the determining the placement condition of the target placement area according to the distance data and the preset distance comprises:
comparing the distance data with the preset distance to judge whether an article to be delivered is present in the target placement area;
wherein the determining distance data between the target placement area and the camera according to the original image comprises:
obtaining depth information of the original image through a pre-trained monocular depth estimation model, wherein images of transparent water cups are added when training the monocular depth estimation model; and
obtaining the distance data between the target placement area and the camera according to the depth information;
wherein the distance data is the distance between a target placed object in the target placement area and the camera; if the current distribution mode is a tray-less distribution mode, the target placed object is the original tray provided in the target placement area; if the current distribution mode is a tray distribution mode, the target placed object is an additional tray placed on the original tray provided in the target placement area;
wherein, after determining that the article to be delivered placed in the target placement area has been retrieved, the method further comprises: judging whether the retrieval address of the article to be delivered is consistent with the target delivery address associated with the article to be delivered, and controlling, according to the result of the consistency judgment, whether a wrong-retrieval alert is issued; wherein the retrieval address is positioning information determined at the moment the article to be delivered is retrieved.
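Read as an algorithm, claim 1 reduces to an occupancy test on a camera-derived distance. The sketch below is an illustrative reconstruction, not the patented implementation: the monocular depth model is stubbed out as a list of precomputed depth samples for the region of interest, and the empty-tray preset distance of 0.40 m is a hypothetical figure.

```python
from statistics import median

def distance_to_tray(depth_roi):
    """Distance from the overhead camera to whatever occupies the
    placement area, taken as the median of the depth samples (metres)
    inside the region of interest."""
    return median(depth_roi)

def placement_condition(distance, preset_distance):
    """An article raises the surface seen by the camera, so the measured
    distance drops below the empty-tray preset (the maximum value)."""
    return "occupied" if distance < preset_distance else "empty"

# Hypothetical figures: the empty tray sits 0.40 m below the camera.
PRESET_DISTANCE = 0.40
empty_roi = [0.40] * 100                   # depth samples over an empty tray
occupied_roi = [0.40] * 40 + [0.28] * 60   # a placed item dominates the ROI
print(placement_condition(distance_to_tray(empty_roi), PRESET_DISTANCE))     # empty
print(placement_condition(distance_to_tray(occupied_roi), PRESET_DISTANCE))  # occupied
```

Taking the median rather than the mean makes the estimate robust to a few noisy depth pixels, which matters for hard cases like the transparent cups the claim singles out.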
2. The method according to claim 1, wherein the determining the placement condition of the target placement area according to the distance data and the preset distance comprises:
if the distance data changes from the preset distance to a value smaller than the preset distance, determining that an article to be delivered has been placed in the target placement area.
3. The method according to claim 1, wherein the determining the placement condition of the target placement area according to the distance data and the preset distance comprises:
if the distance data changes from a value smaller than the preset distance to the preset distance, determining that the article to be delivered placed in the target placement area has been retrieved.
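Claims 2 and 3 describe the two directions of the same edge detection on the distance signal. A minimal sketch, assuming the distance has already been smoothed and using a hypothetical noise tolerance `tol`:

```python
from typing import Optional

def detect_transition(prev: float, curr: float,
                      preset: float, tol: float = 0.01) -> Optional[str]:
    """Edge detection on consecutive distance readings: a drop away from
    the preset (empty-tray) distance signals placement (claim 2); a
    return to it signals retrieval (claim 3). `tol` absorbs sensor noise."""
    at_preset_prev = abs(prev - preset) <= tol
    at_preset_curr = abs(curr - preset) <= tol
    if at_preset_prev and curr < preset - tol:
        return "placed"
    if prev < preset - tol and at_preset_curr:
        return "removed"
    return None  # no state change

print(detect_transition(0.40, 0.28, preset=0.40))  # placed
print(detect_transition(0.28, 0.40, preset=0.40))  # removed
```

Keying the events to transitions, rather than to the raw comparison, means a single threshold crossing fires exactly one placement or retrieval event.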
4. The method of claim 1, wherein the acquiring, through the camera, the original image of the target placement area comprises:
acquiring, through the camera, consecutive frames of original images of the target placement area;
correspondingly, the determining the distance data between the target placement area and the camera according to the original image comprises:
determining, according to each frame of original image, an initial distance between the target placement area and the camera; and
smoothing the initial distances corresponding to the frames of original images to obtain the distance data.
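The smoothing in claim 4 can be any low-pass filter over the per-frame initial distances; a moving average is the simplest choice. A sketch with a hypothetical window size:

```python
from collections import deque

class DistanceSmoother:
    """Moving-average smoothing over the initial distances computed from
    consecutive frames, yielding the distance data of claim 4."""
    def __init__(self, window: int = 5):
        self._recent = deque(maxlen=window)  # drops oldest frame automatically

    def update(self, initial_distance: float) -> float:
        """Feed one frame's initial distance; return the smoothed value."""
        self._recent.append(initial_distance)
        return sum(self._recent) / len(self._recent)

smoother = DistanceSmoother(window=3)
for d in (0.40, 0.40, 0.10):   # a one-frame depth glitch
    smoothed = smoother.update(d)
print(round(smoothed, 2))      # 0.3 - the glitch is damped
```

Smoothing keeps a single bad depth estimate (a hand passing through the frame, a reflection) from being mistaken for a placement or retrieval event.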
5. The method according to any one of claims 1-4, wherein at least one placement area is provided with at least two placement partitions; correspondingly, the target placement area is a placement partition within a placement area.
6. An article distribution apparatus, applied to an article distribution robot in which a camera is disposed above each layer of placement area, the apparatus comprising:
an original image acquisition module, configured to acquire, through the camera, an original image of a target placement area;
a distance data determining module, configured to determine, according to the original image, distance data between the target placement area and the camera; and
a placement condition determining module, configured to determine, according to the distance data and a preset distance, the placement condition of the target placement area;
wherein the placement condition of the target placement area indicates whether an article to be delivered is present in the target placement area, and when no article to be delivered is present in the target placement area, the distance data reaches its maximum value; correspondingly, the placement condition determining module is specifically configured to:
compare the distance data with the preset distance to judge whether an article to be delivered is present in the target placement area;
wherein the distance data determining module comprises:
a depth information acquisition unit, configured to obtain depth information of the original image through a pre-trained monocular depth estimation model, wherein images of transparent water cups are added when training the monocular depth estimation model; and
a distance data determining unit, configured to obtain the distance data between the target placement area and the camera according to the depth information;
wherein the distance data is the distance between a target placed object in the target placement area and the camera; if the current distribution mode is a tray-less distribution mode, the target placed object is the original tray provided in the target placement area; if the current distribution mode is a tray distribution mode, the target placed object is an additional tray placed on the original tray provided in the target placement area;
wherein the apparatus further comprises:
an address judging module, configured to judge, after determining that the article to be delivered placed in the target placement area has been retrieved, whether the retrieval address of the article to be delivered is consistent with the target delivery address associated with the article to be delivered; and
a wrong-retrieval alert module, configured to control, according to the result of the consistency judgment, whether a wrong-retrieval alert is issued;
wherein the retrieval address is positioning information determined at the moment the article to be delivered is retrieved.
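The wrong-retrieval check above is a comparison between two pieces of positioning information. A sketch, with the positioning information simplified to table identifiers (a hypothetical representation; the patent does not fix the address format):

```python
def check_retrieval(retrieved_address: str, target_delivery_address: str) -> bool:
    """True when the article was taken at its associated delivery
    address; False should trigger the wrong-retrieval alert."""
    return retrieved_address == target_delivery_address

# The robot records where it was when the item left the tray.
if not check_retrieval(retrieved_address="table-07",
                       target_delivery_address="table-12"):
    print("Alert: article retrieved at the wrong location")
```

In practice the comparison might tolerate small positioning error (e.g. match within a radius) rather than require exact equality, but that is a design choice the claims leave open.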
7. A robot, comprising:
one or more processors; and
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the article distribution method according to any one of claims 1-5.
8. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the article distribution method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210198847.XA CN114559431B (en) | 2022-03-02 | 2022-03-02 | Article distribution method, device, robot and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114559431A CN114559431A (en) | 2022-05-31 |
CN114559431B true CN114559431B (en) | 2024-08-23 |
Family
ID=81716113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210198847.XA Active CN114559431B (en) | 2022-03-02 | 2022-03-02 | Article distribution method, device, robot and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114559431B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520194A (en) * | 2017-12-18 | 2018-09-11 | 上海云拿智能科技有限公司 | Kinds of goods sensory perceptual system based on imaging monitor and kinds of goods cognitive method |
CN111564005A (en) * | 2020-05-07 | 2020-08-21 | 北京三快在线科技有限公司 | Storage device control method and storage device |
CN111899131A (en) * | 2020-06-30 | 2020-11-06 | 上海擎朗智能科技有限公司 | Article distribution method, apparatus, robot and medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11370123B2 (en) * | 2019-06-17 | 2022-06-28 | Lg Electronics Inc. | Mobile robot and method of controlling the same |
JP7192748B2 (en) * | 2019-11-25 | 2022-12-20 | トヨタ自動車株式会社 | Conveyance system, learned model generation method, learned model, control method and program |
CN113264313A (en) * | 2020-06-12 | 2021-08-17 | 深圳市海柔创新科技有限公司 | Shooting method for picking up/putting down goods, shooting module and transfer robot |
CN111906780B (en) * | 2020-06-30 | 2022-04-01 | 上海擎朗智能科技有限公司 | Article distribution method, robot and medium |
CN113159669A (en) * | 2021-03-23 | 2021-07-23 | 苏州银翼智能科技有限公司 | Tray adjusting method and device, storage medium and electronic device |
CN113246148A (en) * | 2021-04-30 | 2021-08-13 | 上海擎朗智能科技有限公司 | Distribution robot and positioning method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN114559431A (en) | 2022-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109118137B (en) | Order processing method, device, server and storage medium | |
CN108382783B (en) | Article pickup method, delivering method, access part method and storage medium | |
EP3816919A1 (en) | Order processing method and device, server, and storage medium | |
CN109117824B (en) | Commodity management method and device, electronic equipment and storage medium | |
CN110472915B (en) | Cargo transportation management method and system | |
US20210170605A1 (en) | System and method for task assignment management | |
CN110189483A (en) | Robot article receiving and sending method, relevant apparatus, article receiving and sending robot and storage medium | |
US20150220783A1 (en) | Method and system for semi-automated venue monitoring | |
CN111906780B (en) | Article distribution method, robot and medium | |
CN114186943A (en) | Article distribution method, article distribution device, electronic equipment and computer readable storage medium | |
CN110803447B (en) | Article transportation management method, device and system and storage medium | |
CN109733783A (en) | A kind of cargo restocking method, apparatus, electronic equipment and storage medium | |
CN111191804A (en) | Method, system, device and storage medium for generating restaurant service task information | |
CN111899131A (en) | Article distribution method, apparatus, robot and medium | |
CN109640246B (en) | Information acquisition method, device, system and storage medium | |
CN110245900B (en) | Cargo allocation method, device, server and medium | |
CN113320865B (en) | Warehouse management method, device, warehouse robot, warehouse system and medium | |
CN114743299B (en) | Intelligent warehouse processing method based on image recognition | |
US20210082031A1 (en) | Order processing method and device, and goods volume estimation method and device | |
US20180285708A1 (en) | Intelligent Fixture System | |
CN116090942A (en) | Multi-scene robot distribution method and system based on Internet of things | |
CN114559431B (en) | Article distribution method, device, robot and storage medium | |
CN116415862A (en) | Freight information processing method and system | |
CN113298467A (en) | Scheduling method, device, equipment and storage medium | |
US20230143065A1 (en) | Warehouse sorting method, server, robot, system, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||