WO2021062681A1 - Automatic meal delivery method and apparatus, and robot - Google Patents


Info

Publication number
WO2021062681A1
WO2021062681A1 · PCT/CN2019/109559
Authority
WO
WIPO (PCT)
Prior art keywords
user
robot
camera
face
delivery
Prior art date
Application number
PCT/CN2019/109559
Other languages
French (fr)
Chinese (zh)
Inventor
黄巍伟
郑小刚
王国栋
Original Assignee
中新智擎科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中新智擎科技有限公司 filed Critical 中新智擎科技有限公司
Priority to PCT/CN2019/109559 priority Critical patent/WO2021062681A1/en
Publication of WO2021062681A1 publication Critical patent/WO2021062681A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators

Definitions

  • the embodiments of the present invention relate to the field of electronic information technology, and in particular to a method, device and robot for automatic meal delivery.
  • the inventor found that the above related technologies have at least the following problems: in application scenarios such as dining in restaurants, robots can usually only deliver meals to a fixed seat or to the vicinity of a table, and customers must select a fixed seat when ordering. If a customer changes dining position or shares a table with others, the delivery robot cannot accurately deliver the meal to that user.
  • the purpose of the embodiments of the present invention is to provide an automatic meal delivery method, device, and robot, which can accurately deliver the meal to the user.
  • an embodiment of the present invention provides an automatic meal delivery method, which is applied to a robot, and the method includes:
  • the robot is controlled to deliver the meal to the location of the user.
  • the step of locating the position of the user in the dining area in combination with the face of the user further includes:
  • the location of the user is located according to a preset binocular stereo vision positioning algorithm.
  • the dining area is configured with a first camera, the image is collected by the first camera, and the dining area includes at least two sub-areas;
  • the method further includes:
  • the robot is equipped with a second camera, and the second camera is a binocular camera,
  • the step of locating the position of the user according to a preset binocular stereo vision positioning algorithm further includes:
  • the position of the user is located according to the current position of the robot, the shooting direction of the second camera, and the depth information.
  • the step of obtaining a depth image containing the user further includes:
  • the step of controlling the robot to transport the food to the location further includes:
  • the dining area is provided with a position correction device
  • the method also includes:
  • an embodiment of the present invention provides an automatic meal delivery device, which is applied to a robot, and the device includes:
  • the receiving module is configured to receive a delivery instruction for delivering meals, wherein the delivery instruction carries the face of the user;
  • the positioning module is used to locate the position of the user in the dining area in combination with the face of the user;
  • the control module is used to control the robot to transport the food to the location of the user.
  • an embodiment of the present invention provides a robot, including:
  • At least one processor and,
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method described in the first aspect above.
  • embodiments of the present invention also provide a computer-readable storage medium, the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to make a computer execute The method described in the first aspect above.
  • embodiments of the present invention also provide a computer program product.
  • the computer program product includes a computer program stored on a computer-readable storage medium.
  • the computer program includes program instructions. When the program instructions are executed by a computer, the computer executes the method described in the first aspect above.
  • the embodiment of the present invention provides an automatic meal delivery method applied to a robot.
  • the method receives a delivery instruction carrying the user's face, then locates the user's position in the dining area using that face, and finally controls the robot to deliver the meal to the user's location. A robot executing the automatic meal delivery method provided in the embodiment of the present invention can accurately deliver the meal to the user, improving the user's dining experience.
  • FIG. 1 is a schematic diagram of an exemplary system structure of an embodiment of an automatic meal delivery method applied to an embodiment of the present invention
  • Figure 2 is a flowchart of a method for automatic meal delivery provided by an embodiment of the present invention
  • FIG. 3 is a sub-flow chart of step 120 in the method shown in FIG. 2;
  • FIG. 4 is a sub-flow chart of step 123 in the method shown in FIG. 3;
  • FIG. 5 is a sub-flow chart of step 130 in the method shown in FIG. 2;
  • Figure 6 is a flow chart of another method for automatic meal delivery provided by an embodiment of the present invention.
  • Figure 7 is a schematic structural diagram of an automatic meal delivery device provided by an embodiment of the present invention.
  • Figure 8 is a schematic structural diagram of another automatic meal delivery device provided by an embodiment of the present invention.
  • Fig. 9 is a schematic diagram of the hardware structure of a robot for executing the above-mentioned automatic meal delivery method provided by an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of an exemplary system structure applied to an embodiment of the automatic meal delivery method of the present invention.
  • the system structure includes: a robot 10, a terminal 20 and at least one first camera 30.
  • the robot 10 is in communication connection with the terminal 20 and the first camera 30 respectively.
  • the communication connection may be a network connection, and may include various connection types, such as wired, wireless communication, or fiber optic cable.
  • the user can purchase meals through the terminal 20, the robot 10 can collect the user's face through the terminal 20, and the robot 10 can also acquire an image of the dining area through the first camera 30.
  • the robot 10 can replace the food delivery personnel to complete the food delivery work.
  • the terminal 20 collects the user's face, and the robot 10 communicates with the terminal 20 to obtain the user's face collected by the terminal 20.
  • the robot 10 receives a delivery instruction for delivering a meal from the terminal 20, the instruction carrying the user's face. Using the first camera 30 installed in the dining area and the second camera mounted on the robot 10, the robot 10 locates the user in the dining area and then delivers the meal to the user's location.
  • the number of robots 10 used for food delivery may be one or more according to different application scenarios, and the number of robots 10 is not limited in this application.
  • the meal delivery method provided by the embodiment of the present application is generally executed by the above-mentioned robot 10, and correspondingly, the meal delivery device is generally set in the robot 10.
  • the terminal 20 is an electronic device that can collect a user's face image and has a meal-purchasing application installed.
  • the terminal 20 may be a mobile phone, a tablet, a computer, or another device that has a camera or can connect an external camera to collect images.
  • the terminal 20 may be an electronic device in a restaurant. When the user places an order, it collects the user's face image and obtains the meal information purchased by the user. After obtaining the meal information, it sends the meal information to the kitchen, and after the meal is prepared, a delivery instruction is sent to the robot 10 to notify the robot 10 to perform the delivery work.
  • the terminal 20 may also be a user's personal electronic device. When the user needs to have a meal, the user can select meals through an application on the terminal 20.
  • the number of the terminals 20 may be one or more.
  • data can be shared among multiple terminals 20, or an upper terminal is provided to obtain data collected by all lower terminals.
  • two terminals 20 are provided in the kitchen and restaurant respectively, and the two terminals 20 can share data.
  • after the terminal 20 in the restaurant collects the user's face and meal information, it sends the meal information to the kitchen, where the terminal 20 displays it for the catering staff to read.
  • the terminal 20 in the kitchen sends a delivery instruction to notify the robot 10 to deliver the meal.
  • the terminal 20 in the restaurant sends the face of the user corresponding to the meal information to the robot 10.
  • the first camera 30 is an image acquisition device whose field of view can cover the entire dining area.
  • so that the first camera 30 can obtain the user's face image regardless of the user's orientation in the dining space shown in the figure, first cameras 30 are preferably set on the ceiling above the four corners of the dining area, and/or a panoramic camera is set in the center of the dining area, to obtain the approximate position of the user in the dining area.
  • since the location of the user in the dining area is specifically obtained through a preset binocular stereo vision positioning algorithm, it is understandable that the second camera 11 is a binocular camera, so as to obtain parallax and depth information from its images.
  • the embodiment of the present invention provides an automatic meal delivery method, which can be executed by the above-mentioned robot 10. Please refer to FIG. 2, which shows a flowchart of the automatic meal delivery method applied to the above system structure. The method includes but is not limited to the following steps:
  • Step 110 Receive a delivery instruction for delivering meals.
  • the delivery instruction carries the face of the user, which was collected when the user paid for the meal by face scan.
  • the user orders a meal through the terminal 20 shown in FIG. 1. After the meal is prepared, the robot 10 receives a delivery instruction for delivering the meal and then executes it.
  • Step 120 Locate the position of the user in the dining area in combination with the face of the user.
  • the camera uses face recognition, combined with the user's face carried by the delivery instruction, to accurately locate the user's position in the dining area.
  • Step 130 Control the robot to deliver the meal to the location of the user.
  • after obtaining the user's location, the robot plans a travel route to the user and transports the meal there. It is understandable that, in practical applications, the complex restaurant environment, for instance people walking about, may affect the delivery process. Therefore, the robot used in the embodiment of the present invention also has a certain obstacle avoidance ability, and can deliver the meal to the user's location accurately and quickly.
  • the embodiment of the present invention provides an automatic meal delivery method applied to a robot.
  • the method receives a delivery instruction for delivering a meal, the instruction carrying the user's face; then, combined with the user's face, the user's position in the dining area is located; finally, the robot is controlled to deliver the meal to the user's location. A robot executing the automatic meal delivery method provided in the embodiment of the present invention can deliver the meal to the user accurately, improving the user's dining experience.
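The three top-level steps (110, 120, 130) can be sketched as a small orchestration function. This is a minimal illustrative sketch, not the patented implementation: the `StubRobot` class, its `locate_user`/`deliver_to` methods, and the dictionary form of the delivery instruction are all hypothetical names invented for the example.

```python
class StubRobot:
    """Hypothetical stand-in for the delivery robot's control interface."""

    def locate_user(self, face):
        # Step 120: a real robot would run face recognition plus binocular
        # stereo positioning; here we return fixed map coordinates.
        return (2.0, 3.0)

    def deliver_to(self, position):
        # Step 130: drive to the position and hand over the meal.
        self.delivered_at = position


def deliver_meal(robot, instruction):
    """Top-level flow: receive the delivery instruction (step 110),
    locate the user by the face it carries (step 120), deliver (step 130)."""
    position = robot.locate_user(instruction["face"])
    robot.deliver_to(position)
    return position
```

Modeling the instruction as a dictionary carrying the face image is an assumption; the embodiment does not specify a concrete message format.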
  • FIG. 3 is a sub-flow chart of step 120 in the method shown in FIG. 2.
  • the step 120 specifically includes:
  • Step 121 Obtain an image of the dining area.
  • the robot obtains the image of the dining area through the first camera set in the restaurant, then recognizes the user in that image through a preset face recognition algorithm, and, after recognizing the user, uses the second camera set on the robot to locate the user's position through a preset binocular stereo vision positioning algorithm.
  • Step 122 Search for the user in the image according to a preset face recognition algorithm and combined with the face of the user.
  • the preset face recognition algorithm is an algorithm that is preset in the robot to recognize faces.
  • the algorithm extracts the faces in the image of the dining area obtained by the camera and matches their characteristic information against the face collected when the user paid by face scan, so as to determine the user in the image of the dining area.
  • the preset face recognition algorithm may be a common face recognition algorithm such as local feature analysis, the eigenface method (Eigenface/PCA), or a neural network method.
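As one concrete possibility, the eigenface (PCA) approach mentioned above can be sketched as follows. This is a minimal illustration using NumPy, assuming faces arrive as flattened grayscale vectors; the function names are invented for the example, and a production system would add face alignment, normalization, and a rejection threshold for unknown faces.

```python
import numpy as np


def build_eigenface_space(train_faces, k=8):
    """Compute a PCA (eigenface) basis from flattened training face images.

    train_faces: (n_samples, n_pixels) float array, one flattened face per row.
    Returns the mean face and the top-k principal components.
    """
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # SVD of the centered data; rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]


def project(face, mean, components):
    """Project a flattened face into the eigenface subspace."""
    return components @ (face - mean)


def match(query_face, gallery, mean, components):
    """Return the index of the gallery face closest to the query in PCA space."""
    q = project(query_face, mean, components)
    dists = [np.linalg.norm(q - project(g, mean, components)) for g in gallery]
    return int(np.argmin(dists))
```

Here the "gallery" would hold the faces collected at ordering time, and the query would be a face cropped from the dining-area image.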
  • Step 123 After searching for the user, locate the position of the user according to a preset binocular stereo vision positioning algorithm.
  • after the robot obtains the image of the dining area through the restaurant's camera and recognizes the user in it, the robot obtains the user's position relative to itself through the preset binocular stereo vision positioning algorithm.
  • the principle is to analyze the parallax and depth of the image of the dining area and the images taken by the binocular camera carried by the robot itself to obtain the depth of the user relative to the robot, and to combine this with the restaurant map to obtain the user's location.
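For a rectified binocular pair, the depth recovery described here reduces to the standard triangulation formula Z = f * B / d (focal length times baseline over disparity). A minimal sketch, assuming pixel coordinates taken from already-rectified left and right images; the function name and units are illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Triangulate depth for a rectified stereo pair.

    For rectified cameras, Z = f * B / d, where d = x_left - x_right is the
    horizontal disparity of the same point seen in the two images.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity
```

For example, a 700 px focal length, a 12 cm baseline, and a 30 px disparity place the user 2.8 m from the camera.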
  • the dining area is configured with a first camera, the image is captured by the first camera, and the dining area includes at least two sub-areas. Please refer to FIG. 4, which is a sub-flow chart of step 123 in the method shown in FIG. 3. The step 123 specifically includes:
  • Step 1231 After searching for the user, identify the sub-area where the user is located.
  • the dining area of the restaurant can be divided into N sub-areas. Please refer to FIG. 1 together.
  • the dining area S is divided into nine sub-areas. After the user is found through the first camera 30, it can be determined that the sub-area where the user is located is sub-area S5.
  • each sub-region can be distinguished by color, pattern, or a combination of color and pattern, and a distinguishing mark is set and numbered in each of the sub-regions, so that the sub-region where the user is located can be identified by the distinguishing mark.
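Mapping a detected position to one of the numbered sub-areas can be as simple as a grid lookup. A minimal sketch, assuming a rectangular dining area divided into a rows-by-columns grid numbered row-major (S1..S9 for the 3x3 case of FIG. 1); the function name and coordinate convention are assumptions for illustration:

```python
def subarea_of(x, y, width, height, cols=3, rows=3):
    """Return the 1-based sub-area index (S1..S9 in row-major order)
    for a point (x, y) inside a width-by-height dining area."""
    col = min(int(x / (width / cols)), cols - 1)  # clamp points on the far edge
    row = min(int(y / (height / rows)), rows - 1)
    return row * cols + col + 1
```

With a 9 m by 9 m area, the center point (5, 5) falls in S5, matching the example in the text.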
  • Step 1232 Control the robot to move to the sub-area where the user is located.
  • after determining the sub-area where the user is located, the robot is controlled to move to that sub-area, narrowing the search range of the robot's second camera and enabling precise positioning of the user.
  • the robot is equipped with a second camera, and the second camera is a binocular camera.
  • the step 123 further includes:
  • Step 1233 Obtain a depth image containing the user in the sub-region where the user is located through the second camera.
  • the depth image includes depth information of the user.
  • the step of acquiring the depth image containing the user further includes: acquiring the depth image containing the face of the user according to the preset face recognition algorithm.
  • the second camera is controlled to scan the sub-area where the user is located, collecting images in real time during the scan. The preset face recognition algorithm described in step 122 is then used to determine whether an image collected by the second camera contains the user's face. Once such an image is recognized, the user's depth information is acquired, and the image containing this depth information is the depth image.
  • the second camera is a binocular camera, that is, it has two cameras.
  • while the second camera scans the sub-area where the user is located and collects images for identifying the user's face, only one lens of the binocular camera needs to be used. Once the user's face is found, the other lens is turned on, and the images collected by the two lenses, together with their parallax, are combined to obtain the user's depth information.
  • the depth information is the distance information between the second camera and the user.
  • Step 1234 Obtain the current position of the robot and the shooting direction of the second camera.
  • when the robot obtains the depth image containing the user's face, it also needs to obtain its own current position and the shooting direction of the second camera in order to locate the user accurately.
  • the robot can move and search within the sub-region to obtain a depth image containing the user's face.
  • in some embodiments, an algorithm that can recognize the user's posture or back view is provided, and the user's position is determined by recognizing that posture or back view. In that case, the terminal 20 described in FIG. 1 also needs to be equipped with the function of collecting the user's posture or back view.
  • Step 1235 locate the position of the user according to the current position of the robot, the shooting direction of the second camera, and the depth information.
  • from the shooting direction of the second camera, the user's direction relative to the robot can be determined; combining this with the depth information, that is, the distance between the user and the robot, yields the precise position of the user in the dining area, specifically within the sub-area where the user is located.
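Step 1235 amounts to projecting the measured distance along the camera's shooting direction from the robot's map position. A minimal sketch, assuming a 2D map frame with the heading given in radians (the function and parameter names are illustrative):

```python
import math


def locate_user(robot_x, robot_y, camera_heading_rad, depth_m):
    """Project the measured depth along the second camera's shooting
    direction to obtain the user's coordinates on the restaurant map."""
    return (robot_x + depth_m * math.cos(camera_heading_rad),
            robot_y + depth_m * math.sin(camera_heading_rad))
```

So a robot at (1, 1) seeing the user 2 m away along the +x axis places the user at (3, 1).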
  • the second camera provided on the robot then searches for the target customer to determine the user's specific location.
  • the area that the food delivery robot needs to search becomes smaller, the view of its binocular camera is less obstructed, and it is closer to the target customer, which helps obtain clearer images and improves the accuracy of face recognition.
  • the search area becomes smaller, which also shortens the search time, and improves efficiency and user experience.
  • FIG. 5 is a sub-flow chart of step 130 in the method shown in FIG. 2.
  • the step 130 specifically includes:
  • Step 131 Plan a meal delivery route according to the current location of the robot and the location of the user.
  • Step 132 Control the robot to move to the position of the user along the food delivery path.
  • after searching for and locating the user's position, the robot obtains its current position, plans the delivery path according to its current position and the user's position, and is controlled to move along the delivery path to the user's position.
  • in practical applications, since the environment between the robot and the user is usually complex, with moving customers, service personnel, animals, other robots and so on, the robot also needs certain obstacle avoidance functions.
  • the robot can detect obstacles through sensors and re-plan the delivery path when it encounters one. Alternatively, after detecting an obstacle in advance, it can predict whether and how far the obstacle will move and plan the delivery path according to that movement. As another example, all the delivery robots can share their planned delivery paths, and each robot determines its current path according to the paths already planned, to avoid collisions between delivery robots. A combination of these methods is also possible.
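One simple way to realize the plan-then-re-plan behavior described above is a shortest-path search over an occupancy grid, re-run from the robot's current cell whenever a sensor reports a new obstacle. The embodiment does not prescribe a particular planner, so the breadth-first search below is an illustrative assumption (A* or a dynamic planner would be more typical in practice):

```python
from collections import deque


def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. On detecting a new obstacle, the robot can mark
    the cell and call this again from its current cell to re-plan.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # visited set doubling as back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

Sharing paths between robots, as the text suggests, could then be layered on top by treating other robots' reserved cells as temporary obstacles.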
  • FIG. 6 shows a flowchart of another method of automatic meal delivery. Based on the method of automatic meal delivery described in FIGS. 2 to 5, the method further includes:
  • Step 141 When the robot passes the position correction device, read the corrected position from the position correction device.
  • Step 142 Obtain the current position where the robot itself records.
  • Step 143 Determine whether the corrected position is the same as the current position. If they are not the same, skip to step 144; if they are the same, skip to step 131.
  • Step 144 Replace the current position with the corrected position.
  • an embodiment of the present invention also provides a method for correcting the position of the robot. Specifically, when the robot passes the position correction device, it reads the corrected position from the device; the corrected position is the preset coordinate of the correction device within the dining area. When the robot's current position is found to differ from the corrected position, the robot's own positioning has drifted, and the current position is replaced with the corrected position to correct the deviation. After the robot's current position is corrected, step 131 is executed, and the delivery path from the robot to the user is re-planned according to the corrected current position to ensure that the meal can be accurately delivered to the user.
  • the correction device may be a transmitting device that enables the robot to read information.
  • a corresponding sensor or other receiving device should be installed on the robot to obtain or read the information of the correction device.
  • the correction device may be a magnetic strip, and the robot is provided with a magnetic sensor accordingly.
  • the correction device may be a radio frequency tag, in which case the robot should be provided with a reader/writer accordingly. The choice can be made according to actual needs.
  • here the correction device A is taken to be a magnetic stripe as an example, with the magnetic stripes arranged on the dividing lines between the sub-areas.
  • the four magnetic stripes divide the dining area into exactly nine sub-areas (S1, S2, ..., S9); the horizontally placed stripes A1 and A2 carry horizontal coordinate information, and the vertically placed stripes A3 and A4 carry vertical coordinate information.
  • the chassis of the robot is equipped with a magnetic sensor. Whenever the robot crosses a boundary between sub-areas (a magnetic stripe A), it performs a positioning check: the magnetic sensor reads the coordinate recorded on the stripe and corrects the robot's position information, ensuring the accuracy of the robot's operation.
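Steps 141-144 can be condensed into a small check executed whenever the magnetic sensor reads a stripe. A minimal sketch, assuming both positions are (x, y) map coordinates and treating small dead-reckoning noise within a tolerance as "the same" position; the function name and tolerance value are illustrative assumptions:

```python
def correct_position(current_pos, stripe_pos, tolerance=0.05):
    """Compare the robot's dead-reckoned position with the known
    coordinate read from a magnetic stripe; adopt the stripe's
    coordinate if they disagree beyond the tolerance (in meters)."""
    dx = current_pos[0] - stripe_pos[0]
    dy = current_pos[1] - stripe_pos[1]
    if dx * dx + dy * dy > tolerance * tolerance:
        return stripe_pos  # positioning drifted: use the corrected position
    return current_pos     # within tolerance: keep the recorded position
```

The corrected value would then feed back into path planning, as step 131 is re-run from the corrected position.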
  • the embodiment of the present invention also provides an automatic meal delivery device. Please refer to FIG. 7, which shows a schematic structural diagram of an automatic meal delivery device.
  • the automatic meal delivery device 200 includes: a receiving module 210, a positioning module 220 and a control module 230.
  • the receiving module 210 is configured to receive a delivery instruction for delivering meals, wherein the delivery instruction carries the face of the user.
  • the positioning module 220 is used to locate the position of the user in the dining area in combination with the face of the user.
  • the control module 230 is used to control the robot to transport the meal to the location of the user.
  • the positioning module 220 is also used to obtain an image of the dining area; to search for the user in the image according to a preset face recognition algorithm combined with the user's face; and, after the user is found, to locate the user's position according to a preset binocular stereo vision positioning algorithm.
  • the dining area is configured with a first camera, the image is collected by the first camera, and the dining area includes at least two sub-areas. The positioning module 220 is also used to, after the user is found, identify the sub-area where the user is located and control the robot to move to that sub-area.
  • the robot is equipped with a second camera
  • the second camera is a binocular camera
  • the positioning module 220 is further configured to acquire, through the second camera in the sub-area where the user is located, a depth image containing the user, where the depth image contains the user's depth information; to obtain the current position of the robot and the shooting direction of the second camera; and to locate the user's position according to the robot's current position, the shooting direction of the second camera, and the depth information.
  • the positioning module 220 is further configured to obtain the depth image including the face of the user according to the preset face recognition algorithm.
  • the control module 230 is further configured to plan a delivery path according to the robot's current position and the user's location, and to control the robot to move along the delivery path to the user's position.
  • the dining area is provided with a position correction device.
  • FIG. 8 shows a schematic structural diagram of another automatic meal delivery device.
  • the device 200 further includes a correction module 240.
  • the correction module 240 is used to read the corrected position from the position correction device when the robot passes it; obtain the current position recorded by the robot itself; determine whether the corrected position is the same as the current position; and, if not, replace the current position with the corrected position.
  • the embodiment of the present invention also provides a robot. Please refer to Fig. 9, which shows the hardware structure of the robot capable of executing the automatic meal delivery method described in Figs. 2-6.
  • the robot 10 may be the robot 10 shown in FIG. 1.
  • the robot 10 includes: at least one processor 11; and a memory 12 communicatively connected with the at least one processor 11, and one processor 11 is taken as an example in FIG. 9.
  • the memory 12 stores instructions executable by the at least one processor 11, and the instructions are executed by the at least one processor 11 so that the at least one processor 11 can execute the method shown in FIGS. 2 to 6 above.
  • the processor 11 and the memory 12 may be connected by a bus or in other ways. In FIG. 9, the connection by a bus is taken as an example.
  • the memory 12 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the automatic meal delivery method in the embodiment of the present application, for example the modules shown in FIG. 7 and FIG. 8.
  • the processor 11 executes various functional applications and data processing of the server by running non-volatile software programs, instructions, and modules stored in the memory 12, that is, realizing the automatic meal delivery method in the foregoing method embodiment.
  • the memory 12 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of an automatic meal delivery device.
  • the memory 12 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 12 may optionally include a memory remotely provided with respect to the processor 11, and these remote memories may be connected to an automatic meal delivery device via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, perform the automatic meal delivery method in any of the foregoing method embodiments, for example executing the method steps of FIGS. 2 to 6 described above and realizing the functions of the modules and units in FIGS. 7 to 8.
  • the embodiment of the present application also provides a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, perform, for example, the method steps in FIGS. 2 to 6 described above and implement the functions of the modules in FIGS. 7 to 8.
  • the embodiments of the present application also provide a computer program product, including a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute the automatic meal delivery method in any of the foregoing method embodiments, for example the method steps in FIGS. 2 to 6 described above, realizing the functions of the modules in FIGS. 7 to 8.
  • the embodiment of the present invention provides an automatic meal delivery method applied to a robot.
  • the method receives a delivery instruction for delivering a meal, the instruction carrying the user's face; then, combined with the user's face, the user's position in the dining area is located; finally, the robot is controlled to deliver the meal to the user's location. A robot executing the automatic meal delivery method provided in the embodiment of the present invention can deliver the meal to the user accurately, improving the user's dining experience.
  • the device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each implementation manner can be implemented by means of software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • A person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the procedures of the above-mentioned method embodiments.
  • The storage medium can be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), etc.

Abstract

An automatic meal delivery method applied to a robot (10), relating to the technical field of electronic information. The method comprises: receiving a delivery instruction for delivering a meal, the delivery instruction carrying the face of a user; then, using the face of the user, locating the position of the user within a dining area; and finally controlling the robot (10) to deliver the meal to the location of the user. A robot (10) executing the described automatic meal delivery method can deliver a meal accurately to a user, improving the user's dining experience.

Description

Method, device and robot for automatic meal delivery
Technical field
The embodiments of the present invention relate to the field of electronic information technology, and in particular to a method, device and robot for automatic meal delivery.
Background art
With the development of artificial intelligence technology, robots are becoming more and more intelligent, and their applications across industries are increasingly widespread. With the advent of the era of smart new retail, some intelligent restaurants have appeared that use robots to provide meal delivery services to customers.
In the process of implementing the embodiments of the present invention, the inventor found that the above related technologies have at least the following problems: in application scenarios such as dining in restaurants, robots can usually only deliver meals to a certain fixed seat or near a certain table, and a customer must select a fixed seat when ordering. If the customer changes dining position or shares a table with others, the delivery robot cannot deliver the meal accurately into the user's hands.
Summary of the invention
In view of the above-mentioned shortcomings of the prior art, the purpose of the embodiments of the present invention is to provide an automatic meal delivery method, device, and robot that can deliver a meal accurately into the user's hands.
The purpose of the embodiments of the present invention is achieved through the following technical solutions:
To solve the above technical problem, in a first aspect, an embodiment of the present invention provides an automatic meal delivery method applied to a robot, the method including:
receiving a delivery instruction for delivering a meal, wherein the delivery instruction carries the face of a user;
locating the position of the user within a dining area using the face of the user;
controlling the robot to deliver the meal to the location of the user.
In some embodiments, the step of locating the position of the user within the dining area using the face of the user further includes:
acquiring an image of the dining area;
searching for the user in the image according to a preset face recognition algorithm, using the face of the user;
after the user is found, locating the position of the user according to a preset binocular stereo vision positioning algorithm.
In some embodiments, the dining area is configured with a first camera, the image is captured by the first camera, and the dining area includes at least two sub-areas;
before the step of locating the position of the user according to the preset binocular stereo vision positioning algorithm, the method further includes:
after the user is found, identifying the sub-area where the user is located;
controlling the robot to move to the sub-area where the user is located.
In some embodiments, the robot is configured with a second camera, the second camera being a binocular camera;
the step of locating the position of the user according to the preset binocular stereo vision positioning algorithm further includes:
acquiring, through the second camera, a depth image containing the user within the sub-area where the user is located, wherein the depth image contains depth information of the user;
acquiring the current position of the robot and the shooting direction of the second camera;
locating the position of the user according to the current position of the robot, the shooting direction of the second camera, and the depth information.
In some embodiments, the step of acquiring the depth image containing the user further includes:
acquiring the depth image containing the face of the user according to the preset face recognition algorithm.
In some embodiments, the step of controlling the robot to deliver the meal to the location further includes:
planning a meal delivery path according to the current position of the robot and the position of the user;
controlling the robot to move along the meal delivery path to the position of the user.
In some embodiments, the dining area is provided with a position correction device;
the method further includes:
when the robot passes the position correction device, reading a corrected position from the position correction device;
acquiring the current position recorded by the robot itself;
determining whether the corrected position is the same as the current position;
if they are not the same, updating the recorded current position to the corrected position.
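The correction steps above can be sketched as follows. This is a minimal illustration with hypothetical names; the embodiments do not prescribe a concrete implementation of the comparison:

```python
def apply_position_correction(recorded_position, corrected_position):
    """Compare the robot's own recorded position with the position read
    from the correction device; if they differ, adopt the corrected one."""
    if corrected_position != recorded_position:
        # Odometry has drifted: trust the fixed correction device.
        return corrected_position
    return recorded_position
```

In effect, the fixed correction device acts as a ground-truth landmark that cancels accumulated odometry drift.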
To solve the above technical problem, in a second aspect, an embodiment of the present invention provides an automatic meal delivery device applied to a robot, the device including:
a receiving module, configured to receive a delivery instruction for delivering a meal, wherein the delivery instruction carries the face of a user;
a positioning module, configured to locate the position of the user within a dining area using the face of the user;
a control module, configured to control the robot to deliver the meal to the location of the user.
To solve the above technical problem, in a third aspect, an embodiment of the present invention provides a robot, including:
at least one processor; and,
a memory communicatively connected with the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can execute the method described in the first aspect above.
To solve the above technical problem, in a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the method described in the first aspect above.
To solve the above technical problem, in a fifth aspect, embodiments of the present invention also provide a computer program product, the computer program product including a computer program stored on a computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute the method described in the first aspect above.
Compared with the prior art, the beneficial effects of the present invention are as follows. Different from the prior art, the embodiment of the present invention provides an automatic meal delivery method applied to a robot. The method receives a delivery instruction for delivering a meal, where the delivery instruction carries the face of a user; then, using the face of the user, locates the position of the user within a dining area; and finally controls the robot to deliver the meal to the location of the user. A robot executing the automatic meal delivery method provided in the embodiments of the present invention can deliver a meal accurately into the user's hands, improving the user's dining experience.
Description of the drawings
One or more embodiments are exemplarily illustrated by the pictures in the corresponding drawings. These exemplary illustrations do not constitute a limitation on the embodiments. Elements/modules and steps with the same reference numerals in the drawings denote similar elements/modules and steps, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1 is a schematic diagram of an exemplary system architecture to which an embodiment of the automatic meal delivery method of the present invention is applied;
FIG. 2 is a flowchart of an automatic meal delivery method provided by an embodiment of the present invention;
FIG. 3 is a sub-flowchart of step 120 in the method shown in FIG. 2;
FIG. 4 is a sub-flowchart of step 123 in the method shown in FIG. 3;
FIG. 5 is a sub-flowchart of step 130 in the method shown in FIG. 2;
FIG. 6 is a flowchart of another automatic meal delivery method provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an automatic meal delivery device provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another automatic meal delivery device provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of the hardware structure of a robot that executes the above automatic meal delivery method, provided by an embodiment of the present invention.
Detailed description
The present invention will be described in detail below in conjunction with specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be pointed out that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention.
In order to make the purpose, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not used to limit it.
It should be noted that, where there is no conflict, the various features in the embodiments of the present invention can be combined with each other, all falling within the protection scope of the present application. In addition, although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed with a module division different from that in the device, or in an order different from that shown in the flowchart. Furthermore, the words "first" and "second" used herein do not limit the data or execution order, but only distinguish identical or similar items with substantially the same functions and effects.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used in this specification includes any and all combinations of one or more of the associated listed items.
Please refer to FIG. 1, which is a schematic diagram of an exemplary system architecture to which an embodiment of the automatic meal delivery method of the present invention is applied. As shown in FIG. 1, the system includes: a robot 10, a terminal 20, and at least one first camera 30. The robot 10 is communicatively connected with the terminal 20 and the first camera 30 respectively. The communication connection may be a network connection and may include various connection types, such as wired, wireless communication, or fiber optic cable.
The user can purchase meals through the terminal 20; the robot 10 can collect the user's face through the terminal 20, and can also acquire images of the dining area through the first camera 30. The robot 10 can thus replace delivery personnel in completing the meal delivery work.
Specifically, when a user purchases a meal through the terminal 20, the terminal 20 collects the user's face, and the robot 10, communicatively connected with the terminal 20, obtains the user's face collected by the terminal 20. After the meal is prepared, the robot 10 receives from the terminal 20 a delivery instruction carrying the user's face. Through the first camera 30 installed in the dining area and the second camera installed on the robot 10, the robot 10 locates the user's position within the dining area and then delivers the meal to that position.
Optionally, the number of robots 10 used for meal delivery may be one or more depending on the application scenario; this application does not limit the number.
It should be noted that the automatic meal delivery method provided by the embodiments of the present application is generally executed by the above robot 10; correspondingly, the meal delivery device is generally disposed in the robot 10.
The terminal 20 is an electronic device capable of collecting a user's face image and installed with a meal-ordering application. The terminal 20 may be a mobile terminal, tablet, computer, or other device that carries a camera or can connect to an external camera to capture images. The terminal 20 may be the restaurant's own electronic device, which collects the user's face image and obtains the information of the meal purchased when the user places an order; after obtaining the meal information, it sends the information to the kitchen, and after the meal is prepared, sends a delivery instruction to the robot 10 notifying it to perform the delivery. The terminal 20 may also be the user's personal electronic device, through which the user selects meals via an application when wishing to dine.
Optionally, the number of terminals 20 may be one or more. When there are multiple terminals 20, data can be shared among them, or an upper-level terminal can be provided to collect the data gathered by all lower-level terminals. For example, the kitchen and the dining room may each be provided with a terminal 20, and the two terminals 20 can share data: after the dining-room terminal 20 collects the user's face and meal information, it sends the meal information to the kitchen terminal 20 for the kitchen staff to read; after the meal is prepared, the kitchen terminal 20 sends a delivery instruction notifying the robot 10 to deliver the meal, while the dining-room terminal 20 sends the face of the user corresponding to the meal information to the robot 10.
The first camera 30 is an image acquisition device whose field of view can cover the entire dining area. Optionally, to cover the entire dining area so that the first camera 30 can acquire the user's face image regardless of the user's orientation, in the dining space shown in the figure, first cameras 30 are preferably arranged on the restaurant ceiling above the four corners of the dining area, and/or a panoramic camera is arranged at the very center of the dining area, to obtain the approximate position of the user within the dining area.
In addition, since in the embodiments of the present invention the user's position within the dining area is specifically obtained through a preset binocular stereo vision positioning algorithm, it can be understood that the second camera 11 is a binocular camera, so as to obtain parallax and depth information from the images.
The embodiments of the present invention are further described below in conjunction with the accompanying drawings.
An embodiment of the present invention provides an automatic meal delivery method, which can be executed by the above robot 10. Please refer to FIG. 2, which shows a flowchart of an automatic meal delivery method applied to the above system architecture. The method includes, but is not limited to, the following steps:
Step 110: Receive a delivery instruction for delivering a meal.
The delivery instruction carries the user's face, which is collected when the user pays for the meal by face scan. In the embodiment of the present invention, the user orders a meal through the terminal 20 shown in FIG. 1. After the meal is prepared, the robot 10 receives a delivery instruction for delivering the meal and, upon obtaining it, executes the delivery instruction.
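As an illustration only (the embodiments do not define a message format, so all field names here are assumptions), a delivery instruction carrying the user's face might be modeled as:

```python
from dataclasses import dataclass, field

@dataclass
class DeliveryInstruction:
    order_id: str
    face_image: bytes              # face captured when the user ordered by face scan
    meal_items: list = field(default_factory=list)

def handle_delivery_instruction(instruction: DeliveryInstruction) -> str:
    # On receipt, the robot would locate the user by instruction.face_image
    # and then carry instruction.meal_items to that position.
    return f"delivering order {instruction.order_id}"
```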
Step 120: Locate the position of the user within the dining area using the user's face.
In the embodiment of the present invention, to accurately obtain the user's specific position in the dining area, face recognition is performed through the cameras, combined with the user's face carried in the delivery instruction, so as to achieve precise positioning of the user within the dining area.
Step 130: Control the robot to deliver the meal to the user's position.
After obtaining the user's location, the robot determines its route to the user and delivers the meal to that position. It can be understood that, in practical applications, since the restaurant environment is complex, people walking around may affect the robot's delivery process; therefore, the robot used in the embodiment of the present invention also has a certain obstacle avoidance capability and can deliver the meal to the user's position accurately and quickly.
The embodiment of the present invention provides an automatic meal delivery method applied to a robot. The method receives a delivery instruction for delivering a meal, where the delivery instruction carries the user's face; then, using the user's face, locates the user's position within the dining area; and finally controls the robot to deliver the meal to the user's position. A robot executing the automatic meal delivery method provided in the embodiment of the present invention can deliver the meal accurately into the user's hands, improving the user's dining experience.
In some embodiments, please refer to FIG. 3, which is a sub-flowchart of step 120 in the method shown in FIG. 2. Step 120 specifically includes:
Step 121: Acquire an image of the dining area.
In the embodiment of the present invention, the robot acquires an image of the dining area through the first camera installed in the restaurant, and then recognizes the user in that image through a preset face recognition algorithm; after the user is recognized, the robot locates the position of the user through the second camera mounted on the robot, using a preset binocular stereo vision positioning algorithm.
Step 122: Search for the user in the image according to a preset face recognition algorithm, using the face of the user.
The preset face recognition algorithm is an algorithm preset in the robot that can recognize faces. It extracts feature information from the faces in the image of the dining area acquired by the camera and from the face collected when the user paid by face scan, and matches the feature information to identify the user in the image of the dining area. The preset face recognition algorithm may be a commonly used face recognition algorithm such as Local Feature Analysis, the eigenface method (Eigenface or PCA), or neural networks.
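The matching step can be sketched as follows. This is a minimal illustration assuming faces have already been converted to fixed-length feature vectors by one of the algorithms named above; the feature extractor and the 0.8 threshold are assumptions, not specified by the embodiments:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1] for nonzero inputs."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_user(target_features, detected_faces, threshold=0.8):
    """Return the index of the detected face best matching the target
    user's features, or None if no face clears the threshold."""
    best_index, best_score = None, threshold
    for i, features in enumerate(detected_faces):
        score = cosine_similarity(target_features, features)
        if score >= best_score:
            best_index, best_score = i, score
    return best_index
```

A production system would use embeddings from a trained face recognition network, but the match-against-threshold structure is the same.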
Step 123: After the user is found, locate the position of the user according to a preset binocular stereo vision positioning algorithm.
After the robot acquires the image of the dining area through the restaurant's camera and recognizes the user in that image, the robot obtains the user's position relative to the robot through the preset binocular stereo vision positioning algorithm. The principle is to analyze the depth and parallax of the image of the dining area and of the images captured by the binocular camera carried by the robot itself, so as to obtain the user's depth information relative to the robot and, combined with the restaurant map, obtain the user's position.
In some embodiments, the dining area is configured with a first camera, the image is captured by the first camera, and the dining area includes at least two sub-areas. Please refer to FIG. 4, which is a sub-flowchart of step 123 in the method shown in FIG. 3. Step 123 specifically includes:
Step 1231: After the user is found, identify the sub-area where the user is located.
In practical applications, to save storage space, facilitate the robot's delivery work, and improve user experience, the height of a delivery robot is usually limited, so the line of sight of the second camera installed on the robot is easily blocked by obstacles such as tables and chairs or by other users, or the target user may be looking down at a mobile phone, making it difficult for the delivery robot to acquire the target user's face image. To solve this problem, the dining area of the restaurant can first be divided into N sub-areas. Please also refer to FIG. 1, in which the dining area S is divided into nine sub-areas; after the user is found through the camera 30, it can be determined that the sub-area where the user is located is sub-area S5.
Optionally, each sub-area can be distinguished by color, pattern, or a combination of both, with a distinguishing mark set and numbered in each sub-area, so that the sub-area where the user is located can be identified through the distinguishing mark.
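For a first camera with a top-down view, mapping a detected user to one of the nine sub-areas of FIG. 1 can be as simple as bucketing image coordinates into a grid. This is an illustrative sketch under the assumption of an evenly divided overhead image; the distinguishing-mark approach described above is an alternative:

```python
def sub_area_of(x, y, image_width, image_height, cols=3, rows=3):
    """Map pixel (x, y) in the overhead image to a sub-area number,
    counted row-major from 1 (S1..S9 for the 3x3 grid of FIG. 1)."""
    col = min(int(x * cols / image_width), cols - 1)
    row = min(int(y * rows / image_height), rows - 1)
    return row * cols + col + 1
```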
Step 1232: Control the robot to move to the sub-area where the user is located.
After the sub-area where the user is located is determined, the robot is controlled to move to that sub-area, so as to narrow the search range of the robot's second camera and further achieve precise positioning of the user.
In some embodiments, the robot is configured with a second camera, which is a binocular camera. Please continue to refer to FIG. 4; step 123 further includes:
Step 1233: Acquire, through the second camera, a depth image containing the user within the sub-area where the user is located.
The depth image contains the depth information of the user. The step of acquiring the depth image containing the user further includes: acquiring the depth image containing the face of the user according to the preset face recognition algorithm.
Specifically, the second camera is controlled to scan the sub-area where the user is located, collecting images in real time during the scan; then, through the preset face recognition algorithm described in step 122 above, it is determined whether the images collected by the second camera contain the face of the user. After an image containing the face of the user is recognized, the depth information of the user is acquired; the image containing the depth information is the depth image.
Optionally, since the second camera is a binocular camera, that is, it has two lenses, while the second camera scans the sub-area where the user is located and collects images for recognizing the user's face, only one of the two lenses may be used to collect images; when it is determined that an image contains the user's face, the other lens is turned on, and the user's depth information is obtained by combining the images collected by the two lenses with the parallax between them. It can be understood that the depth information is the distance information between the second camera and the user.
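The distance recovered from the two views follows the standard pinhole stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two lenses, and d the disparity in pixels. A sketch (the embodiments do not give specific camera parameters, so the values in the example are illustrative):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Distance from the binocular camera to the user: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

For example, a 700 px focal length, a 10 cm baseline, and a 35 px disparity place the user 2 m from the camera.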
步骤1234:获取所述机器人的当前位置和所述第二摄像头的拍摄方向。Step 1234: Obtain the current position of the robot and the shooting direction of the second camera.
进一步地,机器人在获取到包含所述用户的人脸的深度图像时,还需要获取该机器人当前的位置,以及第二摄像头的拍摄方向,以进一步精确定位所述用户。Further, when the robot obtains the depth image containing the face of the user, it also needs to obtain the current position of the robot and the shooting direction of the second camera to further accurately locate the user.
可以理解的是,由于机器人刚移动到所述用户所在的子区域时,可能存在用户背对机器人,导致机器人无法识别到用户的人脸的情况。因此,进一步地,机器人能够在所述子区域内移动搜索,以获取到包含用户的人脸的深度图像。或者,设置有能够识别用户的姿态或背影等的算法,通过识别用户的姿态或背影的方式,确定用户的位置,这种情况下,则需要图1所述的终端20也具备采集用户的姿态或背影的功能。It is understandable that when the robot has just moved to the sub-area where the user is located, there may be situations where the user faces the robot, causing the robot to fail to recognize the user's face. Therefore, further, the robot can move and search within the sub-region to obtain a depth image containing the user's face. Alternatively, an algorithm that can recognize the posture or back of the user is provided, and the position of the user is determined by recognizing the posture or back of the user. In this case, the terminal 20 described in FIG. 1 needs to also be equipped to collect the user’s posture. Or the function of the back view.
Step 1235: Locate the position of the user according to the current position of the robot, the shooting direction of the second camera, and the depth information.
According to the current position of the robot and the shooting direction of the second camera, the direction of the user relative to the robot can be determined; combined with the depth information, that is, the distance between the user and the robot, the precise position of the user within the dining area, namely within the sub-area where the user is located, can be obtained.
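The projection described in this step can be sketched as simple planar trigonometry: the camera's shooting direction gives a bearing, and the depth gives a range along that bearing. A minimal sketch, with all coordinates and headings assumed for illustration:

```python
import math

def locate_user(robot_xy, camera_heading_deg, depth_m):
    """Project the measured distance along the camera's shooting direction
    to obtain the user's coordinates in the dining-area frame.

    robot_xy: (x, y) of the robot in the dining-area coordinate system.
    camera_heading_deg: shooting direction, measured from the +x axis.
    depth_m: distance from the second camera to the user.
    """
    x, y = robot_xy
    theta = math.radians(camera_heading_deg)
    return (x + depth_m * math.cos(theta), y + depth_m * math.sin(theta))

# Robot at (2, 3), camera pointing along +x, user 3 m away -> user at (5.0, 3.0).
print(locate_user((2.0, 3.0), 0.0, 3.0))
```

A fuller implementation would also account for the camera's mounting offset on the robot body, which is omitted here.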
In the embodiment of the present invention, the second camera provided on the robot further searches for the target customer to determine the user's specific position. Since the area the food delivery robot needs to search is smaller, its binocular camera's line of sight is blocked less often, and the robot is closer to the target customer, which helps obtain clearer images and improves face recognition accuracy. In addition, the smaller search area shortens the search time, improving efficiency and user experience.
In some embodiments, referring to FIG. 5, which is a sub-flowchart of step 130 in the method shown in FIG. 2, step 130 specifically includes:
Step 131: Plan a meal delivery path according to the current position of the robot and the position of the user.
Step 132: Control the robot to move along the meal delivery path to the position of the user.
In the embodiment of the present invention, after searching for and locating the user's position, the robot obtains its own current position, plans a meal delivery path according to its current position and the user's position, and moves along the meal delivery path to the user's position.
In practical applications, since the environment between the robot and the user is usually complex, with other moving customers, service staff, animals, robots, and so on likely to appear, the robot also needs a certain obstacle avoidance capability.
Specifically, the robot can detect obstacles through sensors and re-plan the meal delivery path when it encounters an obstacle. Alternatively, after detecting an obstacle in advance, the robot may calculate whether the obstacle will move and how far, and plan the meal delivery path according to the obstacle's movement. For example, all meal delivery robots may share their delivery paths, and the current robot's path is determined according to the paths already planned, to avoid collisions between meal delivery robots. A combination of the two methods is also possible.
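One common way to realize the re-planning described above is to search an occupancy grid of the dining area and simply search again whenever a sensor marks a new obstacle cell. The grid layout and the choice of breadth-first search below are illustrative assumptions, not the patent's algorithm:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. Re-planning around a newly detected obstacle is
    simply calling this again with the updated grid.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk the predecessor chain back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A row of obstacle cells blocks the direct route, so the path detours around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```

Shared paths between robots could be handled by temporarily marking the cells of other robots' planned routes as occupied before planning.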
In some embodiments, referring to FIG. 6, which shows a flowchart of another automatic meal delivery method based on the methods described in FIGS. 2 to 5, the method further includes:
Step 141: When the robot passes the position correction device, read the corrected position from the position correction device.
Step 142: Obtain the current position recorded by the robot itself.
Step 143: Determine whether the corrected position is the same as the current position. If not, go to step 144; if so, go to step 131.
Step 144: Use the corrected position in place of the current position.
As the robot delivers meals inside the restaurant, its self-positioning drifts as the number of deliveries and the distance traveled increase. Therefore, an embodiment of the present invention further provides a method for correcting the position of the robot. Specifically, when the robot passes the correction device, it reads the corrected position from the correction device, the corrected position being the preset coordinate information of the correction device within the dining area. When it is determined that the current position of the robot is not the same as the corrected position, the robot's self-positioning has deviated; in this case, the corrected position is adopted in place of the current position to correct the positioning deviation. After the robot's current position has been corrected, step 131 is executed, and the meal delivery path from the robot to the user is re-planned according to the corrected current position, to ensure that the meal can be delivered accurately to the user.
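Steps 141 to 144 amount to a comparison between the robot's own record and the position read from the correction device. A minimal sketch of that logic follows; the coordinate values and the 5 cm comparison tolerance are assumptions for illustration, not from the patent:

```python
def correct_position(recorded_xy, device_xy, tolerance_m=0.05):
    """If the position read from the correction device differs from the
    robot's own record by more than the tolerance, trust the device and
    return its preset coordinates; otherwise keep the robot's record.

    tolerance_m: assumed 5 cm threshold for "the same position".
    """
    dx = recorded_xy[0] - device_xy[0]
    dy = recorded_xy[1] - device_xy[1]
    if dx * dx + dy * dy > tolerance_m * tolerance_m:
        return device_xy  # drift detected: adopt the corrected position
    return recorded_xy

# Assumed drift of 10 cm in x -> the device's coordinates (5.0, 2.0) are adopted.
print(correct_position((4.90, 2.00), (5.00, 2.00)))
```

After the correction, the delivery path would be re-planned from the returned coordinates, as the text describes for step 131.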
The correction device may be a transmitting device whose information the robot can read; correspondingly, a matching receiving device, such as a sensor, should be installed on the robot to obtain or read the information of the correction device. For example, the correction device may be a magnetic strip, in which case the robot is correspondingly provided with a magnetic sensor; or the correction device may be a radio frequency tag, in which case the robot should be provided with a reader/writer. Specifically, this can be set according to actual needs.
Referring also to FIG. 1, in this embodiment of the present invention the correction device A is taken to be a magnetic strip, and the strips are arranged on the boundary lines between the sub-areas: four magnetic strips divide the dining area into exactly nine sub-areas (S1, S2, ..., S9). The horizontally placed magnetic strips A1 and A2 carry coordinate information in the horizontal direction, and the vertically placed magnetic strips A3 and A4 carry coordinate information in the vertical direction. A magnetic sensor is correspondingly provided on the chassis of the meal delivery robot. Whenever the robot crosses a boundary line between sub-areas (a magnetic strip A), it performs a positioning check: the magnetic sensor reads the coordinate position of the current magnetic strip to correct the robot's position information and ensure the accuracy of the robot's operation.
An embodiment of the present invention further provides an automatic meal delivery device. Referring to FIG. 7, which shows a schematic structural diagram of the device, the automatic meal delivery device 200 includes a receiving module 210, a positioning module 220, and a control module 230.
The receiving module 210 is configured to receive a delivery instruction for delivering a meal, wherein the delivery instruction carries the face of the user.
The positioning module 220 is configured to locate the position of the user in the dining area in combination with the face of the user.
The control module 230 is configured to control the robot to deliver the meal to the position of the user.
In some embodiments, the positioning module 220 is further configured to: obtain an image of the dining area; search for the user in the image according to a preset face recognition algorithm in combination with the face of the user; and, after the user is found, locate the position of the user according to a preset binocular stereo vision positioning algorithm.
In some embodiments, the dining area is provided with a first camera, the image is collected by the first camera, and the dining area includes at least two sub-areas; the positioning module 220 is further configured to identify the sub-area where the user is located after the user is found, and to control the robot to move to that sub-area.
In some embodiments, the robot is provided with a second camera, which is a binocular camera, and the positioning module 220 is further configured to: obtain, through the second camera, a depth image containing the user in the sub-area where the user is located, the depth image containing the user's depth information; obtain the current position of the robot and the shooting direction of the second camera; and locate the position of the user according to the current position of the robot, the shooting direction of the second camera, and the depth information.
In some embodiments, the positioning module 220 is further configured to obtain the depth image containing the user's face according to the preset face recognition algorithm.
In some embodiments, the control module 230 is further configured to plan a meal delivery path according to the current position of the robot and the position of the user, and to control the robot to move along the meal delivery path to the position of the user.
In some embodiments, the dining area is provided with a position correction device. Referring also to FIG. 8, which shows a schematic structural diagram of another automatic meal delivery device, the device 200 further includes a correction module 240.
The correction module 240 is configured to: read the corrected position from the position correction device when the robot passes it; obtain the current position recorded by the robot itself; determine whether the corrected position is the same as the current position; and, if not, use the corrected position in place of the current position.
An embodiment of the present invention further provides a robot. Referring to FIG. 9, which shows the hardware structure of a robot capable of executing the automatic meal delivery method described in FIGS. 2 to 6, the robot 10 may be the robot 10 shown in FIG. 1.
The robot 10 includes at least one processor 11 and a memory 12 communicatively connected with the at least one processor 11 (one processor 11 is taken as an example in FIG. 9). The memory 12 stores instructions executable by the at least one processor 11, and the instructions are executed by the at least one processor 11 so that the at least one processor 11 can execute the automatic meal delivery method described in FIGS. 2 to 6. The processor 11 and the memory 12 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 9.
As a non-volatile computer-readable storage medium, the memory 12 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the automatic meal delivery method in the embodiments of the present application, for example, the modules shown in FIG. 7 and FIG. 8. By running the non-volatile software programs, instructions, and modules stored in the memory 12, the processor 11 executes the various functional applications and data processing of the server, thereby implementing the automatic meal delivery method of the foregoing method embodiments.
The memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the automatic meal delivery device, and the like. In addition, the memory 12 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 12 may optionally include memories remotely located relative to the processor 11, and these remote memories may be connected to the automatic meal delivery device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, perform the automatic meal delivery method in any of the foregoing method embodiments, for example, performing the method steps of FIGS. 2 to 6 described above and implementing the functions of the modules and units in FIGS. 7 and 8.
The above product can execute the methods provided in the embodiments of the present application and has the corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, refer to the methods provided in the embodiments of the present application.
An embodiment of the present application further provides a non-volatile computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, for example, to perform the method steps of FIGS. 2 to 6 described above and implement the functions of the modules in FIGS. 7 and 8.
An embodiment of the present application further provides a computer program product, including a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to perform the automatic meal delivery method in any of the foregoing method embodiments, for example, performing the method steps of FIGS. 2 to 6 described above and implementing the functions of the modules in FIGS. 7 and 8.
An embodiment of the present invention provides an automatic meal delivery method applied to a robot. The method receives a delivery instruction for delivering a meal, the delivery instruction carrying the face of a user; then, in combination with the face of the user, locates the position of the user within the dining area; and finally controls the robot to deliver the meal to the position of the user. A robot executing the automatic meal delivery method provided by the embodiments of the present invention can deliver the meal accurately to the user, improving the user's dining experience.
It should be noted that the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
Through the description of the above implementations, a person of ordinary skill in the art can clearly understand that each implementation can be realized by software plus a general-purpose hardware platform, and of course also by hardware. A person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Within the spirit of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present invention as described above exist; for brevity, they are not provided in detail. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. An automatic meal delivery method, characterized in that it is applied to a robot, the method comprising:
    receiving a delivery instruction for delivering a meal, wherein the delivery instruction carries the face of a user;
    locating the position of the user in a dining area in combination with the face of the user;
    controlling the robot to deliver the meal to the position of the user.
  2. The method according to claim 1, characterized in that
    the step of locating the position of the user in the dining area in combination with the face of the user further comprises:
    acquiring an image of the dining area;
    searching for the user in the image according to a preset face recognition algorithm in combination with the face of the user;
    after the user is found, locating the position of the user according to a preset binocular stereo vision positioning algorithm.
  3. The method according to claim 2, characterized in that the dining area is provided with a first camera, the image is collected by the first camera, and the dining area comprises at least two sub-areas;
    before the step of locating the position of the user according to a preset binocular stereo vision positioning algorithm, the method further comprises:
    after the user is found, identifying the sub-area where the user is located;
    controlling the robot to move to the sub-area where the user is located.
  4. The method according to claim 3, characterized in that the robot is provided with a second camera, the second camera being a binocular camera,
    and the step of locating the position of the user according to a preset binocular stereo vision positioning algorithm further comprises:
    acquiring, through the second camera, a depth image containing the user in the sub-area where the user is located, wherein the depth image contains depth information of the user;
    acquiring the current position of the robot and the shooting direction of the second camera;
    locating the position of the user according to the current position of the robot, the shooting direction of the second camera, and the depth information.
  5. The method according to claim 4, characterized in that
    the step of acquiring a depth image containing the user further comprises:
    acquiring the depth image containing the face of the user according to the preset face recognition algorithm.
  6. The method according to claim 4 or 5, characterized in that the step of controlling the robot to deliver the meal to the position further comprises:
    planning a meal delivery path according to the current position of the robot and the position of the user;
    controlling the robot to move along the meal delivery path to the position of the user.
  7. The method according to claim 6, characterized in that the dining area is provided with a position correction device;
    the method further comprises:
    when the robot passes the position correction device, reading a corrected position from the position correction device;
    acquiring the current position recorded by the robot itself;
    determining whether the corrected position is the same as the current position;
    if not, using the corrected position in place of the current position.
  8. An automatic meal delivery device, characterized in that it is applied to a robot, the device comprising:
    a receiving module, configured to receive a delivery instruction for delivering a meal, wherein the delivery instruction carries the face of a user;
    a positioning module, configured to locate the position of the user in a dining area in combination with the face of the user;
    a control module, configured to control the robot to deliver the meal to the position of the user.
  9. A robot, characterized in that it comprises:
    at least one processor; and
    a memory communicatively connected with the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method according to any one of claims 1-7.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions, the computer-executable instructions being configured to cause a computer to execute the method according to any one of claims 1-7.
PCT/CN2019/109559 2019-09-30 2019-09-30 Automatic meal delivery method and apparatus, and robot WO2021062681A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109559 WO2021062681A1 (en) 2019-09-30 2019-09-30 Automatic meal delivery method and apparatus, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109559 WO2021062681A1 (en) 2019-09-30 2019-09-30 Automatic meal delivery method and apparatus, and robot

Publications (1)

Publication Number Publication Date
WO2021062681A1 true WO2021062681A1 (en) 2021-04-08

Family

ID=75336345

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/109559 WO2021062681A1 (en) 2019-09-30 2019-09-30 Automatic meal delivery method and apparatus, and robot

Country Status (1)

Country Link
WO (1) WO2021062681A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114211509A (en) * 2021-12-31 2022-03-22 上海钛米机器人股份有限公司 Food delivery robot's box and food delivery robot
CN114211509B (en) * 2021-12-31 2024-05-03 上海钛米机器人股份有限公司 Box of meal delivery robot and meal delivery robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120100505A (en) * 2011-03-04 2012-09-12 (주)아이엠테크놀로지 Unmanned serving system and method for controlling the same
KR20170112487A (en) * 2016-03-31 2017-10-12 경남대학교 산학협력단 Intelligent service robot system for restaurant
CN108245099A (en) * 2018-01-15 2018-07-06 深圳市沃特沃德股份有限公司 Robot moving method and device
CN108748174A (en) * 2018-05-31 2018-11-06 芜湖酷哇机器人产业技术研究院有限公司 It is capable of the meal delivery robot of automatic identification
CN208697447U (en) * 2018-05-25 2019-04-05 上海非码网络科技有限公司 Intelligent restaurant robot
CN109993157A (en) * 2019-05-06 2019-07-09 深圳前海微众银行股份有限公司 Allocator, device, equipment and readable storage medium storing program for executing based on robot
CN110008268A (en) * 2019-03-19 2019-07-12 深兰科技(上海)有限公司 A kind of recognition methods, device and the storage medium of position of eating

Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 19947514; country of ref document: EP; kind code of ref document: A1.
NENP: non-entry into the national phase. Ref country code: DE.
32PN (EP): public notification in the EP bulletin as the address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/08/2022).
122 (EP): PCT application non-entry in European phase. Ref document number: 19947514; country of ref document: EP; kind code of ref document: A1.