CN115657654A - Visual identification method for food delivery robot - Google Patents


Info

Publication number
CN115657654A
Authority
CN
China
Prior art keywords
target
robot
food delivery
food
robots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210882218.9A
Other languages
Chinese (zh)
Other versions
CN115657654B (en)
Inventor
邓俊广
席娓
邓俊涛
陈锦锋
张挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Cogstek Automation Technology Co ltd
Original Assignee
Dongguan Cogstek Automation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Cogstek Automation Technology Co ltd filed Critical Dongguan Cogstek Automation Technology Co ltd
Priority to CN202210882218.9A priority Critical patent/CN115657654B/en
Publication of CN115657654A publication Critical patent/CN115657654A/en
Application granted granted Critical
Publication of CN115657654B publication Critical patent/CN115657654B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Manipulator (AREA)

Abstract

The invention provides a visual identification method for a food delivery robot, which comprises the following steps: identifying the food delivery robots in a control area and determining a target robot; positioning a target food delivery position in the control area, determining a target food delivery point, and transmitting a food delivery instruction for the target food delivery point to the controller of the nearest target robot; based on a preset visual target tracking algorithm, starting a preset camera device on the target robot, locking the target food delivery point and the corresponding user, and determining a visual locking target; and receiving the food delivery instruction corresponding to the target food delivery point, and linking the visual locking target with the target robot.

Description

Visual identification method for food delivery robot
Technical Field
The invention belongs to the technical field of food delivery cart robots, and particularly relates to a visual identification method for a food delivery robot.
Background
In recent years, with the development of artificial intelligence, robots have been applied ever more widely in daily life. More and more people choose contactless catering, and many restaurants have begun to introduce single-function meal delivery robots. Delivering meals accurately, quickly and precisely has therefore become a key factor in the efficiency of a robot's automated food preparation and delivery operations.
Disclosure of Invention
In order to solve the above problems, a main object of the present invention is to provide a visual identification method for a food delivery robot.
The technical scheme provides a visual identification method for a food delivery robot, which comprises the following steps:
identifying the food delivery robots in a control area and determining a target robot;
positioning a target food delivery position in the control area, determining a target food delivery point, and transmitting a food delivery instruction for the target food delivery point to the controller of the nearest target robot;
based on a preset visual target tracking algorithm, starting a preset camera device on the target robot, locking the target food delivery point and the corresponding user, and determining a visual locking target;
and receiving the food delivery instruction corresponding to the target food delivery point, and linking the visual locking target with the target robot.
As an embodiment of the present invention, the identifying a food delivery robot in a control area and determining a target robot includes:
acquiring the area range of the control area, and identifying the identification numbers of the food delivery robots within that range;
retrieving the information of each corresponding food delivery robot through its identification number; wherein
the information at least comprises the machine model, the machine code and the machine battery level;
and recording and marking each food delivery robot whose information is retrieved successfully, and determining the target robots.
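The identification-and-retrieval step above can be sketched as a simple registry lookup. The registry layout, field names and battery threshold below are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: each food delivery robot in the control area is looked
# up by its identification number; only robots whose information is retrieved
# successfully (and that have enough charge) become target robots.
ROBOT_REGISTRY = {  # assumed registry: id -> {model, code, battery %}
    "R01": {"model": "FD-2", "code": "A1", "battery": 85},
    "R02": {"model": "FD-2", "code": "A2", "battery": 12},
}

def identify_targets(ids_in_area, min_battery=20):
    """Retrieve each robot's info; keep those found with enough charge."""
    targets = []
    for rid in ids_in_area:
        info = ROBOT_REGISTRY.get(rid)
        if info is not None and info["battery"] >= min_battery:
            targets.append(rid)
    return targets
```

For example, `identify_targets(["R01", "R02", "R99"])` keeps only `"R01"`: `"R02"` is rejected for low battery and `"R99"` is not in the registry.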
As an embodiment of the present technical solution, the locating a target food delivery position in the control area, determining a target food delivery point, and transmitting the target food delivery point to the controller of the nearest target robot comprises:
positioning the target food delivery position in the control area through a preset space positioning technology, and determining the target food delivery point;
calculating the spatial distance between the target food delivery point and each target robot, and screening out the target robot with the closest spatial distance; wherein
at least one target robot has the closest spatial distance;
and transmitting the target food delivery point to the controller of the nearest target robot.
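The nearest-robot screening can be sketched as below. The coordinate representation is hypothetical; ties are kept so that "at least one" closest robot results, as the text requires.

```python
import math

def nearest_robots(serving_point, robot_positions):
    """Return every robot id tied for the smallest spatial distance
    to the target food delivery point (there is always at least one)."""
    dists = {rid: math.dist(serving_point, pos)
             for rid, pos in robot_positions.items()}
    d_min = min(dists.values())
    return sorted(rid for rid, d in dists.items()
                  if math.isclose(d, d_min))
```

For example, with the serving point at the origin, robots at (1, 0) and (0, 1) tie at distance 1 and both are returned.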
As an embodiment of the present invention, the transmitting the target food delivery point to the controller of the nearest target robot comprises:
when exactly one target robot is successfully screened, transmitting the target food delivery point to the controller of that target robot;
and when more than one target robot is successfully screened, transmitting the target food delivery point synchronously to the controllers of all of those target robots; the target robot that receives it first feeds a reception-success signal back to a preset control terminal, while the other target robots discard the information for that target food delivery point.
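A minimal sketch of the first-receiver-wins dispatch described above. The acknowledgement ordering is simulated as a list here; a real system would use actual network replies, and the function names are illustrative.

```python
def dispatch(serving_point, candidates, ack_order):
    """Send the serving point to every candidate controller; the first
    candidate to acknowledge keeps the task, the rest discard it."""
    assigned = None
    dropped = []
    for rid in ack_order:              # order in which acknowledgements arrive
        if rid not in candidates:
            continue
        if assigned is None:
            assigned = rid             # first success is reported to the terminal
        else:
            dropped.append(rid)        # later receivers interrupt the info
    return assigned, dropped
```

With two candidates and `"b"` acknowledging first, `"b"` keeps the task and `"a"` discards it.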
As an embodiment of the present technical solution, the starting a preset camera device on the target robot based on a preset visual target tracking algorithm, locking the target food delivery point and the corresponding user, and determining a visual locking target comprises:
starting the preset camera device on the target robot closest to the target food delivery point, detecting the target food delivery point and the corresponding user, and locking a target detection frame;
detecting the relative distance between the target food delivery point and the nearest target robot;
determining, from the relative distance, the scale change of the relative distance between the target food delivery point and the nearest target robot;
determining the scale change of the target detection frame of the target food delivery point through the target detection frame;
and locking the target food delivery point and the corresponding user through the relative distance, the scale change of the relative distance, the target detection frame mark and the scale change of the target detection frame, and determining the visual locking target.
As an embodiment of the present technical solution, the determining, through the target detection frame, the scale change of the target detection frame of the target food delivery point comprises:
extracting feature points in the target detection frame of the target food delivery point, tracking the extracted feature points through a preset tracking algorithm, and determining a tracking detection image;
calculating an affine transformation matrix of the target detection frame of the target food delivery point from the tracking detection image;
and calculating the scale change of the target detection frame of the target food delivery point from the affine transformation matrix.
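The last step can be sketched as below. In practice the 2x3 affine matrix would be estimated from the tracked feature points (for example by least squares); here it is passed in directly, and taking the square root of the absolute determinant of its linear part as the scale factor is an assumption of this sketch.

```python
import math

def scale_change(affine):
    """Given a 2x3 affine matrix [[a, b, tx], [c, d, ty]] of the target
    detection frame between two tracked images, return the linear scale
    factor sqrt(|det|) of its 2x2 part (1.0 means no size change)."""
    (a, b, _tx), (c, d, _ty) = affine
    return math.sqrt(abs(a * d - b * c))
```

A pure doubling transform gives a scale change of 2.0; the identity gives 1.0.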
As an embodiment of the present technical solution, the receiving a food delivery instruction corresponding to the target food delivery point and linking the visual locking target with the target robot comprises:
the target robot closest to the target food delivery point receives the corresponding food delivery instruction sent from that target food delivery point;
determining the matching relation between each target robot and each target food delivery point;
and associating each visual locking target with each target food delivery point according to the matching relation between each target robot and each target food delivery point.
As an embodiment of the present invention, the tracking algorithm comprises at least one of frame sampling, frame mixing, or optical flow.
As an embodiment of the present technical solution, the receiving a food delivery instruction corresponding to the target food delivery point and linking the visual locking target with the target robot comprises:
the method comprises the steps that a target robot receives a corresponding food delivery order sent by a corresponding target food delivery point, then food delivery is carried out, position confirmation is carried out on the target robot and other food delivery robots at any time, if the distance between the target robot and other food delivery robots is smaller than a preset value, other food delivery robots smaller than the preset distance are marked as nearby robots, a priority robot in each nearby robot and the target robot is determined according to the number and weight of food products on each nearby robot and the target robot, the priority robot is controlled to send infrared rays in the advancing direction (infrared receiving devices with large areas are arranged around the food delivery robot, a row of infrared sending devices are arranged above the infrared receiving devices and irradiate downwards as shown in the following figure), the downward irradiation angle of the infrared sending devices is calculated and controlled according to the preset value of the preset distance, infrared receiving devices with large areas are arranged around each nearby robot and the target robot, the priority robot is subjected to avoidance until the distance between the priority robots is larger than the preset value according to avoid the food delivery orders, and then the food delivery robots are prevented from colliding with each other food delivery robot in a specific linkage mode, and the food delivery robots are carried out continuously through the collision between the food delivery robots in a specific linkage mode of the food delivery robots,
step A1: controlling, by using formula (1), the downward irradiation angle of the row of infrared transmitting devices of each food delivery robot according to the preset distance value:

θ = arctan(h / S0)    (1)

wherein θ represents the downward irradiation angle of the row of infrared transmitting devices of each food delivery robot (the angle of downward swing measured from the horizontal irradiation direction of the infrared transmitting devices); S0 represents the preset distance value; h represents the height of the row of infrared transmitting devices of each food delivery robot;
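Under the reading that the beam emitted from height h must reach the ground at the preset horizontal distance S0, the downward swing angle reduces to an arctangent. This geometric interpretation is inferred from the variable definitions, since the original formula image is not reproduced in the text.

```python
import math

def irradiation_angle_deg(h, s0):
    """Downward swing angle (degrees, from horizontal) at which a beam
    emitted from height h meets the ground at horizontal distance s0."""
    return math.degrees(math.atan2(h, s0))
```

For instance, a transmitter mounted 1 m high aimed at a 1 m preset distance swings down 45 degrees.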
step A2: if the distance between the target robot and other food delivery robots is smaller than the preset value, determining, by using formula (2), the priority robot among the target robot and its nearby robots according to the number and weight of the meals on each nearby robot and on the target robot:

G0 = n0 × M0;  G(i) = n(i) × M(i), i = 1, …, D;
I0 = 0 if G0 ≥ max{G(1), …, G(D)}, otherwise I0 is the value of i at which G(i) attains that maximum    (2)

wherein I0 represents the priority-robot selection value among the target robot and its nearby robots; n0 represents the number of meals on the target robot; M0 represents the weight of the meals on the target robot; n(i) represents the number of meals on the i-th nearby robot of the target robot; M(i) represents the weight of the meals on the i-th nearby robot of the target robot; D represents the total number of nearby robots of the target robot; the maximum is taken by substituting i from 1 to D, and the corresponding value of i is the one at which the maximum is obtained; G(i) and G0 represent intermediate quantities introduced to simplify the calculation. If I0 is not 0, the I0-th nearby robot of the target robot is the priority robot, and the target robot and the other nearby robots are controlled to avoid the I0-th nearby robot;
if I0 is 0, the target robot is the priority robot, and all nearby robots of the target robot are controlled to avoid the target robot;
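The priority selection of step A2 can be sketched as below. Treating the intermediate quantity G as the product of meal count and meal weight is an assumption inferred from the variable definitions, since the formula image itself is not reproduced in the text.

```python
def priority_robot(n0, m0, nearby_loads):
    """nearby_loads: [(n(i), M(i)) for i = 1..D].
    Returns I0 = 0 when the target robot itself has the largest load
    product n*M (it becomes the priority robot), otherwise the 1-based
    index of the nearby robot that does."""
    g0 = n0 * m0
    g = [n * m for n, m in nearby_loads]
    if not g or g0 >= max(g):
        return 0          # target robot is the priority robot
    return g.index(max(g)) + 1
```

With the target robot carrying 2 meals of total weight 1.0 and the first nearby robot carrying 3 meals of weight 2.0, the first nearby robot (I0 = 1) becomes the priority robot.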
step A3: controlling, by using formula (3), each nearby robot or the target robot other than the priority robot to move and avoid according to the number and position of the infrared signals received by its large-area infrared receiving device:

L(a) = R × k(a) / K    (3)

wherein L(a) represents the shortest avoiding distance that the a-th food delivery robot required to avoid must move; R represents the width of any side of each food delivery robot; k(a) represents the number of infrared signals received by the large-area infrared receiving device of the a-th food delivery robot required to avoid; K represents the total number of infrared transmitting devices in the row arranged on any side of each food delivery robot;
and controlling the a-th food delivery robot required to avoid to move in the direction that reduces the infrared signals received by its large-area infrared receiving device, by at least the distance L(a), thereby completing the avoidance.
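Step A3's shortest avoiding distance can be sketched as the robot's side width scaled by the fraction of the K transmitter beams its receiver actually picks up. This proportional form is inferred from the variable definitions, as the formula image is not reproduced in the text.

```python
def avoidance_distance(r, k_received, k_total):
    """Shortest distance L(a) the a-th robot must move sideways: the side
    width r times the fraction of the K beams its receiver picks up."""
    if k_total <= 0:
        raise ValueError("k_total must be positive")
    return r * k_received / k_total
```

A robot 0.6 m wide that receives 5 of 10 beams must move at least 0.3 m out of the priority robot's path; receiving no beams means no movement is needed.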
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a visual identification method for a food delivery robot according to an embodiment of the present invention.
Fig. 2 is a flowchart of a visual identification method for a food delivery robot according to an embodiment of the present invention.
Fig. 3 is a flowchart of a visual identification method for a food delivery robot according to an embodiment of the present invention.
Fig. 4 is a scene diagram of a visual identification method for a food delivery robot according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly connected to the other element. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to the other element.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the device or element so referred to must be in a particular orientation, constructed or operated in a particular orientation, and is not to be construed as limiting the invention.
Moreover, it should be noted that, in this document, relational terms such as first and second, and the like are only used for distinguishing one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between the entities or operations, and the terms "plurality" and "a plurality" mean two or more unless explicitly and specifically limited otherwise. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The first embodiment is as follows:
the technical scheme provides a visual identification method for a food delivery robot, which comprises the following steps:
identifying the food delivery robot in the control area and determining a target robot;
positioning a target food delivery position in the control area, determining a target food delivery point, and transmitting a target food delivery point food delivery instruction to a controller of the target robot nearby;
based on a preset visual target tracking algorithm, starting a preset camera device by a target robot, locking a target food delivery point and a corresponding user, and determining a visual locking target;
and receiving a food delivery instruction corresponding to the target food delivery point, and linking the visual locking target with the target robot.
The working principle and the beneficial effects of the technical scheme are as follows:
the technical scheme provides a visual identification method for a food delivery robot, which comprises the steps of identifying the food delivery robot in a control area, determining a target robot, only controlling the target robot in the food delivery area, positioning a target food delivery position in the control area, determining a target food delivery point, transmitting a target food delivery point food delivery instruction to a controller of the target robot nearby, distributing nearby, improving food delivery efficiency, starting a preset camera device based on a preset visual target tracking algorithm, locking the target food delivery point and a corresponding user, determining a visual locking target, improving locking accuracy, receiving the food delivery instruction corresponding to the target food delivery point, and linking the visual locking target with the target robot, thereby providing a high-automation and intelligent food delivery mode.
Example two:
this technical scheme provides an embodiment, wherein the identifying the food delivery robots in the control area and determining the target robot comprises:
acquiring the area range of the control area, and identifying the identification numbers of the food delivery robots within that range;
retrieving the information of each corresponding food delivery robot through its identification number; wherein
the information at least comprises the machine model, the machine code and the machine battery level;
and recording and marking each food delivery robot whose information is retrieved successfully, and determining the target robots.
The working principle and the beneficial effects of the technical scheme are as follows:
according to the technical scheme, the food delivery robots in the control area are identified, the target robot is determined, the area range of the control area is obtained, and the mark numbers of the food delivery robots in the area range are identified, so that the information of the corresponding food delivery robots can be conveniently retrieved through standard numbers from the instruction calling library; the information at least comprises the machine model, the machine code and the machine electric quantity, so that the condition that the robot needs to be adapted with the robot and needs to be directly adapted when the robot is connected with a control terminal due to the reasons of insufficient electric quantity and the like is avoided, and the accuracy of adaptation is improved; and recording and identifying the successfully searched food delivery robot, determining the target robot and improving the efficiency.
Example three:
the present technical solution provides an embodiment, wherein the controller for locating a target meal delivery position in a control area, determining a target meal delivery point, and transmitting the target meal delivery point to a target robot in a near distance includes:
positioning a target food delivery position in the control area through a preset space positioning technology, and determining target food delivery points;
calculating the space distance between the target food serving point and the target robot, and screening out the target robot with the closest space distance; wherein the content of the first and second substances,
the target robot with the closest spatial distance comprises at least one;
transmitting the target food delivery point to a controller of the target robot nearby.
The working principle and the beneficial effects of the technical scheme are as follows:
In this technical scheme, the target food delivery position in the control area is positioned, the target food delivery point is determined, and the target food delivery point is transmitted to the controller of the nearest target robot. Through a preset space positioning technology, the target food delivery position in the control area is positioned and the target food delivery point is determined, so that the food delivery place is accurately located and delivery errors or deviations by the robot are avoided. The spatial distance between the target food delivery point and each target robot is calculated, and the target robots with the closest spatial distance, of which there is at least one, are screened out. The target food delivery point is then transmitted to the controller of the nearest target robot, and at least one target robot reacts to and acknowledges the food delivery point instruction.
Example four:
this technical solution provides an embodiment, wherein the transmitting the target food delivery point to the controller of the nearest target robot comprises:
when exactly one target robot is successfully screened, transmitting the target food delivery point to the controller of that target robot;
and when more than one target robot is successfully screened, transmitting the target food delivery point synchronously to the controllers of all of those target robots; the target robot that receives it first feeds a reception-success signal back to a preset control terminal, while the other target robots discard the information for that target food delivery point.
The working principle and the beneficial effects of the technical scheme are as follows:
the technical scheme includes that the target food delivery points are transmitted to the controller of the target robot nearby, and the method comprises the steps that when one target robot is selected successfully, the target food delivery points are transmitted to the controller of the target robot; the target robot is used for feeding back signals of the target food delivery points, when the number of the successfully screened target robots is more than one, the target food delivery points are synchronously transmitted to the controllers of all the target robots, the successfully received signals are fed back to the preset control terminal by the target robot which receives the signals successfully firstly, and meanwhile, the other target robots interrupt the information of the corresponding target food delivery points, so that unnecessary resource loss is reduced
Example five:
this technical scheme provides an embodiment, wherein the starting a preset camera device on the target robot based on a preset visual target tracking algorithm, locking the target food delivery point and the corresponding user, and determining a visual locking target comprises:
starting the preset camera device on the target robot closest to the target food delivery point, detecting the target food delivery point and the corresponding user, and locking a target detection frame;
detecting the relative distance between the target food delivery point and the nearest target robot;
determining, from the relative distance, the scale change of the relative distance between the target food delivery point and the nearest target robot;
determining the scale change of the target detection frame of the target food delivery point through the target detection frame;
and locking the target food delivery point and the corresponding user through the relative distance, the scale change of the relative distance, the target detection frame mark and the scale change of the target detection frame, and determining the visual locking target.
The working principle and the beneficial effects of the technical scheme are as follows:
the technical scheme includes that a preset camera device is started by a target robot based on a preset visual target tracking algorithm to lock a target food serving point and a corresponding user, a visual locking target is determined, the preset camera device is started by the target robot closest to the target food serving point, the target food serving point and the corresponding user are detected, a target detection frame is locked, objects in a retrieval frame are accurately locked, and the relative distance between the target food serving point and the target food serving point closest to the target robot is detected; the scale change of the relative distance between the target food serving point and the nearest target robot is determined through the relative distance, the scale change of the target detection frame of the target food serving point is determined through the target detection frame, so that the space correction of a two-dimensional picture and a three-dimensional space is realized, the target food serving point and a corresponding user are locked through the relative distance, the scale change of the target detection frame mark and the scale change of the target detection frame, the visual locking target is determined, and the accurate identification of visual grabbing is improved.
Example six:
the technical solution provides an embodiment, wherein the determining, through the target detection frame, the scale change of the target detection frame of the target food delivery point comprises:
extracting feature points in the target detection frame of the target food delivery point, tracking the extracted feature points through a preset tracking algorithm, and determining a tracking detection image;
calculating an affine transformation matrix of the target detection frame of the target food delivery point from the tracking detection image;
and calculating the scale change of the target detection frame of the target food delivery point from the affine transformation matrix.
The working principle and the beneficial effects of the technical scheme are as follows:
the technical scheme determines the scale change of the target detection frame of the target food delivery point through the target detection frame, and comprises the following steps: feature points in the target detection frame of the target food delivery point are extracted and tracked through a preset tracking algorithm, and a tracking detection image is determined, which improves detection accuracy. From the tracking detection image, an affine transformation matrix of the target detection frame of the target food delivery point is calculated, and from the affine transformation matrix the scale change of the target detection frame is calculated, so that the target food delivery point is located more accurately.
Example seven:
this technical scheme provides an embodiment, wherein the receiving the food delivery instruction corresponding to the target food delivery point and linking the visual locking target with the target robot comprises:
the target robot closest to the target food delivery point receives the corresponding food delivery instruction sent from that target food delivery point;
determining the matching relation between each target robot and each target food delivery point;
and associating each visual locking target with each target food delivery point according to the matching relation between each target robot and each target food delivery point.
The working principle and the beneficial effects of the technical scheme are as follows:
the technical scheme receives the food delivery instruction corresponding to the target food delivery point and links the visual locking target with the target robot. The target robot closest to the target food delivery point receives the food delivery instruction corresponding to that target food delivery point; the matching relation between each target robot and each target food delivery point is determined; and each visual locking target is associated with its target food delivery point according to that matching relation, completing the linkage of the target robots through matching.
Example eight:
the present disclosure provides an embodiment, wherein the tracking algorithm at least includes one of frame sampling, frame mixing, or optical flow.
Example nine:
the technical solution provides an embodiment, wherein the receiving a food delivery instruction corresponding to the target food delivery point and linking the visual locking target with the target robot comprises:
the target robot receives the corresponding food delivery instruction sent by the corresponding target food delivery point and then carries out the delivery, confirming its position relative to the other food delivery robots at all times. If the distance between the target robot and another food delivery robot is smaller than a preset value, each food delivery robot closer than the preset distance is marked as a nearby robot, and a priority robot among the nearby robots and the target robot is determined according to the number and weight of the meals on each nearby robot and on the target robot. The priority robot is controlled to emit infrared rays in its direction of travel (large-area infrared receiving devices are arranged around each food delivery robot, and a row of infrared transmitting devices arranged above the receiving devices irradiates downwards, as shown in the figure); the downward irradiation angle of the infrared transmitting devices is calculated and controlled from the preset distance value. According to the infrared signals received by their large-area infrared receiving devices, the robots other than the priority robot avoid the priority robot until their distance from it is larger than the preset value, and then continue to carry out their own food delivery instructions. Collisions between the food delivery robots are thereby avoided through a specific linkage of nearby robots, so that food delivery proceeds reasonably and continuously. The specific steps comprise:
Step A1: control the downward irradiation angle of each food delivery robot's row of infrared transmitting devices according to the preset distance value, using formula (1):

θ = arctan(h / S0)   (1)

wherein θ represents the downward irradiation angle of each food delivery robot's row of infrared transmitting devices (the angle is the downward swing measured from the transmitters' horizontal irradiation direction); S0 represents the preset distance value; h represents the mounting height of each food delivery robot's row of infrared transmitting devices;
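Under the geometric reading that the downward tilt should make a transmitter mounted at height h land its beam at the preset distance S0, the angle reduces to θ = arctan(h / S0); a minimal sketch (function name assumed, not from the disclosure):

```python
import math

def ir_tilt_angle(h, s0):
    """Downward tilt in degrees, measured from the horizontal, so that a
    beam emitted at height h reaches the ground at distance s0.
    The arctan form is a reconstruction of formula (1), not a quotation."""
    return math.degrees(math.atan2(h, s0))
```

For a 0.5 m mounting height and a 0.5 m preset distance the tilt is 45°; a larger preset distance gives a shallower tilt, so the beams never point horizontally or upward toward diners.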
Step A2: if the distance between the target robot and another food delivery robot is smaller than the preset value, determine the priority robot among the target robot and its nearby robots according to the number and weight of meals on each, using formula (2):

G0 = n0 · M0,  G(i) = n(i) · M(i)

I0 = 0, if G0 ≥ max{ G(i) : 1 ≤ i ≤ D }; otherwise I0 = the value of i at which G(i) attains its maximum   (2)

wherein I0 represents the priority-robot selection value among the target robot and its nearby robots; n0 represents the number of meals on the target robot; M0 represents the meal weight on the target robot; n(i) represents the number of meals on the i-th nearby robot of the target robot; M(i) represents the meal weight on the i-th nearby robot; D represents the total number of nearby robots of the target robot; the maximum is obtained by substituting i from 1 to D; G(i) and G0 are intermediate quantities that simplify the calculation.

If I0 ≠ 0, the I0-th nearby robot of the target robot is the priority robot, and the target robot and the other nearby robots are controlled to avoid it; if I0 = 0, the target robot is the priority robot, and all nearby robots of the target robot are controlled to avoid it.
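The stated behavior of step A2 (the robot carrying the largest meal load goes first, with a zero selection value when that is the target robot itself) can be sketched as follows; the load score G = n · M is an assumed reconstruction, since the disclosure only states that meal count and weight decide priority:

```python
def priority_robot(n0, m0, nearby_loads):
    """Return 0 if the target robot is the priority robot, else the 1-based
    index i0 of the priority nearby robot. The score G = n * M (meal count
    times meal weight) is an assumption, not quoted from the disclosure.
    nearby_loads: list of (n_i, M_i) pairs for nearby robots 1..D."""
    g0 = n0 * m0
    scores = [n * m for n, m in nearby_loads]
    if not scores or g0 >= max(scores):
        return 0  # target robot carries the heaviest load: it has priority
    # 1-based index of the nearby robot with the largest load
    return max(range(len(scores)), key=scores.__getitem__) + 1
```

A nonzero return means the target robot and the remaining nearby robots yield to that nearby robot; zero means every nearby robot yields to the target robot.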
step A3: controlling each nearby robot or the target robot to move and avoid according to the number and the position of the infrared signals received by the large-area infrared receiving device of each nearby robot or the target robot except the priority robot by using a formula (3)
Figure BDA0003764621420000153
Wherein L (a) represents the shortest moving avoidance distance of the a-th food delivery robot needing to carry out avoidance; r represents the width value of any surface of each meal delivery robot; k (a) represents the number of infrared signals received by a large-area infrared receiving device of the a-th food delivery robot needing avoidance; k represents the total number of a row of infrared transmitting devices arranged on any side of each meal delivery robot;
and controlling the a-th food delivery robot needing to be avoided to move towards the direction of reducing the infrared signals received by the large-area infrared receiving device, and moving at least the distance value of L (a), thereby completing the avoidance.
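Step A3's avoidance move can be sketched under a proportional reading of formula (3): receiving K(a) of the priority robot's K beams, emitted from transmitters spread across a face of width r, indicates an overlap of roughly that fraction of r (this form is an assumed reconstruction, not quoted from the disclosure):

```python
def avoid_distance(r, k_recv, k_total):
    """Shortest sideways move for a yielding robot: the fraction of the
    priority robot's k_total beams it receives, scaled by the face width r.
    The proportional form L = r * K(a) / K is a reconstruction of (3)."""
    if k_total <= 0:
        raise ValueError("k_total must be positive")
    return r * k_recv / k_total
```

The yielding robot then moves at least this distance in whichever direction reduces the number of received infrared signals, which is the stopping condition the disclosure describes.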
The beneficial effects of the above technical scheme are as follows. Formula (1) in step A1 controls the downward irradiation angle of each food delivery robot's row of infrared transmitting devices according to the preset distance value, so that the transmitters meet the set requirement and never irradiate horizontally or upward, preventing the emitted infrared rays from disturbing diners. Formula (2) in step A2 then determines the priority robot among the target robot and its nearby robots according to the number and weight of meals on each, so that robots carrying more and heavier meals deliver first; this avoids the sudden stops and reverse avoidance that would shake or spill the meals, protecting their integrity and safety. Finally, formula (3) in step A3 controls each nearby robot or the target robot, other than the priority robot, to move and avoid according to the number and positions of the infrared signals received by its large-area infrared receiving devices, so that the food delivery robots avoid one another reasonably and in order, and every robot's task can be completed.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A visual recognition method for a meal delivery robot, comprising:
identifying the food delivery robot in the control area and determining a target robot;
positioning a target food delivery position in the control area, determining a target food delivery point, and transmitting a target food delivery point food delivery instruction to a controller of the target robot nearby;
based on a preset visual target tracking algorithm, starting a preset camera device by a target robot, locking a target food delivery point and a corresponding user, and determining a visual locking target;
and receiving a food delivery instruction corresponding to the target food delivery point, and linking the visual locking target with the target robot.
2. The visual recognition method for meal delivery robots according to claim 1, wherein the step of identifying meal delivery robots within a control area and determining target robots comprises the following steps:
acquiring an area range of a control area, and identifying the mark number of the food delivery robot in the area range;
retrieving the information of the corresponding food delivery robot through the mark number, wherein the information at least comprises machine model, machine code, and machine battery level;
and recording and identifying the successfully searched food delivery robot, and determining the target robot.
3. The visual recognition method for a food delivery robot of claim 1, wherein the positioning a target food delivery position within the control area, determining a target food delivery point, and transmitting the target food delivery point to a controller of a target robot nearby comprises:
when one target robot is successfully screened, transmitting the target food serving point to a controller of the target robot;
and when more than one target robot is successfully screened, transmitting the target food delivery point synchronously to the controllers of all the target robots, wherein the target robot that receives it first feeds back a reception-success signal to a preset control terminal, and the corresponding target food delivery point information is simultaneously withdrawn from the other target robots.
4. The visual identification method for the food delivery robot as claimed in claim 1, wherein the step of enabling the target robot to start a preset camera device based on the preset visual target tracking algorithm to lock the target food delivery point and the corresponding user and determine the visual locking target comprises:
starting a preset camera device by a target robot closest to the target food delivery point, detecting the target food delivery point and a corresponding user, and locking a target detection frame;
detecting the relative distance between the target food serving point and the nearest target robot;
determining a scale change in relative distance between the target serving point and the nearest target robot from the relative distance;
determining the scale change of a target detection frame of the target food serving point through the target detection frame;
and locking the target food serving point and the corresponding user through the relative distance, the scale change of the relative distance, the target detection frame mark and the scale change of the target detection frame, and determining the visual locking target.
5. The visual identification method for the food delivery robot as claimed in claim 4, wherein the determining the target detection frame dimension change of the target food delivery point by the target detection frame comprises:
extracting feature points in a target detection frame of the target food serving point, tracking the extracted feature points through a preset tracking algorithm, and determining a tracking detection image;
according to the tracked tracking detection image, calculating an affine transformation matrix of a target detection frame of the target food serving point;
and calculating the scale change of the target detection frame of the target food serving point according to the affine transformation matrix.
6. The visual identification method for the meal delivery robot as claimed in claim 4, wherein the receiving the meal delivery instruction corresponding to the target meal delivery point and linking the visual locking target with the target robot comprises:
the target robot closest to the target food delivery point receives the corresponding food delivery instruction sent by the corresponding target food delivery point;
determining the matching relation between each target robot and each target food serving point;
and associating each visual locking target with each target food delivery point according to the matching relation between each target robot and each target food delivery point.
7. The visual identification method for the meal delivery robot according to claim 4, wherein the tracking algorithm comprises at least one of frame sampling, frame mixing or optical flow method.
8. The visual recognition method for the food delivery robot as claimed in claim 1, wherein the receiving the food delivery command corresponding to the target food delivery point and linking the visual locking target with the target robot comprises:
the target robot receives the corresponding food delivery order sent by the corresponding target food delivery point and carries out delivery, confirming its position against the other food delivery robots at all times; if the distance between the target robot and another food delivery robot is smaller than a preset value, the food delivery robots within the preset distance are recorded as nearby robots, and a priority robot is determined among the nearby robots and the target robot according to the number and weight of meals on each; the priority robot is controlled to emit infrared rays in its direction of travel (large-area infrared receiving devices are arranged around each food delivery robot, and a row of infrared transmitting devices arranged above the receiving devices irradiates downward, as shown in the accompanying figure); the downward irradiation angle of the transmitting devices is calculated and controlled from the preset distance value; the other robots avoid the priority robot according to the received infrared signals until the mutual distance exceeds the preset value, after which the food delivery orders are completed, so that the food delivery robots avoid one another in a reasonable order and collisions between them are prevented; the specific steps include:
step A1: control the downward irradiation angle of each food delivery robot's row of infrared transmitting devices according to the preset distance value, using formula (1):

θ = arctan(h / S0)   (1)

wherein θ represents the downward irradiation angle of each food delivery robot's row of infrared transmitting devices (the angle is the downward swing measured from the transmitters' horizontal irradiation direction); S0 represents the preset distance value; h represents the mounting height of each food delivery robot's row of infrared transmitting devices;
step A2: if the distance between the target robot and another food delivery robot is smaller than the preset value, determine the priority robot among the target robot and its nearby robots according to the number and weight of meals on each, using formula (2):

G0 = n0 · M0,  G(i) = n(i) · M(i)

I0 = 0, if G0 ≥ max{ G(i) : 1 ≤ i ≤ D }; otherwise I0 = the value of i at which G(i) attains its maximum   (2)

wherein I0 represents the priority-robot selection value among the target robot and its nearby robots; n0 represents the number of meals on the target robot; M0 represents the meal weight on the target robot; n(i) represents the number of meals on the i-th nearby robot of the target robot; M(i) represents the meal weight on the i-th nearby robot; D represents the total number of nearby robots of the target robot; the maximum is obtained by substituting i from 1 to D; G(i) and G0 are intermediate quantities that simplify the calculation;

if I0 ≠ 0, the I0-th nearby robot of the target robot is the priority robot, and the target robot and the other nearby robots are controlled to avoid it;

if I0 = 0, the target robot is the priority robot, and all nearby robots of the target robot are controlled to avoid the target robot;
step A3: controlling each nearby robot or target robot to move and avoid according to the number and the position of the infrared signals received by the large-area infrared receiving device of each nearby robot or target robot except the priority robot by using a formula (3)
Figure RE-FDA0003868857450000051
Wherein L (a) represents the shortest moving avoiding distance of the a-th food delivery robot needing to avoid; r represents the width value of any surface of each meal delivery robot; k (a) represents the number of infrared signals received by a large-area infrared receiving device of the a-th food delivery robot needing avoidance; k represents the total number of a row of infrared transmitting devices arranged on any side of each meal delivery robot;
and controlling the a-th food delivery robot needing to be avoided to move towards the direction of reducing the infrared signals received by the large-area infrared receiving device, and moving at least the distance value of L (a), thereby completing the avoidance.
CN202210882218.9A 2022-07-26 2022-07-26 Visual recognition method for meal delivery robot Active CN115657654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210882218.9A CN115657654B (en) 2022-07-26 2022-07-26 Visual recognition method for meal delivery robot


Publications (2)

Publication Number Publication Date
CN115657654A true CN115657654A (en) 2023-01-31
CN115657654B CN115657654B (en) 2023-12-08

Family

ID=85023444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210882218.9A Active CN115657654B (en) 2022-07-26 2022-07-26 Visual recognition method for meal delivery robot

Country Status (1)

Country Link
CN (1) CN115657654B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103792948A (en) * 2013-09-16 2014-05-14 弗徕威数码科技(上海)有限公司 Intelligent service robot for hotel and ward
CN110389587A (en) * 2019-05-20 2019-10-29 长沙理工大学 A kind of robot path planning's new method of target point dynamic change
CN110710852A (en) * 2019-10-30 2020-01-21 广州铁路职业技术学院(广州铁路机械学校) Meal delivery method, system, medium and intelligent device based on meal delivery robot
CN113031629A (en) * 2021-05-27 2021-06-25 德阳恒博秸油科技有限公司 Intelligent conveying terminal for catering industry and working method thereof
CN214504217U (en) * 2021-01-21 2021-10-26 南京林业大学 Self-positioning navigation food delivery robot
WO2021227519A1 (en) * 2020-05-15 2021-11-18 深圳市优必选科技股份有限公司 Target tracking method and apparatus, and computer-readable storage medium and robot


Also Published As

Publication number Publication date
CN115657654B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN108469786B (en) Large-scale intelligent storage distributed sorting system
CN109863102B (en) Sorting auxiliary method, sorting system and platform machine tool
US10494180B2 (en) Systems and methods for distributed autonomous robot interfacing using live image feeds
US10083418B2 (en) Distributed autonomous robot systems and mehtods
CN107202571B (en) For executing the inspection system and method for inspection in storage facility
US9802317B1 (en) Methods and systems for remote perception assistance to facilitate robotic object manipulation
EP1043642B1 (en) Robot system having image processing function
CN110304386B (en) Robot and repositioning method after code losing of robot
WO2017201483A1 (en) Method for tracking placement of products on shelves in a store
Firby et al. An architecture for vision and action
CN100434932C (en) Collaborative work of multiple lidars, and dat processing method
CN108177162B (en) The interference region setting device of mobile robot
CA2928174C (en) Systems and methods for automated device pairing
CN112441055B (en) Train unhooking robot unhooking control method
CN108773433A (en) A kind of butt junction location calibration method and AGV trolleies based on AGV trolleies
WO2013145632A1 (en) Flow line data analysis device, system, program and method
CN108803603A (en) AGV trolley butt junction location methods based on coded image and AGV trolleies
CN112767540A (en) Automatic loading method for AGV
CN113110325A (en) Multi-arm sorting operation mobile delivery device, and optimized management system and method
US20230347511A1 (en) Distributed Autonomous Robot Interfacing Systems and Methods
CN114405866B (en) Visual guide steel plate sorting method, visual guide steel plate sorting device and system
CN114708209A (en) Production interaction method and system based on 3D modeling and visual inspection
CN106904442B (en) The fully automatic feeding machine people system and its feeding and transportation method of view-based access control model
CN115657654B (en) Visual recognition method for meal delivery robot
CN113927601B (en) Method and system for realizing precise picking of mechanical arm based on visual recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant