WO2022127439A1 - Obstacle avoidance processing method and apparatus for robot, device, and computer-readable storage medium - Google Patents

Obstacle avoidance processing method and apparatus for robot, device, and computer-readable storage medium

Info

Publication number
WO2022127439A1
WO2022127439A1 (PCT/CN2021/129398, CN2021129398W)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
obstacle
real person
prompt
human
Prior art date
Application number
PCT/CN2021/129398
Other languages
English (en)
Chinese (zh)
Inventor
李泽华
张涛
陈永昌
申鑫瑞
Original Assignee
深圳市普渡科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司 filed Critical 深圳市普渡科技有限公司
Publication of WO2022127439A1 publication Critical patent/WO2022127439A1/fr


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/242Query formulation
    • G06F16/2433Query languages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/253Grammatical analysis; Style critique
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present application relates to the field of artificial intelligence, and in particular, to an obstacle avoidance processing method, apparatus, device and computer-readable storage medium for a robot.
  • with the development of AI (artificial intelligence) technology, robot restaurants and smart warehouses have become typical applications of AI technology in the field of people's daily life.
  • intelligent robots not only bring users a sense of high technology, but also bring users other experiences, such as dining pleasure, efficient handling, and so on.
  • the walking route of the robot may be blocked by real people or other objects.
  • the existing solution is to give the robot adaptive ability, that is, an automatic obstacle avoidance function, so that the robot changes its preset walking route by itself when encountering obstacles such as real people, so as to avoid the obstacles.
  • a method for handling an obstacle of a robot including:
  • if the obstacle on the current walking route is a real person, enable a preset first prompt mode to prompt the real person to avoid the robot;
  • a preset second prompt mode is enabled to prompt the staff in the scene to assist in removing the obstacle that is not a real person.
  • an obstacle handling device for a robot comprising:
  • a stop operation module, configured to control the robot to stop moving forward on the current walking route when an obstacle is detected on the current walking route;
  • a detection module, configured to detect whether the obstacle on the current walking route of the robot is a real person;
  • a first prompting module configured to enable a first prompting mode if the obstacle on the current walking route is a real person, so as to prompt the real person to avoid the robot;
  • the second prompting module is configured to enable a second prompting mode if the obstacle on the current walking route is not a real person, so as to prompt the staff in the scene to assist in removing the obstacle that is not a real person.
  • the present application provides a device, the device including a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the steps of the above-mentioned method for handling an obstacle of a robot.
  • the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the above-mentioned method for handling an obstacle of a robot are realized.
  • FIG. 1 is a flowchart of a method for handling an obstacle in a robot provided by an embodiment of the present application
  • FIG. 2 is a flowchart of detecting whether the obstacle on the current walking route of the robot is a real person (steps S201 to S205) provided by an embodiment of the present application;
  • FIG. 3a is a schematic diagram of a robot provided by an embodiment of the present application adopting a ">"-type walking mode to avoid an obstacle on the robot's current walking route;
  • FIG. 3b is a schematic diagram of a robot provided by an embodiment of the present application adopting a "<"-type walking mode to avoid an obstacle on the robot's current walking route;
  • FIG. 4 is a schematic structural diagram of a robot obstacle handling device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the present application proposes a method for handling obstacles of a robot, which can be applied to a robot.
  • the robot may be a robot operating in a restaurant, such as a food delivery robot; a medicine delivery robot operating in a medical place such as a hospital; a transfer robot operating in a warehouse or similar place; and so on.
  • the places where these robots work may contain both adults (e.g., adult patients in hospitals and similar places) and minors such as young children (e.g., toddlers and children eating in restaurants).
  • the method for handling an obstacle to a robot mainly includes steps S101 to S104, which are described in detail as follows:
  • Step S101 when it is detected that there is an obstacle on the current walking route of the robot, the robot is controlled to stop moving forward on the current walking route.
  • the so-called obstacle refers to a person or thing on the predetermined route of a mobile device, such as a robot, that hinders the mobile device from moving forward.
  • an optoelectronic method, for example a visual sensor, a laser radar, or an infrared thermal imaging device, can be used to detect whether there is an obstacle on the current walking route of the robot. Once an obstacle is detected on the current walking route of the robot, the central processing unit in the robot sends a stop operation instruction to the driving unit, so that the robot stops moving forward on the current walking route.
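The detect-then-stop control step above can be sketched as a single decision per control cycle. The sensor reading, command names, and the 0.5 m threshold below are illustrative assumptions, not taken from the patent text.

```python
# Hypothetical sketch of the stop-on-obstacle control step: a range reading
# from an optoelectronic sensor is compared against an assumed stopping
# distance, and the central processing unit issues the drive command.

OBSTACLE_RANGE_M = 0.5  # assumed distance at which the robot must stop

def control_step(range_m: float) -> str:
    """Decide the drive command for one control cycle from a range reading."""
    if range_m < OBSTACLE_RANGE_M:
        # obstacle detected on the current walking route: stop moving forward
        return "STOP"
    return "FORWARD"
```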
  • Step S102 Detect whether the obstacles on the current walking route of the robot are real people.
  • the obstacles on the walking route of the robot are mainly divided into people or objects.
  • different obstacle removal schemes can be adopted for different obstacles. Therefore, when an obstacle is detected on the current walking route of the robot, it is also necessary to detect whether the obstacle on the current walking route of the robot is a real person.
  • a real person, in this application, refers to a natural person.
  • the infrared sensor installed on the robot can be used: the infrared sensor receives one or more heat signals from the obstacle on the current walking route of the robot; if the wavelength of the heat signal is about 5-12 μm, it is determined that the obstacle on the current walking route of the robot is a real person.
  • the above embodiment is based on the principle of thermal imaging to detect whether the obstacle on the current walking route of the robot is a real person.
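The wavelength test above can be sketched as a simple band check: human skin at body temperature emits thermal radiation roughly in the 5-12 μm band. The function name and the all-signals-in-band decision rule are assumptions for illustration only.

```python
# Minimal sketch of the thermal-imaging check described above: every received
# heat signal must fall within the human emission band for the obstacle to be
# classified as a real person.

HUMAN_BAND_UM = (5.0, 12.0)  # approximate human thermal emission band (micrometres)

def is_real_person_by_heat(wavelengths_um):
    """Classify the obstacle from the wavelengths of its received heat signals."""
    if not wavelengths_um:
        return False  # no heat signal received: not a real person
    lo, hi = HUMAN_BAND_UM
    return all(lo <= w <= hi for w in wavelengths_um)
```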
  • whether the obstacle on the current walking route of the robot is a real person can also be detected based on the principle of computer graphics. Specifically, this can be implemented through steps S201 to S205 as shown in FIG. 2, described as follows:
  • Step S201 According to the image of the obstacle acquired by the image acquisition device, determine whether the obstacle on the current walking route of the robot is a humanoid.
  • the so-called "humanoid" refers to a person or object with human physical characteristics. From the perspective of computer graphics, real people and objects similar to real people, such as mannequins and humanoid robots, all belong to humanoids. Therefore, when it is determined that the obstacle on the current walking route of the robot is not a humanoid, the obstacle cannot be a real person.
  • the image of the obstacle can be obtained through an image acquisition device integrated on the robot, such as a camera; the image of the obstacle is then matched against pre-stored humanoid images in the database on geometric parameters such as shape and size, to determine whether the obstacle on the current walking route of the robot is a humanoid.
  • Step S202: if the obstacle on the current walking route of the robot is a humanoid, collect the key feature point information in the humanoid face image for the humanoid face, wherein the key feature point information in the humanoid face image includes the plane position information of the key feature points.
  • humanoids include real people and objects similar to real people. Therefore, when it is determined that the obstacle on the robot's current walking route is not a humanoid, the obstacle cannot be a real person; when it is determined that the obstacle on the robot's current walking route is a humanoid, it is necessary to further confirm whether the humanoid is a real person.
  • in fact, the properties of real people and those of objects similar to real people still differ essentially in the physiological and/or psychological sense. For example, real people can understand semantics, and can therefore actively avoid the robot when hearing the prompt voice it issues, whereas humanoids such as mannequins cannot understand the semantics and thus cannot avoid it.
  • different image acquisition devices may be used to capture at least two images of the humanoid face from different positions at the same time; the at least two humanoid face images are then corrected so that they are consistent in the horizontal direction. After acquiring at least two humanoid face images in the same horizontal direction, key feature point detection is performed on each humanoid face image respectively, and several pieces of key feature point information in each image are determined.
  • the key feature points include 65 to 106 key points including the tip of the nose, the lower eyelid point of the left eye, the lower eyelid point of the right eye, the corner of the left mouth, and the corner of the right mouth.
  • the plane position information and reliability of these key feature points in the humanoid face image can be further determined, wherein the plane position information may be the two-dimensional coordinates of the key feature points in the humanoid face image, and the reliability indicates the accuracy of the localisation of the key feature points in the humanoid face image.
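The per-keypoint record described above (a named landmark with 2-D plane coordinates and a reliability score) can be sketched as a small data structure. The field names and the 0.5 reliability threshold are hypothetical; the patent specifies only that points below some threshold are discarded.

```python
# Minimal data structure for one key feature point of the humanoid face image,
# plus the reliability filter applied before the 3-D reconstruction step.
from dataclasses import dataclass

@dataclass
class KeyPoint:
    name: str           # e.g. "nose_tip", "left_mouth_corner" (illustrative names)
    x: float            # plane position (pixels) in the face image
    y: float
    reliability: float  # confidence that the localisation is accurate

def reliable_points(points, threshold=0.5):
    """Keep only key feature points whose reliability exceeds the threshold."""
    return [p for p in points if p.reliability > threshold]
```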
  • Step S203 According to the plane position information of the key feature points in the human-like face image, obtain the three-dimensional position information of these key feature points.
  • the reliability indicates the accuracy of the localisation of the key feature points in the humanoid face image. Therefore, according to the reliability of each key feature point, the key feature points whose reliability is greater than a certain threshold can be selected, and their three-dimensional position information determined, for example, the three-dimensional coordinates of the key feature points in a three-dimensional coordinate system. Specifically, the three-dimensional position information of each key feature point can be calculated from its plane position information based on the similar-triangle principle.
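The similar-triangle principle mentioned above is the standard stereo-disparity relation: with two horizontally rectified images, depth is z = f·b/d, where f is the focal length, b the baseline between the two cameras, and d the disparity of the same key feature point in the two images. The parameter values below are illustrative only.

```python
# Sketch of recovering 3-D position for one key feature point from its plane
# positions in two rectified images, by the similar-triangle (disparity) rule.

def triangulate(x_left, x_right, y, focal_px, baseline_m):
    """Return (X, Y, Z) in metres for one key feature point."""
    disparity = x_left - x_right       # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_m / disparity   # depth from similar triangles
    x = x_left * z / focal_px               # back-project plane position
    y3 = y * z / focal_px
    return (x, y3, z)
```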
  • Step S204 obtaining a face fitting curved surface according to the three-dimensional position information of the key feature points.
  • a surface fitting algorithm, such as the least squares method, can use the three-dimensional position information of these key feature points as fitting factors to fit a surface; this surface is the face fitting surface.
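The patent names least squares but not the surface model, so as a hedged illustration a plane z = a·x + b·y + c is fitted below via the normal equations. A real implementation would likely use a higher-order surface, but the least-squares machinery is the same.

```python
# Least-squares plane fit as a minimal stand-in for the face-fitting-surface
# step: solve (A^T A) w = A^T z for w = (a, b, c) with rows A_i = (x_i, y_i, 1).

def fit_plane(points):
    """Fit z = a*x + b*y + c to 3-D key feature points by least squares."""
    S = [[0.0] * 3 for _ in range(3)]   # A^T A
    t = [0.0, 0.0, 0.0]                 # A^T z
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            t[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # Solve the 3x3 normal equations by Gaussian elimination with pivoting.
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[pivot] = S[pivot], S[col]
        t[col], t[pivot] = t[pivot], t[col]
        for r in range(col + 1, 3):
            f = S[r][col] / S[col][col]
            for j in range(col, 3):
                S[r][j] -= f * S[col][j]
            t[r] -= f * t[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                 # back substitution
        coeffs[r] = (t[r] - sum(S[r][j] * coeffs[j] for j in range(r + 1, 3))) / S[r][r]
    return coeffs  # (a, b, c)
```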
  • Step S205 According to the distance between the key feature points in the human-like face image and the face fitting surface, determine whether the obstacle on the current walking route of the robot is a real person.
  • for a humanoid that is not a real person, the depth information of the key feature points is not obvious, the fitted face fitting surface is generally a smooth face surface, and the distances between the key feature points and such a face fitting surface are small, approaching zero; for a real person, the depth information of the key feature points in the face image is obvious, the fitted face fitting surface is generally not a smooth face surface, and the distances between the key feature points and the face fitting surface are larger.
  • judging whether the obstacle on the current walking route of the robot is a real person may be done by calculating the sum of the distances between the key feature points in the humanoid face image and the face fitting surface: if the sum of the distances is greater than a dynamic depth information threshold, it is determined that the humanoid is a real person; if the sum of the distances is not greater than the dynamic depth information threshold, it is determined that the humanoid is not a real person.
  • the dynamic depth information threshold is determined in real time from a preset a priori depth distance threshold, and is related to the reliability of each key feature point used in calculating the sum of the distances between the key feature points and the face fitting surface.
  • Step S103 if the obstacle on the current walking route is a real person, the preset first prompt mode is activated to prompt the real person to avoid the robot.
  • the first prompting mode may be a prompting manner of voice prompting, image prompting or a combination of voice and image.
  • step S103 can be implemented through the following steps S1031 to S1033:
  • Step S1031 Determine whether the real person on the current walking route of the robot is a child.
  • the image of the real person on the current walking route of the robot can be collected through an image acquisition device integrated on the robot, such as a camera, and the collected image is then matched against child images preset in the database in terms of physical features, to determine whether the real person on the current walking route of the robot is a young child. It is also possible to determine this by collecting the voice information of the real person on the robot's current walking route: for example, when the robot plays a voice to the real person, the real person may playfully respond "I will not let go"; when the robot collects this voice information, it can determine whether the real person on its current walking route is a young child according to the voiceprint features.
  • Step S1032 if the real person is a young child, form a terrain feature animation of the surrounding environment of the young child, and prompt the young child to avoid the robot through the terrain feature animation combined with voice.
  • the environment around the young child can be formed into a terrain feature animation, and the young child can be prompted to avoid the robot through the terrain feature animation combined with voice.
  • forming a terrain feature animation of the child's surrounding environment and prompting the child through the animation combined with voice can be done as follows: by sensing the surrounding environment data of the child, determine the prompt information of the robot's current walking route; use a preset animation processing model to extract features from the prompt information of the robot's current walking route, to form a terrain feature animation; project the terrain feature animation onto a target projection surface; and use a child's voice to explain the terrain feature animation projected on the target projection surface, so as to guide the young child to avoid the robot.
  • Step S1033 If the real person is not a child, prompt the real person who is not a child to avoid the robot through voice and the generated facial expression image.
  • prompting the real person who is not a child to avoid the robot may be implemented by displaying a generated embarrassed expression image on a display device in a preset exaggerated manner, and cyclically playing, at a fixed interval, the embarrassed expression image and a prompt voice with actual semantics to the real person who is not a child, requesting that person to avoid the robot.
  • Step S104 If the obstacle on the current walking route of the robot is not a real person, the second prompt mode is enabled to prompt the staff in the scene to assist in removing the obstacle that is not a real person.
  • the second prompt mode may be a warning prompt mode.
  • for an obstacle that is not a real person, the aforementioned voice prompt mode and/or image prompt mode cannot take effect or works poorly (of course, when the voice prompt is played, the staff in the scene may also come to remove the non-human obstacle, but this is still less effective than a real person's active avoidance). Therefore, if the obstacle on the current walking route of the robot is not a real person, the warning prompt mode is enabled to prompt the staff in the scene to assist in removing the non-human obstacle. For example, after the warning prompt mode is enabled, the robot emits an audible and visual alarm to remind the staff in the scene that the robot has encountered a non-human obstacle and needs their assistance to remove it.
  • step S105 may also be included: after enabling the first prompt mode or the second prompt mode, if the real person still does not avoid the robot or the non-human obstacle is still not removed, the first prompt mode or the second prompt mode is upgraded and prompting continues.
  • for example, after enabling the voice prompt mode and/or image prompt mode and repeating the voice and/or image prompt to the real person more than 3 times, if the real person still does not avoid the robot, the voice decibel level can be increased and/or a more exaggerated presentation used, for example, playing an angry facial expression; or, after enabling the warning prompt mode, if the non-human obstacle has still not been removed, the intensity of the sound and light alarms can be increased, for example, by emitting a louder alarm sound, making the alarm light glow more intensely, or increasing the flashing frequency of the alarm light.
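The escalation policy above (more than 3 unheeded prompts triggers a louder, more exaggerated upgrade) can be sketched as a small decision function. The decibel values and expression names are illustrative assumptions.

```python
# Sketch of the prompt-upgrade rule: after more than 3 unheeded voice/image
# prompts, raise the volume and switch to a more exaggerated expression.

def escalate(prompt_count, base_db=60):
    """Return (volume_db, expression) for the next prompt to the real person."""
    if prompt_count <= 3:
        return base_db, "embarrassed"     # normal first prompt mode
    # upgraded prompt mode: louder voice, more exaggerated (angry) expression
    return base_db + 10, "angry"
```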
  • the real person may still not avoid the robot, or the non-human obstacle may still not be removed: for example, the young child on the robot's current walking route still cannot understand the voice and/or image prompts; or, in a specific place such as a hospital, adult patients on the robot's current walking route can understand the voice and/or image prompts but still cannot actively avoid the robot for medical reasons; or, despite the upgraded warning prompt mode, the staff cannot arrive in time to remove the non-human obstacle for various other reasons.
  • the detouring trajectory of the current walking route around the obstacle can be adjusted according to the size of the obstacle, and the robot can be controlled to travel on the adjusted walking route to avoid the obstacle.
  • the robot can be controlled to use the "<"-type or ">"-type walking mode to avoid the obstacle on its current walking route.
  • as shown in FIG. 3a, the robot can first deviate by a small angle to the right, walk a distance past the right side of the obstacle, and then turn left (forming a ">"-type route) to return to the preset walking route.
  • as shown in FIG. 3b, the robot can first deviate by a small angle to the left, walk a distance past the left side of the obstacle, and then turn right (forming a "<"-type route) to return to the preset walking route. Since the robot deviates from the predetermined route only by a small angle in either the "<"-type or the ">"-type walking mode, the algorithm is not complicated, and no complex training of the robot is needed.
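The "<" / ">" detour above can be sketched as three waypoints: veer off by a small angle, pass the obstacle on one side, then veer back onto the preset route. The waypoint geometry below (robot travelling along +y, lateral offset along x, equal leg lengths) is an illustrative simplification, not the patent's actual planner.

```python
# Sketch of a ">"-type (right) or "<"-type (left) detour as waypoints on a
# route that runs along the +y axis.
import math

def detour_waypoints(start, obstacle_len, angle_deg=15.0, side="right"):
    """Return the waypoints of a small-angle detour around an obstacle."""
    x0, y0 = start
    sign = 1.0 if side == "right" else -1.0
    a = math.radians(angle_deg)
    # leg 1: veer off at a small angle until level with the far side of the obstacle
    leg = obstacle_len / math.cos(a)
    p1 = (x0 + sign * leg * math.sin(a), y0 + obstacle_len)
    # leg 2: veer back by the same angle, rejoining the preset route
    p2 = (x0, p1[1] + obstacle_len)
    return [start, p1, p2]
```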
  • the detour may also be performed after the first prompt mode or the second prompt mode has been enabled for a preset time, or after the upgraded first prompt mode or second prompt mode has lasted for a preset time.
  • the detouring trajectory of the current walking route around the obstacle can be adjusted according to the size of the obstacle, and the robot is controlled to travel on the adjusted walking route to avoid the obstacle.
  • the voice prompt mode and/or the image prompt mode may be turned off after the real person avoids the robot, or the warning prompt mode may be turned off after the non-human obstacle is removed.
  • to sum up, when the obstacle is confirmed to be a real person, the voice prompt mode and/or image prompt mode is enabled to prompt the real person to avoid the robot, and when the obstacle is confirmed not to be a real person, the warning prompt mode is enabled to prompt the staff to assist in removing the obstacle. Because none of the voice prompts, image prompts, warning prompts, or their upgrades requires training the robot, on the one hand the cost of the technical solution of the present application is low; on the other hand, as long as the voice prompt mode, the image prompt mode, the warning prompt mode, or their upgraded modes are enabled, the purpose of removing obstacles can be achieved, without the concern, as in the prior art, that inadequate robot training fails to achieve the expected results.
  • FIG. 4 shows an apparatus for handling an obstacle of a robot provided by an embodiment of the present application, which may include a stop operation module 401, a detection module 402, a first prompt module 403, and a second prompt module 404, and may further include a third prompt module 405, described in detail as follows:
  • the stop operation module 401 is used to control the robot to stop moving forward on the current walking route when it is detected that there is an obstacle on the current walking route of the robot;
  • the detection module 402 is used to detect whether the obstacle on the current walking route of the robot is a real person
  • the first prompting module 403 is configured to enable the first prompting mode if the obstacle on the current walking route of the robot is a real person, so as to prompt the real person to avoid the robot;
  • the second prompting module 404 is configured to enable the second prompting mode if the obstacle on the current walking route of the robot is not a real person, so as to prompt the staff in the scene to assist in removing the obstacle that is not a real person;
  • the third prompting module 405 is used for, after the first prompting module 403 enables the first prompting mode, or after the second prompting module 404 enables the second prompting mode, continuing to prompt in an upgraded prompt mode if the real person still does not avoid the robot or the non-human obstacle is still not removed.
  • the detection module 402 in the example of FIG. 4 includes a first determination unit, a collection unit, a three-dimensional position information acquisition unit, a fitting unit and a judgment unit, wherein:
  • a first determining unit configured to determine whether the obstacle on the current walking route of the robot is a humanoid according to the image of the obstacle acquired by the image acquisition device;
  • the acquisition unit is used to collect, if the obstacle on the current walking route of the robot is a humanoid, the key feature point information in the humanoid face image for the humanoid face, wherein the key feature point information in the humanoid face image includes the plane position information of the key feature points;
  • a stereoscopic position information acquisition unit configured to obtain the stereoscopic position information of the key feature points according to the plane position information of the key feature points in the human-like face image
  • the fitting unit is used to obtain the face fitting surface according to the three-dimensional position information of the key feature points;
  • the judgment unit is used for judging whether the obstacle on the current walking route of the robot is a real person according to the distance between the key feature points in the human-like face image and the face fitting surface.
  • the above judgment unit may include a calculation unit and a second determination unit, wherein:
  • a calculation unit used to calculate the sum of the distances between the key feature points in the human-like face image and the face fitting surface
  • the second determining unit is configured to determine that the humanoid is a real person if the sum of the distances between the key feature points in the humanoid face image and the face fitting surface is greater than the dynamic depth information threshold, and to determine that the humanoid is not a real person if the sum of the distances is not greater than the dynamic depth information threshold.
  • the first prompting module 403 in the example of FIG. 4 may include a third determining unit, an animation prompting unit and an audio and video prompting unit, wherein:
  • the third determination unit is used to determine whether the real person on the current walking route of the robot is a young child
  • the animation prompting unit is used to form a terrain feature animation of the surrounding environment of the child if the real person on the robot's current walking route is a child, and remind the child to avoid the robot through the terrain feature animation combined with voice;
  • the audio and video prompting unit is used to prompt the real person who is not a young child to avoid the robot through voice and generated facial expressions if the real person on the current walking route of the robot is not a child.
  • the above animation prompting unit includes a fourth determining unit, a feature extraction unit, a projection unit and a guiding unit, wherein:
  • the fourth determining unit is used to determine the prompt information of the current walking route of the robot by sensing the surrounding environment data of the child;
  • the feature extraction unit is used to extract features from the prompt information of the current walking route of the robot by the preset animation processing model, so as to form a terrain feature animation;
  • the projection unit is used to project the terrain feature animation onto the target projection surface
  • the guidance unit is used to explain, in a child's voice, the terrain feature animation projected onto the target projection surface, so as to guide the young child to avoid the robot.
  • the above-mentioned audio and video prompting unit may include a display unit and a playback unit, wherein:
  • a display unit used to display the generated embarrassed expression image in an exaggerated manner on the display device
  • the playback unit is used to play the embarrassed expression images and the prompt voice with actual semantics to the real person who is not a child in a cycle at a fixed interval, so as to request the real person who is not a child to avoid the robot.
  • the apparatus shown in FIG. 4 may also include a control module or a shutdown module, wherein:
  • the control module is used to control the robot to avoid the obstacle by walking in the "<"-type or ">"-type mode if, after the prompt mode is upgraded, the real person still does not avoid the robot or the non-human obstacle is still not removed;
  • the closing module is used to turn off the voice prompt mode and/or image prompt mode after the real person avoids the robot, or close the warning prompt mode after the non-human obstacle is removed.
  • the voice prompt mode and/or image prompt mode are enabled to prompt the real person to avoid the robot. Since the robot does not need to be trained for the voice prompt, the image prompt, the warning prompt or the upgrading of these modes, then compared with solutions that perform a large amount of training on the robot to remove obstacles, the technical solution of the present application is, on the one hand, low in cost; on the other hand, as long as the voice prompt mode, the image prompt mode, the warning prompt mode or the upgraded mode is enabled, the purpose of removing obstacles can be achieved, without the concern that, as in the prior art, a poorly trained robot fails to achieve the expected effect.
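The tiered behaviour described by the units above (first prompt, prompt upgrade, zigzag fallback and shutdown) can be sketched as a small decision routine. This is only an illustrative sketch: the function and action names are hypothetical and are not taken from the patent.

```python
def choose_action(is_person, is_child, avoided_after_prompt):
    """Return the ordered list of actions the apparatus would take.

    A minimal sketch of the prompting flow; action names are
    illustrative, not terms from the patent.
    """
    actions = ["stop"]                            # stop operation module
    if is_person:
        if is_child:
            # animation prompting unit: terrain-feature animation plus voice
            actions += ["terrain_animation", "child_voice"]
        else:
            # audio/video prompting unit: expression image plus voice
            actions += ["expression_image", "adult_voice"]
        if avoided_after_prompt:
            actions.append("close_prompt")        # closing module
        else:
            actions.append("upgrade_prompt")      # escalate the prompt mode
            actions.append("zigzag_avoid")        # control module fallback
    else:
        actions.append("warning_prompt")          # second prompt mode
    return actions

print(choose_action(True, False, False))
# ['stop', 'expression_image', 'adult_voice', 'upgrade_prompt', 'zigzag_avoid']
```

For a child who then avoids the robot, the same routine would end with `close_prompt` instead of the zigzag detour.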
  • FIG. 5 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • the device 5 of this embodiment mainly includes: a processor 50 , a memory 51 , and a computer program 52 stored in the memory 51 and executable on the processor 50 , such as a program of a method for handling a robot encountering an obstacle.
  • when the processor 50 executes the computer program 52 , it implements the steps in the above embodiment of the method for handling a robot encountering an obstacle, for example, steps S101 to S105 shown in FIG. 1 .
  • alternatively, when the processor 50 executes the computer program 52 , the functions of the modules/units in the above-mentioned apparatus embodiments are implemented, for example, the functions of the stop operation module 401, the detection module 402, the first prompt module 403, the second prompt module 404 and the third prompt module 405 .
  • the computer program 52 of the method for handling a robot encountering an obstacle mainly includes: when it is detected that there is an obstacle on the robot's current walking route, stopping the robot from moving forward on the current walking route; and detecting whether the obstacle on the robot's current walking route is a real person.
  • the computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present application.
  • One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 52 in the device 5 .
  • the computer program 52 can be divided into the functions of the stop operation module 401, the detection module 402, the first prompt module 403, the second prompt module 404 and the third prompt module 405 (modules in a virtual device), and the specific functions of each module are as follows: the stop operation module 401 is used to stop the robot from moving forward on the current walking route when it is detected that there is an obstacle on the robot's current walking route; the detection module 402 is used to detect whether the obstacle on the robot's current walking route is a real person; the first prompting module 403 is used to enable a first prompting mode, such as a voice prompting mode and/or an image prompting mode, if the obstacle on the robot's current walking route is a real person, to prompt the real person to avoid the robot; the second prompting module 404 is used to enable a second prompting mode if the obstacle on the robot's current walking route is not a real person, to prompt a worker in the scene to help remove the non-human obstacle.
  • Device 5 may include, but is not limited to, processor 50 , memory 51 .
  • FIG. 5 is only an example of the device 5 and does not constitute a limitation on the device 5 . It may include more or fewer components than shown, or combine certain components, or have different components; for example, the device may also include input and output devices, network access devices, buses, and the like.
  • the so-called processor 50 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the device 5 , such as a hard disk or a memory of the device 5 .
  • the memory 51 can also be an external storage device of the device 5 , such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card) or the like equipped on the device 5 .
  • the memory 51 may also include both an internal storage unit of the device 5 and an external storage device.
  • the memory 51 is used to store computer programs and other programs and data required by the device.
  • the memory 51 can also be used to temporarily store data that has been output or is to be output.
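The division of the computer program 52 into functional module units, each executed by the processor 50, can be illustrated with a simple dispatch sketch. All names here are hypothetical; this is a minimal illustration of the described architecture, not the patent's actual implementation.

```python
# Hypothetical sketch: the computer program divided into named
# module units, each a callable executed in turn by the processor.
def stop_operation(state):
    # stop the robot on its current walking route
    state["moving"] = False
    return state

def detect(state):
    # decide whether the obstacle is a real person
    state["is_person"] = state.get("obstacle") == "person"
    return state

def first_prompt(state):
    # first prompt mode: voice and/or image, for a real person
    if state.get("is_person"):
        state["prompt"] = "voice_and_image"
    return state

def second_prompt(state):
    # second prompt mode: warning, for a non-human obstacle
    if not state.get("is_person"):
        state["prompt"] = "warning"
    return state

MODULES = [stop_operation, detect, first_prompt, second_prompt]

def run_program(state):
    for module in MODULES:   # the processor executes each instruction segment
        state = module(state)
    return state

print(run_program({"obstacle": "box", "moving": True}))
```

A non-human obstacle ends with the warning prompt enabled; an obstacle detected as a person ends with the voice-and-image prompt instead.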

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

Disclosed are a method and apparatus for handling an obstacle encountered by a robot, a device, and a computer-readable storage medium. The method comprises the following steps: when it is detected that there is an obstacle on a robot's current walking route, controlling the robot to stop walking on the current walking route (step S101); detecting whether the obstacle on the robot's current walking route is a person (step S102); if the obstacle on the robot's current walking route is a person, enabling a first prompt mode to prompt the person to avoid the robot (step S103); and if the obstacle on the robot's current walking route is not a person, enabling a second prompt mode to prompt a worker in the scene to help remove the obstacle that is not a person (step S104).
PCT/CN2021/129398 2020-12-17 2021-11-08 Procédé et appareil de traitement d'évitement d'obstacle pour robot, dispositif, et support de stockage lisible par ordinateur WO2022127439A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011497739.XA CN112506204B (zh) 2020-12-17 2020-12-17 机器人遇障处理方法、装置、设备和计算机可读存储介质
CN202011497739.X 2020-12-17

Publications (1)

Publication Number Publication Date
WO2022127439A1 true WO2022127439A1 (fr) 2022-06-23

Family

ID=74922230

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129398 WO2022127439A1 (fr) 2020-12-17 2021-11-08 Procédé et appareil de traitement d'évitement d'obstacle pour robot, dispositif, et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN112506204B (fr)
WO (1) WO2022127439A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506204B (zh) * 2020-12-17 2022-12-30 深圳市普渡科技有限公司 机器人遇障处理方法、装置、设备和计算机可读存储介质
CN113641095B (zh) * 2021-08-18 2024-02-09 苏州英特数智控制系统有限公司 基于激光雷达的主动安全靠机控制方法及系统

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013086234A (ja) * 2011-10-20 2013-05-13 Panasonic Corp 目的方向通知システム、目的方向通知方法、及び、目的方向通知プログラム
CN106272431A (zh) * 2016-09-21 2017-01-04 旗瀚科技有限公司 一种智能人机互动方法
CN107092252A (zh) * 2017-04-11 2017-08-25 杭州光珀智能科技有限公司 一种基于机器视觉的机器人主动避障方法及其装置
CN108344414A (zh) * 2017-12-29 2018-07-31 中兴通讯股份有限公司 一种地图构建、导航方法及装置、系统
CN108958263A (zh) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 一种机器人避障方法及机器人
CN109291064A (zh) * 2018-11-18 2019-02-01 赛拓信息技术有限公司 餐厅智能送餐机器人
CN109571502A (zh) * 2018-12-30 2019-04-05 深圳市普渡科技有限公司 机器人配送方法
CN109571468A (zh) * 2018-11-27 2019-04-05 深圳市优必选科技有限公司 安防巡检机器人及安防巡检方法
CN110033612A (zh) * 2019-05-21 2019-07-19 上海木木聚枞机器人科技有限公司 一种基于机器人的行人提醒方法、系统及机器人
CN110442126A (zh) * 2019-07-15 2019-11-12 北京三快在线科技有限公司 一种移动机器人及其避障方法
US20200201337A1 (en) * 2018-12-20 2020-06-25 Jason Yan Anti-drop-off system for robot
CN112506204A (zh) * 2020-12-17 2021-03-16 深圳市普渡科技有限公司 机器人遇障处理方法、装置、设备和计算机可读存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106469288A (zh) * 2015-08-12 2017-03-01 中兴通讯股份有限公司 一种提示方法及终端
CN105446682A (zh) * 2015-11-17 2016-03-30 厦门正景智能工程有限公司 一种通过投影将儿童涂画转换为动画仿真互动展示系统
CN105740779B (zh) * 2016-01-25 2020-11-13 北京眼神智能科技有限公司 人脸活体检测的方法和装置
CN106054881A (zh) * 2016-06-12 2016-10-26 京信通信系统(广州)有限公司 一种执行终端的避障方法及执行终端
KR102662949B1 (ko) * 2016-11-24 2024-05-02 엘지전자 주식회사 인공지능 이동 로봇 및 그 제어방법
CN107272724A (zh) * 2017-08-04 2017-10-20 南京华捷艾米软件科技有限公司 一种体感飞行装置及其控制方法
CN109709945B (zh) * 2017-10-26 2022-04-15 深圳市优必选科技有限公司 一种基于障碍物分类的路径规划方法、装置及机器人
WO2019127262A1 (fr) * 2017-12-28 2019-07-04 深圳前海达闼云端智能科技有限公司 Procédé de détection in vivo de visage humain basé sur une extrémité en nuage, dispositif électronique et produit de programme
CN111966088B (zh) * 2020-07-14 2022-04-05 合肥工业大学 一种自动驾驶儿童玩具车控制系统及控制方法
CN111930127B (zh) * 2020-09-02 2021-05-18 广州赛特智能科技有限公司 一种机器人障碍物识别及避障方法

Also Published As

Publication number Publication date
CN112506204A (zh) 2021-03-16
CN112506204B (zh) 2022-12-30

Similar Documents

Publication Publication Date Title
US11737635B2 (en) Moving robot and control method thereof
WO2022127439A1 (fr) Procédé et appareil de traitement d'évitement d'obstacle pour robot, dispositif, et support de stockage lisible par ordinateur
Guerrero-Higueras et al. Tracking people in a mobile robot from 2d lidar scans using full convolutional neural networks for security in cluttered environments
JP5803043B2 (ja) 移動式ロボットシステム及び移動式ロボットを作動させる方法
EP2778995A2 (fr) Procédé et système informatiques permettant de fournir une assistance personnelle automatique et active au moyen d'un dispositif robotique/d'une plate-forme
CN107174418A (zh) 一种智能轮椅及其控制方法
US20140313308A1 (en) Apparatus and method for tracking gaze based on camera array
US20210294414A1 (en) Information processing apparatus, information processing method, and program
CN107357292A (zh) 一种儿童室内智能看护系统及其看护方法
WO2021143543A1 (fr) Robot et son procédé de commande
GB2527207A (en) Mobile human interface robot
US11540690B2 (en) Artificial intelligence robot cleaner
Deng et al. Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation
US10814487B2 (en) Communicative self-guiding automation
WO2023000679A1 (fr) Procédé et appareil de commande de recharge de robot, ainsi que support d'enregistrement
Mayachita et al. Implementation of Entertaining Robot on ROS Framework
Jubril et al. A multisensor electronic traveling aid for the visually impaired
Ma et al. An Intelligent Caregiver Elderly Robot Based on Raspberry Pi
TWM517882U (zh) 智能式行動看護裝置
Viswanathan Navigation and obstacle avoidance help (NOAH) for elderly wheelchair users with cognitive impairment in long-term care
Zhang et al. Recognition for Robot First Aid: Recognizing a Person's Health State after a Fall in a Smart Environment with a Robot
Kularatne et al. Elderly care home robot using emotion recognition, voice recognition and medicine scheduling
Kangutkar Obstacle avoidance and path planning for smart indoor agents
US20240036585A1 (en) Robot device operating in mode corresponding to position of robot device and control method thereof
Langer et al. Where Does It Belong? Autonomous Object Mapping in Open-World Settings

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905368

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905368

Country of ref document: EP

Kind code of ref document: A1