CN112506204A - Robot obstacle meeting processing method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112506204A
Authority
CN
China
Prior art keywords
robot
obstacle
real person
prompt
walking route
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011497739.XA
Other languages
Chinese (zh)
Other versions
CN112506204B (en)
Inventor
李泽华
张涛
陈永昌
申鑫瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202011497739.XA priority Critical patent/CN112506204B/en
Publication of CN112506204A publication Critical patent/CN112506204A/en
Priority to PCT/CN2021/129398 priority patent/WO2022127439A1/en
Application granted granted Critical
Publication of CN112506204B publication Critical patent/CN112506204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/242 Query formulation
    • G06F16/2433 Query languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/253 Grammatical analysis; Style critique
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the field of artificial intelligence, and provides a robot obstacle-encounter handling method, device, equipment and computer-readable storage medium, so that obstacles on a robot's walking route can be eliminated promptly and at low cost. The method comprises the following steps: when detecting that an obstacle exists on the current walking route of the robot, controlling the robot to stop moving ahead on the current walking route; detecting whether the obstacle on the current walking route of the robot is a real person; if the obstacle is a real person, starting a first prompt mode to prompt the real person to avoid the robot; and if the obstacle is not a real person, starting a second prompt mode to prompt a worker in the scene to assist in eliminating the non-real-person obstacle. The method and device can thus achieve the purpose of removing obstacles with nothing more than a simple prompt when an obstacle is encountered.

Description

Robot obstacle meeting processing method, device, equipment and computer readable storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method, device, equipment and computer-readable storage medium for handling a robot's encounters with obstacles.
Background
With the rapid development of Artificial Intelligence (AI), AI technology is gradually applied to fields closely related to human life, for example, a robot restaurant, an intelligent warehouse, etc. are typical applications of AI technology in the field of civilian life. In these application scenarios, the intelligent robot not only brings a high-tech feeling to the user, but also brings other experiences to the user, such as fun of dining, efficient transportation, and the like.
In settings such as robot restaurants and intelligent warehouses, where robots carry out the main work and foot traffic is heavy, a robot's walking route can be blocked by real people or other objects. The existing solution to this dilemma is to give the robot adaptive capability, that is, a function of automatically avoiding obstacles: when encountering an obstacle such as a real person, the robot changes its preset walking route so as to go around the obstacle.
However, the existing solution requires a great deal of advance training of the robot, which on the one hand means high cost; on the other hand, if the training result is not intelligent enough, the robot will fail to avoid obstacles and may collide with them and be damaged, raising the cost further.
Disclosure of Invention
The application provides a robot obstacle encountering processing method, a device, equipment and a computer readable storage medium, which can timely eliminate obstacles on a robot walking route at lower cost.
In one aspect, the application provides a robot obstacle handling method, including:
when detecting that an obstacle exists on a current walking route of the robot, controlling the robot to stop moving ahead on the current walking route;
detecting whether the obstacle on the current walking route of the robot is a real person or not;
if the obstacle on the current walking route is a real person, starting a preset first prompt mode to prompt the real person to avoid the robot;
and if the obstacles on the current walking route are not true people, starting a preset second prompt mode to prompt staff in the scene to assist in eliminating the obstacles of the non-true people.
In another aspect, the present application provides a robot obstacle handling device, including:
the stop module is used for controlling the robot to stop moving ahead on the current walking route when an obstacle is detected on the current walking route;
the detection module is used for detecting whether the obstacle on the current walking route of the robot is a real person or not;
the first prompting module is used for starting a first prompting mode to prompt the real person to avoid the robot if the obstacle on the current walking route is the real person;
and the second prompting module is used for starting a second prompting mode if the obstacle on the current walking route is not a real person so as to prompt a worker in a scene to assist in eliminating the obstacle of the non-real person.
In a third aspect, the present application provides an apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the technical solution of the robot obstacle handling method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the technical solution of the robot obstacle handling method as described above.
According to the technical scheme provided by the application, when the obstacle is confirmed to be a real person, the first prompt mode is started to prompt the real person to avoid the robot; when the obstacle is confirmed not to be a real person, the second prompt mode is started to prompt staff to assist in removing the obstacle. Whichever prompt mode is adopted, the robot does not need to be trained. On the one hand, compared with schemes that remove obstacles by extensively training the robot, the technical scheme of the application is low in cost; on the other hand, the purpose of eliminating obstacles can be achieved with a simple prompt, without the worry, present in the prior art, that the robot's training may fall short and the expected effect may never be obtained.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a robot obstacle handling method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of detecting whether an obstacle on the current walking route of the robot is a real person according to an embodiment of the present application;
fig. 3a is a schematic diagram of a robot provided by an embodiment of the present application avoiding obstacles on its current walking route by adopting a ">"-type walking manner;
fig. 3b is a schematic diagram of the robot provided by an embodiment of the present application avoiding obstacles on its current walking route by adopting a "<"-type walking manner;
fig. 4 is a schematic structural diagram of a robot obstacle handling device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this specification, adjectives such as first and second may only be used to distinguish one element or action from another, without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but rather to one or more of the element, component, or step, etc., where the context permits.
In the present specification, the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The application provides a robot obstacle-encounter handling method, which can be applied to a robot; the robot may be a robot working in a restaurant, such as a dish delivery robot, a medicine delivery robot working in a medical facility such as a hospital, a transfer robot working in a warehouse, and the like. The sites where these robots work may contain both adults (e.g., adult patients in a hospital) and minors (e.g., infants and children having meals in a restaurant). As shown in fig. 1, the robot obstacle-encounter handling method mainly includes steps S101 to S104, detailed as follows:
step S101: and when detecting that the obstacle exists on the current walking route of the robot, controlling the robot to stop moving forwards on the current walking route.
An obstacle is a person or object that blocks the travel of a mobile device, such as a robot, on its predetermined route. In the embodiment of the application, whether an obstacle exists on the current walking route of the robot can be detected optoelectronically, for example through a vision sensor, a laser radar, or an infrared thermal imaging device. Once an obstacle on the current walking route is detected, the central processing unit in the robot sends a stop-running instruction to the drive unit, so that the robot stops walking on the current walking route.
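The stop logic of step S101 can be sketched as a single control-cycle decision. The `Detection` type, the 1 m stop range, and the string commands are illustrative assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    present: bool       # an obstacle blocks the current walking route
    distance_m: float   # range to the nearest obstacle, in metres

def control_step(detection: Detection, stop_range_m: float = 1.0) -> str:
    # The central processing unit sends a stop-running instruction to the
    # drive unit as soon as an obstacle is detected within range.
    if detection.present and detection.distance_m <= stop_range_m:
        return "STOP"
    return "CONTINUE"
```

In a real robot this decision would run every sensor cycle, with the command forwarded to the drive unit.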
Step S102: and detecting whether the obstacle on the current walking route of the robot is a real person.
As described above, obstacles on the walking path of the robot are mainly classified into people or objects. In the embodiment of the application, different obstacle elimination schemes can be adopted for different obstacles, so that when the obstacle existing on the current walking route of the robot is detected, whether the obstacle on the current walking route of the robot is a real person needs to be detected.
For an infrared sensor, the thermal signal emitted by a human body differs from the thermal signals emitted by other objects. To detect whether the obstacle on the current walking route of the robot is a real person, an infrared sensor installed on the robot receives one or more thermal signals emitted by the obstacle; if the wavelength of the thermal signal is about 5-12 um, the obstacle on the current walking route of the robot is determined to be a real person.
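A minimal sketch of this infrared check, assuming the robot reports each thermal signal's wavelength in micrometres; the 5-12 um band is the one quoted above, and requiring every received signal to fall in the band is an assumption:

```python
HUMAN_IR_BAND_UM = (5.0, 12.0)   # band quoted in the text for human thermal radiation

def is_real_person_ir(wavelengths_um):
    """Return True when every received thermal signal falls inside the
    ~5-12 um band characteristic of a human body."""
    if not wavelengths_um:
        return False
    lo, hi = HUMAN_IR_BAND_UM
    return all(lo <= w <= hi for w in wavelengths_um)
```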
The above-mentioned embodiment is to detect whether the obstacle on the current walking route of the robot is a real person based on the thermal imaging principle, and in another embodiment of the present application, whether the obstacle on the current walking route of the robot is a real person may be detected based on the computer graphics principle, and specifically, may be implemented by steps S201 to S205 as illustrated in fig. 2, and the following description is given:
step S201: and determining whether the obstacle on the current walking route of the robot is a human-like body or not according to the image of the obstacle acquired by the image acquisition device.
From the viewpoint of computer graphics, real persons and objects similar to real persons, such as mannequins and the like, belong to the human-like body. Therefore, when it is determined that the obstacle on the current walking route of the robot is not a humanoid body, the obstacle cannot be a real person. The image of the obstacle can be acquired by an image acquisition device integrated on the robot, such as a camera, and then the image of the obstacle is matched with the human-like body image prestored in the database on geometric parameters such as shape, size and the like, so as to determine whether the obstacle on the current walking route of the robot is a human-like body.
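The geometric matching in step S201 might be sketched as a comparison of the obstacle's bounding-box shape and size against pre-stored human-like templates; the tolerance value and the (width, height) template format are assumptions:

```python
def is_humanoid(width_m, height_m, templates, tol=0.2):
    """Match the obstacle's bounding box against pre-stored human-like
    templates: shape via aspect ratio, size via height (the patent's
    'geometric parameters such as shape and size')."""
    for t_width, t_height in templates:
        shape_ok = abs(width_m / height_m - t_width / t_height) <= tol
        size_ok = abs(height_m - t_height) / t_height <= tol
        if shape_ok and size_ok:
            return True
    return False
```

A production system would match full silhouettes or learned features rather than bounding boxes, but the pass/fail structure would be the same.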
Step S202: if the obstacle on the current walking route of the robot is a human-like body, acquiring key feature point information in a human-like body face image aiming at the face of the human-like body, wherein the key feature point information in the human-like body face image comprises plane position information of the key feature point.
As previously mentioned, human-like bodies include both real persons and objects similar to real persons. When it is determined that the obstacle on the current walking route of the robot is not a human-like body, the obstacle cannot be a real person. When it is determined that the obstacle is a human-like body, whether it is a real person needs further confirmation, because a real person and an object similar to a real person still differ essentially in physiological and/or psychological attributes: a real person can understand semantics, and so will actively avoid the robot on hearing its prompt voice, while a human-like body such as a mannequin cannot understand semantics and therefore will not avoid the robot.
In order to further improve the acquisition accuracy of the human-like face image, in the embodiment of the application, when the human-like face image is acquired, at least two images are acquired from different positions at the same time by different image acquisition devices respectively, and then the at least two human-like face images are corrected, so that the at least two human-like face images are kept consistent in the horizontal direction. After at least two human face images with the same horizontal direction are obtained, key feature point detection is carried out on each human face image, and a plurality of key feature point information in each human face image is determined. In the embodiment of the present application, the key feature points include 65 to 106 key points including a nose tip point, a lower eyelid point for the left eye, a lower eyelid point for the right eye, a left mouth corner point, a right mouth corner point, and the like. For example, after determining the key feature points in the human-like face image, planar position information and a reliability of the key feature points in the human-like face image may be further determined, where the planar position information may be two-dimensional coordinates of the key feature points in the human-like face image, and the reliability is used to indicate an accuracy of positioning the key feature points in the human-like face image.
Step S203: and acquiring the three-dimensional position information of the key feature points according to the plane position information of the key feature points in the human-like face image.
As mentioned above, the confidence level indicates the accuracy of positioning the key feature points in the human-like face image, and therefore, the key feature points with confidence level greater than a certain threshold value can be selected according to the confidence level of each key feature point to determine the stereo position information of the key feature points, for example, the three-dimensional coordinates of the key feature points in the three-dimensional coordinate system. Specifically, the three-dimensional position information of each key feature point may be calculated based on the principle of similar triangles from the plane position information of the key feature point.
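The similar-triangles computation of step S203 is the standard rectified-stereo relation Z = f·B/d; a sketch, assuming pixel x-coordinates of the same key feature point in the two horizontally aligned images:

```python
def triangulate_depth(x_left_px, x_right_px, focal_px, baseline_m):
    """Similar triangles on a rectified stereo pair: Z = f * B / d,
    where d is the horizontal disparity in pixels, f the focal length
    in pixels, and B the camera baseline in metres."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("key point must have positive disparity")
    return focal_px * baseline_m / disparity
```

With Z known, the X and Y coordinates follow from the pinhole model, giving the three-dimensional position of each key feature point.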
Step S204: and obtaining a face fitting curved surface according to the three-dimensional position information of the key feature points.
After the three-dimensional position information of the key feature points is obtained, a curved surface fitting algorithm, such as a least square method, can be adopted to fit the three-dimensional position information of the key feature points as a fitting factor to form a curved surface, which is a face fitting curved surface.
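Step S204 names least squares as one usable fitting algorithm. A minimal sketch fits the simplest surface, a plane z = a·x + b·y + c, by solving the normal equations directly; the patent does not specify the actual surface model, so the plane here is an illustrative stand-in:

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D key feature points
    by solving the 3x3 normal equations with Cramer's rule."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    # normal equations: M @ (a, b, c) = v
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coeffs = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = v[r]
        coeffs.append(det3(Mc) / d)
    return tuple(coeffs)  # (a, b, c) with z ≈ a*x + b*y + c
```

A curved (e.g. quadric) model would add columns to the normal equations but keep the same least-squares structure.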
Step S205: and judging whether the obstacle on the current walking route of the robot is a real person or not according to the distance between the key characteristic point in the human-like face image and the face fitting curved surface.
Generally, the depth information of the key feature points in an image of a non-real face is not pronounced: the fitted face surface is typically smooth, and the distances from the key feature points to such a surface are small, approaching zero. By contrast, the depth information of the key feature points in an image of a real face is pronounced: the fitted face surface is generally not smooth, and the distances from the key feature points to it are large. Based on these imaging characteristics, in the embodiment of the application, whether the obstacle on the current walking route of the robot is a real person is judged from the distances between the key feature points in the human-like face image and the face fitting surface: the sum of the distances between the key feature points and the face fitting surface is calculated; if the sum is greater than a dynamic depth information threshold, the human-like body is determined to be a real person, and otherwise it is not. The dynamic depth information threshold is determined in real time from a preset prior depth distance threshold and is related to the confidence of each key feature point used in calculating the distance sum.
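The decision rule of step S205 can be sketched as follows. How the dynamic threshold depends on the confidences is not spelled out in the text, so scaling the prior threshold by the mean confidence is purely an assumption:

```python
def classify_by_depth(distances, confidences, prior_threshold):
    """A real face has pronounced depth: its key points sit well off the
    fitted surface, so a large distance sum indicates a real person.
    The dynamic threshold here scales the preset prior threshold by the
    mean key-point confidence (an assumed, illustrative dependence)."""
    mean_conf = sum(confidences) / len(confidences)
    dynamic_threshold = prior_threshold * mean_conf
    return sum(distances) > dynamic_threshold
```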
Step S103: and if the obstacle on the current walking route is a real person, starting a preset first prompt mode to prompt the real person to avoid the robot.
Specifically, the first prompt mode may be a voice prompt mode, an image prompt mode, or a voice and image combined prompt mode.
Even when the obstacle on the current walking route is a real person, differences exist among real persons: a young child, for example, may not understand semantic information, or not accurately enough, but can understand images (including videos and animations, which may be regarded as continuously played images) and other visual representations, whereas an adult of normal intelligence generally has no difficulty with semantic understanding. Different prompt modes can therefore be given for this distinction. Specifically, step S103 can be realized by steps S1031 to S1033, as follows:
step S1031: and determining whether the real person on the current walking route of the robot is a child.
In the embodiment of the application, an image of the real person on the current walking route of the robot can be acquired through an image acquisition device integrated on the robot, such as a camera, and the acquired image matched against child images preset in a database in terms of body and appearance characteristics, so as to determine whether the real person is a child. Alternatively, whether the real person is a child can be determined by collecting their voice: for example, when the robot plays a voice prompt, the real person may respond with something like "I won't give way", and from the voiceprint characteristics of this response the robot can determine whether the real person on its current walking route is a child.
Step S1032: and if the real person is a child, forming a terrain feature animation of the environment around the child, and prompting the child to avoid the robot by combining the terrain feature animation and the voice.
As described above, a child's understanding of semantic information is limited compared with an adult's, but a child can more easily understand information presented as images accompanied by spoken explanation. Therefore, in the embodiment of the application, after determining that the real person on the current walking route of the robot is a child, a terrain feature animation can be formed of the environment around the child, and the animation combined with voice to prompt the child to avoid the robot. Specifically, this can be done as follows: determining prompt information for the current walking route of the robot by sensing the data of the environment around the child; extracting features of the prompt information with a preset animation processing model to form a terrain feature animation; projecting the terrain feature animation onto a target projection surface; and narrating the projected animation in a child's voice so as to guide the child to avoid the robot.
Step S1033: and if the real person is not a child, prompting the real person who is not the child to avoid the robot through voice and the generated expression image.
Because an adult has a strong ability to understand semantics, and to make the request more humanized, as an embodiment of the present application, if the real person is not a child, they may be prompted to avoid the robot through voice and a generated expression image as follows: a generated expression image representing a troubled look is displayed on the display device in a deliberately exaggerated style, and, at fixed intervals, the troubled expression image and a prompt voice with actual semantics (for example, "excuse me, please let me pass") are played in a loop to request the real person to avoid the robot. When an adult sees the exaggerated troubled expression image played in a loop and hears the looping prompt voice, then, human nature being what it is, the adult will in general actively step aside and let the robot pass smoothly.
Step S104: and if the obstacle on the current walking route of the robot is not a real person, starting a second prompt mode to prompt a worker in the scene to assist in eliminating the obstacle of the non-real person.
Specifically, the second prompt mode may be a warning prompt mode.
Obviously, when the obstacle on the current walking route of the robot is not a real person, the aforementioned voice prompt mode and/or image prompt mode cannot be effective or has a poor effect (of course, when the voice prompt is played, the staff in the scene may also exclude the obstacle of the non-real person, but there is still a difference in effect with the active avoidance of the real person). Therefore, if the obstacle on the current walking route of the robot is not a real person, the warning prompt mode is started to prompt the staff in the scene to assist in eliminating the obstacle of the non-real person. For example, after the alert prompt mode is enabled, the robot issues an audible and visual alarm to prompt the worker in the scene that the robot encounters obstacles to non-real persons, requiring him/her to assist in eliminating those obstacles.
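The branching of steps S103/S104 (and sub-steps S1031-S1033) amounts to a small dispatch; the mode names used here are illustrative labels, not terms from the patent:

```python
def choose_prompt(is_real_person, is_child):
    # Step S104: non-person obstacles get an audible-and-visual alert
    # so that staff in the scene can assist in removing them.
    if not is_real_person:
        return "audio_visual_alert"
    # Step S1032: children get a terrain-feature animation plus child voice.
    if is_child:
        return "terrain_animation_with_voice"
    # Step S1033: other real persons get an expression image plus voice.
    return "expression_image_with_voice"
```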
Further, after step S104, a step S105 may be included: after the first prompt mode or the second prompt mode is started, if the real person still has not avoided the robot or the non-real-person obstacle has not been eliminated, the first prompt mode or the second prompt mode is upgraded to continue prompting.
In some scenarios it cannot be ruled out that, after the voice prompt mode and/or image prompt mode is enabled, or after the warning prompt mode is enabled, the real person still does not avoid the robot or the non-real-person obstacle remains. In such a scenario the prompt mode may be upgraded. For example, when the voice prompt mode and/or image prompt mode is enabled and the real person still has not avoided the robot after more than 3 voice and/or image prompts, the voice decibels may be raised and/or a more exaggerated presentation adopted, such as playing an angry expression image; or, when the non-real-person obstacle has not been eliminated after the warning prompt mode is activated, the intensity of the audible and visual alarm may be increased, for example by emitting a sharper alarm sound, a stronger light from the alarm lamp, or a higher flashing frequency of the alarm lamp.
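The upgrade step S105 can be sketched as raising the prompt's intensity once it has gone unheeded too many times; only the "more than 3 attempts" trigger comes from the text, while the +10 dB and doubled flash-rate steps are illustrative assumptions:

```python
def escalate_prompt(volume_db, flash_hz, unheeded_attempts, max_attempts=3):
    """Upgrade the prompt after it has gone unheeded more than
    max_attempts times: louder voice and faster alarm-lamp flashing."""
    if unheeded_attempts > max_attempts:
        return volume_db + 10, flash_hz * 2
    return volume_db, flash_hz
```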
In some scenarios it also cannot be ruled out that, even after the prompt mode is upgraded, the real person still does not avoid the robot or the non-real-person obstacle remains: for example, a young child on the current walking route still cannot understand the voice and/or image prompt; or, in certain parts of a hospital, an adult patient understands the prompt but cannot actively step aside for medical reasons; or, although the prompt mode has been upgraded, a worker cannot reach the scene in time to eliminate the non-real-person obstacle for various other reasons. In this scenario, a detour trajectory around the obstacle may be planned from the current walking route according to the size of the obstacle, and the robot controlled to travel along the adjusted route to avoid it, for example by adopting a "<"-type or ">"-type walking manner. As shown in fig. 3a, the robot may deviate to the right by a small angle, walk a distance past the right side of the obstacle, then turn left (forming a ">"-type route) and return to the preset walking route, as shown by the dotted line; alternatively, as shown in fig. 3b, the robot may deviate to the left by a small angle, walk a distance past the left side of the obstacle, then turn right (forming a "<"-type route) and return to the preset walking route, as shown by the dotted line. Because the robot only deviates from the preset route by a small angle in either walking manner, the algorithm is not complex and can be realized without complex training of the robot.
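The "<"/">"-type detour can be sketched as three waypoints relative to the preset route; the coordinate convention (route along the +y axis) and the clearance parameter are assumptions for illustration:

```python
def detour_waypoints(route_x, obstacle_y, clearance, side="right"):
    """Waypoints for the '>'-type (right) or '<'-type (left) detour:
    deviate by a small angle to one side, pass the obstacle, then turn
    back and rejoin the preset route.  The preset route is assumed to
    run along the +y axis at x = route_x; 'clearance' is how far the
    robot swings out, sized from the obstacle's width."""
    sign = 1.0 if side == "right" else -1.0
    return [
        (route_x + sign * clearance, obstacle_y),              # veer out beside the obstacle
        (route_x + sign * clearance, obstacle_y + clearance),  # walk past it
        (route_x, obstacle_y + 2.0 * clearance),               # rejoin the preset route
    ]
```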
In other embodiments of the present application, after the first prompt mode or the second prompt mode has been enabled, or after the first prompt mode or the second prompt mode has been upgraded, for a preset time, for example 2 s, 3 s, or 4 s, the detour trajectory of the current walking route around the obstacle may be adjusted according to the size of the obstacle, and the robot controlled to travel along the adjusted walking route to avoid the obstacle.
To save resources such as electric energy, in the embodiments of the present application the voice prompt mode and/or the image prompt mode may be turned off after the real person avoids the robot, or the warning prompt mode may be turned off after the non-real-person obstacle is removed.
As can be seen from the robot obstacle handling method illustrated in fig. 1, when the obstacle is determined to be a real person, the voice prompt mode and/or the image prompt mode is enabled to prompt the real person to avoid the robot, and when the obstacle is determined not to be a real person, the warning prompt mode is enabled to prompt a worker to assist in removing the obstacle. Because the robot does not need to be trained for the voice prompt mode, the image prompt mode, the warning prompt mode, or their upgraded forms, on the one hand, the technical solution of the present application is low in cost compared with schemes that remove obstacles by extensively training the robot; on the other hand, once the voice prompt mode, the image prompt mode, the warning prompt mode, or an upgraded form of these modes is enabled, the purpose of removing the obstacle can be achieved, without the concern, as in the prior art, that the robot is insufficiently trained and fails to achieve the expected effect.
Referring to fig. 4, the robot obstacle handling device provided in the embodiments of the present application may include a stop module 401, a detection module 402, a first prompt module 403, and a second prompt module 404, and may further include a third prompt module 405, detailed as follows:
the stop module 401 is configured to control the robot to stop moving ahead on the current walking route when an obstacle is detected on the current walking route of the robot;
a detection module 402, configured to detect whether an obstacle on a current walking route of the robot is a real person;
the first prompting module 403 is configured to, if an obstacle on a current walking route of the robot is a real person, start a first prompting mode to prompt the real person to avoid the robot;
a second prompt module 404, configured to enable a second prompt mode to prompt a worker in the scene to assist in removing the non-real-person obstacle if the obstacle on the current walking route of the robot is not a real person;
the third prompt module 405 is configured to upgrade the prompt mode to continue prompting if, after the first prompt module 403 enables the first prompt mode or the second prompt module 404 enables the second prompt mode, the real person still does not avoid the robot or the non-real-person obstacle has not been removed.
Optionally, the detection module 402 illustrated in fig. 4 includes a first determining unit, an acquisition unit, a three-dimensional position information acquisition unit, a fitting unit, and a judging unit, where:
the first determining unit is used for determining whether the obstacle on the current walking route of the robot is a humanoid body according to the image of the obstacle acquired by the image acquisition device;
the acquisition unit is configured to, if the obstacle on the current walking route of the robot is a human-like body, acquire key feature point information in a human-like face image captured of the face of the human-like body, where the key feature point information includes plane position information of the key feature points;
the three-dimensional position information acquisition unit is used for acquiring the three-dimensional position information of key feature points in the human-like face image according to the plane position information of the key feature points;
the fitting unit is used for obtaining a face fitting curved surface according to the three-dimensional position information of the key feature points;
and the judging unit is configured to judge whether the obstacle on the current walking route of the robot is a real person according to the distances between the key feature points in the human-like face image and the face fitting curved surface.
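The step the units above take from plane position information to three-dimensional position information is typically a pinhole back-projection that combines each key feature point's pixel coordinates with a depth reading. The patent does not specify a camera model, so the intrinsic parameters below (fx, fy, cx, cy) are placeholder assumptions used purely for illustration.

```python
def backproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project a key feature point's plane position (pixel u, v)
    plus its depth reading into a 3D position, using an assumed pinhole
    camera model; fx, fy, cx, cy are placeholder intrinsic values."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every key feature point in the human-like face image yields the 3D point set that the fitting unit would then turn into a face fitting curved surface.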
Optionally, the judging unit may include a calculating unit and a second determining unit, wherein:
the calculating unit is configured to calculate the sum of the distances between the key feature points in the human-like face image and the face fitting curved surface;
and the second determining unit is configured to determine that the human-like body is a real person if the sum of the distances between the key feature points in the human-like face image and the face fitting curved surface is greater than a dynamic depth information threshold, and that it is not a real person otherwise.
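The decision made by the fitting, calculating, and second determining units can be illustrated with a simplified sketch in which the "face fitting curved surface" is approximated by a least-squares plane z = a·x + b·y + c: a flat photograph of a face yields near-zero point-to-surface distances, while a real 3D face (e.g. with a protruding nose) does not. The plane approximation, the fixed threshold value, and all names here are assumptions for illustration, not the patent's actual surface model or dynamic threshold.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to the 3D key feature
    points, a simplified stand-in for the face fitting curved surface."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    # Normal equations of the least-squares problem, solved by Cramer's rule.
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coeffs = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = rhs[r]
        coeffs.append(det3(mi) / d)
    return coeffs  # [a, b, c]

def is_real_person(keypoints_3d, depth_threshold=0.05):
    """Sum each key feature point's deviation from the fitted surface
    and compare it with a depth-information threshold: a flat photo of
    a face stays near the plane, a real face does not."""
    a, b, c = fit_plane(keypoints_3d)
    total = sum(abs(z - (a * x + b * y + c)) for x, y, z in keypoints_3d)
    return total > depth_threshold
```

In this sketch, five coplanar key feature points (a photo held in front of the camera) give a distance sum of zero and are rejected, while the same points with one lifted out of the plane exceed the threshold and are accepted as a real person.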
Optionally, the first prompt module 403 illustrated in fig. 4 may include a third determination unit, an animation prompt unit, and an audiovisual prompt unit, wherein:
the third determining unit is used for determining whether a real person on the current walking route of the robot is a child;
the animation prompting unit is configured to, if the real person on the current walking route of the robot is a child, form a terrain feature animation of the environment around the child and prompt the child to avoid the robot by combining the terrain feature animation with voice;
and the audio and video prompting unit is used for prompting that the real person who is not a child avoids the robot through voice and the generated expression image if the real person on the current walking route of the robot is not a child.
Optionally, the animation prompting unit includes a fourth determining unit, a feature extracting unit, a projecting unit, and a guiding unit, where:
the fourth determining unit is used for determining prompt information of the current walking route of the robot by sensing the surrounding environment data of the child;
the feature extraction unit is used for extracting features of the prompt information of the current walking route of the robot by a preset animation processing model so as to form a terrain feature animation;
the projection unit is used for projecting the terrain feature animation onto a target projection surface;
and the guiding unit is configured to narrate the terrain feature animation projected onto the target projection surface in a child's voice, so as to guide the child to avoid the robot.
Optionally, the audio/video prompting unit may include a display unit and a playing unit, wherein:
a display unit, configured to display the generated expression image on a display device in an exaggerated manner;
and a playing unit, configured to cyclically play, at fixed intervals, troubled expression images and prompt voices with actual semantics to the real person who is not a child, so as to request that real person to avoid the robot.
Optionally, the apparatus illustrated in fig. 4 may further include a control module or a shutdown module, wherein:
the control module is configured to control the robot to avoid the obstacle in a "<"-type or ">"-type walking pattern if, after the prompt mode is upgraded, the real person still does not avoid the robot or the non-real-person obstacle has not been removed;
and the closing module is configured to turn off the voice prompt mode and/or the image prompt mode after the real person avoids the robot, or to turn off the warning prompt mode after the non-real-person obstacle is removed.
It can be seen from the above description of the technical solutions that when the obstacle is determined to be a real person, the voice prompt mode and/or the image prompt mode is enabled to prompt the real person to avoid the robot, and when the obstacle is determined not to be a real person, the warning prompt mode is enabled to prompt the worker to assist in removing the obstacle. Because the robot does not need to be trained for the voice prompt, the image prompt, the warning prompt, or their upgraded forms, on the one hand, the technical solution of the present application is low in cost compared with schemes that remove obstacles by extensively training the robot; on the other hand, once the voice prompt mode, the image prompt mode, the warning prompt mode, or an upgraded form of these modes is enabled, the purpose of removing the obstacle can be achieved, without the concern, as in the prior art, that the robot is insufficiently trained and fails to achieve the expected effect.
Fig. 5 is a schematic structural diagram of an apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus 5 of this embodiment mainly includes: a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50, such as a program of the robot obstacle handling method. The processor 50, when executing the computer program 52, implements the steps in the above robot obstacle handling method embodiment, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 50 executes the computer program 52 to implement the functions of the modules/units in the above device embodiments, such as the functions of the stop module 401, the detection module 402, the first prompt module 403, the second prompt module 404, and the third prompt module 405 shown in fig. 4.
Illustratively, the computer program 52 of the robot obstacle handling method mainly includes: when an obstacle is detected on the current walking route of the robot, stopping the robot from moving ahead on the current walking route; detecting whether the obstacle on the current walking route of the robot is a real person; if the obstacle is a real person, enabling the voice prompt mode and/or the image prompt mode to prompt the real person to avoid the robot; if the obstacle is not a real person, enabling the warning prompt mode to prompt a worker in the scene to assist in removing the non-real-person obstacle; and, after the voice prompt mode and/or the image prompt mode is enabled or the warning prompt mode is enabled, upgrading the prompt mode if the real person still does not avoid the robot or the non-real-person obstacle has not been removed. The computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 52 in the device 5.
For example, the computer program 52 may be divided into the stop module 401, the detection module 402, the first prompt module 403, the second prompt module 404, and the third prompt module 405 (modules in a virtual device), whose specific functions are as follows: the stop module 401 is configured to stop the robot from moving ahead on the current walking route when an obstacle is detected on the current walking route of the robot; the detection module 402 is configured to detect whether the obstacle on the current walking route of the robot is a real person; the first prompt module 403 is configured to, if the obstacle is a real person, enable a first prompt mode, such as a voice prompt mode and/or an image prompt mode, to prompt the real person to avoid the robot; the second prompt module 404 is configured to, if the obstacle is not a real person, enable a second prompt mode, such as a warning prompt mode, to prompt a worker in the scene to assist in removing the non-real-person obstacle; and the third prompt module 405 is configured to upgrade the prompt mode if, after the first prompt module 403 enables the voice prompt mode and/or the image prompt mode or the second prompt module 404 enables the warning prompt mode, the real person still does not avoid the robot or the non-real-person obstacle has not been removed.
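The division into the five modules above can be outlined as a single dispatch function. This is an illustrative sketch only: the class, the boolean sensing inputs, and the mode strings are assumed stand-ins for the device's actual interfaces.

```python
class Robot:
    """Minimal stand-in for the robot chassis interface (assumed)."""

    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True


def handle_obstacle(robot, has_obstacle, is_real, cleared_after_prompt):
    """Mirrors the five modules: stop (401), detect (402), first prompt
    (403), second prompt (404), and escalation (405). The boolean
    arguments stand in for the sensing results, for illustration."""
    if not has_obstacle:
        return "no_obstacle"
    robot.stop()                                 # stop module 401
    mode = "first" if is_real else "second"      # modules 403 / 404, via 402
    if not cleared_after_prompt:
        mode += "_upgraded"                      # third prompt module 405
    return mode
```

Calling the function with a real-person obstacle that heeds the prompt returns "first"; a non-real-person obstacle that is not removed in time returns "second_upgraded", matching the escalation path described above.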
The device 5 may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of a device 5 and does not constitute a limitation of device 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., a computing device may also include input-output devices, network access devices, buses, etc.
The processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 51 may be an internal storage unit of the device 5, such as a hard disk or a memory of the device 5. The memory 51 may also be an external storage device of the device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc., provided on the device 5. Further, the memory 51 may also include both internal storage units of the device 5 and external storage devices. The memory 51 is used for storing computer programs and other programs and data required by the device. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as required to different functional units and modules, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the apparatus/device embodiments described above are merely illustrative; the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation, for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a non-transitory computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments may also be implemented by instructing related hardware through a computer program. The computer program of the robot obstacle handling method may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments, namely: when an obstacle is detected on the current walking route of the robot, stopping the robot from moving ahead on the current walking route; detecting whether the obstacle on the current walking route of the robot is a real person; if the obstacle is a real person, enabling the voice prompt mode and/or the image prompt mode to prompt the real person to avoid the robot; if the obstacle is not a real person, enabling the warning prompt mode to prompt a worker in the scene to assist in removing the non-real-person obstacle; and, after the voice prompt mode and/or the image prompt mode is enabled or the warning prompt mode is enabled, upgrading the prompt mode if the real person still does not avoid the robot or the non-real-person obstacle has not been removed. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The non-transitory computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the non-transitory computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the non-transitory computer-readable medium does not include electrical carrier signals and telecommunications signals.

The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A robot obstacle handling method, characterized in that the method comprises:
when detecting that an obstacle exists on a current walking route of the robot, controlling the robot to stop moving ahead on the current walking route;
detecting whether the obstacle on the current walking route of the robot is a real person or not;
if the obstacle on the current walking route is a real person, starting a preset first prompt mode to prompt the real person to avoid the robot;
and if the obstacles on the current walking route are not true people, starting a preset second prompt mode to prompt staff in the scene to assist in eliminating the obstacles of the non-true people.
2. A robotic obstacle handling method as claimed in claim 1, the method further comprising:
after the first prompt mode or the second prompt mode is enabled, if the real person still does not avoid the robot or the non-real-person obstacle has not been removed, upgrading the first prompt mode or the second prompt mode to continue prompting.
3. The robot obstacle handling method according to claim 1, wherein the detecting whether the obstacle on the current walking route of the robot is a real person includes:
determining whether the obstacle is a human-like body according to the image of the obstacle acquired by the image acquisition device;
if the obstacle is a human-like body, acquiring key feature point information in a human-like body face image aiming at the face of the human-like body, wherein the key feature point information in the human-like body face image comprises plane position information of the key feature point;
acquiring the three-dimensional position information of the key feature points according to the plane position information of the key feature points;
obtaining a face fitting curved surface according to the three-dimensional position information of the key feature points;
and judging whether the obstacle is a real person or not according to the distance between the key characteristic point and the face fitting curved surface.
4. The robot obstacle handling method of claim 3, wherein the determining whether the obstacle is a real person according to the distance between the key feature point and the face-fitted curved surface comprises:
determining the sum of the distances between the key feature points and the face fitting curved surface;
and if the sum of the distances is greater than the dynamic depth information threshold value, determining that the human-like body is a real person, otherwise, determining that the human-like body is not a real person.
5. A robot obstacle handling method according to claim 1, wherein the enabling of the first prompt mode to prompt the real person to avoid the robot comprises:
determining whether the real person is a child;
if the real person is a child, forming a terrain feature animation of the environment around the child, and prompting the child to avoid the robot by combining the terrain feature animation and voice;
and if the real person is not a child, prompting the real person who is not the child to avoid the robot through voice and the generated expression image.
6. The robot obstacle handling method according to claim 5, wherein the forming of the terrain feature animation to the environment around the child, the prompting of the child to avoid the robot by the terrain feature animation in combination with voice, comprises:
determining prompt information of the current walking route of the robot by sensing the data of the environment around the child;
extracting the characteristics of the prompt information of the current walking route of the robot by a preset animation processing model to form the terrain characteristic animation;
projecting the terrain feature animation onto a target projection surface;
and explaining the terrain feature animation projected onto the target projection surface by adopting a preset child voice frequency so as to guide the child to avoid the robot.
7. A robotic obstacle handling method as claimed in any one of claims 1 to 6, the method further comprising:
if, after the first prompt mode or the second prompt mode is upgraded, the real person still does not avoid the robot or the non-real-person obstacle has not been removed, adjusting the detour trajectory of the current walking route around the obstacle according to the size of the obstacle, and controlling the robot to move along the adjusted walking route to avoid the obstacle; or
after the real person avoids the robot, turning off the first prompt mode, or after the non-real-person obstacle is removed, turning off the second prompt mode.
8. A robotic obstacle handling device, the device comprising:
the stop module, configured to control the robot to stop moving ahead on the current walking route when an obstacle is detected on the current walking route of the robot;
the detection module is used for detecting whether the obstacle on the current walking route of the robot is a real person or not;
the first prompting module is used for starting a first prompting mode to prompt the real person to avoid the robot if the obstacle on the current walking route is the real person;
and the second prompting module, configured to enable a second prompt mode if the obstacle on the current walking route is not a real person, so as to prompt a worker in the scene to assist in removing the non-real-person obstacle.
9. An apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011497739.XA 2020-12-17 2020-12-17 Robot obstacle meeting processing method, device, equipment and computer readable storage medium Active CN112506204B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011497739.XA CN112506204B (en) 2020-12-17 2020-12-17 Robot obstacle meeting processing method, device, equipment and computer readable storage medium
PCT/CN2021/129398 WO2022127439A1 (en) 2020-12-17 2021-11-08 Robot obstacle avoidance processing method and apparatus, device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011497739.XA CN112506204B (en) 2020-12-17 2020-12-17 Robot obstacle meeting processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112506204A true CN112506204A (en) 2021-03-16
CN112506204B CN112506204B (en) 2022-12-30

Family

ID=74922230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011497739.XA Active CN112506204B (en) 2020-12-17 2020-12-17 Robot obstacle meeting processing method, device, equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112506204B (en)
WO (1) WO2022127439A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641095A (en) * 2021-08-18 2021-11-12 苏州英特数智控制系统有限公司 Active safety backup control method and system based on laser radar
WO2022127439A1 (en) * 2020-12-17 2022-06-23 深圳市普渡科技有限公司 Robot obstacle avoidance processing method and apparatus, device, and computer readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446682A (en) * 2015-11-17 2016-03-30 厦门正景智能工程有限公司 Simulated interactive display system for converting drawing of child into animation by projection
CN105740779A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for human face in-vivo detection
WO2016145789A1 (en) * 2015-08-12 2016-09-22 中兴通讯股份有限公司 Prompting method, terminal and computer storage medium
CN106054881A (en) * 2016-06-12 2016-10-26 京信通信系统(广州)有限公司 Execution terminal obstacle avoidance method and execution terminal
CN107092252A (en) * 2017-04-11 2017-08-25 杭州光珀智能科技有限公司 A kind of robot automatic obstacle avoidance method and its device based on machine vision
CN107272724A (en) * 2017-08-04 2017-10-20 南京华捷艾米软件科技有限公司 A kind of body-sensing flight instruments and its control method
CN108124486A (en) * 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 Face living body detection method based on cloud, electronic device and program product
TW201825037A (en) * 2016-11-24 2018-07-16 南韓商Lg電子股份有限公司 Moving robot and control method thereof
CN109709945A (en) * 2017-10-26 2019-05-03 深圳市优必选科技有限公司 A kind of paths planning method based on obstacle classification, device and robot
CN111930127A (en) * 2020-09-02 2020-11-13 广州赛特智能科技有限公司 Robot obstacle identification and obstacle avoidance method
CN111966088A (en) * 2020-07-14 2020-11-20 合肥工业大学 Control system and control method for automatically-driven toy car for children

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013086234A (en) * 2011-10-20 2013-05-13 Panasonic Corp Destination direction notification system, destination direction notification method, and destination direction notification program
CN106272431A (en) * 2016-09-21 2017-01-04 旗瀚科技有限公司 A kind of intelligence man-machine interaction method
CN108344414A (en) * 2017-12-29 2018-07-31 中兴通讯股份有限公司 A kind of map structuring, air navigation aid and device, system
CN108958263A (en) * 2018-08-03 2018-12-07 江苏木盟智能科技有限公司 A kind of Obstacle Avoidance and robot
CN109291064A (en) * 2018-11-18 2019-02-01 赛拓信息技术有限公司 Dining room Intelligent meal delivery robot
CN109571468B (en) * 2018-11-27 2021-03-02 深圳市优必选科技有限公司 Security inspection robot and security inspection method
US20200201337A1 (en) * 2018-12-20 2020-06-25 Jason Yan Anti-drop-off system for robot
CN109571502A (en) * 2018-12-30 2019-04-05 深圳市普渡科技有限公司 Robot allocator
CN110033612B (en) * 2019-05-21 2021-05-04 上海木木聚枞机器人科技有限公司 Robot-based pedestrian reminding method and system and robot
CN110442126A (en) * 2019-07-15 2019-11-12 北京三快在线科技有限公司 Mobile robot and obstacle avoidance method thereof
CN112506204B (en) * 2020-12-17 2022-12-30 深圳市普渡科技有限公司 Robot obstacle meeting processing method, device, equipment and computer readable storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016145789A1 (en) * 2015-08-12 2016-09-22 中兴通讯股份有限公司 Prompting method, terminal and computer storage medium
CN105446682A (en) * 2015-11-17 2016-03-30 厦门正景智能工程有限公司 Simulated interactive display system for converting drawing of child into animation by projection
CN105740779A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for face liveness detection
CN106054881A (en) * 2016-06-12 2016-10-26 京信通信系统(广州)有限公司 Execution terminal obstacle avoidance method and execution terminal
TW201825037A (en) * 2016-11-24 2018-07-16 南韓商Lg電子股份有限公司 Moving robot and control method thereof
CN107092252A (en) * 2017-04-11 2017-08-25 杭州光珀智能科技有限公司 Machine-vision-based automatic obstacle avoidance method and device for a robot
CN107272724A (en) * 2017-08-04 2017-10-20 南京华捷艾米软件科技有限公司 Somatosensory flight device and control method thereof
CN109709945A (en) * 2017-10-26 2019-05-03 深圳市优必选科技有限公司 Path planning method and device based on obstacle classification, and robot
CN108124486A (en) * 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 Cloud-based face liveness detection method, electronic device and program product
CN111966088A (en) * 2020-07-14 2020-11-20 合肥工业大学 Control system and control method for automatically-driven toy car for children
CN111930127A (en) * 2020-09-02 2020-11-13 广州赛特智能科技有限公司 Robot obstacle identification and obstacle avoidance method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022127439A1 (en) * 2020-12-17 2022-06-23 深圳市普渡科技有限公司 Robot obstacle avoidance processing method and apparatus, device, and computer readable storage medium
CN113641095A (en) * 2021-08-18 2021-11-12 苏州英特数智控制系统有限公司 Active safety backup control method and system based on laser radar
CN113641095B (en) * 2021-08-18 2024-02-09 苏州英特数智控制系统有限公司 Active safety backup control method and system based on laser radar

Also Published As

Publication number Publication date
CN112506204B (en) 2022-12-30
WO2022127439A1 (en) 2022-06-23

Similar Documents

Publication Publication Date Title
CN108303972B (en) Interaction method and device of mobile robot
CN112506204B (en) Robot obstacle meeting processing method, device, equipment and computer readable storage medium
US10055892B2 (en) Active region determination for head mounted displays
US20190049955A1 (en) Driver state recognition apparatus, driver state recognition system, and driver state recognition method
US20170318407A1 (en) Systems and Methods for Generating Spatial Sound Information Relevant to Real-World Environments
US10024679B2 (en) Smart necklace with stereo vision and onboard processing
Guerrero-Higueras et al. Tracking people in a mobile robot from 2d lidar scans using full convolutional neural networks for security in cluttered environments
JP5803043B2 (en) Mobile robot system and method for operating a mobile robot
US20140279733A1 (en) Computer-based method and system for providing active and automatic personal assistance using a robotic device/platform
US20160078278A1 (en) Wearable eyeglasses for providing social and environmental awareness
US20180232571A1 (en) Intelligent assistant device communicating non-verbal cues
JP7383828B2 (en) Obstacle recognition method, device, autonomous mobile device and storage medium
JP2017529521A (en) Wearable earpieces that provide social and environmental awareness
GB2527207A (en) Mobile human interface robot
CN106660205A (en) System, method and computer program product for handling humanoid robot interaction with human
KR20200078311A (en) The control method of robot
WO2020031767A1 (en) Information processing device, information processing method, and program
CN113116224B (en) Robot and control method thereof
US11154991B2 (en) Interactive autonomous robot configured for programmatic interpretation of social cues
CN109106563A (en) Automated blind-guidance device based on a deep learning algorithm
CN115423865A (en) Obstacle detection method, obstacle detection device, mowing robot, and storage medium
US20220281117A1 (en) Remote physiological data sensing robot
US11780098B2 (en) Robot, robot control method, and recording medium
Wozniak et al. Depth sensor based detection of obstacles and notification for virtual reality systems
CN117148836A (en) Self-moving robot control method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant