CN114721396A - Mobile robot escaping processing method and device and mobile robot - Google Patents

Mobile robot escaping processing method and device and mobile robot

Info

Publication number
CN114721396A
Authority
CN
China
Prior art keywords
information
mobile robot
obstacle
dynamic
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210397756.9A
Other languages
Chinese (zh)
Inventor
Li Yuanyuan (李源源)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Network Co Ltd
Original Assignee
Hangzhou Ezviz Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Network Co Ltd filed Critical Hangzhou Ezviz Network Co Ltd
Priority to CN202210397756.9A priority Critical patent/CN114721396A/en
Publication of CN114721396A publication Critical patent/CN114721396A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a processing method for enabling a mobile robot to escape from a trapped state, which comprises the following steps: detecting, at the mobile robot side, whether a dynamic obstacle with a variable state exists in the current environment, and detecting whether the mobile robot is in a trapped state; if the dynamic obstacle is detected and the mobile robot is in the trapped state, virtualizing the position area where the dynamic obstacle is detected as an obstacle-free area, adjusting the planned path by using the virtual obstacle-free area, and traveling based on the adjusted planned path. The application achieves rapid escape in a dynamic environment.

Description

Mobile robot escaping processing method and device and mobile robot
Technical Field
The invention relates to the field of intelligent robots, and in particular to a processing method and device for enabling a mobile robot to escape from a trapped state, and to a mobile robot.
Background
With the increasing intelligence of robots, more and more mobile robots can adjust a planned path when an obstacle is detected, so as to avoid being unable to travel because of the obstacle.
When a large number of dense obstacles are present, it inevitably happens that the mobile robot cannot reach the target position point, so that the robot becomes trapped and cannot escape; or it can escape only after multiple collisions, so the escape success rate is low and hardware damage is easily caused.
Disclosure of Invention
The invention provides a processing method for enabling a mobile robot to escape from a trapped state, aiming to increase the speed at which the mobile robot escapes.
The invention provides a processing method for enabling a mobile robot to escape from a trapped state, which comprises the following steps:
at the mobile robot side,
detecting whether a dynamic obstacle with a variable state exists in the current environment, and detecting whether the mobile robot is in a trapped state,
if the dynamic obstacle is detected and the mobile robot is in the trapped state, virtualizing the position area where the dynamic obstacle is detected as an obstacle-free area, and adjusting the planned path by using the virtual obstacle-free area,
and traveling based on the adjusted planned path.
Preferably, the method further comprises:
acquiring the range of the trapped area of the mobile robot,
escaping in a random-motion mode when the range of the trapped area of the mobile robot is smaller than a set area threshold,
and executing the step of virtualizing the position area where the dynamic obstacle is detected as an obstacle-free area and adjusting the planned path by using the virtual obstacle-free area when the range of the trapped area of the mobile robot is not smaller than the set area threshold.
Preferably, the detecting whether a dynamic obstacle with a variable state exists in the current environment includes:
acquiring navigation map information of the current environment, wherein the navigation map information comprises semantic information,
and identifying the dynamic obstacle according to semantic information carrying obstacle category information in the navigation map information;
the virtualizing the position area where the detected dynamic obstacle is located as an obstacle-free area and adjusting the planned path by using the virtual obstacle-free area includes:
based on the navigation map information, adjusting the planned path according to the area determined when the detected dynamic obstacle is regarded as absent.
Preferably, the acquiring the range of the trapped area of the mobile robot includes:
based on the navigation map information,
acquiring first position information of the mobile robot, and acquiring, according to the first position information, first edge contour information for representing the range of the trapped area,
and determining the range of the trapped area by using the first edge contour information.
Preferably, the adjusting the planned path according to the area determined when the detected dynamic obstacle is regarded as absent comprises:
acquiring second position information of the dynamic obstacle and second edge contour information of the dynamic obstacle, wherein the second edge contour information is used for representing the spatial distribution of the dynamic obstacle,
judging, according to the first edge contour information and the second edge contour information, whether coincident points exist between the first edge contour and the second edge contour, or whether the number of coincident points is larger than a set first number threshold,
if so, masking the dynamic obstacle information in the navigation map information, and planning a path according to the navigation map information with the dynamic obstacle information masked,
otherwise, judging whether coincident points exist between the second edge contour information and the existing planned path, or whether the number of coincident points is larger than a set second number threshold, and if so, masking the dynamic obstacle information in the navigation map information and planning a path according to the navigation map information with the dynamic obstacle information masked,
and repeatedly executing the step of judging, according to the first edge contour information and the second edge contour information, whether coincident points exist between the first edge contour and the second edge contour or whether the number of coincident points is larger than the set first number threshold, until the number of repetitions reaches a set count threshold or the duration of the trapped state reaches a set time threshold.
Preferably, the acquiring navigation map information of the current environment includes:
acquiring image information and first map information in the current environment,
performing target detection based on the image information to obtain a target detection result classifying the targets in the image, wherein the target detection result at least comprises classification results for static targets and dynamic targets,
and determining semantic information of the map point cloud in the first map information by using the target detection result, to obtain the navigation map comprising the semantic information.
Preferably, the detecting whether the mobile robot is in a trapped state comprises:
planning a current path based on the navigation map information,
and determining that the mobile robot is in a trapped state if a path from the current position to the target position cannot be planned;
the determining semantic information of the map point cloud in the first map information by using the target detection result includes:
projecting the currently obtained map points into the image by using the extrinsic and intrinsic parameters of the camera according to the first map information, to obtain the position information of the projection points of the map points in the pixel coordinate system,
and, for a projection point whose position information falls inside a target frame of the target detection result, determining the classification result corresponding to that target frame as the semantic information of the projection point.
The invention also provides a device for handling escape of a mobile robot, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps of any of the above processing methods for enabling a mobile robot to escape.
The invention further provides a mobile robot, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps of the above processing method for enabling a mobile robot to escape.
The invention further provides a computer-readable storage medium, wherein a computer program is stored in the storage medium, and the computer program, when executed by a processor, implements the steps of any of the above processing methods for enabling a mobile robot to escape.
According to the processing method for enabling the mobile robot to escape, a dynamic obstacle is detected, the position area where the dynamic obstacle is detected is virtualized as an obstacle-free area, and the planned path is adjusted by using the virtual obstacle-free area. The current dynamic obstacle is thus treated as if no obstacle existed, i.e. virtualized into a passable obstacle-free area, which enlarges the passable area for the cleaning robot in the trapped state. Planning a path based on the virtually enlarged passable area improves the escape success rate, enables the cleaning robot to escape quickly, and realizes escape in a dynamic environment.
Drawings
Fig. 1 is a schematic flowchart of a processing method for enabling a mobile robot to escape according to an embodiment of the present application;
Fig. 2 is a schematic diagram of path planning in a static-obstacle environment;
Fig. 3 is a schematic diagram of an environment with dynamic obstacles;
Fig. 4 is another schematic flowchart of a processing method for enabling a mobile robot to escape according to an embodiment of the present application;
Fig. 5 is a schematic diagram of map points projected into an image;
Fig. 6 is a schematic diagram of acquiring the first edge contour information and the obstacle edge contour information;
Fig. 7 is a schematic diagram of a processing device for enabling a mobile robot to escape according to an embodiment of the present application;
Fig. 8 is another schematic diagram of the processing device for enabling a mobile robot to escape, or of the mobile robot, according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The applicant has found through research that existing escape processing does not distinguish the state of an obstacle: whether static or dynamic, every obstacle is treated as a static one. However, dynamic obstacles with variable states inevitably appear while the mobile robot moves, so the robot is easily trapped in a certain area when traveling along an existing planned path that was planned under the assumption of static obstacles; moreover, the dynamic changes of such obstacles not only increase the probability of being trapped but also increase the difficulty of escaping.
In view of this, an embodiment of the present application provides a processing method for enabling a mobile robot to escape, which realizes escape processing in a dynamic environment.
Referring to fig. 1, fig. 1 is a schematic flowchart of a processing method for enabling a mobile robot to escape according to an embodiment of the present application. The method comprises the following steps, performed at the mobile robot side:
Step 101, detecting whether a dynamic obstacle with a variable state exists in the current environment, and detecting whether the mobile robot is in a trapped state.
The dynamic obstacle with a variable state may be an obstacle whose position changes, or an obstacle whose shape and/or size changes; the changes may occur in real time or non-real time.
Step 102, if the dynamic obstacle is detected and the mobile robot is in the trapped state, virtualizing the position area where the dynamic obstacle is detected as an obstacle-free area, and adjusting the planned path by using the virtual obstacle-free area.
Step 103, traveling based on the adjusted planned path.
Compared with escape approaches that do not distinguish dynamic from static obstacles, the embodiment of the present application adjusts the planned path by using the virtual obstacle-free area of the dynamic obstacle, which enlarges the passable area in the trapped situation, increases the escape success rate, and helps improve the escape speed.
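Expressed as code, the flow of steps 101 to 103 might look like the following minimal sketch (Python; detect_dynamic_obstacles, is_trapped, plan_path and the map interface are hypothetical placeholders for the robot's own perception and planning components, not part of the disclosure):

```python
# Minimal sketch of steps 101-103; all helper names are hypothetical placeholders.

def escape_step(robot, nav_map):
    dynamic_obstacles = detect_dynamic_obstacles(nav_map)        # step 101
    trapped = is_trapped(robot.position, robot.target, nav_map)

    if dynamic_obstacles and trapped:
        # Step 102: virtualize the detected dynamic-obstacle areas as
        # obstacle-free by masking them out of a copy of the navigation map.
        virtual_map = nav_map.copy()
        for obstacle in dynamic_obstacles:
            virtual_map.clear_area(obstacle.area)
        path = plan_path(robot.position, robot.target, virtual_map)
    else:
        path = plan_path(robot.position, robot.target, nav_map)

    if path is not None:
        robot.follow(path)                                       # step 103
```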
For ease of understanding, the following description takes a cleaning robot as an example. It should be understood that the present application is not limited to the cleaning-robot scenario and is applicable to any mobile robot with mobility.
During operation, the cleaning robot records all detected obstacle information in the navigation map. When navigating from the current position to a target position, the cleaning robot plans a path that avoids obstacles on the way to the target position according to the historical obstacle map information.
Referring to fig. 2, fig. 2 is a schematic diagram of path planning in a static-obstacle environment. When the obstacles in the environment are static, the cleaning robot can easily plan a feasible path from the current position to the target position according to the detected obstacle information. However, when a dynamic object exists in the environment, the robot may encounter the dynamic obstacle while traveling along the currently planned path, and the information of the encountered dynamic obstacle is recorded in the navigation map. After the collision, the cleaning robot re-plans the path according to the latest obstacle map. Referring to fig. 3, fig. 3 is a schematic diagram of an environment with dynamic obstacles. When the number of dynamic obstacles in the environment is large or the environment is narrow, an area that was originally passable may be blocked by the recorded dynamic obstacles, so that no planned path from the current position to the target position can be found; the cleaning robot is then trapped and the cleaning task terminates abnormally.
As an example, the cleaning robot uses a single-line rotating lidar and a monocular camera as sensors. Image information of the current environment can be acquired through the camera, so that target detection can be performed based on the image information; spatial point information of the current environment is acquired through the lidar, so that a point cloud map comprising map points can be built as the navigation map. The map points are projected onto the image through the extrinsic parameters of the camera and the lidar to obtain semantic information for the corresponding point cloud, and a navigation map containing the semantic information is then constructed. In this way, when the cleaning robot is in a trapped state, dynamic obstacles in the navigation map can be masked purposefully, the influence of the dynamic obstacles on the planned path is weakened, the passability of the cleaning robot is improved, and the cleaning robot is helped to escape quickly.
Referring to fig. 4, fig. 4 is a schematic flowchart of a processing method for enabling a mobile robot to escape according to an embodiment of the present application. The processing method comprises the following steps:
Step 401, acquiring first map information of the current environment, performing localization based on the first map information, and determining the current position of the cleaning robot.
As an example, the first map information may be pre-stored existing map information, or real-time map information constructed by SLAM. The localization based on the first map information may acquire the current position information by means of machine vision.
Step 402, acquiring current image information, and performing target detection based on the current image information so as to identify at least static obstacles and dynamic obstacles, thereby obtaining a target detection result.
As an example, the target detection may perform target classification on the image by using a machine learning model. The number of classes may be determined by the capability of the machine learning model: the classes may simply be static obstacle and dynamic obstacle, or may be multiple categories such as furniture, pets and household goods, which are then relabelled as static or dynamic obstacles according to the classification result; for example, common tables, chairs and cabinets are treated as static obstacles, while shoes, pets and the like are treated as dynamic obstacles. A simple relabelling is sketched below.
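For instance, a relabelling of detector categories into the two obstacle states might look as follows (the category names are illustrative assumptions, not a fixed list from the disclosure):

```python
# Hypothetical mapping from detector categories to obstacle-state labels.
DYNAMIC_CLASSES = {"shoe", "pet", "person"}      # movable / state-variable objects
STATIC_CLASSES = {"table", "chair", "cabinet"}   # furniture and other fixed objects

def obstacle_state(category: str) -> str:
    """Mark a detected category as a static or dynamic obstacle."""
    if category in DYNAMIC_CLASSES:
        return "dynamic"
    if category in STATIC_CLASSES:
        return "static"
    return "static"  # unknown categories are conservatively treated as static
```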
Step 403, determining semantic information of the map points in the first map information based on the current target detection result and the currently observed map point information.
As an example, the cleaning robot projects the currently collected laser points, as map points, into the pixel coordinate system of the camera; preferably, the laser points are clustered first and the clustered laser points are projected into the pixel coordinate system. Specifically:
assuming that the relative displacement between the laser radar and the camera is t, the relative rotation matrix is R, and the map point currently obtained by the laser radar is P ═ P (P)1,p2…), where p isi=(xi,yi,zi) X, y and z are respectively the coordinates of the current map point in the world coordinate system, the current obtained mapLike as
Figure BDA0003598228160000061
Wherein
Figure BDA0003598228160000062
u and v are coordinates of the current pixel point in the pixel coordinate system. Assume that the camera uses a pinhole model with an internal reference of K.
Firstly, converting the coordinates of map points into a camera coordinate system, and expressing the coordinates as follows by using a mathematical formula:
Figure BDA0003598228160000063
wherein,
Figure BDA0003598228160000064
for map point piCoordinates in a camera coordinate system;
then converting the space points in the camera coordinate system into the pixel coordinate system, and expressing the space points in the camera coordinate system as follows by using a mathematical expression:
Figure BDA0003598228160000065
wherein,
Figure BDA0003598228160000066
for map point piZ-coordinate in the camera coordinate system.
According to the target detection result, the semantic information of a map point whose projection point falls inside a target frame can be determined; that is, for the target frame of the target detection result that contains the projection point position information, the classification result corresponding to that target frame is determined as the semantic information of the projection point.
Referring to fig. 5, fig. 5 is a schematic diagram of map points projected into an image. In the figure, the semantic information of the map points corresponding to the projection points located inside the target frame is "shoe", i.e. a dynamic obstacle.
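A NumPy sketch of this projection follows (assuming R and t are the lidar-to-camera extrinsics and K the 3x3 pinhole intrinsic matrix defined above; points behind the camera are discarded):

```python
import numpy as np

def project_map_points(points_world, R, t, K):
    """Project lidar map points into the pixel coordinate system.

    points_world: (N, 3) array of map points p_i = (x_i, y_i, z_i).
    R: (3, 3) relative rotation, t: (3,) relative displacement (extrinsics).
    K: (3, 3) camera intrinsic matrix (pinhole model).
    Returns (M, 2) pixel coordinates (u_i, v_i) and the original indices of
    the projected points, keeping only points in front of the camera.
    """
    p_cam = points_world @ R.T + t          # p_i^c = R * p_i + t
    in_front = p_cam[:, 2] > 0              # keep only z^c > 0
    p_cam = p_cam[in_front]
    uv = (p_cam @ K.T) / p_cam[:, 2:3]      # I_i = (1 / z_i^c) * K * p_i^c
    return uv[:, :2], np.nonzero(in_front)[0]
```

A map point whose projected pixel (u_i, v_i) falls inside a detected target frame then inherits that frame's classification result as its semantic information.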
Through steps 401 to 403, the navigation map currently including the semantic information can be obtained, and whether a dynamic obstacle exists can be detected through the target detection performed on the acquired current image information.
As another example, if a navigation map including semantic information is acquired by other means, for example by acquiring image information with a camera other than that of the cleaning robot, whether a dynamic obstacle exists may also be detected from the semantic information in that navigation map.
Step 404, the cleaning robot detects whether it is in a trapped state; if so, step 405 is executed, otherwise normal cleaning is performed.
As an example, path planning is performed according to the current position information and the navigation map currently including semantic information. If no planned path from the current position to the target position can be found according to the current navigation map, the robot is determined to be in the trapped state, as in the reachability sketch below.
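On an occupancy grid, this check reduces to a reachability test (a minimal sketch assuming a grid where 0 marks free cells; the real system plans on the semantic navigation map):

```python
from collections import deque

def is_trapped(start, target, grid):
    """Trapped-state check sketch: BFS on an occupancy grid; the robot is
    trapped when no free path from start to target exists (0 = free cell)."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == target:
            return False                        # a path exists: not trapped
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return True                                 # target unreachable: trapped
```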
Step 405, when the dynamic obstacle is detected, acquiring first edge contour information for characterizing the range of the trapped area, second position information of the detected dynamic obstacle, and second edge contour information of the detected dynamic obstacle for characterizing the spatial distribution of the dynamic obstacle.
As an example, the current position information is determined as the first position information of the trapped robot, the second position information of the detected dynamic obstacle is determined based on the navigation map including semantic information, and the first edge contour information characterizing the trapped-area range and the second edge contour information are searched for.
In order to improve the accuracy of the edge contour information, the edge contour information may be clustered.
Referring to fig. 6, fig. 6 is a schematic diagram of acquiring the first edge contour information and the obstacle edge contour information. In the figure, the cleaning robot searches from its current position based on the navigation map to obtain the first edge contour information and the second edge contour information, for example the edge contour of the gray portion 601 and the edge contour of the dynamic obstacle 602.
Step 406, obtaining the range of the trapped area by using the first edge contour information.
As an example, the size of the area enclosed by the first edge contour is calculated from the first edge contour information and taken as the trapped-area range.
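As one possible realization (a sketch assuming the first edge contour is available as an ordered polygon of (x, y) map coordinates; AREA_THRESHOLD is an assumed name for the set area threshold), the enclosed area can be computed with the shoelace formula:

```python
def polygon_area(contour):
    """Area enclosed by an ordered list of (x, y) contour points (shoelace formula)."""
    area = 0.0
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# e.g. the comparison used in step 407:
# use_random_escape = polygon_area(first_edge_contour) < AREA_THRESHOLD
```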
In the following steps, the planned path is adjusted, based on the navigation map information, according to the area determined when the detected dynamic obstacle is regarded as absent.
As an example, in step 407, it is determined whether the range of the trapped area is smaller than the set area threshold. If so, the current trapped area is narrow and a random-motion escape mode may be adopted; otherwise, the current trapped area is large and step 408 is executed.
Step 408, judging, according to the first edge contour information and the second edge contour information, whether coincident points with the same position information exist between the first edge contour and the second edge contour.
If coincident points exist, or the number of coincident points reaches the set first number threshold, the dynamic obstacle information corresponding to the second edge contour in the navigation map containing semantic information is masked according to the second position information; that is, the dynamic obstacle information corresponding to the second edge contour is removed from the navigation map so that the dynamic obstacle is regarded as absent. A path is then planned based on the navigation map with the dynamic obstacle information removed, realizing the adjustment of the planned path; the robot travels along the current planned path and returns to step 404, until the number of returns to step 404 reaches a set count threshold or the duration of the trapped state reaches a set time threshold.
If no coincident points exist, step 409 is performed.
In this step, as an example, whether coincident points of the first edge contour and the second edge contour exist may be determined by calculating the intersection of the two contours; alternatively, the determination may be made directly from the edge contour position information, i.e. if the two edge contours share the same position information, the shared positions are the coincident points.
Step 409, judging, according to the second edge contour information and the existing planned-path information, whether coincident points exist between the second edge contour and the existing planned path.
If coincident points exist, or the number of coincident points reaches the set second number threshold, the dynamic obstacle information corresponding to the second edge contour in the navigation map containing semantic information is masked; that is, the dynamic obstacle information corresponding to the second edge contour is removed from the navigation map so that the dynamic obstacle is regarded as absent. A path is then planned based on the navigation map with the dynamic obstacle information removed, realizing the adjustment of the planned path; the robot travels along the current planned path and returns to step 404, until the number of returns to step 404 reaches the set count threshold or the duration of the trapped state reaches the set time threshold.
Otherwise, the method returns to step 404 until the number of returns to step 404 reaches the set count threshold or the duration of the trapped state reaches the set time threshold, after which the trapped information is reported.
In this step, as an example, whether coincident points exist may be determined by calculating the intersection points of the second edge contour and the existing planned path; the determination may also be made from the position information, i.e. if the second edge contour shares the same position information with the planned path, the shared positions are the coincident points. The existing planned path may be a historical planned path or the original planned path.
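Masking can be sketched as deleting the semantic map points that belong to the matched dynamic obstacle before re-planning (the map representation, the instance-id field and plan_path are assumptions, not the disclosed data structures):

```python
def mask_dynamic_obstacle(semantic_points, obstacle_id):
    """Remove the map points belonging to one dynamic obstacle.

    semantic_points: list of (x, y, label, instance_id) entries of the
    semantic navigation map; obstacle_id: the instance matched in step 408/409.
    Returns a copy of the map in which that obstacle is treated as absent.
    """
    return [p for p in semantic_points
            if not (p[2] == "dynamic" and p[3] == obstacle_id)]

# re-plan on the masked map, realizing the adjustment of the planned path:
# masked_map = mask_dynamic_obstacle(nav_map_points, matched_id)
# new_path = plan_path(current_position, target_position, masked_map)
```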
In this embodiment, the navigation map marked with dynamic obstacle information is used to mask the information of the current dynamic obstacle in the trapped state, so that the current dynamic obstacle is treated as if no obstacle existed, i.e. virtualized into a passable obstacle-free area. This enlarges the passable area for the cleaning robot in the trapped state, and planning a path based on the virtually enlarged passable area helps improve the escape success rate, so that the cleaning robot can escape quickly. The multiple combined escape logics can form an escape strategy, so that several planned paths can be tried, further improving the escape success rate. Compared with a processing mode that treats a dynamic obstacle as a static obstacle during escape, the dynamic variability of the spatial position of the dynamic obstacle is fully exploited, an unfavorable factor is turned into a favorable one, and the escape success rate is increased.
Referring to fig. 7, fig. 7 is a schematic diagram of a processing device for enabling a mobile robot to escape according to an embodiment of the present application. The device includes:
a detection module, configured to detect whether a dynamic obstacle with a variable state exists in the current environment and to detect whether the mobile robot is in a trapped state,
an escape processing module, configured to, when the dynamic obstacle is detected and the mobile robot is in the trapped state, virtualize the position area where the dynamic obstacle is detected as an obstacle-free area and adjust the planned path by using the virtual obstacle-free area,
and a movement control module, configured to travel based on the adjusted planned path.
The device further comprises:
an escape strategy selection module, configured to acquire the range of the trapped area of the mobile robot, to escape in a random-motion mode when the range of the trapped area is smaller than the set area threshold, and to trigger the escape processing module when the range of the trapped area is not smaller than the set area threshold.
The detection module comprises:
a dynamic obstacle detection submodule, configured to identify the dynamic obstacle, based on the acquired navigation map information of the current environment, according to semantic information carrying obstacle category information in the navigation map information, wherein the navigation map information comprises the semantic information;
and a trapped state detection submodule, configured to plan a current path based on the navigation map information, and to determine that the mobile robot is in a trapped state when no path from the current position to the target position can be planned.
The escape processing module is configured to adjust the planned path, based on the navigation map information, according to the area determined when the detected dynamic obstacle is regarded as absent.
The device also includes:
a navigation map acquisition module, configured to acquire image information and first map information in the current environment, to perform target detection based on the image information to obtain a target detection result classifying the targets in the image, wherein the target detection result at least comprises classification results for static targets and dynamic targets, and to determine semantic information of the map point cloud in the first map information by using the target detection result, so as to obtain the navigation map comprising the semantic information.
Referring to fig. 8, fig. 8 is another schematic diagram of the processing device for enabling a mobile robot to escape, or of the mobile robot, according to an embodiment of the present application. It comprises a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps of any of the above processing methods for enabling a mobile robot to escape.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored in the storage medium, and the computer program, when executed by a processor, implements the steps of any of the above processing methods for enabling a mobile robot to escape.
As for the device/network-side device/storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant points, refer to the partial description of the method embodiments.
In this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between these entities or actions. Moreover, the terms "comprises", "comprising" and any other variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus comprising that element.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A processing method for enabling a mobile robot to escape from a trapped state, characterized by comprising the following steps:
at the mobile robot side,
detecting whether a dynamic obstacle with a variable state exists in the current environment, and detecting whether the mobile robot is in a trapped state,
if the dynamic obstacle is detected and the mobile robot is in the trapped state, virtualizing the position area where the dynamic obstacle is detected as an obstacle-free area, and adjusting the planned path by using the virtual obstacle-free area,
and traveling based on the adjusted planned path.
2. The processing method of claim 1, wherein the method further comprises:
acquiring the range of the trapped area of the mobile robot,
escaping in a random-motion mode when the range of the trapped area of the mobile robot is smaller than a set area threshold,
and executing the step of virtualizing the position area where the dynamic obstacle is detected as an obstacle-free area and adjusting the planned path by using the virtual obstacle-free area when the range of the trapped area of the mobile robot is not smaller than the set area threshold.
3. The processing method of claim 2, wherein the detecting whether a dynamic obstacle with a variable state exists in the current environment comprises:
acquiring navigation map information of the current environment, wherein the navigation map information comprises semantic information,
and identifying the dynamic obstacle according to semantic information carrying obstacle category information in the navigation map information;
the virtualizing the position area where the detected dynamic obstacle is located as an obstacle-free area and adjusting the planned path by using the virtual obstacle-free area comprises:
based on the navigation map information, adjusting the planned path according to the area determined when the detected dynamic obstacle is regarded as absent.
4. The processing method of claim 3, wherein the acquiring the range of the trapped area of the mobile robot comprises:
based on the navigation map information,
acquiring first position information of the mobile robot, and acquiring, according to the first position information, first edge contour information for representing the range of the trapped area,
and determining the range of the trapped area by using the first edge contour information.
5. The processing method of claim 3, wherein the adjusting the planned path according to the area determined when the detected dynamic obstacle is regarded as absent comprises:
acquiring second position information of the dynamic obstacle and second edge contour information of the dynamic obstacle, wherein the second edge contour information is used for representing the spatial distribution of the dynamic obstacle,
judging, according to the first edge contour information and the second edge contour information, whether coincident points exist between the first edge contour and the second edge contour, or whether the number of coincident points is larger than a set first number threshold,
if so, masking the dynamic obstacle information in the navigation map information, and planning a path according to the navigation map information with the dynamic obstacle information masked,
otherwise, judging whether coincident points exist between the second edge contour information and the existing planned path, or whether the number of coincident points is larger than a set second number threshold, and if so, masking the dynamic obstacle information in the navigation map information and planning a path according to the navigation map information with the dynamic obstacle information masked,
and repeatedly executing the step of judging, according to the first edge contour information and the second edge contour information, whether coincident points exist between the first edge contour and the second edge contour or whether the number of coincident points is larger than the set first number threshold, until the number of repetitions reaches a set count threshold or the duration of the trapped state reaches a set time threshold.
6. The processing method of claim 3, wherein the acquiring navigation map information of the current environment comprises:
acquiring image information and first map information in the current environment,
performing target detection based on the image information to obtain a target detection result classifying the targets in the image, wherein the target detection result at least comprises classification results for static targets and dynamic targets,
and determining semantic information of the map point cloud in the first map information by using the target detection result, to obtain the navigation map comprising the semantic information.
7. The processing method of claim 6, wherein the detecting whether the mobile robot is in a trapped state comprises:
planning a current path based on the navigation map information,
and determining that the mobile robot is in a trapped state if a path from the current position to the target position cannot be planned;
the determining semantic information of the map point cloud in the first map information by using the target detection result comprises:
projecting the currently obtained map points into the image by using the extrinsic and intrinsic parameters of the camera according to the first map information, to obtain the position information of the projection points of the map points in the pixel coordinate system,
and, for a projection point whose position information falls inside a target frame of the target detection result, determining the classification result corresponding to that target frame as the semantic information of the projection point.
8. A device for handling escape of a mobile robot, characterized in that the device comprises a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps of the processing method for enabling a mobile robot to escape according to any one of claims 1 to 7.
9. A mobile robot, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the steps of the processing method for enabling a mobile robot to escape according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored in the storage medium, and the computer program, when executed by a processor, implements the steps of the processing method for enabling a mobile robot to escape according to any one of claims 1 to 7.
CN202210397756.9A 2022-04-15 2022-04-15 Mobile robot escaping processing method and device and mobile robot Pending CN114721396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210397756.9A CN114721396A (en) 2022-04-15 2022-04-15 Mobile robot escaping processing method and device and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210397756.9A CN114721396A (en) 2022-04-15 2022-04-15 Mobile robot escaping processing method and device and mobile robot

Publications (1)

Publication Number Publication Date
CN114721396A true CN114721396A (en) 2022-07-08

Family

ID=82243528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210397756.9A Pending CN114721396A (en) 2022-04-15 2022-04-15 Mobile robot escaping processing method and device and mobile robot

Country Status (1)

Country Link
CN (1) CN114721396A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115437388A (en) * 2022-11-09 2022-12-06 成都朴为科技有限公司 Method and device for escaping from poverty of omnidirectional mobile robot
CN116125991A (en) * 2023-02-27 2023-05-16 麦岩智能科技(北京)有限公司 High-end scene-oriented commercial service robot-based forbidden zone escaping, storage medium and equipment
CN116125991B (en) * 2023-02-27 2023-08-15 麦岩智能科技(北京)有限公司 High-end scene-oriented commercial service robot-based forbidden zone escaping, storage medium and equipment
WO2024179496A1 (en) * 2023-02-28 2024-09-06 苏州宝时得电动工具有限公司 Control method, control apparatus, storage medium, and self-moving device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination