CN117330069A - Obstacle detection method, path planning method and self-mobile device - Google Patents
- Publication number
- CN117330069A (application CN202311212949.3A)
- Authority
- CN
- China
- Prior art keywords
- obstacle
- sensor
- blind area
- list
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Abstract
The embodiment of the application provides an obstacle detection method, a path planning method, a self-moving device and a computer storage medium. The obstacle detection method comprises the following steps: acquiring a first obstacle list detected by a sensor at a first moment; determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles. The technical scheme provided by the embodiment of the invention combines the visual field range of the sensor with two successive detection results of the sensor, thereby detecting blind area obstacles with higher accuracy.
Description
Technical Field
The embodiment of the invention relates to the technical field of decision planning, in particular to an obstacle detection method, a path planning method, self-mobile equipment and a computer storage medium.
Background
A self-moving device refers to a device capable of autonomous movement and navigation; it typically senses its environment with sensors and uses algorithms and control systems to make decisions based on the detected data. Such devices can perform various tasks autonomously without human intervention.
Sensors configured on a self-moving device typically detect the environment only within a specific angle or distance range, resulting in a detection blind area. A blind area obstacle that affects the movement of the self-moving device usually exists in this detection blind area; if the self-moving device makes decisions during movement based only on the data currently detected by the sensor, abnormal behaviors such as collisions and unreasonable obstacle avoidance often occur.
Therefore, how to detect the blind area obstacle in the detection blind area of the sensor becomes a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention provides an obstacle detection method, a path planning method, self-mobile equipment and a computer storage medium.
In a first aspect, an embodiment of the present invention provides an obstacle detection method, applied to a self-mobile device, where the self-mobile device is configured with at least one sensor, the method includes:
Acquiring a first obstacle list detected by the sensor at a first moment;
determining whether a blind area obstacle exists from the first obstacle list based on the view angle information of the sensor;
if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment;
and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles.
In a second aspect, an embodiment of the present invention provides a path planning method, applied to a self-mobile device, where the self-mobile device is configured with at least one sensor, the method includes:
acquiring sensor data of the sensor and pre-recorded position information of a blind area obstacle, wherein the position information of the blind area obstacle is determined by the following operations: acquiring a first obstacle list detected by the sensor at a first moment, and determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles;
And planning a moving path of the self-moving device based on the sensor data and the pre-recorded position information of the blind area obstacle.
In a third aspect, an embodiment of the present invention provides an obstacle detection apparatus, applied to a self-moving device, where the self-moving device is configured with at least one sensor, and the apparatus includes:
a first acquisition module, configured to acquire a first obstacle list detected by the sensor at a first time;
a first determining module, configured to determine whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor;
a recording module, configured to record position information of the blind area obstacle if a blind area obstacle exists in the first obstacle list, and otherwise acquire a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment;
and the second determining module is used for determining blind area obstacles in the first obstacle list based on the second obstacle list and recording position information of the blind area obstacles.
In a fourth aspect, an embodiment of the present invention provides a path planning apparatus, applied to a self-moving device, where the self-moving device is configured with at least one sensor, and the apparatus includes:
a second acquisition module, configured to acquire sensor data of the sensor and pre-recorded position information of a blind area obstacle, where the position information of the blind area obstacle is determined by: acquiring a first obstacle list detected by the sensor at a first moment, and determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles;
and the path planning module is used for planning a moving path of the self-moving equipment based on the sensor data and the position information of the pre-recorded blind area obstacle.
In a fifth aspect, an embodiment of the present invention provides a self-mobile device, including a device body, where the device body is provided with one or more sensors, a processing component and a storage component;
The storage component stores one or more computer instructions; the one or more computer instructions are used for being called and executed by the processing component to realize the obstacle detection method provided by the embodiment of the invention or realize the path planning method provided by the embodiment of the invention.
In a sixth aspect, an embodiment of the present invention provides a computer storage medium storing a computer program which, when executed by a computer, implements the obstacle detection method provided by the embodiment of the present invention, or implements the path planning method provided by the embodiment of the present invention.
The embodiment of the invention provides a method for detecting an obstacle, which comprises the following steps: acquiring a first obstacle list detected by a sensor at a first moment; determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles. By combining the visual field range of the sensor with two successive detection results of the sensor, the method detects blind area obstacles with higher accuracy.
These and other aspects of the invention will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below show some embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 schematically illustrates a flowchart of an obstacle detection method according to an embodiment of the present invention;
FIG. 2 schematically illustrates a view angle of a sensor provided by an embodiment of the present invention;
FIG. 3 schematically illustrates a schematic diagram of determining a target obstacle according to an embodiment of the invention;
FIG. 4 schematically illustrates a flow chart of a path planning method according to another embodiment of the present invention;
fig. 5 schematically shows a block diagram of an obstacle detecting apparatus provided by an embodiment of the present invention;
fig. 6 schematically shows a block diagram of a path planning apparatus according to an embodiment of the invention.
Detailed Description
In order to enable those skilled in the art to better understand the present invention, the following description will make clear and complete descriptions of the technical solutions according to the embodiments of the present invention with reference to the accompanying drawings.
Some of the flows described in the specification, claims and the foregoing figures include a plurality of operations that appear in a particular order, but it should be understood that these operations may be performed out of the order in which they appear herein, or in parallel. Sequence numbers such as 101 and 102 are merely used to distinguish the operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, which may be performed sequentially or in parallel. It should be noted that the terms "first" and "second" herein are used to distinguish different messages, devices, modules and the like; they do not represent a sequence, nor do they require that the "first" and the "second" be of different types.
A self-moving device refers to a device capable of autonomous movement and navigation; it typically senses its environment with sensors and uses algorithms and control systems to make decisions based on the detected data. Such devices can perform various tasks autonomously without human intervention.
Sensors configured on a self-moving device typically detect the environment only within a specific angle or distance range, resulting in a detection blind area. A blind area obstacle that affects the movement of the self-moving device usually exists in this detection blind area; if the self-moving device makes decisions during movement based only on the data currently detected by the sensor, abnormal behaviors such as collisions and unreasonable obstacle avoidance often occur.
Therefore, how to detect the blind area obstacle in the detection blind area of the sensor becomes a technical problem to be solved urgently.
In order to solve the technical problems in the related art, the embodiment of the invention provides an obstacle detection method, which comprises the following steps: acquiring a first obstacle list detected by a sensor at a first moment; determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles. By combining the visual field range of the sensor with two successive detection results of the sensor, the method detects blind area obstacles with higher accuracy.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
Fig. 1 schematically illustrates a flowchart of an obstacle detection method according to an embodiment of the invention, where the obstacle detection method may be applied to a self-mobile device, and the self-mobile device is configured with at least one sensor, and as illustrated in fig. 1, the obstacle detection method may include the following steps:
101, acquiring a first obstacle list detected by a sensor at a first moment;
102, determining whether a blind area obstacle exists from a first obstacle list based on view angle information of a sensor;
103, if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle; otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment;
104, determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles.
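Steps 101 to 104 can be sketched as follows. The data shapes (obstacles as (x, y) coordinates relative to the sensor), the function names, and the fallback comparison are illustrative assumptions, not the patent's implementation:

```python
import math

def find_blind_by_fov(obstacles, fov_deg, margin_deg=10.0):
    # Blind-area candidates: azimuth outside the effective (central) FOV.
    # Obstacles are (x, y) coordinates relative to the sensor's axis.
    half_effective = fov_deg / 2.0 - margin_deg
    return [o for o in obstacles
            if abs(math.degrees(math.atan2(o[1], o[0]))) > half_effective]

def detect_blind_area(first_list, second_list, fov_deg):
    # Steps 102/103: check the first detection against the view angle.
    blind = find_blind_by_fov(first_list, fov_deg)
    if blind:
        return blind
    # Step 104: otherwise compare the two detections; here, simply the
    # obstacles that vanished between the first and second moments.
    return [o for o in first_list if o not in second_list]
```

With a 120-degree FOV and a 10-degree margin per edge, an obstacle at bearing 63 degrees is flagged by the view-angle check, while one at 14 degrees is only caught if it disappears from the second detection.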
According to an embodiment of the present invention, the self-moving device may be any device capable of moving autonomously in the environment in which it is located, for example, the self-moving device may include a robot, a purifier, an unmanned car, and the like. The robots may include a floor sweeping robot, a glass wiping robot, a home accompanying robot, a greeting robot, an autonomous service robot, and the like. The environment in which the self-mobile device is located may include, for example, an indoor environment, an outdoor environment, and the like.
According to an embodiment of the present invention, the first obstacle list may include a plurality of obstacles, which may be all objects detected by the sensor that may be obstructing the movement of the self-mobile device at the first moment. In an indoor environment, the obstacle may include, for example, a wall, furniture, home appliances, etc.; in an outdoor environment, the obstacle may include, for example, a pedestrian, a motor vehicle, a non-motor vehicle, or the like.
According to an embodiment of the invention, the sensor may comprise, for example, a camera, a lidar, an infrared sensor, etc. The field angle of the sensor may refer to the range perceived by the sensor and may be used to describe the detection angle that the sensor can cover.
Taking a camera as an example of the sensor, the view angle of the camera varies with its design and application: a common camera generally has a horizontal view angle of 30 to 120 degrees, while the view angle of a wide-angle camera can generally reach 150 degrees.
Since the sensor typically has a specific view angle, an obstacle that is within the view angle of the sensor at the first moment may move out of that view angle as the self-moving device moves. In one embodiment of the invention, it may therefore be determined from the first obstacle list, using the view angle information of the sensor, whether a blind area obstacle exists. For example, an obstacle that appears outside the view angle of the sensor, or at the edge of the view angle of the sensor, may be determined to be a blind area obstacle.
According to the embodiment of the invention, in the case that no blind area obstacle exists in the first obstacle list based on the view angle information of the sensor, the second obstacle list detected at the second moment of the sensor may be acquired, and the second obstacle list may include a plurality of obstacles therein, where the plurality of obstacles may be all objects detected by the sensor that may cause an obstacle to the movement of the self-mobile device at the second moment. In an indoor environment, the obstacle may include, for example, a wall, furniture, home appliances, etc.; in an outdoor environment, the obstacle may include, for example, a pedestrian, a motor vehicle, a non-motor vehicle, or the like.
According to an embodiment of the invention, the second time instant may be a time instant subsequent to the first time instant.
According to an embodiment of the present invention, the time interval between the first moment and the second moment may be related to the detection period of the sensor; that is, the first moment and the second moment may correspond to two consecutive detections of the sensor. For example, if the detection period of the sensor is 1 second, the second moment may be one second after the first moment; if the detection period is 1 minute, the second moment may be one minute after the first moment.
In practical application scenarios, the positional relationship among multiple obstacles is often complex. In one possible scenario, an obstacle detected by the sensor at the first moment may become blocked by other obstacles as the self-moving device moves; in this case, although the obstacle is still within the view angle of the sensor, it is nevertheless a blind area obstacle for the self-moving device. Thus, in another possible implementation of the present invention, when it is determined based on the view angle information of the sensor that no blind area obstacle exists in the first obstacle list, the first obstacle list and the second obstacle list may be combined to determine blind area obstacles, which may include blind area obstacles blocked by other obstacles.
According to an embodiment of the present invention, determining whether a blind area obstacle exists from the first obstacle list based on the view angle information of the sensor may be specifically implemented as:
sequentially determining a first candidate obstacle from a plurality of obstacles in a first obstacle list;
determining location information of a first candidate obstacle;
determining a first azimuth angle of the first candidate obstacle and the sensor based on the position information;
based on the first azimuth and view angle information, it is determined whether the first candidate obstacle is a blind area obstacle.
According to the embodiment of the invention, each obstacle in the first obstacle list can be traversed in turn to determine the first azimuth angle between each obstacle and the sensor, and thereby determine whether each obstacle is a blind area obstacle.
According to an embodiment of the present invention, the position information of the first candidate obstacle may be determined in the form of coordinates.
According to the embodiment of the invention, when determining the position information of the first candidate obstacle, pose information of the self-moving device can further be determined. The pose information may include the current position and pose of the self-moving device; correspondingly, the position of the self-moving device can also be expressed as coordinates, and the pose may include, for example, the orientation of the self-moving device.
According to the embodiment of the invention, after the coordinate position of the first candidate obstacle and the coordinate position of the self-mobile device are determined, the first azimuth angle of the first candidate obstacle relative to the self-mobile device can be determined, so that whether the first candidate obstacle is a blind area obstacle can be determined through the relation between the first azimuth angle and the view angle information.
According to an embodiment of the invention, in one possible implementation, the first azimuth angle may be calculated using an arctangent function. For example, if the coordinates of the first candidate obstacle are (x1, y1) and the coordinates of the self-moving device are (x2, y2), the first azimuth angle can be calculated using the following formula (1):

first azimuth = atan2(y2 - y1, x2 - x1); (1)
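As a sketch, formula (1) maps directly onto the standard library's `math.atan2`; the function name and tuple arguments below are illustrative, not the patent's:

```python
import math

def first_azimuth(obstacle_xy, device_xy):
    # Formula (1): note the patent subtracts the obstacle coordinates
    # from the device coordinates; atan2 returns radians in (-pi, pi].
    (x1, y1), (x2, y2) = obstacle_xy, device_xy
    return math.atan2(y2 - y1, x2 - x1)
```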
According to the embodiment of the invention, after the processing of the currently selected first candidate obstacle is finished, another obstacle can be selected from the first obstacle list, the obstacle is determined to be the first candidate obstacle, and the subsequent operation is correspondingly executed until each obstacle in the first obstacle list is traversed.
According to an embodiment of the present invention, based on the first azimuth angle and the view angle information, determining whether the first candidate obstacle is a blind area obstacle may be specifically implemented as:
Determining an effective view angle from the view angle information;
determining whether the first azimuth angle is included in the effective view angle;
in the case where the first azimuth angle is not included in the effective view angle, the first candidate obstacle is determined to be a blind area obstacle.
According to embodiments of the invention, the effective field of view angle may include a central region away from the edges of the field of view. Since the effective field of view angle is in the central region of the field of view angle of the sensor, an obstacle in this central region will not deviate from the field of view angle of the sensor with movement from the mobile device.
According to an embodiment of the present invention, if the first candidate obstacle has a certain volume, the first azimuth angle may be the azimuth angle between the sensor and the midpoint of the face of the first candidate obstacle facing the sensor; alternatively, the azimuth angles of the side of that face close to the sensor and the side far from the sensor may be determined separately, so that the first azimuth angle is an angular range.
According to an embodiment of the present invention, when it is determined from the first azimuth angle that the first candidate obstacle is within the effective view angle of the sensor, the first candidate obstacle is in the central region of the view angle of the sensor and is not a blind area obstacle. When it is determined from the first azimuth angle that the first candidate obstacle is not within the effective view angle, the first candidate obstacle is in an edge region of the view angle, or actually outside the view angle, and may be determined to be a blind area obstacle.
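Assuming azimuths are measured from the sensor's optical axis and the field of view is symmetric, the containment check above can be sketched as follows (the `margin_deg` parameter, the width trimmed from each edge of the FOV, is an assumption):

```python
def is_blind_area_obstacle(azimuth_deg, fov_deg, margin_deg):
    # The effective view angle is the central part of the FOV, with
    # margin_deg trimmed from each edge (the field-boundary ranges).
    # An azimuth outside it marks the obstacle as a blind-area obstacle.
    half_effective = fov_deg / 2.0 - margin_deg
    return abs(azimuth_deg) > half_effective
```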
According to an embodiment of the present invention, determining an effective view angle from view angle information may be specifically implemented as:
determining a first field of view and a second field of view at a field of view boundary of the sensor;
the view angle excluding the first view range and the second view range in the view angle information is determined as an effective view angle.
Fig. 2 schematically shows a schematic view of a view angle of a sensor according to an embodiment of the present invention.
As shown in fig. 2, 201 may represent a self-moving device, 202 may represent a sensor mounted on the self-moving device 201, and the field angle of view of the sensor may be an angle α between a dotted line 203 and a dotted line 204.
Dashed lines 203 and 204 may be the edges of the view angle of the sensor 202. An included angle α1 adjacent to the dashed line 203 and an included angle α2 adjacent to the dashed line 204 may be determined as the first field of view range and the second field of view range, respectively. The first field of view range and the second field of view range lie at the field boundaries of the view angle of the sensor 202.
Based on this, the view angle determined by the included angle α3 may be determined as the effective view angle, where α3 is the part of the included angle α excluding α1 and α2. It should be noted that α1 and α2 may be the same or different, and their respective values can be chosen by those skilled in the art according to practical application requirements.
As shown in fig. 2, 205 may represent a first obstacle and 206 may represent a second obstacle. By the above operation, it can be determined that the first obstacle 205 is within the effective view angle, and thus the first obstacle 205 is not a blind area obstacle, and the second obstacle 206 is at the view boundary, being a blind area obstacle.
According to an embodiment of the present invention, based on the second obstacle list, determining the blind area obstacle in the first obstacle list may be specifically implemented as:
sequentially determining a second candidate obstacle from a plurality of obstacles in the first obstacle list;
determining location information of a second candidate obstacle;
determining, based on the position information, a second azimuth angle between the second candidate obstacle and the sensor, and first distance information;
determining a target obstacle from the second obstacle list based on the second azimuth;
determining second distance information of the target obstacle and the sensor;
and determining that the target obstacle is a blind area obstacle under the condition that the second distance information is larger than the first distance information.
According to the embodiment of the invention, each obstacle in the first obstacle list can be traversed in turn, each obstacle is determined to be a second candidate obstacle in turn, and a second azimuth angle of each second candidate obstacle and the sensor is determined, so that whether a blind area obstacle exists in the second obstacle list is determined.
According to an embodiment of the present invention, the position information of the second candidate obstacle may be determined in the form of coordinates.
According to the embodiment of the invention, when determining the position information of the second candidate obstacle, pose information of the self-mobile device can be further determined, the pose information can comprise the current position and the pose of the self-mobile device, and accordingly, the position of the self-mobile device can also be expressed in the form of coordinates, and the pose of the self-mobile device can comprise the orientation of the self-mobile device, for example.
According to an embodiment of the present invention, after determining the coordinate position of the self-mobile device and the coordinate position of the second candidate obstacle, the second azimuth and the first distance information are obtained based on calculation according to the coordinates.
According to an embodiment of the present invention, the second azimuth angle may be calculated by referring to the above formula (1).
According to an embodiment of the present invention, in one possible implementation, the first distance information may be calculated as a Euclidean distance, as in the following formula (2):

first distance = sqrt((x4 - x3)^2 + (y4 - y3)^2); (2)

where (x4, y4) may represent the coordinates of the second candidate obstacle, and (x3, y3) may represent the coordinates of the self-moving device.
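Formula (2) is the standard Euclidean distance; a minimal sketch (`math.hypot` would be an equivalent choice):

```python
import math

def first_distance(candidate_xy, device_xy):
    # Formula (2): Euclidean distance between the second candidate
    # obstacle (x4, y4) and the self-moving device (x3, y3).
    (x4, y4), (x3, y3) = candidate_xy, device_xy
    return math.sqrt((x4 - x3) ** 2 + (y4 - y3) ** 2)
```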
According to an embodiment of the present invention, the target obstacle may include any obstacle in the second obstacle list, other than the third obstacle corresponding to the second candidate obstacle, that lies within a certain range around the third obstacle.
According to an embodiment of the invention, in the case that the target obstacle is located within a certain range around the third obstacle and the second distance information between the target obstacle and the self-moving device is larger than the first distance information between the third obstacle and the self-moving device, it can be determined that the target obstacle is blocked by the third obstacle and lies in the blind area of the sensor; the target obstacle can therefore be determined as a blind area obstacle.
According to an embodiment of the present invention, determining the target obstacle from the second obstacle list based on the second azimuth angle may be specifically implemented as:
determining a first field of view based on the second azimuth;
from the second list of obstacles, it is determined whether there is a target obstacle that is within the first field of view.
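The two steps above, together with the distance comparison described earlier, can be sketched as follows; the half-width of the first field of view and the function names are illustrative assumptions.

```python
import math

def angle_diff(a, b):
    """Smallest signed difference between two angles, in radians."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def find_blind_area_obstacles(second_azimuth, first_distance,
                              second_list, device_xy, half_width):
    """Return obstacles from the second obstacle list that fall inside the
    first field of view (second_azimuth +/- half_width) and are farther
    from the device than the occluding candidate, i.e. blind-area
    obstacles in the sense of the text above."""
    blind = []
    for (ox, oy) in second_list:
        az = math.atan2(oy - device_xy[1], ox - device_xy[0])
        dist = math.hypot(ox - device_xy[0], oy - device_xy[1])
        in_view = abs(angle_diff(az, second_azimuth)) <= half_width
        if in_view and dist > first_distance:
            blind.append((ox, oy))
    return blind
```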
According to an embodiment of the present invention, if the second candidate obstacle has a certain volume, the second azimuth angle may be the azimuth angle between the sensor and the midpoint of the face of the second candidate obstacle facing the sensor. Alternatively, the azimuth angles corresponding to the side of that face close to the sensor and the side far from the sensor may be determined separately, in which case the second azimuth angle is an angular range.
According to one embodiment of the present invention, when the second azimuth angle is determined from the midpoint of the face of the second candidate obstacle facing the sensor, a first boundary and a second boundary of the second candidate obstacle may be determined in a first direction and a second direction, respectively, based on the second azimuth angle, and the portion enclosed by the first boundary and the second boundary may be determined as the first field of view. If the second azimuth angle is instead given by the azimuth angles corresponding to the side of the second candidate obstacle close to the sensor and the side far from the sensor, the angular range spanned by the second azimuth angle may be determined as the first field of view.
Fig. 3 schematically illustrates a schematic diagram of determining a target obstacle according to an embodiment of the invention.
Fig. 3a may show information detected at a first time by a sensor 302 deployed on a self-moving device 301. In fig. 3a, the angle α may be the view angle of the sensor 302 while the self-moving device is at point A. As shown in fig. 3a, the view angle α of the sensor 302 covers the obstacle 3031, the obstacle 3032, the obstacle 3033 and the obstacle 3034, so the first obstacle list may record the obstacles 3031, 3032, 3033 and 3034.
Fig. 3b may show information detected by the sensor 302 at a second time. In fig. 3b, after the self-moving device 301 has moved from point A to point B, the view angle α of the sensor 302 covers only the obstacle 3031 and the obstacle 3032, while the obstacle 3033 and the obstacle 3034 are in the blind area of the sensor 302; therefore only the obstacles 3031 and 3032 are recorded in the second obstacle list.
For the obstacle 3034, since the obstacle 3034 is outside the view angle of the sensor 302, the obstacle 3034 can be directly determined as a blind area obstacle.
For the obstacle 3033, the obstacle 3031 may first be determined as a second candidate obstacle from the first obstacle list, and a first field of view may then be determined based on the second azimuth angle between the second candidate obstacle and the sensor 302, where the first field of view may be the range determined by the included angle α3. As can be seen from fig. 3b, the obstacle 3033 is in the first field of view, and the second distance of the obstacle 3033 from the sensor is greater than the first distance of the obstacle 3031 from the sensor, i.e. the obstacle 3033 is blocked by the obstacle 3031, so it can be determined that the obstacle 3033 is a blind area obstacle.
According to an embodiment of the present invention, recording position information of a blind area obstacle may be specifically implemented as:
Determining whether a position record has been made for the blind area obstacle;
if yes, ignoring the blind area obstacle, otherwise, recording the position information of the blind area obstacle.
According to the embodiment of the invention, repeated recording of blind area obstacles can be avoided by checking before recording the position of the blind area obstacle.
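The check-before-record behavior described above can be sketched as follows; the distance tolerance used to decide that two recorded positions refer to the same obstacle is an assumed parameter, not specified in the text.

```python
import math

def record_blind_obstacle(records, position, tol=0.2):
    """Append `position` to `records` unless a position within `tol`
    metres has already been recorded; returns True if recorded."""
    for rec in records:
        if math.hypot(rec[0] - position[0], rec[1] - position[1]) <= tol:
            return False  # already recorded: ignore this blind-area obstacle
    records.append(position)
    return True
```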
According to an embodiment of the present invention, the acquiring of the first obstacle list detected by the sensor at the first time may be specifically implemented as:
acquiring an initial obstacle list detected by a sensor at a first moment;
and filtering the obstacles with set distance from the sensor in the initial obstacle list to obtain a first obstacle list.
According to an embodiment of the present invention, the initial obstacle list may record obstacles that are relatively far from the self-moving device, and these farther obstacles may be removed from the initial obstacle list.
According to the embodiment of the invention, specifically, the distance from the mobile device to each obstacle in the initial obstacle list can be determined respectively, and the obstacle with the distance larger than the preset distance threshold value can be removed from the initial obstacle list.
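This distance-based filtering can be sketched as follows; the threshold value is an assumption standing in for the preset distance threshold mentioned above.

```python
import math

def filter_obstacles(initial_list, device_xy, max_distance=8.0):
    """Drop obstacles farther than `max_distance` from the self-moving
    device to obtain the first obstacle list."""
    return [
        (x, y) for (x, y) in initial_list
        if math.hypot(x - device_xy[0], y - device_xy[1]) <= max_distance
    ]
```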
According to an embodiment of the present invention, the obstacle detection method may further include:
And converting coordinates of the obstacles recorded in the first obstacle list and the second obstacle list from map coordinates to coordinates relative to the self-moving device, based on pose information of the self-moving device, so that the position information of the obstacles can be determined from the relative coordinates when detecting obstacles.
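The map-to-relative conversion can be sketched for a 2-D pose (x, y, theta) as follows; the rotation convention (heading measured counter-clockwise from the x-axis) is an assumption.

```python
import math

def map_to_relative(obstacle_xy, pose):
    """Convert an obstacle's map coordinates into coordinates relative to
    the self-moving device, whose pose is (x, y, theta) with theta the
    device heading in radians."""
    px, py, theta = pose
    dx = obstacle_xy[0] - px
    dy = obstacle_xy[1] - py
    # Rotate the world-frame offset by -theta into the device frame.
    rx = math.cos(-theta) * dx - math.sin(-theta) * dy
    ry = math.sin(-theta) * dx + math.cos(-theta) * dy
    return (rx, ry)
```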
Fig. 4 schematically illustrates a flow chart of a path planning method according to another embodiment of the present invention, where the path planning method may be applied to a self-mobile device, and the self-mobile device is configured with at least one sensor, and as illustrated in fig. 4, the path planning method may include the following steps:
401, acquiring sensor data of a sensor, and pre-recorded position information of a blind area obstacle, the position information of the blind area obstacle being determined by: acquiring a first obstacle list detected by the sensor at a first moment, and determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle, otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; and determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles;
And 402, planning a moving path of the self-moving device based on the sensor data and the pre-recorded position information of the blind area obstacle.
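Steps 401 and 402 can be sketched as merging the live detections with the pre-recorded blind-area positions before planning; the grid-based breadth-first planner below is only an illustrative stand-in for whatever planner the self-moving device actually uses.

```python
from collections import deque

def plan_path(start, goal, size, sensor_obstacles, blind_obstacles):
    """Breadth-first search on a size x size grid, avoiding both the
    currently sensed obstacles and the pre-recorded blind-area ones."""
    blocked = set(sensor_obstacles) | set(blind_obstacles)
    queue, parent = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in parent):
                parent[nxt] = cur
                queue.append(nxt)
    return None  # no collision-free path found
```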
Fig. 5 schematically illustrates a block diagram of an obstacle detecting apparatus according to an embodiment of the invention, which may be applied to a self-moving device configured with at least one sensor, as illustrated in fig. 5, and may include:
a first obtaining module 501, configured to obtain a first obstacle list detected by a sensor at a first time;
a first determining module 502, configured to determine, based on the view angle information of the sensor, whether a blind area obstacle exists in the first obstacle list;
a recording module 503, configured to record, if a blind area obstacle exists in the first obstacle list, position information of the blind area obstacle, and otherwise, acquire a second obstacle list detected by the sensor at a second moment, where the first moment is the moment immediately preceding the second moment;
the second determining module 504 is configured to determine a blind area obstacle in the first obstacle list based on the second obstacle list, and record position information of the blind area obstacle.
According to an embodiment of the present invention, the first determining module 502 includes:
A first determination submodule for sequentially determining a first candidate obstacle from a plurality of obstacles in a first obstacle list;
a second determination submodule for determining position information of the first candidate obstacle;
a third determination sub-module for determining a first azimuth angle of the first candidate obstacle and the sensor based on the location information;
and a fourth determination submodule, configured to determine whether the first candidate obstacle is a blind area obstacle based on the first azimuth angle and the view angle information.
According to an embodiment of the invention, the fourth determination submodule comprises:
a first determination unit configured to determine an effective view angle from the view angle information;
a second determining unit configured to determine whether the first azimuth angle is included in the effective view angle;
and a third determination unit configured to determine that the first candidate obstacle is a blind area obstacle in a case where the first azimuth angle is not included in the effective view angle.
According to an embodiment of the present invention, the first determination unit includes:
a first determination subunit configured to determine a first field of view range and a second field of view range that are at a field of view boundary of the sensor;
and a second determination subunit configured to determine, as the effective field angle, the field angle excluding the first field range and the second field range in the field angle information.
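The boundary-exclusion and membership test performed by these units can be sketched as follows; the boundary margin representing the first and second field-of-view ranges is an assumed parameter.

```python
def effective_view_angle(fov_min, fov_max, margin):
    """Shrink the sensor's view angle by a margin at each boundary
    (the first and second field-of-view ranges) to obtain the
    effective view angle."""
    return (fov_min + margin, fov_max - margin)

def is_blind_area_obstacle(first_azimuth, fov_min, fov_max, margin):
    """A candidate obstacle whose first azimuth angle falls outside the
    effective view angle is treated as a blind-area obstacle."""
    lo, hi = effective_view_angle(fov_min, fov_max, margin)
    return not (lo <= first_azimuth <= hi)
```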
According to an embodiment of the present invention, the second determining module 504 includes:
a fifth determining submodule for sequentially determining a second candidate obstacle from among the plurality of obstacles in the first obstacle list;
a sixth determining submodule for determining position information of the second candidate obstacle;
a seventh determining sub-module for determining a second azimuth angle of the second candidate obstacle with the sensor and the first distance information based on the position information;
an eighth determination submodule for determining a target obstacle from the second obstacle list based on the second azimuth;
a ninth determining submodule for determining second distance information between the target obstacle and the sensor;
and the blind area obstacle determination submodule is used for determining that the target obstacle is a blind area obstacle under the condition that the second distance information is larger than the first distance information.
According to an embodiment of the present invention, the eighth determination submodule includes:
a first visual field range determining unit configured to determine a first visual field range based on the second azimuth;
and a target obstacle determination unit configured to determine whether or not there is a target obstacle within the first field of view from the second obstacle list.
According to an embodiment of the present invention, the second determining module 504 includes:
A position record determining unit for determining whether position recording has been performed on the blind area obstacle;
and the position recording unit is used for ignoring the blind area obstacle if yes, and recording the position information of the blind area obstacle if not.
According to an embodiment of the present invention, the first acquisition module 501 includes:
an initial list acquisition unit configured to acquire an initial obstacle list detected by the sensor at a first time;
and the filtering unit is used for filtering the obstacles which are at a set distance from the sensor in the initial obstacle list to obtain a first obstacle list.
According to an embodiment of the present invention, the obstacle detecting apparatus 500 further includes:
and a coordinate conversion unit configured to convert coordinates of the obstacles recorded in the first obstacle list and the second obstacle list from map coordinates to coordinates relative to the self-moving device, based on pose information of the self-moving device, so that the position information of the obstacles can be determined from the relative coordinates when detecting obstacles.
The obstacle detection device of fig. 5 may perform the obstacle detection method of the embodiment shown in fig. 1, and its implementation principle and technical effects will not be described again. The specific manner in which the respective modules, units, and operations of the obstacle detecting apparatus in the above embodiments are performed has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 6 schematically shows a block diagram of a path planning apparatus according to an embodiment of the present invention, the path planning apparatus being applied to a self-mobile device, the self-mobile device being configured with at least one sensor, as in fig. 6, the path planning apparatus 600 comprising:
a second acquisition module 601, configured to acquire sensor data of a sensor, and pre-recorded position information of a blind area obstacle, where the position information of the blind area obstacle is determined by: acquiring a first obstacle list detected by the sensor at a first moment, and determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle, otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles;
the path planning module 602 is configured to plan a moving path of the self-mobile device based on the sensor data and the pre-recorded position information of the blind area obstacle.
The path planning apparatus shown in fig. 6 may perform the path planning method described in the embodiment shown in fig. 4, and its implementation principle and technical effects are not repeated. The specific manner in which the various modules and units perform operations in the path planning apparatus in the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
The embodiment of the invention also provides self-moving equipment, which comprises an equipment body, wherein the equipment body is provided with one or more sensors, a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions are used for being called and executed by the processing component to realize the obstacle detection method provided by the embodiment of the invention or realize the path planning method provided by the embodiment of the invention.
Of course, the self-moving device may also include other components, such as input/output interfaces, communication components, and the like. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, etc. The communication component is configured to facilitate wired or wireless communication between the self-moving device and other devices.
The embodiment of the invention also provides a computer readable storage medium which stores a computer program, and the computer program can realize the obstacle detection method or the path planning method provided by the embodiment of the invention when being executed by a computer.
The embodiment of the invention also provides a computer program product, which comprises a computer program, wherein the computer program can realize the obstacle detection method or the path planning method provided by the embodiment of the invention when being executed by a computer.
Wherein the processing components of the respective embodiments above may include one or more processors to execute computer instructions to perform all or part of the steps of the methods described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements for executing the methods described above.
The storage component is configured to store various types of data to support operation in the device. The memory component may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (12)
1. An obstacle detection method, applied to a self-moving device configured with at least one sensor, comprising:
acquiring a first obstacle list detected by the sensor at a first moment;
determining whether a blind area obstacle exists from the first obstacle list based on the view angle information of the sensor;
if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle, otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment;
And determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles.
2. The method of claim 1, wherein the determining whether a blind spot obstacle exists from the first obstacle list based on the view angle information of the sensor comprises:
sequentially determining a first candidate obstacle from a plurality of obstacles in the first obstacle list;
determining location information of the first candidate obstacle;
determining a first azimuth angle of the first candidate obstacle and the sensor based on the location information;
based on the first azimuth and the view angle information, determining whether the first candidate obstacle is a blind area obstacle.
3. The method of claim 2, wherein the determining whether the first candidate obstacle is a blind obstacle based on the first azimuth angle and the view angle information comprises:
determining an effective view angle from the view angle information;
determining whether the first azimuth angle is included in the effective field of view angle;
and determining that the first candidate obstacle is a blind area obstacle when the first azimuth angle is not included in the effective view angle.
4. A method according to claim 3, wherein said determining an effective field angle from said field angle information comprises:
determining a first field of view and a second field of view at a field of view boundary of the sensor;
and determining the effective visual field angle as the visual field angle except the first visual field range and the second visual field range in the visual field angle information.
5. The method of claim 1, wherein the determining blind zone obstacles in the first obstacle list based on the second obstacle list comprises:
sequentially determining a second candidate obstacle from a plurality of obstacles in the first obstacle list;
determining location information of the second candidate obstacle;
determining a second azimuth angle of the second candidate obstacle with the sensor and first distance information based on the position information;
determining a target obstacle from the second obstacle list based on the second azimuth;
determining second distance information of the target obstacle and the sensor;
and determining that the target obstacle is a blind area obstacle under the condition that the second distance information is larger than the first distance information.
6. The method of claim 5, wherein the determining a target obstacle from the second list of obstacles based on the second azimuth comprises:
determining a first field of view based on the second azimuth;
from the second list of obstacles, it is determined whether the target obstacle is present within the first field of view.
7. The method of claim 1, wherein the recording the location information of the blind spot obstacle comprises:
determining whether a position record has been made for the blind area obstacle;
if yes, ignoring the blind area obstacle, otherwise, recording the position information of the blind area obstacle.
8. The method of claim 1, wherein the obtaining a first list of obstacles detected by the sensor at a first time comprises:
acquiring an initial obstacle list detected by the sensor at a first moment;
and filtering the obstacles with set distance from the sensor in the initial obstacle list to obtain the first obstacle list.
9. The method according to claim 1, wherein the method further comprises:
And converting coordinates of the obstacles recorded in the first obstacle list and the second obstacle list from map coordinates to coordinates relative to the self-moving device, based on pose information of the self-moving device, so that position information of the obstacles can be determined from the relative coordinates when detecting obstacles.
10. A path planning method, applied to a self-mobile device, the self-mobile device being configured with at least one sensor, the method comprising:
acquiring sensor data of the sensor and pre-recorded position information of a blind area obstacle, wherein the position information of the blind area obstacle is determined by the following operations: acquiring a first obstacle list detected by the sensor at a first moment, and determining whether a blind area obstacle exists in the first obstacle list based on the view angle information of the sensor; if a blind area obstacle exists in the first obstacle list, recording position information of the blind area obstacle, otherwise, acquiring a second obstacle list detected by the sensor at a second moment, wherein the first moment is the moment immediately preceding the second moment; determining blind area obstacles in the first obstacle list based on the second obstacle list, and recording position information of the blind area obstacles;
And planning a moving path of the self-moving device based on the sensor data and the pre-recorded position information of the blind area obstacle.
11. The self-moving equipment is characterized by comprising an equipment body, wherein one or more sensors, a processing assembly and a storage assembly are arranged on the equipment body;
the storage component stores one or more computer instructions; the one or more computer instructions are configured to be invoked by the processing component to implement the obstacle detection method of any one of claims 1 to 9, or to implement the path planning method of claim 10.
12. A computer storage medium storing a computer program which, when executed by a computer, implements the obstacle detection method according to any one of claims 1 to 9, or implements the path planning method according to claim 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311212949.3A CN117330069A (en) | 2023-09-19 | 2023-09-19 | Obstacle detection method, path planning method and self-mobile device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311212949.3A CN117330069A (en) | 2023-09-19 | 2023-09-19 | Obstacle detection method, path planning method and self-mobile device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117330069A true CN117330069A (en) | 2024-01-02 |
Family
ID=89282149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311212949.3A Pending CN117330069A (en) | 2023-09-19 | 2023-09-19 | Obstacle detection method, path planning method and self-mobile device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117330069A (en) |
-
2023
- 2023-09-19 CN CN202311212949.3A patent/CN117330069A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12093050B2 (en) | Robot-assisted processing of a surface using a robot | |
US10127677B1 (en) | Using observations from one or more robots to generate a spatio-temporal model that defines pose values for a plurality of objects in an environment | |
US10006772B2 (en) | Map production method, mobile robot, and map production system | |
JP6672212B2 (en) | Information processing apparatus, vehicle, information processing method and program | |
KR101956447B1 (en) | Method and apparatus for position estimation of unmanned vehicle based on graph structure | |
EP3159123A1 (en) | Device for controlling driving of mobile robot having wide-angle cameras mounted thereon, and method therefor | |
US9129523B2 (en) | Method and system for obstacle detection for vehicles using planar sensor data | |
CN110850859B (en) | Robot and obstacle avoidance method and obstacle avoidance system thereof | |
US20200233061A1 (en) | Method and system for creating an inverse sensor model and method for detecting obstacles | |
CN111066064A (en) | Grid occupancy mapping using error range distribution | |
CN111258320A (en) | Robot obstacle avoidance method and device, robot and readable storage medium | |
WO2019100354A1 (en) | State sensing method and related apparatus | |
CN114153200A (en) | Trajectory prediction and self-moving equipment control method | |
CN113475977A (en) | Robot path planning method and device and robot | |
CN114779777A (en) | Sensor control method and device for self-moving robot, medium and robot | |
CN117330069A (en) | Obstacle detection method, path planning method and self-mobile device | |
EP4390313A1 (en) | Navigation method and self-propelled apparatus | |
WO2022091595A1 (en) | Object tracking device and object tracking method | |
CN113446971B (en) | Space recognition method, electronic device and non-transitory computer readable storage medium | |
CN115519586A (en) | Cliff detection method for robot, and storage medium | |
CN114777761A (en) | Cleaning machine and map construction method | |
Abd Rahman et al. | Tracking uncertain moving objects using dynamic track management in multiple hypothesis tracking | |
CN112214018A (en) | Robot path planning method and device | |
JP2021135540A (en) | Object tracking system, object tracking method, and object tracking program | |
US20240215788A1 (en) | Collided position determination method, computer-readable storage medium, and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||