CN116300880A - Visual obstacle avoidance method and system, electronic equipment and medium


Info

Publication number
CN116300880A
Authority
CN
China
Prior art keywords
grid
point cloud
cloud data
camera
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310064378.7A
Other languages
Chinese (zh)
Inventor
钟火炎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Century Electronics Co ltd
Original Assignee
Suzhou Century Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Century Electronics Co ltd
Priority to CN202310064378.7A
Publication of CN116300880A
Legal status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a visual obstacle avoidance method, a visual obstacle avoidance system, an electronic device, and a medium. The visual obstacle avoidance method comprises: determining the current marking condition of a grid map according to auxiliary point cloud data and current point cloud data acquired by a camera at the current sampling moment, wherein the camera is arranged on a target object, the grid map comprises a plurality of grids, the marking condition of the grid map comprises the labels of the grids, the labels comprise obstacle grids and free grids, and the auxiliary point cloud data lie outside the grid map; and controlling the target object to perform visual obstacle avoidance according to the current marking condition. According to the embodiments provided by the disclosure, the positions of obstacles can be refreshed in real time, so that the target object can perform visual obstacle avoidance according to the latest grid map, and the path planning of the target object is not affected by untimely updating of obstacle information.

Description

Visual obstacle avoidance method and system, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of automatic navigation, in particular to a visual obstacle avoidance method and system, electronic equipment and medium.
Background
Mobile robots are an important productive force in modern manufacturing, and their autonomous planning and navigation capability is key to improving efficiency in the industries they serve. Autonomous obstacle avoidance is an important component of a mobile robot's autonomous navigation and an important criterion for judging the robot's flexibility and safety. Autonomous obstacle avoidance relies on accurate identification and positioning of obstacles by the sensors carried on the mobile robot.
Existing schemes mostly adopt a single-line or multi-line lidar as the obstacle-detection sensor. A conventional single-line lidar can only identify obstacles in the plane at the radar's mounting height and cannot identify objects below or above that laser plane; a conventional multi-line lidar can detect obstacles at different heights but is expensive. In addition, existing schemes mostly use complex data structures to generate and eliminate observation points and then judge whether an obstacle has disappeared, which requires a large amount of computation, is slow to process, and easily causes losses when the robot cannot avoid an obstacle in time.
Disclosure of Invention
In view of the above, the present disclosure provides a visual obstacle avoidance method and system, an electronic device, and a medium, which can refresh the marking condition of a grid map and thereby facilitate updated path planning for a robot.
According to an aspect of the present disclosure, there is provided a visual obstacle avoidance method, including:
determining a current marking condition of the grid map according to auxiliary point cloud data and current point cloud data acquired by a camera at a current sampling moment, wherein the camera is arranged on a target object, the grid map comprises a plurality of grids, the marking condition of the grid map comprises labels of the grids, the labels comprise obstacle grids and free grids, and the auxiliary point cloud data is positioned outside the grid map; and
controlling the target object to perform visual obstacle avoidance according to the current marking condition.
In one possible implementation, in the case that the current sampling moment is the first sampling moment:
the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment comprises:
if the current point cloud data includes obstacle point cloud data, determining the position of an obstacle grid according to the obstacle point cloud data, and determining the position of a free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the obstacle grid and the position of the free grid;
and the controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises:
determining the travel route of the target object based on the free grids, so as to perform visual obstacle avoidance.
In a possible implementation manner, the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment further includes:
if the current point cloud data does not include obstacle point cloud data, determining the position of a free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the free grid;
and the controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises:
determining the travel route of the target object based on the free grids, so as to perform visual obstacle avoidance.
In one possible implementation, in the case that the current sampling moment is not the first sampling moment:
the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment comprises:
if the current point cloud data includes obstacle point cloud data, determining the position of an obstacle grid according to the obstacle point cloud data, and determining the position of a free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the obstacle grid and the position of the free grid;
and the controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises:
updating the travel route of the target object according to the current marking condition, so as to perform visual obstacle avoidance.
In a possible implementation manner, the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment further includes:
if the current point cloud data does not include obstacle point cloud data, determining the position of a free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the free grid;
and the controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises:
updating the travel route of the target object according to the current marking condition, so as to perform visual obstacle avoidance.
In one possible implementation manner, the determining the position of the obstacle grid according to the obstacle point cloud data, and determining the position of the free grid according to the camera and the auxiliary point cloud data, includes:
projecting the obstacle point cloud data onto the grid map to obtain the area occupied by the obstacle point cloud data, and marking the grids corresponding to that area as obstacle grids; and for any data point in the auxiliary point cloud data:
taking the camera grid as the initial point, traversing the grids along the direction of that data point, stopping when the first obstacle grid is traversed, and marking the camera grid and all grids between the camera grid and the first obstacle grid as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
In one possible implementation, the determining the position of the free grid according to the camera and the auxiliary point cloud data includes:
for any data point in the auxiliary point cloud data:
taking the camera grid as the initial point, traversing the grids along the direction of that data point until the boundary grid of the grid map is reached, and marking the camera grid, the boundary grid and all grids between them as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
In one possible implementation, the visual obstacle avoidance method further includes:
preprocessing the point cloud data acquired by the camera, wherein the preprocessing includes one or more of voxel filtering, pass-through filtering and radius filtering; and/or
determining the obstacle point cloud data by NARF key point extraction.
According to another aspect of the present disclosure, there is provided a visual obstacle avoidance system comprising:
the first processing module is configured to determine the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by a camera at the current sampling moment, and the camera is arranged on a target object;
The second processing module is configured to control the target object to perform visual obstacle avoidance according to the current marking condition;
the grid map comprises a plurality of grids, the marking condition of the grid map comprises a label of the grids, the label comprises an obstacle grid and a free grid, and the auxiliary point cloud data is located outside the grid map.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; the processor is configured to implement the visual obstacle avoidance method when executing the instructions stored in the memory.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described visual obstacle avoidance method.
According to the embodiments provided by the disclosure, the camera continuously acquires new point cloud data, so all obstacles within the camera's field of view can be captured, and the current marking condition of the grid map can be determined quickly and accurately with the help of a small amount of auxiliary point cloud data. The overall scheme requires little computation, which improves the obstacle avoidance speed of the target object: the positions of obstacles, and with them the grid map, can be refreshed in real time, so that the target object can perform visual obstacle avoidance according to the latest grid map and its path planning is not affected by untimely updating of obstacle information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flow chart of a visual obstacle avoidance method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 8 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 9 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 10 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 11 shows a schematic diagram of a grid map according to an embodiment of the present disclosure.
Fig. 12 shows a flowchart of a visual obstacle avoidance method according to an embodiment of the disclosure.
Fig. 13 illustrates a block diagram of a visual obstacle avoidance system according to an embodiment of the disclosure.
Fig. 14 shows a block diagram of an apparatus for performing a visual obstacle avoidance method, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In order to facilitate understanding of the technical solutions provided by the embodiments of the present disclosure by those skilled in the art, a technical environment in which the technical solutions are implemented is described below.
Mobile robots are an important productive force in modern manufacturing, and their autonomous planning and navigation capability is key to improving efficiency in the industries they serve. Autonomous obstacle avoidance is an important component of a mobile robot's autonomous navigation and an important criterion for judging the robot's flexibility and safety, and it relies on accurate identification and positioning of obstacles by the sensors carried on the mobile robot. Existing schemes mostly adopt a single-line or multi-line lidar as the obstacle-detection sensor. A conventional single-line lidar can only identify obstacles in the plane at the radar's mounting height and cannot identify objects below or above that laser plane; a conventional multi-line lidar can detect obstacles at different heights but is expensive. In addition, existing schemes mostly use complex data structures to generate and eliminate observation points and then judge whether an obstacle has disappeared, which requires a large amount of computation, is slow to process, and easily causes losses when the robot cannot avoid an obstacle in time. Therefore, researching an obstacle avoidance method that identifies and locates obstacles quickly and accurately is of great importance.
According to the embodiments provided by the disclosure, the camera continuously acquires new point cloud data, so all obstacles within the camera's field of view can be captured, and the current marking condition of the grid map can be determined quickly and accurately with the help of a small amount of auxiliary point cloud data. The overall scheme requires little computation, which improves the obstacle avoidance speed of the target object: the positions of obstacles, and with them the grid map, can be refreshed in real time, so that the target object can perform visual obstacle avoidance according to the latest grid map and its path planning is not affected by untimely updating of obstacle information.
The embodiment of the disclosure proposes a visual obstacle avoidance method, as shown in fig. 1, according to a flowchart of the visual obstacle avoidance method of the embodiment of the disclosure, the visual obstacle avoidance method may include:
s100, determining the current marking condition of a grid map according to auxiliary point cloud data and current point cloud data acquired by a camera at the current sampling moment, wherein the camera is arranged on a target object; the grid map comprises a plurality of grids, the marking condition of the grid map comprises a label of the grids, the label comprises an obstacle grid and a free grid, and the auxiliary point cloud data is positioned outside the grid map;
and S200, controlling the target object to perform visual obstacle avoidance according to the current marking condition.
The camera may be a 3D depth camera based on the structured-light principle. A 3D depth camera generally uses multiple striped gratings: the gratings are projected onto the surface of the measured object in time sequence by a grating projection module, the gratings on the object surface are photographed by a binocular camera, decoding is performed based on a preset encoding rule, and binocular disparity matching is performed, thereby obtaining a high-precision 3D point cloud. Obtaining point cloud data with a 3D depth camera is low in cost and satisfies the recognition of obstacles in three-dimensional space. Sensor data from such a camera is generally refreshed at a frequency of 15 Hz and the (two-dimensional) grid map at a frequency of 10 Hz, so the map is refreshed in real time.
As shown in fig. 2, a schematic diagram of a grid map according to an embodiment of the present disclosure, the grid map may include a plurality of grids (the grid map in fig. 2 includes 10×10 grids). The grid occupied by the projection of the camera's center (hereinafter the camera center) onto the grid map is the camera grid. The travel route of the target object can be planned according to the marking condition of the grid map, which is related to the labels of the grids. The labels may include obstacle grids and free grids: an obstacle grid is a grid occupied by the projection onto the grid map of an obstacle point cloud (determined from the 3D point cloud acquired by the camera, see below), and free grids are the grids in the grid map other than the obstacle grids. The travel route of the target object can be planned according to the positions of the free grids, so that collisions with obstacles on the route are avoided. Setting the label of any grid is achieved with the help of auxiliary point cloud data. The auxiliary point cloud may be set facing the camera, and its distance to the target object needs to be greater than the obstacle avoidance distance of the target object; that is, when the auxiliary point cloud is projected onto the grid map, its projection (for simplicity, referred to below as the auxiliary point cloud data) lies outside the grid map. As shown in fig. 2, the auxiliary point cloud projects onto the grid map as a row of data points. Since only this projection is used in subsequent calculation, the arrangement of the auxiliary point cloud in 3D space is not specifically limited, and the number of auxiliary points can be set according to actual requirements, as long as the lines connecting the projected data points to the camera cover all grids within the camera field angle α; the number of data points may be set, for example, according to the resolution of the grid map. This also ensures that the auxiliary point cloud is not identified as an obstacle point cloud.
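A minimal sketch of how such auxiliary data points could be generated is given below; the function name and parameters (make_auxiliary_points, heading, radius) are hypothetical, since the disclosure does not fix an implementation. Points are placed on an arc outside the grid map so that rays from the camera through them sweep the whole field angle α at a spacing tied to the grid resolution:

```python
import numpy as np

# Hypothetical sketch: auxiliary data points on an arc facing the camera,
# outside the grid map, dense enough that adjacent rays stay within one cell.
def make_auxiliary_points(camera_xy, heading, alpha, radius, resolution):
    # One ray per half grid of arc length.
    n_rays = int(np.ceil(alpha * radius / (0.5 * resolution))) + 1
    angles = np.linspace(heading - alpha / 2.0, heading + alpha / 2.0, n_rays)
    arc = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.asarray(camera_xy) + radius * arc
```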
In this embodiment, the 3D depth camera may be arranged on a target object (for example, a mobile robot), and the current marking condition of the grid map, that is, the respective positions of the obstacle grids and free grids, may be determined from the auxiliary point cloud data and the current point cloud data acquired by the camera, so that the path of the target object can be planned according to the positions of the free grids. Processing the auxiliary point cloud data together with the real-time point cloud data acquired by the 3D depth camera ensures that the mobile robot achieves safe obstacle avoidance in complex environments.
According to the embodiments provided by the disclosure, the camera continuously acquires new point cloud data, so all obstacles within the camera's field of view can be captured, and the current marking condition of the grid map can be determined quickly and accurately with the help of a small amount of auxiliary point cloud data. The overall scheme requires little computation, which improves the obstacle avoidance speed of the target object: the positions of obstacles, and with them the grid map, can be refreshed in real time, so that the target object can perform visual obstacle avoidance according to the latest grid map and its path planning is not affected by untimely updating of obstacle information.
In one possible implementation, in the case that the current sampling moment is the first sampling moment:
in S100, determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment may include:
if the current point cloud data includes obstacle point cloud data, determining the position of an obstacle grid according to the obstacle point cloud data, and determining the position of a free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the obstacle grid and the position of the free grid;
thus, in S200, controlling the target object to perform visual obstacle avoidance according to the current marking condition may include:
determining the travel route of the target object based on the free grids, so as to perform visual obstacle avoidance.
If the current point cloud data includes obstacle point cloud data, determining the position of the obstacle grid according to the obstacle point cloud data and determining the position of the free grid according to the camera and the auxiliary point cloud data may include:
if obstacle point cloud data is obtained after processing the current point cloud data, the obstacle point cloud data may be projected onto the grid map to obtain the area it occupies (for simplicity, this projected area is also referred to as the obstacle point cloud data), and the grids corresponding to that area are labeled as obstacle grids; then, for any data point in the auxiliary point cloud data:
taking the camera grid as the initial point, the grids are traversed along the direction of that data point until the first obstacle grid is reached, and all grids between the camera grid and the first obstacle grid are marked as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
The above procedure will now be described in detail with reference to fig. 3 to 6:
the grid map in fig. 3 includes 10×10 grids, and the position of each grid can be determined by using the transverse coordinates and the longitudinal coordinates, where the transverse coordinates are the natural numbers 1, 2, … …, and 10, the longitudinal coordinates are the natural numbers 1, 2, … …, and 10, for example, the grid coordinate of the upper left corner is (1, 1) and the grid coordinate of the lower right corner is (10, 10) in the grid map in fig. 3, and the position of any grid in the grid map can be obtained in the same way. The grid positions in the rest of the drawings are the same as in fig. 3, and will not be described again.
Assume that obstacle point cloud data as shown in fig. 3 exists in the current point cloud data acquired within the camera field angle α, the obstacle point cloud data being obtained after the current point cloud data is processed (that is, the 11 black dots in fig. 3; note that the projection of the obstacle point cloud onto the grid map may include more than 11 dots, and the outermost data points of the obstacle point cloud data, namely these 11 black dots, stand in for the whole obstacle point cloud in subsequent processing, which significantly reduces the amount of calculation while still ensuring that the target object avoids the obstacle). The area occupied by the obstacle point cloud data (that is, the occupied grids) is (3, 1), (3, 2), (4, 1), (4, 2), (5, 1), (5, 2), (6, 1), (6, 2). According to this area, the corresponding grids can be marked as obstacle grids (the grids marked with an "X" in fig. 4), and the obstacle grids should be avoided in the subsequent path planning of the target object so as not to collide with the obstacle.
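As a rough illustration of this projection step, a sketch under assumed names (grid, origin, resolution are not fixed by the disclosure): the outermost obstacle points are binned into cells, and those cells are labeled as obstacle grids.

```python
import numpy as np

OBSTACLE, FREE = 1, 0  # assumed cell labels

# Hypothetical sketch: project obstacle points (x, y in map coordinates) onto
# the grid and mark the occupied cells as obstacle grids.
def mark_obstacles(grid, points_xy, origin, resolution):
    idx = np.floor((points_xy - origin) / resolution).astype(int)
    h, w = grid.shape
    inside = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
    grid[idx[inside, 1], idx[inside, 0]] = OBSTACLE  # row from y, column from x
    return grid
```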
For any data point in the auxiliary point cloud data: the camera grid is taken as the initial point, and the grids are traversed along the direction of that data point until the first obstacle grid is reached. The traversal can be realized by casting straight lines: taking the camera grid as the starting point, straight lines are cast at a fixed angular step; a cast line stops when it meets the first obstacle grid on its path, and the camera grid and the grids the line passes through are set as free grids. As shown in fig. 3, the straight line cast from the camera center P is connected to the data point Q1 in the auxiliary point cloud data; since there is no obstacle grid on this line, all grids occupied by the line segment PQ1 can be set as free grids (the grids marked with "O" in fig. 5). Proceeding in the same way for the other data points gives the marking condition of the grid map within the camera field angle α, namely fig. 6. The travel route of the target object is then determined based on the free grids in the grid map (fig. 6), so as to realize visual obstacle avoidance.
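The traversal itself can be sketched as follows. This is a minimal illustration assuming a metric map with known origin and resolution, not the disclosure's definitive implementation; the same routine also covers the no-obstacle case described below, where the walk only stops at the map boundary.

```python
import numpy as np

OBSTACLE, FREE = 1, 0  # assumed cell labels

# Hypothetical sketch: walk from the camera grid towards one auxiliary data
# point, marking every visited cell free; stop at the first obstacle grid, or
# at the map boundary if no obstacle lies on the ray.
def clear_ray(grid, camera_xy, aux_xy, origin, resolution):
    direction = (aux_xy - camera_xy) / np.linalg.norm(aux_xy - camera_xy)
    step = 0.5 * resolution          # half a cell per step, fine enough here
    h, w = grid.shape
    pos = np.asarray(camera_xy, dtype=float).copy()
    while True:
        col, row = np.floor((pos - origin) / resolution).astype(int)
        if not (0 <= row < h and 0 <= col < w):
            break                    # boundary reached: ray leaves the map
        if grid[row, col] == OBSTACLE:
            break                    # first obstacle grid on the ray: stop
        grid[row, col] = FREE        # camera grid and cells in between
        pos += step * direction
    return grid
```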
According to the embodiment provided by the disclosure, by traversing the grids with the help of the auxiliary point cloud data, the obstacle grids and free grids can be determined using less point cloud data, so the processing speed is high while accuracy is ensured.
In addition, in the case where the current sampling time is the first sampling time, the initial marking of the grid map may be that all grids are free grids, but may be freely set according to the actual situation, which does not limit the protection scope of the present disclosure.
In one possible implementation, in the case that the current sampling moment is the first sampling moment:
in S100, determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling time may further include:
if the current point cloud data does not include obstacle point cloud data, determining the position of the free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the free grid;
thus, in S200, controlling the target object to perform visual obstacle avoidance according to the current marking condition may include:
determining the travel route of the target object based on the free grids, so as to perform visual obstacle avoidance.
If the current point cloud data does not include obstacle point cloud data, determining the position of the free grid according to the camera and the auxiliary point cloud data may include:
for any data point in the auxiliary point cloud data: taking the camera grid as the initial point, traversing the grids along the direction of that data point until the boundary grid of the grid map is reached, and marking the camera grid, the boundary grid and all grids between them as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
The above procedure will now be described in detail with reference to fig. 7 and 8:
assuming that no obstacle point cloud data exists in the current point cloud data acquired within the camera view angle alpha, determining the position of a free grid according to the camera and the auxiliary point cloud data, and for any data point in the auxiliary point cloud data: and traversing the grid along the direction of the data point by taking the camera grid as an initial point until traversing to the boundary grid of the grid map. The method of traversing the grids can be realized by launching straight lines, taking the camera grid as a starting point, launching the straight lines according to a fixed angle, and setting the camera grid and all grids occupied by the straight lines as free grids if the launched straight lines are connected to the data points. As shown in fig. 7, a straight line emitted from the camera center P is connected to the data point Q2 in the auxiliary point cloud data, all the grids occupied by the line segment PQ2 can be set as free grids (grids marked with "O" in fig. 8), and the marking condition of the grid map within the camera view angle α, that is, fig. 9, can be obtained. And then determining the path line of the target object based on the free grid in the grid map (such as fig. 9) so as to realize visual obstacle avoidance.
According to the embodiment provided by the disclosure, by traversing the grids with the help of the auxiliary point cloud data, the free grids can be determined using less point cloud data, so the processing speed is high while accuracy is ensured.
In one possible implementation, in the case that the current sampling moment is not the first sampling moment:
in S100, determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment may include:
if the current point cloud data includes obstacle point cloud data, determining the position of an obstacle grid according to the obstacle point cloud data, and determining the position of a free grid according to the camera and the auxiliary point cloud data; and
determining the current marking condition according to the position of the obstacle grid and the position of the free grid;
thus, in S200, controlling the target object to perform visual obstacle avoidance according to the current marking condition may include:
updating the travel route of the target object according to the current marking condition, so as to perform visual obstacle avoidance.
If the current point cloud data includes obstacle point cloud data, determining the position of the obstacle grid according to the obstacle point cloud data and determining the position of the free grid according to the camera and the auxiliary point cloud data may include:
if obstacle point cloud data is obtained after processing the current point cloud data, the obstacle point cloud data may be projected onto the grid map to obtain the area it occupies, and the grids corresponding to that area are labeled as obstacle grids; then, for any data point in the auxiliary point cloud data:
taking the camera grid as the initial point, the grids are traversed along the direction of that data point until the first obstacle grid is reached, and all grids between the camera grid and the first obstacle grid are marked as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
The process of determining the respective positions of the obstacle grids and free grids is described in detail above and is not repeated here.
Assume that at the last sampling moment the last marking condition of the grid map is fig. 9, that is, all grids within the camera field angle α were free grids, and that the obstacle point cloud data shown in fig. 3 exists in the current point cloud data acquired by the camera at the current sampling moment; this indicates that an obstacle has appeared in front of the target object and obstacle avoidance is needed. By determining the respective positions of the obstacle grids and free grids at the current sampling moment (see above for details), the current marking condition of the grid map, namely fig. 6, is obtained, and the travel route of the target object is then determined based on the free grids in the new grid map (fig. 6) so as to perform visual obstacle avoidance.
Assume instead that at the last sampling moment the last marking condition of the grid map is fig. 6, that is, an obstacle was present within the camera field angle α, and that the obstacle point cloud data shown in fig. 10 (the 5 black dots in fig. 10) exists in the current point cloud data acquired at the current sampling moment. This indicates that the obstacle located in front of the target object at the previous sampling moment (the 11 black dots in fig. 3) is no longer there; the area occupied by the obstacle point cloud data acquired by the camera (that is, the occupied grids) has become (5, 3), (6, 3), (5, 4), (6, 4), and the grid map needs to be refreshed to avoid a collision between the target object and the obstacle. By determining the respective positions of the obstacle grids and free grids at the current sampling moment (see above for details), the current marking condition of the grid map, namely fig. 11, is obtained: the grids at (3, 1), (3, 2), (4, 1), (4, 2), (5, 1), (5, 2), (6, 1), (6, 2) in fig. 11 change from obstacle grids to free grids, and the grids at (5, 3), (6, 3), (5, 4), (6, 4) change from free grids to obstacle grids, which means the grid map has been successfully refreshed. The target object can determine its route based on the free grids in the new grid map (fig. 11) to achieve visual obstacle avoidance.
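One possible per-frame refresh consistent with this behaviour, reusing the hypothetical mark_obstacles and clear_ray helpers sketched earlier. Casting the rays against a scratch map that holds only the current frame's obstacles is an assumption of this sketch, not something the disclosure states; it is one way to make sure a stale obstacle cell from the previous frame cannot stop a ray, so that the old obstacle cells revert to free grids as in fig. 11.

```python
import numpy as np

OBSTACLE, FREE, UNKNOWN = 1, 0, -1  # assumed cell labels

def refresh(prev_grid, obstacle_xy, camera_xy, aux_points, origin, resolution):
    scratch = np.full_like(prev_grid, UNKNOWN)   # this frame's observations only
    if obstacle_xy is not None and len(obstacle_xy) > 0:
        mark_obstacles(scratch, obstacle_xy, origin, resolution)
    for q in aux_points:
        clear_ray(scratch, camera_xy, q, origin, resolution)
    seen = scratch != UNKNOWN                    # cells observed this frame
    grid = prev_grid.copy()
    grid[seen] = scratch[seen]                   # refresh only what was seen
    return grid
```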
According to the embodiment provided by the disclosure, the grid map can be refreshed in real time by means of the auxiliary point cloud data and the acquired current point cloud data, the position of the obstacle is updated, the target object can conveniently plan the travel route in real time, collision with the obstacle is avoided, and the method is efficient and quick.
In one possible implementation, in the case that the current sampling moment is not the first sampling moment:
in S100, determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling time may further include:
if the current point cloud data does not comprise the obstacle point cloud data, determining the position of the free grid according to the camera and the auxiliary point cloud data;
determining the current marking condition according to the position of the free grid;
thus, in S200, controlling the target object to perform visual obstacle avoidance according to the current marking condition includes:
updating the travel route of the target object according to the current marking condition, so as to perform visual obstacle avoidance.
If the current point cloud data does not include obstacle point cloud data, determining the position of the free grid according to the camera and the auxiliary point cloud data may include:
for any data point in the auxiliary point cloud data: taking the camera grid as the initial point, traversing the grids along the direction of that data point until the boundary grid of the grid map is reached, and marking the camera grid, the boundary grid and all grids between them as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
The process of determining the positions of the free grids is described in detail above and is not repeated here.
Assume that at the last sampling moment the last marking condition of the grid map is fig. 9, that is, all grids within the camera field angle α were free grids, and that no obstacle point cloud data exists in the current point cloud data acquired by the camera at the current sampling moment; this indicates that there is no obstacle in front of the target object. By determining the positions of the free grids at the current sampling moment (see above for details), the current marking condition of the grid map is still fig. 9, and the path of the target object can continue to be determined based on the free grids in the grid map (fig. 9) so as to perform visual obstacle avoidance.
Assume instead that at the last sampling moment the last marking condition of the grid map is fig. 11, that is, an obstacle was present within the camera field angle α, and that no obstacle point cloud data exists in the current point cloud data acquired at the current sampling moment; this means the obstacle originally in front of the target object is gone and the grid map needs to be updated. By determining the positions of the free grids at the current sampling moment (see above for details), the current marking condition of the grid map, namely fig. 9, is obtained; the grids at (5, 3), (6, 3), (5, 4), (6, 4) in the grid map of the previous sampling moment (fig. 11) change from obstacle grids to free grids, which means the grid map has been successfully refreshed, and the target object can determine its travel route based on the free grids in the new grid map (fig. 9) to realize visual obstacle avoidance.
According to the embodiment provided by the disclosure, the grid map can be refreshed in real time by means of the auxiliary point cloud data and the acquired current point cloud data, the position of the obstacle is updated, the target object can conveniently plan the travel route in real time, collision with the obstacle is avoided, and the method is efficient and quick.
In one possible implementation, the visual obstacle avoidance method may further include: preprocessing the point cloud data acquired by the camera, wherein the preprocessing includes one or more of voxel filtering, pass-through filtering and radius filtering; and/or determining the obstacle point cloud data by NARF key point extraction.
As shown in fig. 12, a flowchart of the visual obstacle avoidance method according to an embodiment of the disclosure, obstacle point cloud data may be determined by:
S1, downsampling the original point cloud data acquired by the camera through voxel filtering;
S2, removing, through pass-through filtering and according to the installation position of the camera, the ground and the data below it within the camera's field angle;
S3, processing noise to obtain the obstacle point cloud data.
Wherein S1 may include:
S11, performing three-dimensional rasterization (voxelization) on the original point cloud data, dividing the whole point cloud into small cubes at a preset resolution;
S12, calculating the centroid of the points in each small cube and replacing the coordinates of all points within the cube with the coordinates of that centroid, which filters out some noise points and a large number of points while preserving the shape characteristics of the original point cloud.
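A minimal numpy sketch of S11 and S12 follows; the leaf size is an assumed value, as the disclosure only speaks of a preset resolution.

```python
import numpy as np

# Hypothetical sketch: bin points into cubes of side `leaf` and replace each
# occupied cube by the centroid of the points it contains.
def voxel_downsample(points, leaf=0.02):
    keys = np.floor(points / leaf).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((len(counts), points.shape[1]))
    np.add.at(sums, inverse, points)     # sum the points in each voxel
    return sums / counts[:, None]        # centroid per voxel
```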
The embodiment provided by the disclosure uses the voxel filter to reduce the number of points, and replacing all points in a voxel with a point close to the voxel's center maintains the macroscopic geometric shape more accurately.
S2 may include: traversing the downsampled point cloud data through pass-through filtering and, in the camera coordinate system, removing all point cloud data below a preset height above the ground or above the robot's height. In this embodiment, only obstacles more than 5 cm above the ground and below the robot height may be detected.
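A sketch of this pass-through step, assuming the cloud has already been rotated into a ground-aligned frame using the camera's mounting pose; the 5 cm lower bound comes from the text, while the robot height used as the upper bound is an assumed value.

```python
import numpy as np

# Hypothetical sketch: keep only points inside the height band of interest.
def passthrough_z(points, z_min=0.05, z_max=1.2):   # z_max: assumed robot height
    z = points[:, 2]
    return points[(z >= z_min) & (z <= z_max)]
```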
The embodiment provided by the disclosure removes irrelevant data by pass-through filtering, which yields more accurate calculation results.
S3 may include:
S31, using radius filtering to process noise points in the environment;
S32, extracting NARF key points to obtain representative and descriptive obstacle point cloud data.
Taking a factory environment as an example, noise arises mainly because highly reflective objects, such as epoxy floors and metal reflective objects, reflect the point cloud to positions with incorrect heights. In S31, for each point p_i in the point cloud, a neighborhood with radius r may be determined; if the number N of points within that neighborhood satisfies N < N_threshold, p_i is considered noise and removed. According to the embodiment provided by the disclosure, outlier points are removed by radius filtering, which effectively removes the suspended isolated points or invalid points present in the original point cloud.
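A sketch of S31 using a k-d tree for the neighbourhood query; the default radius and threshold are assumptions, since the disclosure only names r and N_threshold.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical sketch: a point is kept only if at least n_threshold other
# points lie within radius r of it; isolated reflections are dropped.
def radius_filter(points, r=0.05, n_threshold=5):
    tree = cKDTree(points)
    neighbours = tree.query_ball_point(points, r)   # each list includes self
    counts = np.array([len(n) - 1 for n in neighbours])
    return points[counts >= n_threshold]
```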
NARF (Normal Aligned Radial Feature) key points were proposed for identifying objects from depth images; an important step in key point detection is reducing the search space during feature extraction and focusing on important structures. The NARF key point extraction method may include: traversing each depth image point and performing edge detection by searching for positions with depth discontinuities in the neighboring area; traversing each depth image point and, from the surface change in the neighboring area, determining a coefficient that measures the surface change together with the principal direction of that change; calculating an interest value from that principal direction, representing how much the direction differs from the other directions and how the surface changes at that position, i.e., how stable the point is; smoothing and filtering the interest values; and performing non-maximum suppression to find the final key points, namely the NARF key points.
In S32, each depth image point captured by the camera is traversed, and edge detection is performed by finding positions with depth discontinuities in the neighboring area, because a point on an edge is more likely to be a key point than other points. The depth map is represented as a range image, and each point contains 2D pixel coordinate information and 3D position information measured in the world reference frame. When two points a and b are similar in 2D pixel coordinates but their 3D positions are far apart in Euclidean distance, there is a high probability of an "edge" between a and b. To adapt to the sparsity of the point cloud, the criterion for judging whether the 3D Euclidean distance between points is "far" or "near" depends on the distance between a point and its surrounding points. For each point p_i in the range image, all neighboring points {n_1, …, n_i} within an s^2 range are selected, and the distance from each of their 3D positions to p_i is computed and recorded as

d_i = ||n_i − p_i||.

Sorting this distance set in increasing order gives

{d'_1 ≤ d'_2 ≤ … ≤ d'_i}.

Assume that M points in the neighbor set {n_1, …, n_i} lie in the same plane as p_i and select δ = d'_M as the key value for judging "far" and "near", where M and δ are preset thresholds. For each point, the probability of an edge point in the up, down, left and right directions is calculated. Taking the right side as an example, let p_{x,y} be the point at position (x, y) in the depth range image; the average 3D position of its right-hand neighbors, p_right, is calculated as

p_right = (1/m_p) · Σ_{i=1}^{m_p} p_{x+i,y},

where m_p is the number of points used to calculate the average 3D position, i.e., M, and the p_{x+i,y} are those M points to the right of p_{x,y}. The 3D distance d_right between p_{x,y} and p_right is then

d_right = ||p_{x,y} − p_right||.

Using the calculated 3D distance d_right and the selected key value δ, the score of the right side, s_right, is calculated as

s_right = max(0, 1 − δ/d_right),

where max() takes the larger of its arguments, so a value in [0, 1) is obtained; the higher the value, the larger the jump between the local distance set and the distance to the right-hand points, indicating a high probability that the point lies on an edge with its neighbor. In the final application, the points p_{x,y} with s_right greater than 0.8 are taken and non-maximum suppression (i.e., edge refinement) is performed, which yields the edge points of the obstacle.
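A sketch of the right-side score on an organized H×W×3 range image, following the formulas as reconstructed above; the function name and the assumption of NaN-free neighbours are mine, not the disclosure's.

```python
import numpy as np

# Hypothetical sketch: average the m_p right-hand neighbours of pixel (x, y),
# measure the 3D jump to that average, and score it against the key value delta.
def right_edge_score(cloud_xyz, x, y, m_p, delta):
    p = cloud_xyz[y, x]
    p_right = cloud_xyz[y, x + 1 : x + 1 + m_p].mean(axis=0)
    d_right = np.linalg.norm(p - p_right)
    if d_right == 0.0:
        return 0.0
    return max(0.0, 1.0 - delta / d_right)   # in [0, 1); > 0.8 marks an edge
```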
Obtaining the edge points provides an important, though not exclusive, basis for the key points, and the key point extraction process imposes further requirements: information on both boundaries and surface structure must be taken into account; positions must be chosen that can be detected reliably even when the object is observed from another angle; and stable regions must be chosen so that normals can be extracted. Through the process above, the direction of the boundary, and whether each point has a boundary above, below, to its left or to its right, can be determined.
For a curved surface composed of point clouds, the curvature at a given location is certainly a very important structural descriptor: the larger the curvature at a point, the more strongly the surface bends there. Normals are estimated on the local two-dimensional neighborhood of the obstacle edge points using PCA (principal component analysis), which compresses the data space and represents the characteristics of multivariate data intuitively in a low-dimensional space; neighborhood points whose 3D distance exceeds 2δ are ignored, giving the principal direction v and the curvature λ of each point.
For the edge points, the weight w is taken as 1 and v as the edge direction; for the remaining points, the weight is taken as

w = 1 − (1 − λ)^3,

and the direction is taken as the projection of v onto the plane perpendicular to the line connecting p_i and the origin. At this point every point in the cloud has a weight attribute and a direction attribute, and the weight and direction of each point are substituted into

I(p) = I_1(p) · I_2(p),

where I(p) is the score of point p as a key point and I_1(p), I_2(p) are its two components computed over the neighboring point set (the component formulas appear as figures in the original publication and are not reproduced here). In these formulas, i is the number of points in the point cloud; w_{n_i} and w_{n_j} are the weights of points in the neighboring point set {n_1, …, n_i}; n_i and n_j are drawn from that set; w_n is the weight corresponding to point n; σ is the standard deviation of the distances between the neighboring point set {n_1, …, n_i} and point p; α_{n_i} is the angle between point p and n_i, and α_{n_j} is the angle between point p and n_j. Maxima of the per-point key point scores are then selected, giving the feature points in the depth point cloud image, namely the obstacle point cloud.
The finally obtained obstacle point cloud is projected onto the two-dimensional grid map along the direction perpendicular to the ground to form an obstacle region, and the occupied grids are marked as obstacle grids; when the robot plans its route, it plans a path that bypasses the obstacle grids, and the positions of obstacles are refreshed in real time through the straight-line clearing algorithm, in which the grids crossed by the projection onto the two-dimensional grid map of the line between the point cloud position and the sensor position are marked as free grids (see above for the procedure of setting the positions of the free grids).
In one possible implementation, a plurality of cameras can be arranged on one target object so that the target object is monitored in all directions: no matter in which direction an obstacle appears, the target object can avoid it quickly and accurately, which facilitates planning a 360-degree route without dead angles and improves the obstacle avoidance capability of the target object.
The visual obstacle avoidance method based on a stereoscopic depth camera provided by the embodiments of the disclosure can describe the useful information of obstacles more accurately with less point cloud data, improves the obstacle avoidance processing speed, and ensures real-time refreshing of dynamic obstacles with a simple and effective straight-line updating algorithm, so that the robot's path planning is not affected by untimely updating of obstacle information.
The embodiment of the disclosure further provides a visual obstacle avoidance system, as shown in fig. 13, which is a block diagram of the visual obstacle avoidance system according to the embodiment of the disclosure, and the visual obstacle avoidance system may include:
the first processing module 1 is configured to determine a current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment, wherein the camera is arranged on the target object, the grid map comprises a plurality of grids, the marking condition of the grid map comprises a label of the grids, the label comprises an obstacle grid and a free grid, and the auxiliary point cloud data is positioned outside the grid map;
The second processing module 2 is configured to control the target object to perform visual obstacle avoidance according to the current marking condition.
According to the embodiments provided by the disclosure, the camera continuously acquires new point cloud data, so all obstacles within the camera's field of view can be captured, and the current marking condition of the grid map can be determined quickly and accurately with the help of a small amount of auxiliary point cloud data. The overall scheme requires little computation, which improves the obstacle avoidance speed of the target object: the positions of obstacles, and with them the grid map, can be refreshed in real time, so that the target object can perform visual obstacle avoidance according to the latest grid map and its path planning is not affected by untimely updating of obstacle information.
The embodiment of the visual obstacle avoidance system shares the same concept as the working process of the visual obstacle avoidance method above; the entire content of the method embodiment is incorporated into the system embodiment by reference and is not repeated.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; the processor is configured to implement the visual obstacle avoidance method when executing the instructions stored in the memory.
The embodiment of the electronic device shares the same concept as the working process of the visual obstacle avoidance method above; the entire content of the method embodiment is incorporated into the electronic device embodiment by reference and is not repeated.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions that when executed by a processor implement the above-described visual obstacle avoidance method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the computer readable storage medium is based on the same concept as the visual obstacle avoidance method of the foregoing embodiment; the entire content of the method embodiment is incorporated into the storage medium embodiment by reference and is not repeated here.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when executed in a processor of an electronic device, performs the above-described visual obstacle avoidance method.
The embodiment of the computer program product is based on the same concept as the visual obstacle avoidance method of the foregoing embodiment; the entire content of the method embodiment is incorporated into the computer program product embodiment by reference and is not repeated here.
Fig. 14 is a block diagram illustrating an apparatus 1900 for performing a visual obstacle avoidance method, according to an example embodiment. For example, the apparatus 1900 may be provided as a server or terminal device. Referring to fig. 14, the apparatus 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by the processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The apparatus 1900 may further comprise a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of apparatus 1900 to perform the above-described methods.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs) can be personalized with state information of the computer readable program instructions, and this electronic circuitry can execute the computer readable program instructions so as to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A method of visual obstacle avoidance comprising:
determining the current marking condition of the grid map according to auxiliary point cloud data and current point cloud data acquired by a camera at the current sampling moment, wherein the camera is arranged on a target object, the grid map comprises a plurality of grids, the marking condition of the grid map comprises labels of the grids, the labels comprise obstacle grids and free grids, and the auxiliary point cloud data is located outside the grid map;
and controlling the target object to perform visual obstacle avoidance according to the current marking condition.
2. The visual obstacle avoidance method of claim 1, wherein, if the current sampling moment is the first sampling moment:
the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment comprises the following steps:
if the current point cloud data includes obstacle point cloud data, determining a position of an obstacle grid according to the obstacle point cloud data, and determining a position of a free grid according to the camera and the auxiliary point cloud data,
determining the current marking condition according to the position of the barrier grid and the position of the free grid;
the step of controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises the following steps:
and determining the path of the target object based on the free grid so as to perform visual obstacle avoidance.
3. The visual obstacle avoidance method of claim 2, wherein the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment further comprises:
if the current point cloud data does not comprise obstacle point cloud data, determining the position of a free grid according to the camera and the auxiliary point cloud data;
determining the current marking condition according to the position of the free grid;
the step of controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises the following steps:
and determining the path of the target object based on the free grid so as to perform visual obstacle avoidance.
4. The visual obstacle avoidance method of claim 1, wherein, if the current sampling moment is not the first sampling moment:
the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment comprises the following steps:
if the current point cloud data comprises obstacle point cloud data, determining the position of an obstacle grid according to the obstacle point cloud data, and determining the position of a free grid according to the camera and the auxiliary point cloud data;
determining the current marking condition according to the position of the barrier grid and the position of the free grid;
the step of controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises the following steps:
and updating the path of the target object according to the current marking condition, so as to perform visual obstacle avoidance.
5. The visual obstacle avoidance method of claim 4, wherein the determining the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment further comprises:
if the current point cloud data does not comprise obstacle point cloud data, determining the position of a free grid according to the camera and the auxiliary point cloud data;
determining the current marking condition according to the position of the free grid;
the step of controlling the target object to perform visual obstacle avoidance according to the current marking condition comprises the following steps:
and updating the path of the target object according to the current marking condition, so as to perform visual obstacle avoidance.
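Claims 2 to 5 together define a four-way branch: first sampling moment or not, and obstacle point cloud data present or not. A minimal C++ sketch of that dispatch, reusing the illustrative Cloud and Marking types from the sketch above and declaring hypothetical helpers (declarations only; merely the branching structure mirrors the claims), might look like this:

// Hypothetical helpers; shown for structure only.
bool hasObstaclePoints(const Cloud& current);
void markObstacleGrids(Marking& marking, const Cloud& current);
void markFreeGrids(Marking& marking, const Cloud& auxiliary);
void planPath(const Marking& marking);    // claims 2-3: first sampling moment
void updatePath(const Marking& marking);  // claims 4-5: later sampling moments

void handleSample(bool firstSample, const Cloud& current,
                  const Cloud& auxiliary, Marking& marking) {
    if (hasObstaclePoints(current))      // claims 2 and 4
        markObstacleGrids(marking, current);
    markFreeGrids(marking, auxiliary);   // common to all four claims
    if (firstSample)
        planPath(marking);               // determine the path (claims 2-3)
    else
        updatePath(marking);             // update the path (claims 4-5)
}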
6. The visual obstacle avoidance method of any one of claims 2 to 5, wherein the determining the position of an obstacle grid according to the obstacle point cloud data and determining the position of a free grid according to the camera and the auxiliary point cloud data comprises:
projecting the obstacle point cloud data onto the grid map to obtain an area occupied by the obstacle point cloud data, marking a grid corresponding to the area as an obstacle grid, and for any data point in the auxiliary point cloud data:
and traversing the grids along the direction of the data point by taking the camera grid as a starting point, stopping when the first obstacle grid is reached, and marking the camera grid and all grids between the camera grid and the first obstacle grid as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
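A minimal sketch of the traversal in claim 6, assuming a simple DDA-style line walk (the claim does not name a specific line-traversal algorithm) and a small illustrative GridMap type:

#include <cmath>
#include <vector>

// Illustrative grid labels and map; names are not taken from the patent.
enum class Cell { Unknown, Free, Obstacle };

struct GridMap {
    int width = 0, height = 0;
    std::vector<Cell> cells;  // row-major, width * height entries
    Cell& at(int x, int y) { return cells[y * width + x]; }
    bool inside(int x, int y) const {
        return x >= 0 && x < width && y >= 0 && y < height;
    }
};

// Walk from the camera grid toward one auxiliary data point (direction given
// in map-plane coordinates), marking every visited grid as free. The walk
// stops at the first obstacle grid (claim 6) or, if no obstacle is hit,
// leaves the map after marking the boundary grid (claim 7).
void markFreeAlongRay(GridMap& map, int camX, int camY, double dirX, double dirY) {
    const double len = std::hypot(dirX, dirY);
    if (len == 0.0) return;
    // Coarse one-cell steps; a supercover or Bresenham walk would visit
    // every crossed cell exactly.
    const double stepX = dirX / len, stepY = dirY / len;

    double x = camX + 0.5, y = camY + 0.5;  // start at the camera grid centre
    int gx = camX, gy = camY;
    while (map.inside(gx, gy)) {
        if (map.at(gx, gy) == Cell::Obstacle)  // first obstacle ends the walk
            break;
        map.at(gx, gy) = Cell::Free;  // camera grid and ray grids become free
        x += stepX; y += stepY;
        gx = static_cast<int>(std::floor(x));
        gy = static_cast<int>(std::floor(y));
    }
}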
7. The visual obstacle avoidance method of any one of claims 2 to 5, wherein the determining the position of the free grid according to the camera and the auxiliary point cloud data comprises:
for any one data point in the auxiliary point cloud data:
and traversing the grids along the direction of the data point by taking the camera grid as a starting point, stopping when a boundary grid of the grid map is reached, and marking the camera grid, the boundary grid and all grids between them as free grids, wherein the camera grid is obtained by projecting the center of the camera onto the grid map.
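Claim 7 covers the same walk when no obstacle grid lies on the ray: the loop above simply runs until it steps outside the map, so the boundary grid is the last grid marked free. Casting one such ray per auxiliary data point could then look like the following fragment, where auxiliaryPoints, camCellX/camCellY and camX/camY are illustrative names:

// One ray per auxiliary data point; the direction runs from the camera
// centre toward the (projected) data point.
for (const Point& p : auxiliaryPoints)
    markFreeAlongRay(map, camCellX, camCellY, p.x - camX, p.y - camY);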
8. The visual obstacle avoidance method of claim 1, further comprising:
preprocessing the point cloud data acquired by the camera, wherein the preprocessing includes one or more of voxel filtering, pass-through (direct) filtering and radius filtering; and/or,
determining the obstacle point cloud data by means of NARF (normal aligned radial feature) keypoint extraction.
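By way of illustration, the three filters named in claim 8 map naturally onto PCL's VoxelGrid, PassThrough and RadiusOutlierRemoval classes; the chain below is a sketch under that assumption, and the leaf size, field limits and radius values are illustrative defaults rather than values from the patent. NARF keypoint extraction (pcl::NarfKeypoint) would then run on a range image built from the filtered cloud.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/radius_outlier_removal.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Cloud::Ptr preprocess(const Cloud::Ptr& input) {
    Cloud::Ptr voxelized(new Cloud), clipped(new Cloud), cleaned(new Cloud);

    pcl::VoxelGrid<pcl::PointXYZ> voxel;  // voxel filtering: downsample
    voxel.setInputCloud(input);
    voxel.setLeafSize(0.05f, 0.05f, 0.05f);
    voxel.filter(*voxelized);

    pcl::PassThrough<pcl::PointXYZ> pass;  // pass-through filtering: clip by height
    pass.setInputCloud(voxelized);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(0.0f, 2.0f);
    pass.filter(*clipped);

    pcl::RadiusOutlierRemoval<pcl::PointXYZ> ror;  // radius filtering: drop sparse outliers
    ror.setInputCloud(clipped);
    ror.setRadiusSearch(0.1);
    ror.setMinNeighborsInRadius(5);
    ror.filter(*cleaned);

    return cleaned;
}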
9. A visual obstacle avoidance system, comprising:
the first processing module is configured to determine the current marking condition of the grid map according to the auxiliary point cloud data and the current point cloud data acquired by the camera at the current sampling moment, and the camera is arranged on the target object;
the second processing module is configured to control the target object to perform visual obstacle avoidance according to the current marking condition;
the grid map comprises a plurality of grids, the marking condition of the grid map comprises a label of the grids, the label comprises an obstacle grid and a free grid, and the auxiliary point cloud data is located outside the grid map.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the visual obstacle avoidance method of any one of claims 1 to 8 when executing the instructions stored by the memory.
11. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the visual obstacle avoidance method of any of claims 1 to 8.
CN202310064378.7A 2023-01-13 2023-01-13 Visual obstacle avoidance method and system, electronic equipment and medium Pending CN116300880A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310064378.7A CN116300880A (en) 2023-01-13 2023-01-13 Visual obstacle avoidance method and system, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN116300880A true CN116300880A (en) 2023-06-23

Family

ID=86784111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310064378.7A Pending CN116300880A (en) 2023-01-13 2023-01-13 Visual obstacle avoidance method and system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN116300880A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116661468A (en) * 2023-08-01 2023-08-29 深圳市普渡科技有限公司 Obstacle detection method, robot, and computer-readable storage medium
CN116661468B (en) * 2023-08-01 2024-04-12 深圳市普渡科技有限公司 Obstacle detection method, robot, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination