CN114419601A - Obstacle information determination method, obstacle information determination device, electronic device, and storage medium - Google Patents


Info

Publication number
CN114419601A
Authority
CN
China
Prior art keywords
coordinate
point cloud
point
current
adjacent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210091407.4A
Other languages
Chinese (zh)
Inventor
吴岗岗
杜建宇
王恒凯
曹天书
李超
李佳骏
赵逸群
王皓南
刘清宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202210091407.4A priority Critical patent/CN114419601A/en
Publication of CN114419601A publication Critical patent/CN114419601A/en
Priority to PCT/CN2022/141398 priority patent/WO2023142816A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the invention disclose an obstacle information determination method and device, an electronic device, and a storage medium. The method comprises: acquiring at least one piece of point cloud data within a preset range of a current vehicle, where the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as the origin; and acquiring at least one obstacle identification condition and identifying obstacle information of obstacles within the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate. The obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate. The accuracy of determining the obstacles around the vehicle is thereby improved, and with it the driving safety of the vehicle.

Description

Obstacle information determination method, obstacle information determination device, electronic device, and storage medium
Technical Field
The embodiment of the invention relates to the technical field of intelligent driving, in particular to a method and a device for determining obstacle information, electronic equipment and a storage medium.
Background
With the rapid development of the automobile industry and the continuous improvement of living standards, automobiles have rapidly entered ordinary households, and the safety of vehicle driving is receiving increasingly wide attention.
At present, during driving, a driver often directly observes the obstacles around the vehicle, or determines the obstacles on the left and right sides of the vehicle by observing the door mirrors, and the vehicle is then driven according to the driver's judgment. This manual determination is easily influenced by the driver's subjective experience or by environmental factors, so the driving safety of the vehicle is low.
Disclosure of Invention
The invention provides an obstacle information determination method, an obstacle information determination device, electronic equipment and a storage medium, which are used for improving the accuracy of determining obstacles around a vehicle so as to improve the safety of vehicle driving.
In a first aspect, an embodiment of the present invention provides an obstacle information determining method, where the method includes:
acquiring at least one point cloud data in a preset range of a current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
acquiring at least one obstacle identification condition, and identifying obstacle information of obstacles within the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; where the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
Optionally, the obstacle information includes the obstacle count, the boundary point count of each obstacle, and the boundary point coordinates of each obstacle.
Optionally, the acquiring at least one point cloud data in a preset range of the current vehicle includes:
scanning a preset range of the current vehicle based on a preset radar sensor, and acquiring initial point cloud coordinates in the scanning result;
and respectively carrying out data preprocessing on each initial point cloud coordinate to obtain each target point cloud coordinate in the preset range of the current vehicle.
Optionally, the performing data preprocessing on each initial point cloud coordinate to obtain each target point cloud coordinate within the preset range of the current vehicle includes:
acquiring a preset coordinate storage matrix, shifting each column of coordinate data in the preset coordinate storage matrix to the right by one column, and storing the corresponding column of coordinate data of the initial point cloud coordinates in the first column of the preset coordinate storage matrix to obtain a coordinate adjustment matrix;
sorting the coordinate data in the coordinate adjustment matrix according to a preset sorting rule to obtain a coordinate sorting matrix;
acquiring at least two columns of coordinate data in the coordinate sorting matrix, determining the row coordinate mean of each row across the at least two columns of coordinate data, and taking each row coordinate mean as the corresponding point cloud coordinate among the target point cloud coordinates.
Optionally, the obtaining at least one obstacle identification condition, and identifying obstacle information of an obstacle in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate includes:
for any point cloud coordinate, if the distance between the current point cloud coordinate and the current vehicle meets the detection distance condition, acquiring a current first adjacent distance between the current point cloud coordinate and a right adjacent point of the current point cloud coordinate;
and if the current first adjacent distance does not meet the first adjacent distance condition, incrementing the obstacle count, the boundary point number, and the boundary point count of the obstacle, and determining the boundary point coordinates based on the current point cloud coordinate.
Optionally, the obtaining at least one obstacle identification condition, and identifying obstacle information of an obstacle in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate, further includes:
if the current first adjacent distance meets the first adjacent distance condition, acquiring the current adjacent included angle between the current point cloud coordinate and its left and right adjacent point coordinates;
if the current adjacent included angle does not meet the adjacent included angle condition, matching the current boundary point number of the obstacle against a preset number threshold; if the current boundary point number is within the preset number threshold range, incrementing the boundary point number and determining the boundary point coordinates; and if the current boundary point number is not within the preset number threshold range, incrementing the obstacle count, the boundary point number, and the boundary point count, and determining the boundary point coordinates.
Optionally, the obtaining at least one obstacle identification condition, and identifying obstacle information of an obstacle in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate, further includes:
if the current adjacent included angle meets the adjacent included angle condition, acquiring a current second adjacent distance between the current point cloud coordinate and a left adjacent point of the current point cloud coordinate;
if the current second adjacent distance does not meet the second adjacent distance condition, incrementing the boundary point number;
and if the current second adjacent distance meets the second adjacent distance condition and identification of the current point cloud data is determined to be finished, traversing the remaining point cloud coordinates and storing the obstacle count, boundary point numbers, boundary point count, and boundary point coordinates of each identified obstacle.
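The identification cascade described above (detection distance, then first adjacent distance with the right neighbour, then the included angle, then the second adjacent distance with the left neighbour) can be sketched roughly as follows. All threshold values, the exact bookkeeping of the obstacle and boundary-point counters, and the treatment of the first and last points are illustrative assumptions; the embodiment does not specify concrete values or data structures.

```python
import math

# Hypothetical thresholds -- the patent gives no concrete values.
DETECT_DIST = 20.0              # detection distance condition (m)
ADJ_DIST_1 = 0.5                # first adjacent distance condition (m)
ADJ_ANGLE = math.radians(150)   # adjacent included angle condition
ADJ_DIST_2 = 0.3                # second adjacent distance condition (m)
MAX_BOUNDARY_PTS = 10           # preset number threshold for boundary points

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def included_angle(left, cur, right):
    """Angle at `cur` formed by its left and right neighbours."""
    a = (left[0] - cur[0], left[1] - cur[1])
    b = (right[0] - cur[0], right[1] - cur[1])
    na, nb = math.hypot(*a), math.hypot(*b)
    if na == 0 or nb == 0:
        return math.pi
    cosv = max(-1.0, min(1.0, (a[0] * b[0] + a[1] * b[1]) / (na * nb)))
    return math.acos(cosv)

def identify_obstacles(points):
    """Simplified sketch: group local-frame points into obstacle boundaries."""
    obstacles = []   # each obstacle is a list of boundary point coordinates
    current = None
    for i, p in enumerate(points):
        if math.hypot(p[0], p[1]) > DETECT_DIST:
            continue  # detection distance condition not met: ignore the point
        right = points[i + 1] if i + 1 < len(points) else None
        left = points[i - 1] if i > 0 else None
        if right is not None and dist(p, right) > ADJ_DIST_1:
            # large jump to the right neighbour: start a new obstacle here
            current = [p]
            obstacles.append(current)
            continue
        if left is not None and right is not None \
                and included_angle(left, p, right) < ADJ_ANGLE:
            # sharp corner: boundary point of the current or a new obstacle
            if current is not None and len(current) < MAX_BOUNDARY_PTS:
                current.append(p)
            else:
                current = [p]
                obstacles.append(current)
            continue
        if left is not None and dist(p, left) > ADJ_DIST_2:
            # second adjacent distance exceeded: record another boundary point
            if current is not None:
                current.append(p)
    return obstacles
```

Points outside the detection distance are never examined further, which matches the cascade order in the claims: each later condition is only evaluated once the earlier ones have been met.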
Optionally, after identifying the obstacle information of the obstacle within the preset range of the current vehicle, the method further includes:
acquiring a global coordinate system, and respectively determining global boundary point coordinates of each boundary point in the global coordinate system based on the boundary point coordinates of each boundary point in the local coordinate system and a preset coordinate conversion method;
and determining the global boundary point coordinates in the preset range of the current vehicle at the next moment respectively, and determining the boundary point type of each boundary point based on the comparison result of the global coordinate difference between the global boundary point coordinates at the current moment and the global boundary point coordinates at the next moment and a preset coordinate threshold.
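A minimal sketch of such a local-to-global conversion, assuming the vehicle's global pose (position and heading) is available from localization. The embodiment refers only to "a preset coordinate conversion method", so the rotation-plus-translation below, like the function name, is an assumption.

```python
import math

def local_to_global(boundary_pts, x_v, y_v, heading):
    """Convert local boundary points (X forward, Y left) to the global frame.

    x_v, y_v: global position of the vehicle centre (the local origin).
    heading:  global heading of the vehicle's +X axis, in radians.
    """
    out = []
    for x, y in boundary_pts:
        # rotate the local point by the heading, then translate by the pose
        gx = x_v + x * math.cos(heading) - y * math.sin(heading)
        gy = y_v + x * math.sin(heading) + y * math.cos(heading)
        out.append((gx, gy))
    return out
```

With the same conversion applied at the current and the next moment, the global coordinate difference of a boundary point can be compared against the preset coordinate threshold as described above.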
Optionally, the boundary point type includes a dynamic boundary point and a static boundary point;
correspondingly, after determining the boundary point type of each boundary point, the method further includes:
if the boundary point type is a dynamic boundary point, updating the global boundary point coordinates of the boundary point in real time, and updating the driving trajectory of the current vehicle in real time based on the boundary points updated in real time, until the boundary point is no longer within the detection distance condition range or the current vehicle has bypassed the boundary point;
and if the boundary point type is a static boundary point, determining the driving trajectory of the current vehicle based on each boundary point, until the boundary point is no longer within the detection distance condition range or the current vehicle has bypassed the boundary point.
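The dynamic/static decision above reduces to a per-point comparison of consecutive global coordinates against a threshold. A minimal sketch; the threshold value and function name are illustrative assumptions, since the embodiment does not give a concrete preset coordinate threshold:

```python
def classify_boundary_points(prev_global, next_global, coord_threshold=0.2):
    """Label each boundary point dynamic or static.

    prev_global, next_global: matched lists of (x, y) global coordinates of
    the same boundary points at the current and the next moment.
    """
    types = []
    for (x0, y0), (x1, y1) in zip(prev_global, next_global):
        # largest per-axis coordinate difference between the two moments
        moved = max(abs(x1 - x0), abs(y1 - y0))
        types.append("dynamic" if moved > coord_threshold else "static")
    return types
```

Dynamic points would then be re-converted and re-classified every frame, while static points can be used directly for trajectory planning, as the two branches above describe.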
In a second aspect, an embodiment of the present invention further provides an obstacle information determining apparatus, where the apparatus includes:
the point cloud data acquisition module is used for acquiring at least one point cloud data in a preset range of the current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
the obstacle information identification module is used for acquiring at least one obstacle identification condition and identifying obstacle information of obstacles within the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; where the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the obstacle information determination method as provided by any of the embodiments of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the obstacle information determination method provided in any embodiment of the present invention.
In the technical solution of this embodiment, at least one piece of point cloud data within a preset range of the current vehicle is acquired, where the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as the origin; more accurate radar data is thus obtained to provide the necessary information for the autonomous vehicle to decelerate, avoid obstacles, and plan a path around them. At least one obstacle identification condition is acquired, and obstacle information of obstacles within the preset range of the current vehicle is identified based on each obstacle identification condition and each point cloud coordinate, where the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate. Identifying the scanned point cloud data through multiple obstacle identification conditions to determine the obstacle information around the vehicle improves the accuracy of obstacle identification and thereby the driving safety of the vehicle.
Drawings
To illustrate the technical solutions of the exemplary embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Clearly, the described drawings cover only some of the embodiments of the invention, not all of them, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a method for determining obstacle information according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a local coordinate system according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a method for determining obstacle information according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a global coordinate system according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an obstacle information determination apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an obstacle information determination method according to Embodiment 1 of the present invention. The embodiment is applicable to determining the obstacles around a vehicle during automatic driving, and is particularly suitable for cases where the obstacles around the vehicle must be determined without a camera, or where the camera is damaged and its function limited. The method may be performed by an obstacle information determination apparatus, which may be implemented by means of software and/or hardware.
Before describing the technical solution of this embodiment, an application scenario is first described by way of example. Of course, the following is only an optional application scenario; this embodiment may also be implemented in other application scenarios, which are not limited here. Specifically: at present, during driving, a driver often directly observes the obstacles around the vehicle, or determines the obstacles on the left and right sides of the vehicle by observing the door mirrors, and the vehicle is then driven according to the driver's judgment; this manual determination is easily influenced by the driver's subjective experience or by environmental factors, so the driving safety of the vehicle is low. In addition, during automatic driving, obstacle information around the vehicle is mostly acquired by cameras mounted around the vehicle; if a camera is damaged and its function limited, the vehicle controller cannot acquire the obstacle information around the vehicle, and the automatic driving of the vehicle carries a great safety risk.
In order to solve the technical problem, in the technical scheme of the embodiment, the state information of the obstacles around the automatic driving vehicle is obtained through calculation of the obtained radar data, and necessary information is provided for the automatic driving vehicle to decelerate and avoid the obstacles and the obstacle to detour to plan the path.
Based on this technical idea, the technical solution of this embodiment acquires at least one piece of point cloud data within a preset range of the current vehicle, where the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as the origin; more accurate radar data is thus obtained to provide the necessary information for the autonomous vehicle to decelerate, avoid obstacles, and plan a path around them. At least one obstacle identification condition is acquired, and obstacle information of obstacles within the preset range of the current vehicle is identified based on each obstacle identification condition and each point cloud coordinate, where the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate. Identifying the scanned point cloud data through multiple obstacle identification conditions to determine the obstacle information around the vehicle improves the accuracy of obstacle identification and thereby the driving safety of the vehicle.
As shown in fig. 1, the method specifically includes the following steps:
and S110, acquiring at least one point cloud data in a preset range of the current vehicle.
In this embodiment, the surroundings of the vehicle may be detected based on various radar detection devices, so as to obtain point cloud data within a preset range around the vehicle.
Optionally, the method for acquiring point cloud data may include: and scanning a preset range of the current vehicle based on a preset radar sensor, and acquiring initial point cloud coordinates in a scanning result.
The installation position of the preset radar sensor on the current vehicle determines the sensor's obstacle detection range and capability. To reduce occlusion as much as possible and enlarge the detection range of the lidar, the sensor may, for example, be installed above the roof or under the chassis of the current vehicle. These mounting positions are merely exemplary; this embodiment places no limitation on the mounting position of the radar sensor. The preset radar sensor may be a lidar or an automotive millimeter-wave radar, and of course other types of radar sensor may also be used.
Specifically, the vehicle surroundings are continuously scanned based on the radar sensor, and the scanned point cloud data is stored in real time. The scanning angle of the radar sensor can be 360 degrees, and of course, the scanning angle can also be set in real time according to the current environment of the vehicle. In this embodiment, the stored point cloud data may include point cloud coordinates in a local coordinate system with the current vehicle as an origin.
As shown in fig. 2, taking the point cloud information within ±45° ahead of the current vehicle as an example, the point cloud data has a density of one point per degree, and the individual point cloud data items are named FSP_0 through FSP_90 in turn. Each point cloud data item may include (X, Y) coordinate information based on the local coordinate system of the current vehicle. Specifically, the origin of the local coordinate system is the center point of the current vehicle, the positive X direction is the direction of travel, and the positive Y direction points to the left of the current vehicle.
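As a concrete illustration of this local frame, the sketch below converts a hypothetical array of radar ranges (one per degree over ±45° ahead, indexed FSP_0 to FSP_90 as above) into (X, Y) coordinates with the vehicle center as origin, +X forward, and +Y to the left. The range-to-coordinate conversion and the function name are illustrative assumptions, not taken from the embodiment.

```python
import math

def scan_to_local(ranges):
    """ranges: 91 radar ranges, index 0 at -45 deg, index 90 at +45 deg."""
    points = {}
    for i, r in enumerate(ranges):
        bearing = math.radians(i - 45)   # angle from the +X (travel) axis
        x = r * math.cos(bearing)        # forward distance
        y = r * math.sin(bearing)        # positive to the vehicle's left
        points[f"FSP_{i}"] = (x, y)
    return points
```

For instance, FSP_45 (dead ahead) maps a range r to (r, 0), while FSP_0 maps to a point with negative Y, i.e. on the vehicle's right.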
Because the initial point cloud data scanned by the radar is subject to single-frame false alarms and multi-frame jitter, data preprocessing must be performed on the initial point cloud data before obstacle information is identified from it, so as to eliminate such incidental factors as far as possible and thereby improve the accuracy of obstacle information identification. Therefore, in the technical solution of this embodiment, after the initial point cloud data scanned by the radar sensor is obtained, data preprocessing is performed on each initial point cloud coordinate to obtain each target point cloud coordinate within the preset range of the current vehicle.
Optionally, the method for performing data preprocessing on the initial point cloud data may include: acquiring a preset coordinate storage matrix, shifting each column of coordinate data in the preset coordinate storage matrix to the right by one column, and storing the corresponding column of coordinate data of the initial point cloud coordinates in the first column of the preset coordinate storage matrix to obtain a coordinate adjustment matrix; sorting the coordinate data in the coordinate adjustment matrix according to a preset sorting rule to obtain a coordinate sorting matrix; and acquiring at least two columns of coordinate data in the coordinate sorting matrix, determining the row coordinate mean of each row across the at least two columns, and taking each row coordinate mean as the corresponding point cloud coordinate among the target point cloud coordinates.
In this embodiment, the data preprocessing method is described by taking the 90 point cloud data items as an example. Specifically, the initial point cloud data FSP_0 through FSP_90 of the current vehicle at the current time are read and stored in a newly created initial point cloud data matrix, which may be named FSP_n_XY and is a 90 × 2 coordinate matrix. The initial coordinate data in FSP_n_XY are shown in the following table:
TABLE 1 Initial coordinate data information
FSP_0_X	FSP_0_Y
FSP_1_X	FSP_1_Y
…	…
FSP_90_X	FSP_90_Y
To eliminate single-frame false alarms and stabilize multi-frame jitter, the preprocessing in this embodiment follows the principle that measurement errors are normally distributed, so that erroneous points can be eliminated through median filtering.
Before the data are preprocessed, a data storage matrix is created in advance to store the point cloud data that were preprocessed at the previous time. Specifically, to allow the X coordinate data and Y coordinate data to be processed simultaneously, the data storage matrix is divided in this embodiment into an X coordinate data storage submatrix and a Y coordinate data storage submatrix, which may be named FSP_save_X and FSP_save_Y respectively. Taking the X coordinate data storage submatrix as an example, its size is 90 × N, where N is the key parameter of the median-average filter. To ensure both the timeliness and the accuracy of the preprocessed data, N should in principle be no lower than 10 and no higher than 50; in the technical solution of this embodiment N is provisionally taken as 30, but other values may also be used, and this embodiment places no limitation on the value of N.
The data preprocessing of the initial point cloud data is described by taking the processing of the coordinate data in the X coordinate data storage submatrix as an example. Specifically, each column of coordinates in the X coordinate data storage submatrix is shifted to the right by one column, and the first column of the initial point cloud data matrix, i.e., each item of X coordinate data, is stored in the first column of the X coordinate data storage submatrix, yielding the adjusted coordinate data.
Further, the adjusted coordinate data in the X coordinate data storage submatrix are sorted. Optionally, each row of data may be sorted in descending order to obtain the sorted coordinate data. The benefit of sorting the coordinate data in this embodiment is that invalid coordinate data in the current matrix can be screened out from the sorted data, which improves the reliability of the data and hence the accuracy of obstacle information identification.
Further, at least one column of the sorted coordinate data in the X coordinate data storage submatrix is obtained; optionally, at least one column in the middle may be taken, or at least one column may be taken at random. To obtain reliable data, a preset number of middle columns of the X coordinate data storage submatrix may be selected, the row coordinate mean of each row across the selected columns calculated, and each row coordinate mean taken as the corresponding point cloud coordinate among the target point cloud coordinates.
For example, the preprocessing of the first-column information of the FSP_n_XY matrix is described as follows:
1) For the coordinate data of the current frame, each column of data in the FSP_save_X matrix is shifted one column to the right, and the first column is set to 0.
2) The first-column coordinate data of the FSP_n_XY matrix is filled into the first column of the FSP_save_X matrix, completing the data update input.
3) The coordinate data of each row in the FSP_save_X matrix is arranged from large to small or from small to large and filled into the FSP_n_X matrix in that order. Sorting the coordinate data in this embodiment facilitates the screening of invalid coordinate data, and storing the sorted data in the FSP_n_X matrix keeps the screened data separate from the unscreened data.
4) The row coordinate mean value of each row of coordinate data across the middle M columns of the FSP_n_X matrix is taken and filled into the first column of the FSP_n_XY matrix. The value of M trades off the stability and the timeliness of the preprocessed data; M is provisionally set to 10 in this scheme, that is, the mean value of each row over columns 11 to 20 of the FSP_n_X matrix is taken.
5) The preprocessing of the first-column information of the FSP_n_XY matrix of the current frame is thus completed, and the processed data is updated into the FSP_save_X matrix to facilitate the preprocessing of subsequent frames.
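The five steps above can be sketched in a few lines. The sizes used here are assumptions for illustration: 91 scan points per frame and a 30-column history buffer, chosen so that the "middle M columns" with M = 10 are the 11th to 20th, as in step 4); the scheme itself does not fix the buffer width.

```python
import numpy as np

# Assumed sizes: 91 scan points per frame, 30-column history buffer,
# M = 10 middle columns (the 11th to 20th) used for averaging.
N_POINTS, N_FRAMES, M = 91, 30, 10

def preprocess_column(fsp_save_x, new_col):
    """One update cycle of the X-coordinate history buffer, steps 1)-5)."""
    # 1) shift every column one place to the right (the oldest frame drops off)
    fsp_save_x[:] = np.roll(fsp_save_x, 1, axis=1)
    # 2) fill the current frame's coordinates into the first column
    fsp_save_x[:, 0] = new_col
    # 3) sort each row so invalid extremes move to the ends (the FSP_n_X matrix)
    fsp_n_x = np.sort(fsp_save_x, axis=1)
    # 4) row mean of the middle M columns: 11th..20th, i.e. 0-based indices 10..19
    lo = (N_FRAMES - M) // 2
    return fsp_n_x[:, lo:lo + M].mean(axis=1)

buf = np.zeros((N_POINTS, N_FRAMES))
smoothed = preprocess_column(buf, np.ones(N_POINTS))
```

The Y coordinate data storage submatrix is preprocessed identically with an FSP_save_Y buffer. Because the middle columns of the sorted rows are averaged, a single outlier frame lands at a row's end and is excluded from the mean.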
It should be noted that this embodiment describes the data preprocessing method by taking the X coordinate data storage submatrix as an example; the same method may also be applied to each coordinate datum in the Y coordinate data storage submatrix.
Illustratively, the data preprocessing of the second-column information of the FSP_n_XY matrix comprises the following steps:
1) For the coordinate data of the current frame, each column of data in the FSP_save_Y matrix is shifted one column to the right, and the first column is set to 0.
2) The second-column coordinate data of the FSP_n_XY matrix is filled into the first column of the FSP_save_Y matrix, completing the data update input.
3) The coordinate data of each row in the FSP_save_Y matrix is arranged from large to small or from small to large and filled into the FSP_n_Y matrix in that order. Sorting the coordinate data in this embodiment facilitates the screening of invalid coordinate data, and storing the sorted data in the FSP_n_Y matrix keeps the screened data separate from the unscreened data.
4) The row coordinate mean value of each row of coordinate data across the middle M columns of the FSP_n_Y matrix is taken and filled into the second column of the FSP_n_XY matrix. As above, M is provisionally set to 10, that is, the mean value of each row over columns 11 to 20 of the FSP_n_Y matrix is taken.
5) The preprocessing of the second-column information of the FSP_n_XY matrix of the current frame is thus completed, and the processed data is updated into the FSP_save_Y matrix to facilitate the preprocessing of subsequent frames.
And S120, acquiring at least one obstacle identification condition, and identifying obstacle information of obstacles in the current vehicle preset range based on each obstacle identification condition and each point cloud coordinate.
In this embodiment, the obstacle identification condition is used to identify the point cloud data in the above embodiment, and determine whether a target corresponding to the point cloud data is an obstacle.
In the present embodiment, the obstacle information includes the number of obstacles, the number of boundary points of each obstacle, and the coordinates of those boundary points. A boundary point of an obstacle can be understood as an inflection point of the obstacle, that is, a point at which the obstacle contour turns in the point cloud scanned by the radar sensor around the vehicle periphery.
A plurality of obstacle recognition conditions is used so that the accuracy of the recognition result can be ensured. Specifically, the obstacle recognition conditions include a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition formed by the point cloud coordinate with its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
It is noted that if the current point cloud coordinate is the rightmost starting point, no right adjacent point exists, and the distance between the point cloud coordinate and its right adjacent point is set to 0; correspondingly, if the current point cloud coordinate is the leftmost ending point, no left adjacent point exists, and the distance between the point cloud coordinate and its left adjacent point is set to 0. Further, since the starting point and the ending point cannot form an included angle, the adjacent included angles of the point cloud coordinates corresponding to FSP_90 and FSP_0 are set to 180°.
Further, after the obstacle identification conditions are obtained, the obstacle information of obstacles within the preset range of the current vehicle is identified based on each obstacle identification condition and each point cloud coordinate.
Optionally, the method for identifying obstacle information of an obstacle within the preset range of the current vehicle includes: for any point cloud coordinate, if the distance between the current point cloud coordinate and the current vehicle meets the detection distance condition, acquiring the current first adjacent distance between the current point cloud coordinate and its right adjacent point; and if the current first adjacent distance does not meet the first adjacent distance condition, incrementing the obstacle count, incrementing the boundary point count, and determining the boundary point coordinates based on the current point cloud coordinate.
Specifically, any point cloud coordinate scanned by the current vehicle is obtained, the current distance between that coordinate and the vehicle is calculated, and the current distance is matched against the preset detection distance condition. If the current distance meets the detection distance condition, that is, lies within the detection range, the point cloud data lies within the range in which the current vehicle identifies obstacles. Further, the right adjacent point coordinate of the current point cloud coordinate is obtained, the current first adjacent distance between the two is calculated, and this distance is matched against the preset first adjacent distance condition. If the current first adjacent distance does not meet the first adjacent distance condition, that is, the distance between the right adjacent point and the current point cloud coordinate is not within the preset distance range, the current point cloud coordinate is determined to belong to a new obstacle, and the obstacle information is updated accordingly. Specifically, the current point is the starting point of the obstacle boundary: the obstacle count is incremented by 1, the incremented count is recorded as the number of the new target obstacle, the boundary point count is incremented by 1, and the current point cloud coordinate is taken as the boundary point coordinate.
It should be noted that if the current point cloud coordinate is the rightmost point cloud coordinate, the current first adjacent distance to the right adjacent point defaults to 0; that is, the current first adjacent distance does not meet the preset first adjacent distance condition, and the identification step corresponding to that case is executed.
Optionally, if the current first adjacent distance meets the first adjacent distance condition, the current adjacent included angle formed by the current point cloud coordinate with its left and right adjacent point coordinates is obtained; if the current adjacent included angle does not meet the adjacent included angle condition, the current boundary point number of the obstacle is matched against a preset number threshold; if the current boundary point number is within the preset number threshold range, the boundary point count is incremented and the boundary point coordinates are determined; and if the current boundary point number is not within the preset number threshold range, the obstacle count and the boundary point count are incremented and the boundary point coordinates are determined.
Specifically, if the current first adjacent distance meets the first adjacent distance condition, that is, the distance between the right adjacent point and the current point cloud coordinate is within the preset distance range, whether the current point cloud coordinate belongs to an obstacle is further identified based on the other obstacle identification conditions. Specifically, the current adjacent included angle formed by the current point cloud coordinate with its left and right adjacent point coordinates is obtained and matched against the preset adjacent included angle condition. If the current adjacent included angle does not meet the adjacent included angle condition, that is, its size is not within the preset adjacent included angle threshold range, the boundary point number of the identified obstacle's boundary points is further obtained and matched against the preset number threshold. If the boundary point number is within the preset number threshold range, the boundary point count continues to be incremented, and the current point cloud coordinate is determined as the boundary point coordinate corresponding to the new boundary point number. On the contrary, if the boundary point number is not within the preset number threshold range, the current point cloud coordinate is determined to belong to a new obstacle, and the obstacle information is updated accordingly: the current point is the starting point of the obstacle boundary, the obstacle count is incremented by 1, the incremented count is recorded as the number of the new target obstacle, the boundary point count is incremented by 1, and the current point cloud coordinate is taken as the boundary point coordinate.
It should be noted that if the current point cloud coordinate is the rightmost or leftmost point cloud coordinate, the included angle formed with the left and right adjacent point coordinates defaults to 180 degrees; that is, the current adjacent included angle does not meet the preset adjacent included angle condition, and the identification step corresponding to that case is executed.
Optionally, if the current adjacent included angle meets the adjacent included angle condition, the current second adjacent distance between the current point cloud coordinate and its left adjacent point is obtained; if the current second adjacent distance does not meet the second adjacent distance condition, the boundary point count is incremented; and if the current second adjacent distance meets the second adjacent distance condition and the identification of the current point cloud data is determined to be finished, the other point cloud coordinates are traversed, and the obstacle count, the obstacle numbers, the boundary point counts, the boundary point numbers, and the boundary point coordinates of each identified obstacle are stored.
Specifically, if the current adjacent included angle meets the adjacent included angle condition, that is, its size is within the preset adjacent included angle threshold range, whether the current point cloud coordinate belongs to an obstacle is further identified based on the other obstacle identification conditions. Specifically, the left adjacent point coordinate of the current point cloud coordinate is obtained, the current second adjacent distance between the two is calculated, and this distance is matched against the preset second adjacent distance condition. If the current second adjacent distance does not meet the second adjacent distance condition, that is, the distance between the left adjacent point and the current point cloud coordinate is not within the preset distance range, the boundary point number of the identified obstacle's boundary points is further obtained and the boundary point count is incremented by 1. On the contrary, if the current second adjacent distance meets the second adjacent distance condition and other obstacle identification conditions exist, whether the current point cloud coordinate belongs to an obstacle is further identified based on those conditions. Optionally, if the current second adjacent distance meets the second adjacent distance condition and no other obstacle identification condition exists, the identification of the current point cloud data is determined to be finished.
It should be noted that if the current point cloud coordinate is the leftmost point cloud coordinate, the current second adjacent distance to the left adjacent point defaults to 0; that is, the current second adjacent distance does not meet the preset second adjacent distance condition, and the identification step corresponding to that case is executed.
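The cascade of conditions above can be sketched as a single pass over the scan. All numeric thresholds, the origin-based vehicle distance, and the angle measure below are illustrative assumptions, not values from this scheme; endpoints take the "condition not met" branch by default, as described above.

```python
import math

# Illustrative thresholds -- assumptions, not values fixed by the scheme.
DETECT_RANGE = 30.0            # detection distance condition, metres
NEIGHBOR_DIST = 0.5            # first/second adjacent distance condition, metres
ANGLE_RANGE = (150.0, 210.0)   # adjacent included angle condition, degrees
MAX_BOUNDARY_PTS = 5           # boundary points stored per obstacle

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def included_angle(left, cur, right):
    """Included angle at `cur` formed with its two neighbours, in degrees."""
    a1 = math.atan2(left[1] - cur[1], left[0] - cur[0])
    a2 = math.atan2(right[1] - cur[1], right[0] - cur[0])
    return abs(math.degrees(a1 - a2)) % 360.0

def identify(points):
    """Walk the scan from the rightmost point and group points into obstacles."""
    obstacles = []  # each obstacle is a list of boundary-point coordinates
    for i, p in enumerate(points):
        if dist(p, (0.0, 0.0)) > DETECT_RANGE:   # detection distance condition
            continue
        right = points[i - 1] if i > 0 else None
        left = points[i + 1] if i + 1 < len(points) else None
        # first adjacent distance condition; the endpoint defaults to "not met"
        if right is None or dist(p, right) > NEIGHBOR_DIST:
            obstacles.append([p])                # new obstacle, p is its start
            continue
        # adjacent included angle condition; endpoints default to "not met"
        in_range = (left is not None and
                    ANGLE_RANGE[0] <= included_angle(left, p, right) <= ANGLE_RANGE[1])
        if not in_range:
            if obstacles and len(obstacles[-1]) < MAX_BOUNDARY_PTS:
                obstacles[-1].append(p)          # one more boundary point
            else:
                obstacles.append([p])            # boundary-point table is full
            continue
        # second adjacent distance condition against the left neighbour
        if left is None or dist(p, left) > NEIGHBOR_DIST:
            if obstacles:
                obstacles[-1].append(p)
    return obstacles
```

Two tight point clusters separated by a large gap are then reported as two obstacles, each contributing its own boundary points.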
Further, each point cloud coordinate is identified based on the above identification conditions, and the identified obstacle count, obstacle numbers, boundary point counts, boundary point numbers, and boundary point coordinates are stored. Specifically, the obstacle information of the current frame may be stored in an obstacle information matrix, whose size is preliminarily set to 30x7 in the present embodiment. In the rows of the obstacle information matrix, rows 1 to 30 each represent the information of the obstacle with the corresponding number; in the columns, the first column represents the number ID of the obstacle, the second column represents the number of boundary points of that obstacle, and the third to seventh columns represent the information IDs of the respective boundary points. Specifically, the obstacle information is shown in the following table:
table 2 obstacle information table
Obstacle number 1 | Number of boundary points 3 | Boundary point 1 | Boundary point 2 | Boundary point 3
Obstacle number 2 | Number of boundary points 2 | Boundary point 1 | Boundary point 2
Obstacle number 3 | Number of boundary points 4 | Boundary point 1 | Boundary point 2 | Boundary point 3 | Boundary point 4
Obstacle number 4 | Number of boundary points 5 | Boundary point 1 | Boundary point 2 | Boundary point 3 | Boundary point 4 | Boundary point 5
… | … | …
Obstacle number 30 | … | …
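The 30x7 layout of Table 2 can be filled as sketched below. Representing each boundary point by a single integer "information ID" is an assumption for illustration; the patent does not specify the ID encoding.

```python
import numpy as np

MAX_OBSTACLES, MAX_BPTS = 30, 5      # 30 rows; 7 columns = ID + count + 5 point IDs

def fill_obstacle_matrix(obstacles):
    """Pack per-obstacle boundary-point IDs into the 30x7 matrix of Table 2."""
    m = np.zeros((MAX_OBSTACLES, 2 + MAX_BPTS), dtype=int)
    for row, bpts in enumerate(obstacles[:MAX_OBSTACLES]):
        bpts = bpts[:MAX_BPTS]
        m[row, 0] = row + 1               # column 1: obstacle number ID
        m[row, 1] = len(bpts)             # column 2: number of boundary points
        m[row, 2:2 + len(bpts)] = bpts    # columns 3-7: boundary point IDs
    return m

mat = fill_obstacle_matrix([[101, 102, 103], [201, 202]])
```

Rows beyond the number of identified obstacles stay zero, so a consumer can stop at the first row whose obstacle number ID is 0.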
Therefore, the obstacle information is calculated, and the vehicle controller can complete the obstacle avoidance function of the vehicle according to the obstacle information provided by the embodiment of the invention.
According to the technical scheme of this embodiment, at least one piece of point cloud data within a preset range of the current vehicle is acquired, the point cloud data comprising point cloud coordinates in a local coordinate system with the current vehicle as the origin; more accurate radar data is thus obtained to provide the necessary information for the deceleration obstacle avoidance and obstacle detouring path planning of the autonomous vehicle. At least one obstacle identification condition is acquired, and the obstacle information of obstacles within the preset range of the current vehicle is identified based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition formed by the point cloud coordinate with its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate. The scanned point cloud data is identified through a plurality of obstacle identification conditions and the obstacle information around the vehicle is determined, so that the accuracy of obstacle identification, and hence the safety of vehicle driving, is improved.
Example two
Fig. 3 is a flowchart of a method for determining obstacle information according to a second embodiment of the present invention. On the basis of the foregoing embodiments, the step of "acquiring a global coordinate system, and respectively determining global boundary point coordinates of each boundary point in the global coordinate system based on the boundary point coordinates of each boundary point in the local coordinate system and a preset coordinate conversion method" is added after "identifying obstacle information of obstacles in the preset range of the current vehicle". Explanations of terms that are the same as or correspond to those in the foregoing embodiments are omitted here. Referring to fig. 3, the method for determining obstacle information according to the present embodiment includes:
s210, at least one point cloud data in a preset range of the current vehicle is obtained.
S220, acquiring at least one obstacle recognition condition, and recognizing obstacle information of obstacles in the preset range of the current vehicle based on each obstacle recognition condition and each point cloud coordinate.
And S230, acquiring a global coordinate system, and respectively determining global boundary point coordinates of each boundary point in the global coordinate system based on the boundary point coordinates of each boundary point in the local coordinate system and a preset coordinate conversion method.
In the embodiment of the invention, the appearance of an obstacle may prevent the current vehicle from driving along the originally planned route; however, if the current vehicle bypasses the obstacle by modifying that route, it can continue driving. Of course, to complete the detour, the coordinate position of the obstacle needs to be determined to assist the current vehicle in planning the detour route.
In this embodiment, the coordinate position of the obstacle refers to its position in the global coordinate system of the current route, not its position in the local coordinate system of the current vehicle.
Optionally, the method for acquiring the coordinate position of the obstacle in the global coordinate system may include: and acquiring a global coordinate system, and respectively determining global boundary point coordinates of each boundary point in the global coordinate system based on the boundary point coordinates of each boundary point in the local coordinate system and a preset coordinate conversion method.
Specifically, a preset global coordinate system is obtained. As shown in fig. 4, the global coordinate system may take the starting point of the driving route of the current vehicle as the origin, the initial driving direction of the vehicle as the positive X-axis direction, and the left side of the vehicle at the start of driving as the positive Y-axis direction. In other words, the positive X-axis and Y-axis directions of the global coordinate system are the same as those of the local coordinate system at the initial moment.
At the current moment, the local coordinate system of the current vehicle is obtained, and the transverse distance and the longitudinal distance between the origin of the local coordinate system and the origin of the global coordinate system, as well as the direction included angle between the X-axis of the local coordinate system and the X-axis of the global coordinate system, are respectively determined. A coordinate conversion method between the local coordinate system and the global coordinate system is determined based on the transverse distance, the longitudinal distance, and the direction included angle. Further, based on the boundary point coordinates of each boundary point of the obstacle in the local coordinate system and the preset coordinate conversion method, the global boundary point coordinates of each boundary point in the global coordinate system are respectively determined.
Exemplarily, the conversion of a boundary point from the local coordinate system to the global coordinate system is illustrated by taking boundary point 2 in fig. 4 as an example.
First, a global coordinate system XY-O is defined whose origin and axis directions initially coincide with those of the local coordinate system. The longitudinal and transverse movement distances a and b of the vehicle in each frame (positive along the positive axis direction, negative along the negative direction) and the rotation angle α (counterclockwise positive, clockwise negative) are received and accumulated, thereby completing the conversion of the obstacle information and the obstacle boundary points from the local coordinate system to the global coordinate system.
Specifically, the coordinate information of boundary point 2 of the recognized obstacle in the local coordinate system is (x2, y2). When the obstacle needs to be bypassed, the vehicle has been displaced by b in the positive X-axis direction and by a in the positive Y-axis direction and has rotated by angle α; the coordinate conversion is then calculated by the coordinate conversion method identified above. Specifically, the coordinate conversion formulas are as follows:
X_2=x2×cos(-α)+y2×sin(-α)+b
Y_2=y2×cos(-α)-x2×sin(-α)+a
The transformation of the coordinate information of boundary point 2 is completed based on the above expressions; that is, in the global coordinate system XY-O, the coordinates of boundary point 2 are (X_2, Y_2).
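A quick numerical check of the two formulas, with a, b, and α as defined above (the function name and example values are illustrative):

```python
import math

def local_to_global(x2, y2, a, b, alpha_deg):
    """Boundary-point conversion using the two formulas above.

    b and a are the accumulated displacements along the global X and Y axes;
    alpha_deg is the accumulated rotation (counterclockwise positive), degrees.
    """
    alpha = math.radians(alpha_deg)
    X = x2 * math.cos(-alpha) + y2 * math.sin(-alpha) + b
    Y = y2 * math.cos(-alpha) - x2 * math.sin(-alpha) + a
    return X, Y

# With alpha = 0 the transform reduces to a pure translation by (b, a);
# a 90-degree counterclockwise turn maps the local +X axis onto global +Y.
print(local_to_global(2.0, 3.0, 1.0, 5.0, 0.0))
print(local_to_global(1.0, 0.0, 0.0, 0.0, 90.0))
```

Both checks are consistent with the formulas: the zero-rotation case returns (7.0, 4.0), and the 90° case sends the local point (1, 0) to the global +Y axis.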
Further, global boundary point coordinates in the preset range of the current vehicle at the next moment are respectively determined, and the boundary point type of each boundary point is determined based on the comparison result of the global coordinate difference between the global boundary point coordinates at the current moment and the global boundary point coordinates at the next moment and a preset coordinate threshold.
Specifically, the global boundary point coordinates, in the global coordinate system, of the boundary points of obstacles within the preset range of the current vehicle at the next moment are obtained and matched with the global boundary point coordinates at the current moment, and the boundary point type of each boundary point is determined based on the comparison between the global coordinate difference of the two and a preset coordinate threshold. If the global coordinate difference is within the preset coordinate threshold, the boundary point is determined to be a static boundary point, and the obstacle information is filled into the static boundary point information matrix in the global coordinate system; on the contrary, if the global coordinate difference is not within the preset coordinate threshold, the boundary point is determined to be a dynamic boundary point, and the obstacle information is filled into the dynamic boundary point information matrix in the global coordinate system.
Further, if the type of the boundary point is a dynamic boundary point, updating the global boundary point coordinates of each boundary point in real time, and updating the running track of the current vehicle in real time based on the boundary points updated in real time until the boundary point is not within the detection distance condition range or the current vehicle bypasses the boundary point; on the contrary, if the boundary point type is a static boundary point, the travel trajectory of the current vehicle is determined based on each boundary point until the boundary point is not within the detection distance condition range or the current vehicle bypasses the boundary point.
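The static/dynamic split can be sketched as follows. The 0.2 m threshold stands in for the preset coordinate threshold and is an assumption, as is the use of `zip`, which presumes the boundary points of the two moments are already matched one-to-one:

```python
def classify_boundary_points(prev_pts, cur_pts, threshold=0.2):
    """Split matched boundary points into static and dynamic sets.

    A point whose global coordinates move by no more than `threshold`
    along each axis between the two moments is treated as static.
    """
    static, dynamic = [], []
    for (px, py), (cx, cy) in zip(prev_pts, cur_pts):
        if abs(cx - px) <= threshold and abs(cy - py) <= threshold:
            static.append((cx, cy))     # goes to the static boundary point matrix
        else:
            dynamic.append((cx, cy))    # goes to the dynamic boundary point matrix
    return static, dynamic
```

Dynamic points then drive the real-time trajectory updates described above, while static points yield a trajectory computed once.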
In this embodiment, when the vehicle has completed the detour, the vehicle controller cancels the detour of the obstacle, and the current vehicle continues to travel along the predetermined travel route without performing the obstacle information conversion.
According to the technical scheme of this embodiment, at least one piece of point cloud data within a preset range of the current vehicle is acquired, the point cloud data comprising point cloud coordinates in a local coordinate system with the current vehicle as the origin; more accurate radar data is thus obtained to provide the necessary information for the deceleration obstacle avoidance and obstacle detouring path planning of the autonomous vehicle. At least one obstacle identification condition is acquired, and the obstacle information of obstacles within the preset range of the current vehicle is identified based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition formed by the point cloud coordinate with its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate. The scanned point cloud data is identified through a plurality of obstacle identification conditions and the obstacle information around the vehicle is determined, so that the accuracy of obstacle identification, and hence the safety of vehicle driving, is improved.
The following is an embodiment of an obstacle information determination apparatus provided in an embodiment of the present invention, which belongs to the same inventive concept as the obstacle information determination methods in the above embodiments, and details that are not described in detail in the embodiment of the obstacle information determination apparatus may refer to the embodiment of the obstacle information determination method described above.
EXAMPLE III
Fig. 5 is a schematic structural diagram of an obstacle information determination apparatus according to a third embodiment of the present invention, which is applicable to the case where obstacles around a vehicle are determined during automatic driving, and is particularly suitable for determining obstacles around the vehicle without a camera, or when the camera is damaged and its function is limited. Referring to fig. 5, the specific structure of the obstacle information determination apparatus includes: a point cloud data acquisition module 310 and an obstacle information identification module 320; wherein,
the point cloud data acquisition module 310 is used for acquiring at least one point cloud data in a preset range of the current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
the obstacle information identification module 320 is configured to obtain at least one obstacle identification condition, and identify obstacle information of obstacles in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition formed by the point cloud coordinate with its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
According to the technical scheme of this embodiment, at least one piece of point cloud data within a preset range of the current vehicle is acquired, the point cloud data comprising point cloud coordinates in a local coordinate system with the current vehicle as the origin; more accurate radar data is thus obtained to provide the necessary information for the deceleration obstacle avoidance and obstacle detouring path planning of the autonomous vehicle. At least one obstacle identification condition is acquired, and the obstacle information of obstacles within the preset range of the current vehicle is identified based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition formed by the point cloud coordinate with its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate. The scanned point cloud data is identified through a plurality of obstacle identification conditions and the obstacle information around the vehicle is determined, so that the accuracy of obstacle identification, and hence the safety of vehicle driving, is improved.
On the basis of the above embodiments, the obstacle information includes the number of obstacles, the number of boundary points of the obstacles, and the coordinates of the boundary points of the obstacles.
On the basis of the above embodiments, the point cloud data obtaining module 310 includes:
the initial point cloud coordinate acquisition unit is used for scanning a preset range of the current vehicle based on a preset radar sensor and acquiring each initial point cloud coordinate in the scanning result;
and the target point cloud coordinate acquisition unit is used for respectively carrying out data preprocessing on each initial point cloud coordinate to obtain each target point cloud coordinate in the preset range of the current vehicle.
On the basis of the above embodiments, the target point cloud coordinate acquiring unit includes:
a coordinate adjustment matrix obtaining subunit, configured to obtain a preset coordinate storage matrix, shift each column of coordinate data in the preset coordinate storage matrix by one column to the right, and store the corresponding column of coordinate data in the initial point cloud coordinate in a first column of the preset coordinate storage matrix to obtain a coordinate adjustment matrix;
the coordinate sorting matrix obtaining subunit is used for sorting the coordinate data in the coordinate adjusting matrix according to a preset sorting rule to obtain a coordinate sorting matrix;
and the point cloud coordinate acquisition subunit is used for acquiring at least two columns of coordinate data in the coordinate sorting matrix, determining a row coordinate mean value of each row of coordinate data in the at least two columns of coordinate data, and taking each row coordinate mean value as a corresponding point cloud coordinate in the target point cloud coordinate.
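The shift-sort-average pipeline of these subunits amounts to a sliding, median-style filter over consecutive scans. The sketch below illustrates it under stated assumptions (the matrix width and the choice of averaging the central columns are illustrative; the patent only requires "at least two columns"):

```python
import numpy as np

def preprocess_scan(storage, new_scan, k=2):
    """Sketch of the shift -> sort -> row-mean preprocessing.

    storage:  (n_points, n_scans) matrix of recent raw readings, one
              column per scan (the preset coordinate storage matrix)
    new_scan: (n_points,) readings from the latest scan
    k:        how many of the sorted columns to average (at least two)
    """
    # Shift every stored column one position to the right and write the
    # newest scan into the first column (coordinate adjustment matrix).
    adjusted = np.empty_like(storage)
    adjusted[:, 1:] = storage[:, :-1]
    adjusted[:, 0] = new_scan
    # Sort each row (coordinate sorting matrix); outliers move to the ends.
    ordered = np.sort(adjusted, axis=1)
    # The mean of k central columns per row yields the target point cloud
    # coordinate for that beam, suppressing single-scan spikes.
    lo = (ordered.shape[1] - k) // 2
    target = ordered[:, lo:lo + k].mean(axis=1)
    return adjusted, target
```

With, say, four stored scans per beam, a single spurious reading is pushed to the edge of the sorted row and excluded from the central mean.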
On the basis of the above embodiments, the obstacle information identification module 320 includes:
a current first adjacent distance obtaining unit, configured to, for any point cloud coordinate, if a distance between the current point cloud coordinate and the current vehicle meets the detection distance condition, obtain a current first adjacent distance between the current point cloud coordinate and a right adjacent point of the current point cloud coordinate;
and the first obstacle information acquisition unit is used for, if the current first adjacent distance does not meet the first adjacent distance condition, accumulating the number of obstacles, the obstacle number, the number of boundary points and the boundary point number, and determining the boundary point coordinates based on the current point cloud coordinate.
On the basis of the above embodiments, the obstacle information identification module 320 includes:
a current adjacent included angle obtaining unit, configured to obtain current adjacent included angles between the current point cloud coordinate and a left adjacent point coordinate and a right adjacent point coordinate of the current point cloud coordinate respectively if the current first adjacent distance meets the first adjacent distance condition;
the second obstacle information acquisition unit is used for matching the current boundary point number of the obstacle against a preset number threshold if the current adjacent included angle does not meet the adjacent included angle condition; if the current boundary point number is within the preset number threshold range, accumulating the boundary point number and determining the boundary point coordinates; and if the current boundary point number is not within the preset number threshold range, accumulating the number of obstacles, the obstacle number, the number of boundary points and the boundary point number, and determining the boundary point coordinates.
On the basis of the above embodiments, the obstacle information identification module 320 includes:
a current second adjacent distance obtaining unit, configured to obtain a current second adjacent distance between the current point cloud coordinate and a left adjacent point of the current point cloud coordinate if the current adjacent included angle meets the adjacent included angle condition;
a third obstacle information obtaining unit, configured to accumulate the boundary point number if the current second adjacent distance does not meet the second adjacent distance condition;
and the obstacle information storage unit is used for traversing the other point cloud coordinates if the current second adjacent distance meets the second adjacent distance condition and, once it is determined that identification of the current point cloud data is finished, storing the number of obstacles, the obstacle number, the number of boundary points, the boundary point number and the boundary point coordinates of each identified obstacle.
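The cascade of conditions in these units amounts to segmenting the ordered scan into obstacles wherever a condition fails. The sketch below shows only the detection distance and first adjacent distance conditions; the included-angle and second-distance refinements are omitted, and all threshold values are illustrative, not taken from the patent:

```python
import math

def segment_obstacles(points, max_range=30.0, gap=0.5):
    """Group an ordered list of (x, y) points in the vehicle frame into
    obstacles: points beyond the detection range are discarded, and a
    gap to the right-hand neighbour larger than `gap` starts a new
    obstacle (first adjacent distance condition violated)."""
    obstacles, current = [], []
    prev = None
    for x, y in points:
        # Detection distance condition: skip points too far from the vehicle.
        if math.hypot(x, y) > max_range:
            continue
        # A large jump from the previous kept point closes the current
        # obstacle and opens a new one.
        if prev is not None and math.hypot(x - prev[0], y - prev[1]) > gap:
            if current:
                obstacles.append(current)
            current = []
        current.append((x, y))
        prev = (x, y)
    if current:
        obstacles.append(current)
    return obstacles
```

Each returned group corresponds to one obstacle, its length to the boundary point count, and its members to the boundary point coordinates.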
On the basis of the above embodiments, the apparatus includes:
the global boundary point coordinate determination module is used for acquiring a preset global coordinate system after the obstacle information of the obstacles in the preset range of the current vehicle is identified, and respectively determining the global boundary point coordinates of each boundary point in the global coordinate system based on the boundary point coordinates of each boundary point in the local coordinate system and a preset coordinate conversion method;
and the boundary point type determining module is used for respectively determining the global boundary point coordinates in the preset range of the current vehicle at the next moment, and determining the boundary point type of each boundary point based on the comparison result of the global coordinate difference between the global boundary point coordinates at the current moment and the global boundary point coordinates at the next moment and a preset coordinate threshold.
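One common realization of the coordinate conversion and the type decision in these modules is a 2-D rigid transform by the vehicle pose, followed by a frame-to-frame displacement test. Both the pose convention and the threshold below are assumptions for illustration, not values fixed by the patent:

```python
import math

def to_global(pt_local, veh_pose):
    """Convert a boundary point from the vehicle-centred local frame to
    the global frame. `veh_pose` is (x, y, heading) of the vehicle in
    the global frame; this is one possible 'preset coordinate conversion
    method', assumed here for illustration."""
    x, y = pt_local
    vx, vy, yaw = veh_pose
    gx = vx + x * math.cos(yaw) - y * math.sin(yaw)
    gy = vy + x * math.sin(yaw) + y * math.cos(yaw)
    return gx, gy

def boundary_point_type(g_now, g_next, threshold=0.2):
    """Classify a boundary point as dynamic or static by comparing its
    global coordinates at consecutive instants against a preset
    coordinate threshold (the value here is illustrative)."""
    shift = math.hypot(g_next[0] - g_now[0], g_next[1] - g_now[1])
    return "dynamic" if shift > threshold else "static"
```

A point whose global coordinates stay within the threshold between the two instants is treated as static; a larger displacement marks it as dynamic.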
On the basis of the above embodiments, the boundary point type includes a dynamic boundary point and a static boundary point;
correspondingly, the device also comprises:
the first driving track updating module is used for, after the boundary point type of each boundary point is determined and if the boundary point type is a dynamic boundary point, updating the global boundary point coordinates of each boundary point in real time, and updating the driving track of the current vehicle in real time based on the boundary points updated in real time, until the boundary points are no longer within the detection distance condition range or the current vehicle has bypassed them;
and the second driving track updating module is used for, after the boundary point type of each boundary point is determined and if the boundary point type is a static boundary point, determining the driving track of the current vehicle based on each boundary point, until the boundary points are no longer within the detection distance condition range or the current vehicle has bypassed them.
The obstacle information determination device provided by the embodiment of the invention can execute the obstacle information determination method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the obstacle information determination apparatus, each included unit and each included module are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Example four
Fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention, and shows a block diagram of an exemplary electronic device 12 suitable for implementing embodiments of the present invention. The electronic device 12 shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing electronic device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 6, and commonly referred to as a "hard drive"). Although not shown in Fig. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 6, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example, implementing the steps of the obstacle information determination method provided in this embodiment of the present invention, where the obstacle information determination method includes:
acquiring at least one point cloud data in a preset range of a current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
acquiring at least one obstacle identification condition, and identifying obstacle information of obstacles in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
Of course, those skilled in the art can understand that the processor may also implement the technical solution of the obstacle information determination method provided in any embodiment of the present invention.
EXAMPLE five
The fifth embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the obstacle information determination method provided in the embodiments of the present invention, where the obstacle information determination method includes:
acquiring at least one point cloud data in a preset range of a current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
acquiring at least one obstacle identification condition, and identifying obstacle information of obstacles in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be understood by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices, and optionally they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (12)

1. An obstacle information determination method, characterized by comprising:
acquiring at least one point cloud data in a preset range of a current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
acquiring at least one obstacle identification condition, and identifying obstacle information of obstacles in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
2. The method of claim 1, wherein the obstacle information comprises a number of obstacles, an obstacle number, a number of boundary points of the obstacle, and boundary point coordinates of the obstacle.
3. The method of claim 1, wherein the obtaining at least one point cloud data within a preset range of a current vehicle comprises:
scanning a preset range of the current vehicle based on a preset radar sensor, and acquiring initial point cloud coordinates in the scanning result;
and respectively carrying out data preprocessing on each initial point cloud coordinate to obtain each target point cloud coordinate in the preset range of the current vehicle.
4. The method of claim 3, wherein the data preprocessing each initial point cloud coordinate to obtain each target point cloud coordinate within the preset range of the current vehicle comprises:
acquiring a preset coordinate storage matrix, shifting each column of coordinate data in the preset coordinate storage matrix one column to the right, and storing the corresponding column of coordinate data in the initial point cloud coordinates in the first column of the preset coordinate storage matrix to obtain a coordinate adjustment matrix;
sorting the coordinate data in the coordinate adjustment matrix according to a preset sorting rule to obtain a coordinate sorting matrix;
acquiring at least two columns of coordinate data in the coordinate sorting matrix, determining a row coordinate mean value of each row coordinate data in the at least two columns of coordinate data, and taking each row coordinate mean value as a corresponding point cloud coordinate in the target point cloud coordinate.
5. The method according to claim 1, wherein the obtaining at least one obstacle identification condition and identifying obstacle information of an obstacle within a preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate comprises:
for any point cloud coordinate, if the distance between the current point cloud coordinate and the current vehicle meets the detection distance condition, acquiring a current first adjacent distance between the current point cloud coordinate and a right adjacent point of the current point cloud coordinate;
and if the current first adjacent distance does not meet the first adjacent distance condition, accumulating the number of obstacles, the obstacle number, the number of boundary points and the boundary point number of the obstacle, and determining the boundary point coordinates based on the current point cloud coordinates.
6. The method of claim 5, wherein the obtaining at least one obstacle identification condition and identifying obstacle information of an obstacle within a preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate further comprises:
if the current first adjacent distance meets the first adjacent distance condition, acquiring current adjacent included angles between the current point cloud coordinate and a left adjacent point coordinate and a right adjacent point coordinate of the current point cloud coordinate respectively;
if the current adjacent included angle does not meet the adjacent included angle condition, matching the current boundary point number of the obstacle against a preset number threshold; if the current boundary point number is within the preset number threshold range, accumulating the boundary point number and determining the boundary point coordinates; and if the current boundary point number is not within the preset number threshold range, accumulating the number of obstacles, the obstacle number, the number of boundary points and the boundary point number, and determining the boundary point coordinates.
7. The method of claim 5, wherein the obtaining at least one obstacle identification condition and identifying obstacle information of an obstacle within a preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate further comprises:
if the current adjacent included angle meets the adjacent included angle condition, acquiring a current second adjacent distance between the current point cloud coordinate and a left adjacent point of the current point cloud coordinate;
if the current second adjacent distance does not meet the second adjacent distance condition, accumulating the boundary point number;
and if the current second adjacent distance meets the second adjacent distance condition and it is determined that identification of the current point cloud data is finished, traversing the other point cloud coordinates and storing the number of obstacles, the obstacle number, the number of boundary points, the boundary point number and the boundary point coordinates of each identified obstacle.
8. The method of claim 1, after identifying obstacle information for obstacles within the preset range of the current vehicle, further comprising:
acquiring a preset global coordinate system, and respectively determining global boundary point coordinates of each boundary point in the global coordinate system based on the boundary point coordinates of each boundary point in the local coordinate system and a preset coordinate conversion method;
and determining the global boundary point coordinates in the preset range of the current vehicle at the next moment respectively, and determining the boundary point type of each boundary point based on the comparison result of the global coordinate difference between the global boundary point coordinates at the current moment and the global boundary point coordinates at the next moment and a preset coordinate threshold.
9. The method of claim 8, wherein the boundary point types include dynamic boundary points and static boundary points;
correspondingly, after determining the boundary point type of each boundary point, the method further includes:
if the boundary point type is a dynamic boundary point, updating the global boundary point coordinates of the boundary points in real time, and updating the driving track of the current vehicle in real time based on the boundary points updated in real time, until the boundary points are no longer within the detection distance condition range or the current vehicle has bypassed them;
and if the boundary point type is a static boundary point, determining the driving track of the current vehicle based on each boundary point, until the boundary points are no longer within the detection distance condition range or the current vehicle has bypassed them.
10. An obstacle information determination device characterized by comprising:
the point cloud data acquisition module is used for acquiring at least one point cloud data in a preset range of the current vehicle; the point cloud data comprises point cloud coordinates in a local coordinate system with the current vehicle as an origin;
the obstacle information identification module is used for acquiring at least one obstacle identification condition and identifying obstacle information of obstacles in the preset range of the current vehicle based on each obstacle identification condition and each point cloud coordinate; the obstacle identification conditions comprise a detection distance condition between a point cloud coordinate and the current vehicle, a first adjacent distance condition between the point cloud coordinate and its right adjacent point coordinate, an adjacent included angle condition between the point cloud coordinate and its left and right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinate and its left adjacent point coordinate.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the obstacle information determination method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the obstacle information determination method according to any one of claims 1 to 9.
CN202210091407.4A 2022-01-26 2022-01-26 Obstacle information determination method, obstacle information determination device, electronic device, and storage medium Pending CN114419601A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210091407.4A CN114419601A (en) 2022-01-26 2022-01-26 Obstacle information determination method, obstacle information determination device, electronic device, and storage medium
PCT/CN2022/141398 WO2023142816A1 (en) 2022-01-26 2022-12-23 Obstacle information determination method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210091407.4A CN114419601A (en) 2022-01-26 2022-01-26 Obstacle information determination method, obstacle information determination device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN114419601A true CN114419601A (en) 2022-04-29

Family

ID=81277841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210091407.4A Pending CN114419601A (en) 2022-01-26 2022-01-26 Obstacle information determination method, obstacle information determination device, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114419601A (en)
WO (1) WO2023142816A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142816A1 (en) * 2022-01-26 2023-08-03 中国第一汽车股份有限公司 Obstacle information determination method and apparatus, and electronic device and storage medium
CN116792155A (en) * 2023-06-26 2023-09-22 华南理工大学 Tunnel health state monitoring and early warning method based on distributed optical fiber sensing
CN117148837A (en) * 2023-08-31 2023-12-01 上海木蚁机器人科技有限公司 Dynamic obstacle determination method, device, equipment and medium
CN116792155B (en) * 2023-06-26 2024-06-07 华南理工大学 Tunnel health state monitoring and early warning method based on distributed optical fiber sensing

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109085608A (en) * 2018-09-12 2018-12-25 奇瑞汽车股份有限公司 Obstacles around the vehicle detection method and device
US10634793B1 (en) * 2018-12-24 2020-04-28 Automotive Research & Testing Center Lidar detection device of detecting close-distance obstacle and method thereof
CN111260789B (en) * 2020-01-07 2024-01-16 青岛小鸟看看科技有限公司 Obstacle avoidance method, virtual reality headset and storage medium
CN111289998A (en) * 2020-02-05 2020-06-16 北京汽车集团有限公司 Obstacle detection method, obstacle detection device, storage medium, and vehicle
CN111950428A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target obstacle identification method and device and carrier
CN112519797A (en) * 2020-12-10 2021-03-19 广州小鹏自动驾驶科技有限公司 Vehicle safety distance early warning method, early warning system, automobile and storage medium
CN114419601A (en) * 2022-01-26 2022-04-29 中国第一汽车股份有限公司 Obstacle information determination method, obstacle information determination device, electronic device, and storage medium


Also Published As

Publication number Publication date
WO2023142816A1 (en) 2023-08-03

Similar Documents

Publication Publication Date Title
CN109284348B (en) Electronic map updating method, device, equipment and storage medium
CN109343061B (en) Sensor calibration method and device, computer equipment, medium and vehicle
CN110163176B (en) Lane line change position identification method, device, equipment and medium
CN114419601A (en) Obstacle information determination method, obstacle information determination device, electronic device, and storage medium
CN109188438A (en) Yaw angle determines method, apparatus, equipment and medium
CN109284801B (en) Traffic indicator lamp state identification method and device, electronic equipment and storage medium
CN112644480B (en) Obstacle detection method, obstacle detection system, computer device and storage medium
CN109558854B (en) Obstacle sensing method and device, electronic equipment and storage medium
CN110377682B (en) Track type determination method and device, computing equipment and storage medium
CN109635861B (en) Data fusion method and device, electronic equipment and storage medium
CN109814114B (en) Ultrasonic radar array, obstacle detection method and system
CN112528859B (en) Lane line detection method, device, equipment and storage medium
CN111121797B (en) Road screening method, device, server and storage medium
CN114429186A (en) Data fusion method, device, equipment and medium based on multiple sensors
CN110186472B (en) Vehicle yaw detection method, computer device, storage medium, and vehicle system
CN113838125A (en) Target position determining method and device, electronic equipment and storage medium
CN109635868B (en) Method and device for determining obstacle type, electronic device and storage medium
CN114475656A (en) Travel track prediction method, travel track prediction device, electronic device, and storage medium
CN112102648B (en) Vacant parking space pushing method, device, equipment and storage medium
CN112100565B (en) Road curvature determination method, device, equipment and storage medium
CN114895274A (en) Guardrail identification method
CN110133624B (en) Unmanned driving abnormity detection method, device, equipment and medium
CN115311634A (en) Lane line tracking method, medium and equipment based on template matching
CN113619606A (en) Obstacle determination method, apparatus, device and storage medium
CN115704688A (en) High-precision map data relative position precision evaluation method, system, medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination