WO2023142816A1 - Method and apparatus for determining obstacle information, electronic device, and storage medium - Google Patents


Info

Publication number
WO2023142816A1
WO2023142816A1 (PCT/CN2022/141398; CN application CN2022141398W)
Authority
WO
WIPO (PCT)
Prior art keywords
coordinates
point
point cloud
current
obstacle
Prior art date
Application number
PCT/CN2022/141398
Other languages
English (en)
Chinese (zh)
Inventor
吴岗岗
杜建宇
王恒凯
曹天书
李超
李佳骏
王皓南
刘清宇
黄显晴
宋新丽
Original Assignee
中国第一汽车股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国第一汽车股份有限公司 filed Critical 中国第一汽车股份有限公司
Publication of WO2023142816A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • the embodiments of the present application relate to the technical field of intelligent driving, for example, to a method, device, electronic device, and storage medium for determining obstacle information.
  • during driving, the driver often directly observes the obstacles around the vehicle, or determines the obstacles on the left and right sides of the vehicle by observing the door rearview mirrors, and the vehicle is then driven according to the driver's judgment.
  • this manual approach is easily affected by the driver's subjective experience or by environmental factors, so the driving safety of the vehicle is low.
  • the present application provides a method, device, electronic device and storage medium for determining obstacle information, so as to improve the accuracy of determining obstacles around a vehicle, thereby improving the driving safety of the vehicle.
  • the embodiment of the present application provides a method for determining obstacle information, the method including:
  • the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin;
  • the obstacle identification conditions include a detection distance condition between the point cloud coordinates and the current vehicle, a first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, an adjacent angle condition formed by the point cloud coordinates with the left adjacent point coordinates and the right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinates and the left adjacent point coordinates;
  • the embodiment of the present application also provides a device for determining obstacle information, which includes:
  • the point cloud data acquisition module is configured to acquire at least one point cloud data within the preset range of the current vehicle; wherein, the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin;
  • the obstacle information identification module is configured to obtain at least one obstacle identification condition and, based on the obstacle identification condition and the point cloud coordinates, identify the obstacle information of obstacles within the preset range of the current vehicle; wherein the obstacle identification conditions include a detection distance condition between the point cloud coordinates and the current vehicle, a first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, an adjacent angle condition formed by the point cloud coordinates with the left adjacent point coordinates and the right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinates and the left adjacent point coordinates.
  • the embodiment of the present application further provides an electronic device, the electronic device comprising:
  • one or more processors;
  • storage means configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining obstacle information provided in any embodiment of the present application.
  • the embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method for determining obstacle information provided in any embodiment of the present application is implemented.
  • FIG. 1 is a schematic flowchart of a method for determining obstacle information provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for determining obstacle information provided by another embodiment of the present application
  • FIG. 3 is a schematic flowchart of a method for determining obstacle information provided by another embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for determining obstacle information provided by another embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of an obstacle information determining device provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 1 is a flow chart of a method for determining obstacle information provided by an embodiment of the present application. This embodiment is applicable to determining the obstacles around a vehicle while the vehicle is driving automatically, for example when other perception functions are limited.
  • the method can be executed by an obstacle information determining device, and the device can be realized by software and/or hardware.
  • the application scenario includes: during driving, the driver often directly observes the obstacles around the vehicle, or determines the obstacles on the left and right sides of the vehicle by observing the door rearview mirrors, and the vehicle is then driven according to the driver's instructions; this manual determination method is easily affected by the driver's subjective experience or environmental factors, and the driving safety of the vehicle is low.
  • the obstacle information around the vehicle is mostly obtained from cameras installed around the vehicle, but a camera can be damaged or functionally limited, leaving the vehicle controller unable to obtain the obstacle information around the vehicle and posing a greater safety risk to automated driving.
  • the technical solution in this embodiment calculates the status information of obstacles around the self-driving vehicle through the acquired radar data, and provides necessary information for the self-driving vehicle to decelerate, avoid obstacles, and plan paths around obstacles.
  • the technical solution of this embodiment acquires at least one point cloud data within the preset range of the current vehicle, where the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin; this yields more accurate radar data and provides the necessary information for an automatically driving vehicle to decelerate, avoid obstacles, and plan paths around obstacles. It then obtains at least one obstacle identification condition and, based on the obstacle identification condition and the point cloud coordinates, identifies the obstacle information of obstacles within the preset range of the current vehicle, where the obstacle identification conditions include a detection distance condition between the point cloud coordinates and the current vehicle, a first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, an adjacent angle condition formed by the point cloud coordinates with the left adjacent point coordinates and the right adjacent point coordinates, and a second adjacent distance condition between the point cloud coordinates and the left adjacent point coordinates. Identifying the scanned point cloud data through multiple obstacle identification conditions determines the obstacle information around the vehicle and improves the accuracy of obstacle identification, thereby improving the driving safety of the vehicle.
  • the method includes the following steps:
  • the surroundings of the vehicle may be detected based on various radar detection devices, so as to acquire point cloud data within a preset range around the vehicle.
  • the method for acquiring point cloud data may include: scanning a preset range of the current vehicle based on a preset radar sensor, and acquiring initial point cloud coordinates in the scan result.
  • the installation position of the preset radar sensor on the current vehicle determines the sensor's perception range and obstacle detection capability; to reduce occlusion as much as possible and increase the detection range of the lidar, the sensor may, for example, be installed above the roof or under the current vehicle.
  • the above installation positions of the radar sensors are only exemplary installation positions, and this embodiment does not limit the installation positions of the radar sensors.
  • the type of the preset radar sensor may be a laser radar, a vehicle millimeter-wave radar, or other types of radar sensors.
  • the surrounding area of the vehicle is continuously scanned, and the scanned point cloud data is stored in real time.
  • the scanning angle of the radar sensor can be 360 degrees, of course, the scanning angle can also be set in real time according to the current environment of the vehicle.
  • the stored point cloud data may include point cloud coordinates in a local coordinate system with the current vehicle as the origin.
  • Each point cloud data packet may include (X, Y) coordinate information based on the local coordinate system where the current vehicle is located.
  • the origin of the local coordinate system is the center point of the current vehicle
  • the positive X direction is the driving direction of the current vehicle
  • the Y direction is the left direction of the current vehicle.
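The local frame just described can be illustrated with a short sketch. Purely for illustration (the text does not specify the raw radar output format), assume each radar return arrives as a range and a bearing measured from the driving direction; the (X, Y) point cloud coordinates in the vehicle-centered frame would then be:

```python
import math

def polar_to_local(range_m, bearing_deg):
    """Convert a hypothetical radar return (range in meters, bearing in
    degrees, 0 = driving direction, positive toward the vehicle's left)
    into (X, Y) in the local coordinate system: origin at the vehicle
    center, +X the driving direction, +Y the vehicle's left."""
    theta = math.radians(bearing_deg)
    return (range_m * math.cos(theta),   # +X component (ahead)
            range_m * math.sin(theta))   # +Y component (left)
```

A point 10 m straight ahead maps to (10, 0); a point 10 m to the vehicle's left maps to (0, 10).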
  • since the initial point cloud data from radar scanning may contain single-frame false positives or multi-frame jitter, data preprocessing must be performed on the initial point cloud data before identifying obstacle information from it, so as to eliminate these accidental factors as far as possible and improve the accuracy of obstacle information identification. Therefore, after obtaining the initial point cloud data scanned by the radar sensor, the technical solution of this embodiment performs data preprocessing on the initial point cloud coordinates to obtain the target point cloud coordinates within the preset range of the current vehicle.
  • the method for data preprocessing of the initial point cloud data may include: obtaining a preset coordinate storage matrix, shifting the column coordinate data in the preset coordinate storage matrix one column to the right, and storing the corresponding column of the initial point cloud coordinates in the first column of the preset coordinate storage matrix to obtain a coordinate adjustment matrix; sorting the coordinate data in the coordinate adjustment matrix according to a preset sorting rule to obtain a coordinate sorting matrix; and obtaining at least two columns of coordinate data in the coordinate sorting matrix, determining the row coordinate mean of the row coordinate data across those columns, and using the row coordinate mean as the corresponding point cloud coordinate in the target point cloud coordinates.
  • the data preprocessing method is exemplarily introduced by taking the above 90 point cloud data as an example.
  • read the initial point cloud data FSP_0–FSP_90 of the current vehicle at the current moment and store it in a newly created initial point cloud data matrix.
  • the matrix name of the initial point cloud data matrix can be FSP_n_XY, and the matrix is a 90 ⁇ 2 coordinate matrix.
  • the initial coordinate data information in FSP_n_XY is shown in the following table:
  • to avoid single-frame false positives while stabilizing multi-frame jitter, the preprocessing of this embodiment relies on the assumption that the errors are normally distributed, so that error points can be eliminated through median-mean filtering.
  • a data storage matrix is established in advance to store the point cloud data after data preprocessing at the previous moment.
  • this embodiment divides the data storage matrix into an X coordinate data storage sub-matrix and a Y coordinate data storage sub-matrix.
  • the matrix name of the X-coordinate data storage sub-matrix may be FSP_save_X
  • the matrix name of the Y-coordinate data storage sub-matrix may be FSP_save_Y.
  • the matrix size of the X-coordinate data storage sub-matrix is a matrix of 90 ⁇ N, where the size of N is the key to median mean filtering.
  • the minimum value of N is not less than 10 and the maximum value is not more than 50; for example, in the technical solution of this embodiment N may temporarily be taken as 30, but N may also take other values, and this embodiment does not limit the value of N.
  • the data preprocessing of the initial point cloud data is introduced by way of example through the processing of the coordinate data in the X coordinate data storage sub-matrix. For example, move the column coordinates in the X coordinate data storage sub-matrix one column to the right, store the first column of the initial point cloud data matrix (that is, the coordinate data in the X column) into the X coordinate data storage sub-matrix, and obtain the coordinate data after data adjustment.
  • the coordinate data after data adjustment in the X coordinate data storage sub-matrix is sorted.
  • the row data may be sorted in descending order to obtain sorted coordinate data.
  • the beneficial effect of sorting the coordinate data in this embodiment is that the invalid coordinate data in the current matrix can be screened out according to the sorted coordinate data, and the reliability of the data is improved, thereby improving the accuracy of obstacle information identification.
  • At least one column of data in the sorted coordinate data in the X coordinate data storage sub-matrix is acquired, for example, at least one column in the middle may be acquired, or at least one column may be randomly acquired.
  • the column coordinate data of a preset number of middle columns of the X coordinate data storage sub-matrix can be selected, the row coordinate mean of each row of coordinate data in the selected columns can be calculated, and the row coordinate mean is used as the corresponding point cloud coordinate in the target point cloud coordinates.
  • M is temporarily set to 10, which means calculating the row mean of the 11th to 20th columns of the FSP_n_X matrix.
  • the method of data preprocessing is introduced by taking the X coordinate data storage sub-matrix as an example.
  • the coordinate data in the Y coordinate data storage sub-matrix can also be preprocessed in the same way.
  • the data preprocessing process of the information in the second column of the FSP_n_XY matrix includes:
  • M is likewise temporarily set to 10, which means calculating the row mean of the 11th to 20th columns of the Y coordinate data storage sub-matrix.
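The shift–sort–average steps above can be sketched as follows for one coordinate axis. The 90×N storage matrix, N = 30, and the middle M = 10 columns follow the text; the use of NumPy and the function name are illustrative.

```python
import numpy as np

N = 30   # history depth of the 90 x N storage matrix (the text allows 10..50)
M = 10   # number of middle columns averaged after sorting

def update_and_filter(fsp_save, new_col):
    """One median-mean filtering step: shift the history one column to the
    right, store the newest scan in the first column, sort each row, and
    average the middle M columns (columns 11..20 for N = 30)."""
    fsp_save[:, 1:] = fsp_save[:, :-1]          # shift columns right by one
    fsp_save[:, 0] = new_col                    # newest point cloud column
    rows = np.sort(fsp_save, axis=1)[:, ::-1]   # descending sort per row
    middle = rows[:, (N - M) // 2 : (N - M) // 2 + M]  # 0-based cols 10..19
    return middle.mean(axis=1)                  # filtered coordinate per point
```

A single-frame false positive lands at the extreme of its sorted row and is excluded from the middle-column average, while multi-frame jitter is smoothed by the mean.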
  • the obstacle identification condition is used to identify the point cloud data in the above embodiments, and determine whether the object corresponding to the point cloud data is an obstacle.
  • the obstacle information includes the number of obstacles, the obstacle numbers, the number of obstacle boundary points, the boundary point numbers, and the boundary point coordinates.
  • a boundary point of an obstacle can be understood as an inflection point of the obstacle, that is, a point in the point cloud scanned by the radar sensor as it sweeps around the vehicle.
  • the obstacle identification conditions include the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition formed by the point cloud coordinates with the left adjacent point coordinates and the right adjacent point coordinates, and the second adjacent distance condition between the point cloud coordinates and the left adjacent point coordinates.
  • if the current point cloud data is the rightmost end point, there is no right adjacent point, so the distance between the point cloud coordinates and the right adjacent point is set to 0; if the current point cloud data is the leftmost end point, there is no left adjacent point, so the distance between the point cloud coordinates and the left adjacent point is set to 0. Moreover, since the starting point and the ending point cannot form an included angle, the adjacent included angle of the point cloud coordinates corresponding to FSP_90 and FSP_0 is set to 180°.
  • the obstacle information of the obstacle within the preset range of the current vehicle is recognized based on the obstacle recognition condition and the point cloud coordinates respectively.
  • the identification method for identifying the obstacle information of obstacles within the preset range of the current vehicle includes: for any point cloud coordinates, if the distance between the current point cloud coordinates and the current vehicle meets the detection distance condition, obtain the current first adjacent distance between the current point cloud coordinates and the right adjacent point of the current point cloud coordinates; if the current first adjacent distance does not meet the first adjacent distance condition, increment the number of obstacles, the obstacle number, the number of boundary points, and the boundary point number, and determine the boundary point coordinates based on the current point cloud coordinates.
  • for any point cloud coordinates scanned by the current vehicle, obtain the current distance between the current point cloud coordinates and the vehicle, and match the current distance against the preset detection distance condition. If the current distance meets the detection distance condition, that is, the distance is within the detection range, the point cloud data lies within the range in which the current vehicle identifies obstacles. Then obtain the coordinates of the right adjacent point of the current point cloud coordinates, obtain the current first adjacent distance between them, and match this distance against the preset first adjacent distance condition.
  • the current point cloud coordinates are determined to belong to a new obstacle, and the obstacle information of the obstacle is updated.
  • the current point is the starting point of the obstacle boundary: add 1 to the number of obstacles, add 1 to the obstacle number and record it as the new target obstacle number, add 1 to the number of boundary points, and use the current point cloud coordinates as the boundary point coordinates.
  • if the current point cloud coordinates are the rightmost point cloud coordinates, the current first adjacent distance between the current point cloud coordinates and the right adjacent point defaults to 0; that is, the current first adjacent distance does not meet the preset first adjacent distance condition, and the identification step corresponding to not meeting the first adjacent distance condition is executed.
  • if the current first adjacent distance meets the first adjacent distance condition, obtain the current adjacent angle formed at the current point cloud coordinates by the left adjacent point coordinates and the right adjacent point coordinates of the current point cloud coordinates; if the current adjacent angle does not meet the adjacent angle condition, match the current boundary point number of the obstacle against the preset number threshold; if the current boundary point number is within the preset number threshold range, increment the boundary point number and determine the boundary point coordinates; if the current boundary point number is not within the preset number threshold range, increment the number of obstacles, the obstacle number, the number of boundary points, and the boundary point number, and determine the boundary point coordinates.
  • if the current first adjacent distance meets the first adjacent distance condition, that is, the distance between the right adjacent point and the current point cloud coordinates is within the preset distance range, continue to identify whether the current point cloud coordinates belong to an obstacle. For example, obtain the current adjacent angle formed at the current point cloud coordinates by the left adjacent point coordinates and the right adjacent point coordinates, and match the current adjacent angle against the preset adjacent angle condition.
  • if the current adjacent angle does not meet the adjacent angle condition, that is, the angle is not within the preset adjacent angle threshold range, obtain the boundary point number of the boundary point in the identified obstacle and match it against the preset number threshold. If the boundary point number is within the preset number threshold range, continue to increment the boundary point number and determine the current point cloud coordinates as the boundary point coordinates corresponding to the new boundary point number. On the contrary, if the boundary point number is not within the preset number threshold range, the current point cloud coordinates are determined to belong to a new obstacle, and the obstacle information is updated.
  • the current point is the starting point of the obstacle boundary: add 1 to the number of obstacles, add 1 to the obstacle number and record it as the new target obstacle number, add 1 to the number of boundary points, and use the current point cloud coordinates as the boundary point coordinates.
  • for the endpoint point cloud coordinates, the angle formed at the current point cloud coordinates with the left adjacent point coordinates and the right adjacent point coordinates defaults to 180°; that is, the current adjacent included angle does not meet the preset adjacent included angle condition, and the identification step corresponding to not meeting the adjacent included angle condition is executed.
  • otherwise, the current second adjacent distance between the current point cloud coordinates and the left adjacent point of the current point cloud coordinates is obtained; if the current second adjacent distance does not meet the second adjacent distance condition, the boundary point number is incremented; if the current second adjacent distance meets the second adjacent distance condition and it is determined that identification of the current point cloud data is complete, the other point cloud coordinates are traversed and the identified obstacle information is stored.
  • the number of obstacles, the obstacle numbers, the number of boundary points, the boundary point numbers, and the boundary point coordinates are stored.
  • if the current adjacent angle meets the adjacent angle condition, that is, the angle is within the preset adjacent angle threshold range, continue to identify whether the current point cloud coordinates belong to an obstacle based on the other obstacle identification conditions. For example, obtain the coordinates of the left adjacent point of the current point cloud coordinates, obtain the current second adjacent distance between them, and match this distance against the preset second adjacent distance condition. If the current second adjacent distance does not meet the second adjacent distance condition, that is, the distance between the left adjacent point and the current point cloud coordinates is not within the preset distance range, obtain the boundary point number of the boundary point in the identified obstacle and add 1 to it.
  • if the current second adjacent distance meets the second adjacent distance condition and there are other obstacle identification conditions, whether the current point cloud coordinates belong to an obstacle is identified based on those other conditions; if the current second adjacent distance meets the second adjacent distance condition and there is no other obstacle identification condition, it is determined that identification of the current point cloud data ends.
  • if the current point cloud coordinates are the leftmost point cloud coordinates, the current second adjacent distance between the current point cloud coordinates and the left adjacent point defaults to 0; that is, the current second adjacent distance does not meet the preset second adjacent distance condition, and the identification step corresponding to not meeting the second adjacent distance condition is executed.
  • the point cloud data is identified based on the above identification conditions, and the number of obstacles, obstacle numbers, number of boundary points, boundary point numbers, and boundary point coordinates of the identified obstacles are stored.
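The branching above can be condensed into a sketch. All threshold values here are illustrative placeholders (this passage fixes no numeric values), and the logic is simplified to the essence of the four conditions: points outside the detection distance are skipped, a broken adjacent distance starts a new obstacle, and an included angle outside the threshold range marks an inflection as an additional boundary point, capped by the boundary point number threshold.

```python
import math

DETECT_RANGE = 30.0         # detection distance condition (illustrative, meters)
ADJ_DIST_MAX = 0.5          # adjacent distance conditions (illustrative, meters)
ANGLE_MIN, ANGLE_MAX = 150.0, 180.0  # adjacent included angle range (illustrative)
MAX_BOUNDARY_PTS = 5        # boundary point number threshold (matches the 30x7 matrix)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(p, left, right):
    """Included angle (degrees) at p formed by its left and right neighbors."""
    v1 = (left[0] - p[0], left[1] - p[1])
    v2 = (right[0] - p[0], right[1] - p[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0.0 or n2 == 0.0:
        return 180.0        # endpoints default to 180 degrees, as in the text
    c = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def identify(points):
    """Group an ordered scan into obstacles, each a list of boundary points."""
    obstacles = []
    prev = None                      # previous in-range point
    for i, p in enumerate(points):
        if math.hypot(p[0], p[1]) > DETECT_RANGE:
            prev = None              # out of detection range: break adjacency
            continue
        if prev is None or dist(p, prev) > ADJ_DIST_MAX:
            obstacles.append([p])    # adjacency broken: a new obstacle starts here
        else:
            right = points[i + 1] if i + 1 < len(points) else None
            ang = angle_at(p, points[i - 1], right) if right else 180.0
            if not (ANGLE_MIN <= ang <= ANGLE_MAX) and len(obstacles[-1]) < MAX_BOUNDARY_PTS:
                obstacles[-1].append(p)   # inflection: record a boundary point
        prev = p
    return obstacles
```

Two collinear clusters separated by a gap produce two obstacles; an L-shaped cluster yields one obstacle whose corner is recorded as a second boundary point.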
  • the obstacle information of the current frame can be stored in the obstacle information matrix.
  • the matrix size of the obstacle information matrix is initially set to 30×7.
  • each row represents the corresponding information of the obstacle of the current number;
  • the first column represents the number (ID) of the obstacle, the second column represents the number of boundary points of the corresponding obstacle, and the third to seventh columns represent the information IDs of the multiple boundary points.
  • the obstacle information is shown in the following table:
  • the vehicle controller can complete the obstacle avoidance function of the vehicle according to the obstacle information provided by the embodiment of the present application.
  • the technical solution of this embodiment acquires at least one point cloud data within the preset range of the current vehicle, where the point cloud data includes the point cloud coordinates in a local coordinate system with the current vehicle as the origin, thereby obtaining more accurate radar data and providing the necessary information for the automatically driving vehicle to decelerate, avoid obstacles, and plan paths around obstacles; it obtains at least one obstacle identification condition and, based on the obstacle identification condition and the point cloud coordinates, identifies the obstacle information of obstacles within the preset range of the current vehicle, where the obstacle identification conditions include the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition formed by the point cloud coordinates with the left adjacent point coordinates and the right adjacent point coordinates, and the second adjacent distance condition between the point cloud coordinates and the left adjacent point coordinates; identifying the scanned point cloud data through multiple obstacle identification conditions determines the obstacle information around the vehicle and improves the accuracy of obstacle identification, thereby improving the safety of vehicle driving.
  • Fig. 3 is a flow chart of a method for determining obstacle information provided by another embodiment of the present application.
  • on the basis of the above embodiments, after the step of "identifying the obstacle information of obstacles within the preset range of the current vehicle", this embodiment adds the step "acquire the global coordinate system, and determine the global boundary point coordinates of the boundary point in the global coordinate system based on the boundary point coordinates of the obstacle boundary point in the local coordinate system and a preset coordinate transformation method". Explanations of terms that are the same as or correspond to those in the above embodiments are not repeated here.
  • the method for determining obstacle information provided in this embodiment includes:
  • the appearance of an obstacle prevents the current vehicle from traveling along the originally planned route; however, if the current vehicle bypasses the obstacle by modifying the originally planned route, it can continue to drive.
  • the coordinate position of the obstacle needs to be determined to assist the current vehicle in detour route planning.
  • the coordinate position of the obstacle is determined to be the coordinate position of the obstacle in the global coordinate system where the current route is located, not the coordinate position in the local coordinate system where the current vehicle is located.
  • the method for obtaining the coordinate position of the obstacle in the global coordinate system may include: obtaining the global coordinate system, and based on the boundary point coordinates of the boundary point of the obstacle in the local coordinate system and the preset coordinate conversion method, determine that the boundary point is at The global boundary point coordinates in the global coordinate system.
  • the global coordinate system can be based on the starting point of the driving route of the current vehicle, with the initial driving direction of the vehicle as the positive direction of the X-axis and the left side of the vehicle at the start of driving as the positive direction of the Y-axis.
  • at the starting point, the positive X-axis and positive Y-axis directions of the global coordinate system therefore coincide with those of the vehicle's local coordinate system.
  • In the local coordinate system where the current vehicle is located, determine the horizontal distance and the vertical distance between the origin of the local coordinate system and the origin of the global coordinate system, as well as the direction angle between the X-axis direction of the local coordinate system and the X-axis direction of the global coordinate system.
  • a coordinate conversion method between the local coordinate system and the global coordinate system is determined based on the above-mentioned horizontal distance, vertical distance and direction angle.
  • the global boundary point coordinates of the boundary point in the global coordinate system are determined based on the boundary point coordinates of the boundary point of the obstacle in the local coordinate system and a preset coordinate conversion method.
  • The conversion of boundary point 2 in FIG. 4 is taken as an example to illustrate the steps for converting a boundary point between the local coordinate system and the global coordinate system.
  • The coordinate information of boundary point 2 of the recognized obstacle in the local coordinate system is (x2, y2).
  • Assume the vehicle has been displaced by b along the positive X-axis and by a along the positive Y-axis, and has turned through an angle θ; coordinate conversion is then performed based on the coordinate conversion method determined above.
  • The coordinate transformation formula is as follows (reconstructed as the standard 2D rotation-plus-translation transform, consistent with the displacement and angle defined above):
    X2 = x2 · cos θ - y2 · sin θ + b
    Y2 = x2 · sin θ + y2 · cos θ + a
  • The coordinate information conversion of boundary point 2 is completed based on the above expression; that is, in the global coordinate system XY-O, the coordinates of boundary point 2 are (X2, Y2).
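The conversion step described above can be sketched in code as follows. This is a minimal illustration assuming the standard 2D rotation-plus-translation convention for a vehicle displaced by b along the global X-axis, a along the global Y-axis, and turned by θ; the function name is illustrative, not from the source:

```python
import math

def local_to_global(x_local, y_local, b, a, theta):
    """Map a boundary point from the vehicle's local frame to the global
    frame, given displacement b along the global X-axis, a along the
    global Y-axis, and heading change theta (radians)."""
    x_global = x_local * math.cos(theta) - y_local * math.sin(theta) + b
    y_global = x_local * math.sin(theta) + y_local * math.cos(theta) + a
    return x_global, y_global

# A point 3 m ahead in the local frame, after the vehicle has turned 90
# degrees and moved 1 m along each global axis:
print(local_to_global(3.0, 0.0, 1.0, 1.0, math.pi / 2))
```

With zero displacement and zero rotation the transform is the identity, which gives a quick sanity check on the sign conventions.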
  • Next, the boundary point type of the boundary point is determined.
  • If the global coordinate difference between the global boundary point coordinates at the current moment and those at the next moment is within the preset coordinate threshold, the boundary point is determined to be a static boundary point, and the obstacle information is filled into the static boundary point information matrix in the global coordinate system; conversely, if the global coordinate difference is not within the preset coordinate threshold, the boundary point is determined to be a dynamic boundary point, and the obstacle information is filled into the dynamic boundary point information matrix in the global coordinate system.
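The static/dynamic decision can be sketched as a small helper. This is an illustration only: the function name, the use of per-axis absolute differences, and the exact threshold semantics are assumptions not spelled out in this form by the source:

```python
def classify_boundary_point(coord_now, coord_next, coord_threshold):
    """Compare a boundary point's global coordinates at the current moment
    and the next moment; if the difference stays within the preset
    coordinate threshold the point is static, otherwise it is dynamic."""
    dx = abs(coord_next[0] - coord_now[0])
    dy = abs(coord_next[1] - coord_now[1])
    if dx <= coord_threshold and dy <= coord_threshold:
        return "static"
    return "dynamic"

# A parked obstacle barely moves between scans; a crossing one does not.
print(classify_boundary_point((10.0, 5.0), (10.02, 5.01), 0.1))  # static
print(classify_boundary_point((10.0, 5.0), (11.5, 5.0), 0.1))    # dynamic
```

In practice the result selects which information matrix (static or dynamic) receives the obstacle data.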
  • If the boundary point type is a dynamic boundary point, the global boundary point coordinates of the boundary point are updated in real time, and the driving trajectory of the current vehicle is updated in real time based on the updated boundary point, until the boundary point no longer satisfies the detection distance condition or the current vehicle has detoured around the boundary point.
  • If the boundary point type is a static boundary point, the driving trajectory of the current vehicle is determined based on the boundary point, until the boundary point no longer satisfies the detection distance condition or the current vehicle has detoured around the boundary point.
  • If the vehicle controller cancels the obstacle detour, obstacle information conversion is no longer performed, and the current vehicle continues to drive according to the predetermined driving route.
  • The technical solution of this embodiment acquires at least one piece of point cloud data within the preset range of the current vehicle, where the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin, thereby obtaining more accurate radar data that provides the information needed for an automatic driving vehicle to decelerate to avoid obstacles and to plan paths around them. It then obtains at least one obstacle recognition condition and, based on the obstacle recognition conditions and the point cloud coordinates, identifies the obstacle information of obstacles within the preset range of the current vehicle. The obstacle recognition conditions include the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition between the point cloud coordinates and the left and right adjacent point coordinates respectively, and the second adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates. By identifying the scanned point cloud data through multiple obstacle identification conditions and determining the obstacle information around the vehicle, the accuracy of obstacle recognition is improved, thereby improving the safety of vehicle driving.
  • The following is an embodiment of the obstacle information determination device provided in the embodiments of the present application.
  • This device belongs to the same application concept as the obstacle information determination method of the above embodiments; for details not described in the device embodiments, reference may be made to the above embodiments of the obstacle information determination method.
  • Fig. 5 is a schematic structural diagram of an obstacle information determination device provided by an embodiment of the present application. This embodiment can be applied to determining obstacles around a vehicle during automatic driving, and is particularly suitable for determining obstacles around the vehicle when no camera is used, or when the camera is damaged and its function is limited.
  • the structure of the obstacle information determination device includes: a point cloud data acquisition module 310 and an obstacle information identification module 320; wherein,
  • the point cloud data acquisition module 310 is configured to acquire at least one point cloud data within the preset range of the current vehicle; wherein, the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin;
  • the obstacle information identification module 320 is configured to obtain at least one obstacle identification condition, and based on the obstacle identification condition and the point cloud coordinates, identify the obstacle information of the obstacle within the preset range of the current vehicle; wherein,
  • The obstacle identification conditions include the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition between the point cloud coordinates and the left adjacent point coordinates and the right adjacent point coordinates respectively, and the second adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates.
  • The technical solution of this embodiment acquires at least one piece of point cloud data within the preset range of the current vehicle, where the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin, thereby obtaining more accurate radar data that provides the information needed for an automatic driving vehicle to decelerate to avoid obstacles and to plan paths around them. It then obtains at least one obstacle recognition condition and, based on the obstacle recognition conditions and the point cloud coordinates, identifies the obstacle information of obstacles within the preset range of the current vehicle, where the obstacle recognition conditions include the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition between the point cloud coordinates and the left and right adjacent point coordinates respectively, and the second adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates. By identifying the scanned point cloud data through multiple obstacle identification conditions and determining the obstacle information around the vehicle, the accuracy of obstacle recognition is improved, thereby improving the safety of vehicle driving.
  • The obstacle information includes the number of obstacles, the obstacle numbers, the number of boundary points of the obstacles, the numbers of the boundary points of the obstacles, and the coordinates of the boundary points of the obstacles.
  • the point cloud data acquisition module 310 includes:
  • the initial point cloud coordinate acquisition unit is configured to scan the preset range of the current vehicle based on the preset radar sensor, and acquire the initial point cloud coordinates in the scanning result;
  • the target point cloud coordinate acquisition unit is configured to perform data preprocessing on the initial point cloud coordinates to obtain target point cloud coordinates within the preset range of the current vehicle.
  • the target point cloud coordinate acquisition unit includes:
  • the coordinate adjustment matrix acquisition subunit is configured to acquire a preset coordinate storage matrix, shift the column coordinate data in the preset coordinate storage matrix to the right by one column, and store the corresponding column coordinate data in the initial point cloud coordinates in the The first column of the preset coordinate storage matrix is obtained to obtain the coordinate adjustment matrix;
  • the coordinate sorting matrix acquisition subunit is configured to sort the coordinate data in the coordinate adjustment matrix according to a preset sorting rule to obtain a coordinate sorting matrix
  • The point cloud coordinate acquisition subunit is configured to acquire at least two columns of coordinate data in the coordinate sorting matrix, determine the row-wise mean of the coordinate data in the at least two columns, and use the row-wise mean as the corresponding point cloud coordinates in the target point cloud coordinates.
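The shift-store-sort-average steps handled by these subunits can be sketched as follows. This is a sketch under assumptions: NumPy arrays, ascending per-row sort as the "preset sorting rule", and averaging the first two sorted columns; the real matrix layout and sorting rule may differ:

```python
import numpy as np

def preprocess_scan(storage, new_scan, use_cols=2):
    """One preprocessing step: shift the stored columns right by one,
    place the newest scan in the first column, sort each row, and return
    the row-wise mean of the first `use_cols` sorted values as the target
    point cloud coordinates.

    storage : (n_points, n_history) array of recent samples per beam
    new_scan: (n_points,) array from the latest radar sweep
    """
    storage[:, 1:] = storage[:, :-1].copy()  # shift column data right by one (copy avoids overlap)
    storage[:, 0] = new_scan                 # newest scan into the first column
    sorted_rows = np.sort(storage, axis=1)   # assumed sorting rule: ascending per row
    return sorted_rows[:, :use_cols].mean(axis=1)

history = np.array([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])
print(preprocess_scan(history, np.array([0.0, 9.0])))  # [0.5 4.5]
```

Averaging sorted samples from several sweeps damps single-scan noise, which matches the stated goal of obtaining more accurate radar data.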
  • the obstacle information identification module 320 includes:
  • The current first adjacent distance acquisition unit is configured to, for any point cloud coordinates, obtain the current first adjacent distance between the current point cloud coordinates and the right adjacent point coordinates if the distance between the current point cloud coordinates and the current vehicle meets the detection distance condition.
  • The first obstacle information acquisition unit is configured to, if the current first adjacent distance does not meet the first adjacent distance condition, accumulate the number of obstacles and the obstacle number, accumulate the number of boundary points and the boundary point number, and determine the coordinates of the boundary point based on the current point cloud coordinates.
  • the obstacle information identification module 320 includes:
  • The current adjacent angle acquisition unit is configured to, if the current first adjacent distance meets the first adjacent distance condition, obtain the current adjacent angle between the current point cloud coordinates and, respectively, the left adjacent point coordinates and the right adjacent point coordinates of the current point cloud coordinates;
  • The second obstacle information acquisition unit is configured to, if the current adjacent angle does not meet the adjacent angle condition, match the current boundary point number of the obstacle against a preset number threshold; if the current boundary point number is within the preset number threshold range, accumulate the boundary point number and determine the coordinates of the boundary point; if the current boundary point number is not within the preset number threshold range, accumulate the number of obstacles, the obstacle number, the number of boundary points, and the boundary point number, and determine the coordinates of the boundary point.
  • the obstacle information identification module 320 includes:
  • The current second adjacent distance acquisition unit is configured to, if the current adjacent angle meets the adjacent angle condition, obtain the current second adjacent distance between the current point cloud coordinates and the left adjacent point coordinates of the current point cloud coordinates.
  • The third obstacle information acquisition unit is configured to accumulate the boundary point number if the current second adjacent distance does not meet the second adjacent distance condition.
  • The obstacle information storage unit is configured to, if the current second adjacent distance meets the second adjacent distance condition and it is determined that identification of the current point cloud data has ended, traverse the other point cloud coordinates and store the number of identified obstacles, the obstacle numbers, the number of boundary points, the boundary point numbers, and the boundary point coordinates.
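Putting the module's units together, the traversal can be sketched as below. This is a drastically simplified illustration using only the detection distance and first adjacent distance conditions; the adjacent angle check, the second adjacent distance check, the numbering counters, and all threshold values are omitted or assumed:

```python
import math

def segment_obstacles(points, detect_dist, first_adjacent_dist):
    """Walk an angle-ordered scan and group boundary points: a point
    beyond the detection distance is ignored; a gap to the previously
    kept point larger than the first adjacent distance starts a new
    obstacle, otherwise the point joins the current obstacle."""
    obstacles = []  # each obstacle is a list of boundary point coordinates
    for p in points:
        if math.hypot(p[0], p[1]) > detect_dist:  # detection distance condition
            continue
        if obstacles and math.dist(p, obstacles[-1][-1]) <= first_adjacent_dist:
            obstacles[-1].append(p)               # same obstacle, next boundary point
        else:
            obstacles.append([p])                 # new obstacle begins here
    return obstacles

scan = [(1.0, 0.0), (1.1, 0.0), (5.0, 0.0), (5.1, 0.0), (100.0, 0.0)]
print(len(segment_obstacles(scan, 50.0, 1.0)))  # 2
```

The two close pairs form two obstacles, while the far point is discarded by the detection distance condition; the full method additionally splits or merges groups using the angle and second distance conditions.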
  • the device includes:
  • the global boundary point coordinate determination module is configured to obtain the global coordinate system after identifying the obstacle information of the obstacle within the preset range of the current vehicle, and based on the boundary point coordinates of the boundary point of the obstacle in the local coordinate system And a preset coordinate conversion method, determining the global boundary point coordinates of the boundary point in the global coordinate system;
  • The boundary point type determination module is configured to determine the global boundary point coordinates within the preset range of the current vehicle at the next moment, and determine the boundary point type of the boundary point based on the result of comparing the global coordinate difference between the global boundary point coordinates at the current moment and those at the next moment with the preset coordinate threshold.
  • The boundary point types include dynamic boundary points and static boundary points.
  • The device also includes:
  • The first driving track update module is configured to, after the boundary point type of the boundary point is determined, if the boundary point type is a dynamic boundary point, update the global boundary point coordinates of the boundary point in real time, and update the driving trajectory of the current vehicle in real time based on the updated boundary point, until the boundary point no longer satisfies the detection distance condition or the current vehicle has detoured around the boundary point;
  • The second driving trajectory update module is configured to, after the boundary point type of the boundary point is determined, if the boundary point type is a static boundary point, determine the driving trajectory of the current vehicle based on the boundary point, until the boundary point no longer satisfies the detection distance condition or the current vehicle has detoured around the boundary point.
  • the obstacle information determining device provided in the embodiment of the present application can execute the obstacle information determining method provided in any embodiment of the present application, and has corresponding functional modules and beneficial effects for executing the method.
  • The units and modules included above are divided only according to functional logic, but are not limited to this division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of distinguishing them from each other and are not intended to limit the protection scope of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 6 shows a block diagram of an exemplary electronic device 12 suitable for implementing embodiments of the present application.
  • the electronic device 12 shown in FIG. 6 is only an example, and should not limit the functions and scope of use of the embodiment of the present application.
  • electronic device 12 takes the form of a general computing electronic device.
  • Components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the various system components (including the system memory 28 and the processing unit 16).
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
  • Such bus structures include, by way of example but not limitation, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Electronic device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 12 and include both volatile and nonvolatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32 .
  • Electronic device 12 may include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • storage system 34 may be configured to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive”).
  • A disk drive for reading and writing to a removable nonvolatile disk (e.g., a "floppy disk"), and an optical disk drive for reading and writing to a removable nonvolatile optical disk (e.g., CD-ROM, DVD-ROM, or other optical media), may also be provided.
  • each drive may be connected to bus 18 via one or more data media interfaces.
  • the system memory 28 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of the embodiments of the present application.
  • Program/utility 40, having a set (at least one) of program modules 42, may be stored, for example, in system memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program modules 42 generally perform the functions and/or methods of the embodiments described herein.
  • The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through the input/output (I/O) interface 22.
  • The electronic device 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20. As shown in FIG. 6, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18.
  • The processing unit 16 executes various functional applications and sample data acquisition by running programs stored in the system memory 28, for example, implementing the steps of the obstacle information determination method provided in the embodiments of the present application.
  • The obstacle information determination method includes: acquiring at least one piece of point cloud data within the preset range of the current vehicle, where the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin; and obtaining at least one obstacle identification condition and, based on the obstacle identification condition and the point cloud coordinates, identifying the obstacle information of the obstacle within the preset range of the current vehicle, where the obstacle identification condition includes the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition between the point cloud coordinates and the left and right adjacent point coordinates respectively, and the second adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates.
  • The processor can also implement the technical solution of the sample data acquisition method provided in any embodiment of the present application.
  • This embodiment provides a computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, the steps of the obstacle information determination method provided in the embodiments of the present application are implemented.
  • The obstacle information determination method includes: acquiring at least one piece of point cloud data within the preset range of the current vehicle, where the point cloud data includes point cloud coordinates in a local coordinate system with the current vehicle as the origin; and obtaining at least one obstacle identification condition and, based on the obstacle identification condition and the point cloud coordinates, identifying the obstacle information of the obstacle within the preset range of the current vehicle, where the obstacle identification condition includes the detection distance condition between the point cloud coordinates and the current vehicle, the first adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates, the adjacent angle condition between the point cloud coordinates and the left and right adjacent point coordinates respectively, and the second adjacent distance condition between the point cloud coordinates and the right adjacent point coordinates.
  • the computer storage medium in the embodiments of the present application may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more leads, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • The computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The above modules or steps of the present application can be implemented by general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. For example, they can be implemented as executable program code of computing devices, so that they can be stored in storage devices and executed by computing devices; alternatively, they can each be made into separate integrated circuit modules, or multiple modules or steps among them can be implemented as a single integrated circuit module.
  • the application is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed in embodiments of the present application are an obstacle information determination method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring at least one piece of point cloud data within a preset range of the current vehicle, the point cloud data comprising point cloud coordinates in a local coordinate system that takes the current vehicle as the origin; and identifying obstacle information of an obstacle within the preset range of the current vehicle on the basis of an obstacle identification condition and the point cloud coordinates, the obstacle identification condition comprising a condition on the detection distance between the point cloud coordinates and the current vehicle, a condition on a first adjacent distance between the point cloud coordinates and the coordinates of a right adjacent point, a condition on the adjacent included angle between the point cloud coordinates and the coordinates of the left adjacent point and the right adjacent point respectively, and a condition on a second adjacent distance between the point cloud coordinates and the coordinates of the right adjacent point.
PCT/CN2022/141398 2022-01-26 2022-12-23 Procédé et appareil de détermination d'informations d'obstacle, dispositif électronique et support de stockage WO2023142816A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210091407.4 2022-01-26
CN202210091407.4A CN114419601A (zh) 2022-01-26 2022-01-26 障碍物信息确定方法、装置、电子设备以及存储介质

Publications (1)

Publication Number Publication Date
WO2023142816A1 true WO2023142816A1 (fr) 2023-08-03

Family

ID=81277841

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/141398 WO2023142816A1 (fr) 2022-01-26 2022-12-23 Procédé et appareil de détermination d'informations d'obstacle, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN114419601A (fr)
WO (1) WO2023142816A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118243105A (zh) * 2024-03-28 2024-06-25 北京小米机器人技术有限公司 路径规划方法、装置、电子设备及存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419601A (zh) * 2022-01-26 2022-04-29 中国第一汽车股份有限公司 障碍物信息确定方法、装置、电子设备以及存储介质
CN116792155B (zh) * 2023-06-26 2024-06-07 华南理工大学 一种基于分布式光纤传感的隧道健康状态监测预警方法
CN117148837B (zh) * 2023-08-31 2024-08-23 上海木蚁机器人科技有限公司 动态障碍物的确定方法、装置、设备和介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109085608A (zh) * 2018-09-12 2018-12-25 奇瑞汽车股份有限公司 车辆周围障碍物检测方法及装置
US10634793B1 (en) * 2018-12-24 2020-04-28 Automotive Research & Testing Center Lidar detection device of detecting close-distance obstacle and method thereof
CN111260789A (zh) * 2020-01-07 2020-06-09 青岛小鸟看看科技有限公司 避障方法、虚拟现实头戴设备以及存储介质
CN111289998A (zh) * 2020-02-05 2020-06-16 北京汽车集团有限公司 障碍物检测方法、装置、存储介质以及车辆
CN111950428A (zh) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 目标障碍物识别方法、装置及运载工具
CN112519797A (zh) * 2020-12-10 2021-03-19 广州小鹏自动驾驶科技有限公司 一种车辆安全距离预警方法、预警系统、汽车及存储介质
CN114419601A (zh) * 2022-01-26 2022-04-29 中国第一汽车股份有限公司 障碍物信息确定方法、装置、电子设备以及存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109085608A (zh) * 2018-09-12 2018-12-25 奇瑞汽车股份有限公司 车辆周围障碍物检测方法及装置
US10634793B1 (en) * 2018-12-24 2020-04-28 Automotive Research & Testing Center Lidar detection device of detecting close-distance obstacle and method thereof
CN111260789A (zh) * 2020-01-07 2020-06-09 青岛小鸟看看科技有限公司 避障方法、虚拟现实头戴设备以及存储介质
CN111289998A (zh) * 2020-02-05 2020-06-16 北京汽车集团有限公司 障碍物检测方法、装置、存储介质以及车辆
CN111950428A (zh) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 目标障碍物识别方法、装置及运载工具
CN112519797A (zh) * 2020-12-10 2021-03-19 广州小鹏自动驾驶科技有限公司 一种车辆安全距离预警方法、预警系统、汽车及存储介质
CN114419601A (zh) * 2022-01-26 2022-04-29 中国第一汽车股份有限公司 障碍物信息确定方法、装置、电子设备以及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118243105A (zh) * 2024-03-28 2024-06-25 北京小米机器人技术有限公司 路径规划方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN114419601A (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
WO2023142816A1 (fr) Procédé et appareil de détermination d'informations d'obstacle, dispositif électronique et support de stockage
US11042762B2 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN109188438B (zh) 偏航角确定方法、装置、设备和介质
CN109284348B (zh) 一种电子地图的更新方法、装置、设备和存储介质
CN108253975B (zh) 一种建立地图信息及车辆定位的方法与设备
US10035508B2 (en) Device for signalling objects to a navigation module of a vehicle equipped with this device
JP2020166268A (ja) 地図軌跡マッチングデータの品質決定方法、装置、サーバ及び媒体
JP7314213B2 (ja) 車両測位方法、装置、電子デバイス、記憶媒体及びプログラム
CN110163176B (zh) 车道线变化位置识别方法、装置、设备和介质
KR20210127121A (ko) 도로 이벤트 검출 방법, 장치, 기기 및 저장매체
CN112644480B (zh) 一种障碍物检测方法、检测系统、计算机设备和存储介质
WO2023040737A1 (fr) Procédé et appareil de détermination d'emplacement cible, dispositif électronique et support de stockage
CN109635861B (zh) 一种数据融合方法、装置、电子设备及存储介质
CN112100565B (zh) 一种道路曲率确定方法、装置、设备及存储介质
WO2020215254A1 (fr) Procédé de maintenance de carte de ligne de délimitation de voie, dispositif électronique et support d'informations
JP7230691B2 (ja) 異常検出方法、異常検出装置、及び異常検出システム
CN113537362A (zh) 一种基于车路协同的感知融合方法、装置、设备及介质
CN109635868A (zh) 障碍物类别的确定方法、装置、电子设备及存储介质
CN115424245A (zh) 车位识别方法、电子设备和存储介质
CN114662600A (zh) 一种车道线的检测方法、装置和存储介质
CN110390252B (zh) 基于先验地图信息的障碍物检测方法、装置和存储介质
CN115908498B (zh) 一种基于类别最优匹配的多目标跟踪方法及装置
CN109270566B (zh) 导航方法、导航效果测试方法、装置、设备和介质
CN109188419B (zh) 障碍物速度的检测方法、装置、计算机设备及存储介质
CN113313654B (zh) 激光点云滤波去噪方法、系统、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923602

Country of ref document: EP

Kind code of ref document: A1