WO2023169337A1 - Method and device for estimating the speed of a target object, vehicle and storage medium - Google Patents

Method and device for estimating the speed of a target object, vehicle and storage medium Download PDF

Info

Publication number
WO2023169337A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
target
segment
point
segments
Prior art date
Application number
PCT/CN2023/079661
Other languages
English (en)
French (fr)
Inventor
刘涛
周全赟
闫鹤
刘兰个川
王弢
吴新宙
Original Assignee
广州小鹏自动驾驶科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州小鹏自动驾驶科技有限公司
Publication of WO2023169337A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/215: Motion-based segmentation
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10044: Radar image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present application relates to the technical field of data processing, and in particular to a method and device for estimating the speed of a target object, a vehicle and a storage medium.
  • lidar equipment can be deployed in the vehicle; then obstacles can be avoided based on the point cloud collected by the lidar equipment.
  • for static obstacles, obstacle avoidance during autonomous driving can be completed by determining the position of the static obstacle through the point cloud; for dynamic obstacles, since they are in motion, their speed needs to be estimated accurately in order to avoid them effectively.
  • when estimating the speed of a dynamic obstacle, the consecutive multi-frame point clouds collected by the lidar device are usually used; however, if the dynamic obstacle appears in the first few frames of point clouds collected by the lidar device, its speed may not be estimated accurately due to the lack of point clouds.
  • in view of the above problems, a method and device for estimating the speed of a target object, a vehicle and a storage medium are proposed to overcome the above problems or at least partially solve them, including:
  • a method for estimating the speed of a target object includes:
  • the target single-frame point cloud is generated from one frame of point cloud obtained by the lidar device scanning the moving target object multiple times within the target period.
  • the target speed of the target object is estimated.
  • determining the difference value between any two point cloud segments in at least two point cloud segments includes:
  • the point cloud parameters include at least one of the following:
  • Point cloud shape, point cloud average time, point cloud area, and point cloud average pitch angle.
  • determining the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter includes:
  • the difference value between the first point cloud segment and the second point cloud segment is determined based on the target shape difference, target time difference, target point cloud area difference, target point cloud average pitch angle difference, and the corresponding weight.
  • the point cloud shape of the point cloud segment is determined through the following steps:
  • the point cloud shape of the point cloud segment is determined based on the smaller of the first included angle and the second included angle.
  • estimating the target speed of the target object based on the target point cloud segment includes:
  • segmenting the target single frame point cloud into at least two point cloud segments includes:
  • the target speed includes the absolute speed of the target object.
  • the lidar device is deployed in the target vehicle, and obtaining the target single-frame point cloud includes:
  • ego-motion compensation is performed on the frame of point cloud according to the target ego-vehicle pose to obtain the target single-frame point cloud.
  • the embodiment of the present application also provides a device for estimating the speed of a target object.
  • the device includes:
  • the acquisition module is used to obtain the target single-frame point cloud.
  • the target single-frame point cloud is generated by a frame of point cloud obtained by scanning the moving target multiple times by the lidar device within the target period;
  • a segmentation module used to segment the target single-frame point cloud into at least two point cloud segments
  • a determination module used to determine the difference value between any two point cloud segments in at least two point cloud segments, and use the two point cloud segments corresponding to the minimum difference value as the target point cloud segment;
  • the estimation module is used to estimate the target speed of the target object based on the target point cloud segments.
  • the determining module includes:
  • the selection submodule is used to arbitrarily obtain two point cloud segments from at least two point cloud segments as the first point cloud segment and the second point cloud segment;
  • the parameter determination submodule is used to determine the first point cloud parameter of the first point cloud segment and the second point cloud parameter of the second point cloud segment;
  • the difference value determination submodule is used to determine the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter;
  • the point cloud parameters include at least one of the following:
  • Point cloud shape, point cloud average time, point cloud area, and point cloud average pitch angle.
  • the difference value determination sub-module is used to determine the target shape difference between the point cloud shape of the first point cloud segment and that of the second point cloud segment; determine the target time difference between the point cloud average time of the first point cloud segment and that of the second point cloud segment; determine the target point cloud area difference between the point cloud area of the first point cloud segment and that of the second point cloud segment; determine the target point cloud average pitch angle difference between the point cloud average pitch angle of the first point cloud segment and that of the second point cloud segment; obtain the weights set in advance for different differences; and determine the difference value between the first point cloud segment and the second point cloud segment based on the target shape difference, target time difference, target point cloud area difference, target point cloud average pitch angle difference, and the corresponding weights.
  • the parameter determination submodule is used to establish a two-dimensional circumscribed rectangle for the point cloud segment, and determine the first corner point closest to the lidar device in the two-dimensional circumscribed rectangle; determine, in the circumscribed rectangle, the second corner point and the third corner point adjacent to the first corner point; determine the first included angle between the line connecting the first corner point and the lidar device and the line connecting the second corner point and the lidar device; determine the second included angle between the line connecting the first corner point and the lidar device and the line connecting the third corner point and the lidar device; and determine the point cloud shape of the point cloud segment based on the smaller of the first included angle and the second included angle.
  • the estimation module includes:
  • the feature extraction submodule is used to extract target feature points from the target point cloud segment according to the feature extraction method set in advance for the point cloud shape corresponding to the target point cloud segment;
  • the segmentation module includes:
  • the sorting submodule is used to sort the points in the target single-frame point cloud in time order
  • the first judgment sub-module is used to judge whether the interval between the adjacent first point and the second point in the target single frame point cloud exceeds the preset time interval
  • the second judgment submodule is used to judge, when the interval between the first point and the second point exceeds the preset time interval, whether the yaw-angle order (ascending or descending) of the point cloud segment with the first point as its endpoint is consistent with that of the point cloud segment with the second point as its endpoint;
  • the segment splitting submodule is used to split between the first point and the second point when the yaw-angle order of the point cloud segment with the first point as its endpoint is inconsistent with that of the point cloud segment with the second point as its endpoint.
  • the target speed includes the absolute speed of the target object
  • the lidar device is deployed in the target vehicle
  • the acquisition module includes:
  • the ego-vehicle pose acquisition submodule is used to obtain the target ego-vehicle pose of the target vehicle within the target period;
  • the ego-motion compensation submodule is used to perform ego-motion compensation on the frame of point cloud based on the target ego-vehicle pose to obtain the target single-frame point cloud.
  • An embodiment of the present application also provides a vehicle, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor.
  • the computer program is executed by the processor, the above method for estimating the speed of a target object is implemented.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • the computer program is executed by a processor, the above method for estimating the speed of a target object is implemented.
  • in the embodiments of the present application, the target single-frame point cloud can be obtained first, where the target single-frame point cloud is generated from one frame of point cloud obtained after the lidar device scans the moving target object multiple times within the target period; the target single-frame point cloud is then segmented into at least two point cloud segments; the difference value between any two of the at least two point cloud segments is determined, and the two point cloud segments corresponding to the minimum difference value are taken as the target point cloud segments; the target speed of the target object is then estimated based on the target point cloud segments.
  • Figure 1 is a step flow chart of a method for estimating the speed of a target object according to an embodiment of the present application
  • Figure 2 is a schematic diagram of a target single frame point cloud according to an embodiment of the present application.
  • Figure 3 is a step flow chart of another method for estimating the speed of a target object according to an embodiment of the present application
  • Figure 4a is a schematic diagram of a point cloud segment in the form of an L-shaped point cloud according to an embodiment of the present application
  • Figure 4b is a schematic diagram of a point cloud segment in the form of an I-shaped point cloud according to an embodiment of the present application
  • Figure 5 is a schematic diagram of the angle between a corner point and a line connecting a lidar device according to an embodiment of the present application
  • Figure 6a is a schematic diagram of the characteristic points of an L-shaped point cloud segment according to an embodiment of the present application.
  • Figure 6b is a schematic diagram of the characteristic points of a point cloud segment in the form of an I-shaped point cloud according to an embodiment of the present application
  • Figure 7 is a flow chart for estimating absolute speed according to an embodiment of the present application.
  • Figure 8 is a flow chart for estimating relative speed according to an embodiment of the present application.
  • Figure 9 is a structural block diagram of a device for estimating the speed of a target object according to an embodiment of the present application.
  • Referring to FIG. 1, a flow chart of a method for estimating the speed of a target object according to an embodiment of the present application is shown, which includes the following steps:
  • Step 101 Obtain the target single-frame point cloud.
  • the target single-frame point cloud is generated by a frame of point cloud obtained by scanning the moving target multiple times by the lidar device within the target period;
  • lidar equipment can refer to radar equipment that can scan an object multiple times within a certain period of time and obtain a frame of point cloud based on these multiple scans.
  • the target single-frame point cloud may be the first frame of point cloud generated by the lidar device, or the second frame, the third frame, and so on; the embodiments of the present application do not limit this.
  • the target single-frame point cloud can be generated from one frame of point cloud obtained after the lidar device scans a moving target object multiple times within the target period; for example, it can be generated by processing that frame of point cloud; the embodiments of the present application do not limit this.
  • FIG. 2 shows an example of a target single-frame point cloud, which includes multiple circles of point clouds obtained after the lidar device scans the moving target object multiple times within the target period.
  • Step 102 Divide the target single-frame point cloud into at least two point cloud segments
  • after obtaining the target single-frame point cloud, it can first be segmented to obtain at least two point cloud segments; a point cloud segment can consist of multiple consecutive points, and each point can contain three-dimensional coordinates, color information, reflection intensity information, echo number information, etc.
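  • As an illustration only, the per-point record described above could be modeled as follows in Python; the field names are assumptions for this sketch, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    """One point of a point cloud segment, following the fields listed above.

    Field names are illustrative assumptions, not taken from the patent.
    """
    x: float
    y: float
    z: float                  # three-dimensional coordinates
    timestamp: float          # acquisition time within the target period
    intensity: float = 0.0    # reflection intensity information
    echo: int = 1             # echo number information
    color: tuple = (0, 0, 0)  # color information, if the sensor provides it
```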
  • as an example, each point cloud segment corresponds to a sub-period of the target period; for example, the target period corresponding to the target single-frame point cloud is the 10th to the 20th ms, and four point cloud segments A, B, C and D are obtained: point cloud segment A corresponds to the sub-period from the 10th to the 12th ms of the target period, point cloud segment B to the sub-period from the 12th to the 14th ms, point cloud segment C to the sub-period from the 14th to the 17th ms, and point cloud segment D to the sub-period from the 17th to the 20th ms; the embodiments of the present application do not limit this.
  • Step 103 Determine the difference value between any two point cloud segments in at least two point cloud segments, and use the two point cloud segments corresponding to the minimum difference value as the target point cloud segment;
  • the two point cloud segments corresponding to the smallest difference value may be used as the target point cloud segment.
  • the difference value between point cloud segment A and point cloud segment B is 0.2, between A and C is 0.3, between A and D is 0.4, between B and C is 0.3, between B and D is 0.1, and between C and D is 0;
  • the difference value 0 is the minimum difference value
  • the point cloud segment C and the point cloud segment D can be used as the target point cloud segment.
  • Step 104 Estimate the target speed of the target object based on the target point cloud segment.
  • after obtaining the two target point cloud segments, the target speed of the target object can be estimated based on them; specifically, the target speed can be estimated based on the positions corresponding to the two target point cloud segments and the times corresponding to the two target point cloud segments.
  • in the embodiments of the present application, the target single-frame point cloud can be obtained first, where the target single-frame point cloud is generated from one frame of point cloud obtained after the lidar device scans the moving target object multiple times within the target period; the target single-frame point cloud is then segmented into at least two point cloud segments; the difference value between any two of the at least two point cloud segments is determined, and the two point cloud segments corresponding to the minimum difference value are taken as the target point cloud segments; the target speed of the target object is then estimated based on the target point cloud segments.
  • Referring to FIG. 3, a flow chart of another method for estimating the speed of a target object according to an embodiment of the present application is shown, which includes the following steps:
  • Step 301 Obtain the target single frame point cloud
  • the target speed may include the absolute speed of the target object
  • the lidar device may be deployed in the target vehicle
  • step 301 may include the following sub-steps:
  • Sub-step 11 Obtain the target ego-vehicle pose of the target vehicle within the target period;
  • the target ego-vehicle pose may include the position information and attitude of the target vehicle within the target period.
  • the target object can be a moving obstacle outside the target vehicle; while the target object is moving, the target vehicle may also be moving; in this case, in order to accurately estimate the absolute speed of the target object, compensation can be performed based on the pose of the target vehicle within the target period.
  • the target ego-vehicle pose can be obtained using visual SLAM (Simultaneous Localization and Mapping), lidar SLAM, GPS (Global Positioning System), an IMU (Inertial Measurement Unit), a wheel-speed odometer, etc., which is not limited in the embodiments of the present application.
  • Sub-step 12 Perform ego-motion compensation on the frame of point cloud according to the target ego-vehicle pose to obtain the target single-frame point cloud.
  • after obtaining the target ego-vehicle pose of the target vehicle within the target period, ego-motion compensation can be performed, based on that pose, on the frame of point cloud generated by the lidar device within the target period, thereby obtaining the target single-frame point cloud.
  • in another embodiment of the present application, if the relative speed of the target object with respect to the target vehicle is to be estimated, the frame of point cloud obtained after the lidar device scans the moving target object multiple times within the target period can be used directly as the target single-frame point cloud.
  • as an example, while scanning the target object, the lidar device may also generate points for other objects; therefore, target recognition can first be performed on the frame of point cloud obtained after the lidar device scans the moving target object multiple times within the target period, for example using deep learning methods or traditional geometric methods, and the target single-frame point cloud is then obtained from that frame of point cloud based on the recognition result.
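  • The following Python sketch illustrates one plausible form of such ego-motion compensation, assuming per-point timestamps and an interpolable ego-pose source; the function and parameter names are assumptions:

```python
import numpy as np

def compensate_ego_motion(points, timestamps, pose_at):
    """Ego-motion compensation sketch: move every point into the ego frame
    at one reference time, cancelling the target vehicle's own movement.

    points:     (N, 3) lidar points in the ego frame at their capture time
    timestamps: (N,) per-point capture times within the target period
    pose_at:    callable t -> 4x4 homogeneous ego pose (world <- ego),
                e.g. interpolated from SLAM / GPS / IMU / wheel odometry
    """
    t_ref = timestamps.max()                      # reference time: newest point
    ref_from_world = np.linalg.inv(pose_at(t_ref))
    compensated = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        p_world = pose_at(t) @ np.append(p, 1.0)  # ego(t) frame -> world frame
        compensated[i] = (ref_from_world @ p_world)[:3]
    return compensated
```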
  • Step 302 Divide the target single-frame point cloud into at least two point cloud segments
  • after obtaining the target single-frame point cloud, it can first be segmented to obtain at least two point cloud segments.
  • the target single-frame point cloud can be segmented through the following sub-steps:
  • Sub-step 21 Sort the points in the target single-frame point cloud in time order
  • Sub-step 22 Determine whether the interval between the adjacent first point and the second point in the target single frame point cloud exceeds the preset time interval
  • two adjacent points can be randomly selected from the sorted target single-frame point cloud as the first point and the second point.
  • the continuity between the first point and the second point can be determined based on the relationship between the interval between the first point and the second point and the preset time interval.
  • if the interval between the first point and the second point does not exceed the preset time interval, the first point and the second point are continuous; in this case, sub-step 21 can be performed again.
  • Sub-step 23 When the interval between the first point and the second point exceeds the preset time interval, judge whether the yaw-angle order (ascending or descending) of the point cloud segment with the first point as its endpoint is consistent with that of the point cloud segment with the second point as its endpoint;
  • if the interval between the first point and the second point exceeds the preset time interval, one point cloud segment can first be obtained with the first point as its endpoint, and another point cloud segment with the second point as its endpoint.
  • then, the yaw-angle order of the two point cloud segments can be determined, for example whether it is ascending or descending; if the yaw-angle orders of the two point cloud segments are inconsistent, the first point and the second point were generated by the lidar device during different scans of the target object.
  • Sub-step 24 When the yaw-angle order of the point cloud segment with the first point as its endpoint is inconsistent with that of the point cloud segment with the second point as its endpoint, split between the first point and the second point.
  • sub-step 21 can be performed again.
  • At least two point cloud segments can be obtained.
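  • A minimal Python sketch of sub-steps 21 to 24 follows; the gap threshold, the yaw-trend window, and all names are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def split_into_segments(points, timestamps, yaws, max_gap=0.5e-3, window=5):
    """Split one frame of points into point cloud segments (sub-steps 21-24).

    A cut is placed between adjacent points when their time gap exceeds
    max_gap AND the yaw angles on the two sides run in opposite orders,
    i.e. the points belong to different scans of the target.
    """
    order = np.argsort(timestamps)                  # sub-step 21: sort by time
    points, timestamps, yaws = points[order], timestamps[order], yaws[order]

    def ascending(y):                               # yaw trend of a short segment
        return y[-1] >= y[0]

    cuts = []
    for i in range(len(points) - 1):
        if timestamps[i + 1] - timestamps[i] <= max_gap:
            continue                                # sub-step 22: points continuous
        left = yaws[max(0, i - window + 1): i + 1]  # segment ending at first point
        right = yaws[i + 1: i + 1 + window]         # segment starting at second point
        if len(left) > 1 and len(right) > 1 and ascending(left) != ascending(right):
            cuts.append(i + 1)                      # sub-steps 23-24: split here
    return np.split(points, cuts)
```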
  • Step 303 From at least two point cloud segments, arbitrarily obtain two point cloud segments as the first point cloud segment and the second point cloud segment;
  • Step 304 Determine the first point cloud parameter of the first point cloud segment and the second point cloud parameter of the second point cloud segment;
  • the point cloud parameters may include at least one of the following: point cloud shape, point cloud average time, point cloud area, and point cloud average pitch angle.
  • the point cloud shape can represent the shape of the point cloud segment. As shown in Figure 4a, it is a point cloud segment with an L-shaped point cloud shape; as shown in Figure 4b, it is a point cloud segment with an I-shaped point cloud shape.
  • the point cloud average time can refer to the average of the timestamps corresponding to all points in the point cloud segment.
  • the point cloud area can refer to the area of the two-dimensional circumscribed rectangle of the point cloud segment.
  • the average pitch angle of the point cloud can refer to the average pitch angle corresponding to all points in the point cloud segment.
  • the first point cloud parameter of the first point cloud segment may be determined first, and the second point cloud parameter of the second point cloud segment may be determined.
  • which of the point cloud shape, point cloud average time, point cloud area and point cloud average pitch angle are determined can be set according to the actual situation, for example: determine the point cloud shape, point cloud average time, point cloud area and point cloud average pitch angle of the first point cloud segment; or determine only the point cloud average time, point cloud area and point cloud average pitch angle of the first point cloud segment; this is not limited in the embodiments of the present application.
  • the point cloud shape of the point cloud segment can be determined through the following sub-steps:
  • Sub-step 31 Establish a two-dimensional circumscribed rectangle for the point cloud fragment, and determine the first corner point closest to the lidar device in the two-dimensional circumscribed rectangle;
  • Sub-step 32 Determine the second corner point and the third corner point adjacent to the first corner point in the circumscribed two-dimensional rectangle;
  • one of the two corner points adjacent to the first corner point in the two-dimensional rectangle can be used as the second corner point, and the other adjacent corner point can be used as the third corner point.
  • Sub-step 33 Determine the first included angle between the line connecting the first corner point and the lidar device and the line connecting the second corner point and the lidar device;
  • Sub-step 34 Determine the second angle between the line connecting the first corner point and the lidar device and the line connecting the third corner point and the lidar device;
  • as shown in FIG. 5, for the two-dimensional circumscribed rectangle 580 of the point cloud segment 500, the first included angle is the angle between the line 530 connecting the first corner point 510 and the lidar device 520 and the line 550 connecting the second corner point 540 and the lidar device 520.
  • Sub-step 35 Determine the point cloud shape of the point cloud segment based on the angle with the smallest value among the first included angle and the second included angle.
  • the point cloud shape of the point cloud segment can be determined based on the smaller of the first included angle and the second included angle; specifically, different point cloud shapes can be set in advance for different angles, for example: if the angle is less than 1.5°, the shape is an I-shaped point cloud shape; otherwise, it is an L-shaped point cloud shape; this is not limited in the embodiments of the present application.
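  • A hedged Python sketch of sub-steps 31 to 35 follows, assuming the two-dimensional circumscribed rectangle has already been fitted; only the 1.5° threshold comes from the example above:

```python
import numpy as np

def classify_point_cloud_shape(corners, sensor_xy, i_shape_max_deg=1.5):
    """Classify a segment's 2D circumscribed rectangle as 'I' or 'L' shaped.

    corners:   (4, 2) rectangle corners in order (adjacent indices share an
               edge); how the rectangle is fitted is left to the caller
    sensor_xy: 2D position of the lidar device
    """
    sensor_xy = np.asarray(sensor_xy, dtype=float)
    dists = np.linalg.norm(corners - sensor_xy, axis=1)
    k = int(np.argmin(dists))                 # first corner: closest to the lidar
    neighbors = (corners[(k - 1) % 4], corners[(k + 1) % 4])  # second, third corners

    def included_angle(a, b):                 # angle between sensor->a and sensor->b
        va, vb = a - sensor_xy, b - sensor_xy
        cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    smallest = min(included_angle(corners[k], n) for n in neighbors)
    return "I" if smallest < i_shape_max_deg else "L"
```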
  • the point cloud average time of the point cloud segment can be determined through the following sub-steps:
  • Sub-step 41 Determine the number of points included in the point cloud segment and the timestamp corresponding to each point;
  • Sub-step 42 Determine the point cloud average time of the point cloud segment based on the number of points and the timestamp corresponding to each point.
  • the point cloud average time of the point cloud segment can be calculated based on the number of points N and the timestamp corresponding to each point; for example, the point cloud average time T can be calculated by the following formula:
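  • One consistent reconstruction of the referenced formula, assuming a plain arithmetic mean over the N per-point timestamps t_i:

$$T = \frac{1}{N}\sum_{i=1}^{N} t_i$$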
  • the area of the circumscribed two-dimensional rectangle created for the point cloud segment can be used as the point cloud area of the point cloud segment.
  • the average pitch angle of the point cloud segment can be determined through the following sub-steps:
  • Sub-step 51 Determine the number of points included in the point cloud segment and the pitch angle corresponding to each point;
  • Sub-step 52 Determine the average pitch angle of the point cloud segment according to the number of points and the pitch angle corresponding to each point.
  • the point cloud average pitch angle of the point cloud segment can be calculated based on the number of points N and the pitch angle θ_i corresponding to each point; for example, the point cloud average pitch angle can be calculated by the following formula:
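  • One consistent reconstruction of the referenced formula, assuming an arithmetic mean over the N per-point pitch angles θ_i:

$$pitch = \frac{1}{N}\sum_{i=1}^{N} \theta_i$$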
  • Step 305 Determine the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter;
  • after determining the first point cloud parameter and the second point cloud parameter, the difference between the point clouds of the first point cloud segment and the second point cloud segment can be determined based on them; specifically, the difference between the first point cloud parameter and the second point cloud parameter can be used as the difference value between the first point cloud segment and the second point cloud segment.
  • steps 303 to 305 can be repeatedly executed to respectively determine the difference value between any one of the at least two point cloud segments and other point cloud segments of the at least two point cloud segments.
  • the difference value between the first point cloud segment and the second point cloud segment can be determined through the following sub-steps:
  • Sub-step 61 Determine the target shape difference between the point cloud shape of the first point cloud segment and the point cloud shape of the second point cloud segment;
  • the absolute value of the difference between shape i and shape j can be used as the target shape difference between the point cloud shape of the first point cloud segment and the point cloud shape of the second point cloud segment.
  • Sub-step 62 Determine the target time difference between the point cloud average time of the first point cloud segment and the point cloud average time of the second point cloud segment;
  • the time difference between the first point cloud segment and the second point cloud segment can be determined based on their point cloud average times; specifically, the absolute value of the difference between the point cloud average time of the first point cloud segment and that of the second point cloud segment can be used as the target time difference.
  • Sub-step 63 Determine the target point cloud area difference between the point cloud area of the first point cloud segment and the point cloud area of the second point cloud segment;
  • the area difference between the first point cloud segment and the second point cloud segment can also be determined based on their point cloud areas; specifically, the absolute value of the difference between the point cloud area of the first point cloud segment and that of the second point cloud segment can be used as the target point cloud area difference.
  • Sub-step 64 Determine the target point cloud average pitch angle difference between the point cloud average pitch angle of the first point cloud segment and the point cloud average pitch angle of the second point cloud segment;
  • the pitch angle difference between the first point cloud segment and the second point cloud segment can also be determined based on their point cloud average pitch angles; specifically, the absolute value of the difference between the point cloud average pitch angle of the first point cloud segment and that of the second point cloud segment can be used as the target point cloud average pitch angle difference.
  • Sub-step 65 Obtain the weights set in advance for different differences;
  • for example, a larger weight can be set for the shape difference to ensure that the point cloud shapes of the two target point cloud segments finally obtained are consistent; the weights set for the time difference, point cloud area difference and point cloud average pitch angle difference can ensure that the weighted time difference, point cloud area difference and point cloud average pitch angle difference are within one order of magnitude; this is not limited in the embodiments of the present application.
  • Sub-step 66 Determine the difference value between the first point cloud segment and the second point cloud segment based on the target shape difference, target time difference, target point cloud area difference, target point cloud average pitch angle difference, and the corresponding weight.
  • the difference value between the first point cloud segment and the second point cloud segment can then be calculated based on the obtained target shape difference, target time difference, target point cloud area difference and target point cloud average pitch angle difference, together with the weights preset for each difference.
  • the difference value score ij between the first point cloud segment and the second point cloud segment can be calculated by the following formula:
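  • One consistent reconstruction of the referenced formula, assuming a weighted sum of the absolute differences from sub-steps 61 to 66 with the weights defined below:

$$score_{ij} = w_{shape}\left|shape_i - shape_j\right| + w_{t}\left|T_i - T_j\right| + w_{area}\left|area_i - area_j\right| + w_{pitch}\left|pitch_i - pitch_j\right|$$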
  • w_shape is the weight preset for the shape difference
  • shape_i is the constant corresponding to the point cloud shape of the first point cloud segment
  • shape_j is the constant corresponding to the point cloud shape of the second point cloud segment
  • w_t is the weight preset for the time difference
  • T_i is the point cloud average time of the first point cloud segment
  • T_j is the point cloud average time of the second point cloud segment
  • w_area is the weight preset for the point cloud area difference, area_i is the point cloud area of the first point cloud segment, and area_j is the point cloud area of the second point cloud segment;
  • w_pitch is the weight preset for the point cloud average pitch angle difference
  • pitch_i is the point cloud average pitch angle of the first point cloud segment
  • pitch_j is the point cloud average pitch angle of the second point cloud segment.
  • Step 306 Use the two point cloud segments corresponding to the minimum difference value as the target point cloud segment
  • the two point cloud segments corresponding to the smallest difference value may be used as the target point cloud segment.
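  • Steps 303 to 306 amount to an exhaustive search over segment pairs; a minimal Python sketch, with params and score as placeholder callables, follows:

```python
from itertools import combinations

def pick_target_segments(segments, params, score):
    """Return the pair of segments with the smallest difference value.

    params: callable segment -> point cloud parameters (shape, average time,
            area, average pitch angle); score: callable (pi, pj) -> difference
    value, e.g. the weighted sum sketched above. Names are assumptions.
    """
    feats = [params(s) for s in segments]
    i, j = min(combinations(range(len(segments)), 2),
               key=lambda ij: score(feats[ij[0]], feats[ij[1]]))
    return segments[i], segments[j]
```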
  • Step 307 Estimate the target speed of the target object based on the target point cloud segment.
  • after obtaining the two target point cloud segments, the target speed of the target object can be estimated based on them; specifically, the target speed of the target object can be estimated based on the positions corresponding to the two target point cloud segments and the times corresponding to the two target point cloud segments.
  • the target speed of the target object can be estimated through the following sub-steps:
  • Sub-step 71 Extract target feature points from the target point cloud segment according to the feature extraction method preset for the point cloud shape corresponding to the target point cloud segment;
  • one point can be extracted from each of the two target point cloud segments as the target feature point; specifically, corresponding feature extraction methods can be set in advance for different point cloud shapes.
  • as shown in FIG. 6a, for an L-shaped target point cloud segment, the corner point closest to the lidar device 630 in the two-dimensional circumscribed rectangle 620 created for the target point cloud segment can be used as the target feature point 640.
  • as shown in FIG. 6b, for an I-shaped target point cloud segment, the midpoint of the line connecting the two corner points closest to the lidar device 630 in the two-dimensional circumscribed rectangle 660 created for the target point cloud segment can be used as the target feature point 670; this is not limited in the embodiments of the present application.
  • Sub-step 72 Determine the target speed of the target object based on the target feature points.
  • the target speed of the target object can be calculated based on the distance between the positions of the two target feature points and the point cloud average times corresponding to the target point cloud segments.
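  • A minimal Python sketch of this distance-over-time estimate follows; it assumes one feature point and one point cloud average time per target segment:

```python
import numpy as np

def estimate_target_speed(feat_a, feat_b, avg_time_a, avg_time_b):
    """Speed from one feature point per target segment (sub-steps 71-72).

    feat_a, feat_b:         feature point positions of the two target segments
    avg_time_a, avg_time_b: the segments' point cloud average times
    A minimal sketch; assumes the two average times differ.
    """
    dt = abs(avg_time_b - avg_time_a)
    displacement = np.linalg.norm(np.asarray(feat_b) - np.asarray(feat_a))
    return displacement / dt
```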
  • alternatively, ICP (Iterative Closest Point) can be used to obtain the transformation matrix between the two target point cloud segments, and the target speed of the target object is then calculated from the transformation matrix and the point cloud average times corresponding to the target point cloud segments; this is not limited in the embodiments of the present application.
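  • For the ICP variant, once a registration yields a rigid transform between the two target segments, a velocity can be read off its translation; a sketch under the assumption that translation dominates over rotation:

```python
import numpy as np

def velocity_from_transform(T_ab, avg_time_a, avg_time_b):
    """Velocity vector from a rigid transform aligning segment A onto B.

    T_ab: 4x4 homogeneous transform, e.g. the result of an ICP registration
    between the two target point cloud segments; avg_time_a/avg_time_b are
    the segments' point cloud average times.
    """
    translation = T_ab[:3, 3]
    return translation / (avg_time_b - avg_time_a)
```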
  • obstacle avoidance of the target object can be performed based on the target speed.
  • in addition, when the lidar device is scanning a target object, the movement of the target object may cause serious distortion of the scanned point cloud; therefore, motion distortion correction can be performed on the point cloud output by the lidar device based on the acquired target speed; for example, motion distortion correction can be performed on the point cloud output by the lidar device based on the absolute speed.
  • in the embodiments of the present application, the target single-frame point cloud can be obtained first; the target single-frame point cloud is then segmented into at least two point cloud segments; from the at least two point cloud segments, two point cloud segments are arbitrarily obtained as the first point cloud segment and the second point cloud segment; the first point cloud parameter of the first point cloud segment and the second point cloud parameter of the second point cloud segment are determined; the difference value between the first point cloud segment and the second point cloud segment is determined based on the first point cloud parameter and the second point cloud parameter; the two point cloud segments corresponding to the minimum difference value are taken as the target point cloud segments; and the target speed of the target object is estimated based on the target point cloud segments.
  • Referring to FIG. 9, a structural block diagram of a device for estimating the speed of a target object according to an embodiment of the present application is shown, which includes the following modules:
  • the acquisition module 901 is used to acquire a target single-frame point cloud.
  • the target single-frame point cloud is generated by a frame of point cloud obtained by scanning the moving target multiple times by the lidar device within the target period;
  • Segmentation module 902 used to segment the target single frame point cloud into at least two point cloud segments
  • the determination module 903 is used to determine the difference value between any two point cloud segments in at least two point cloud segments, and use the two point cloud segments corresponding to the minimum difference value as the target point cloud segment;
  • the estimation module 904 is used to estimate the target speed of the target object according to the target point cloud segment.
  • the determination module 903 includes:
  • the selection submodule is used to arbitrarily obtain two point cloud segments from at least two point cloud segments as the first point cloud segment and the second point cloud segment;
  • the parameter determination submodule is used to determine the first point cloud parameter of the first point cloud segment and the second point cloud parameter of the second point cloud segment;
  • the difference value determination submodule is used to determine the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter;
  • the point cloud parameters include at least one of the following:
  • Point cloud shape, point cloud average time, point cloud area, and point cloud average pitch angle.
  • the difference value determination sub-module is used to determine the target shape difference between the point cloud shape of the first point cloud segment and that of the second point cloud segment; determine the target time difference between the point cloud average time of the first point cloud segment and that of the second point cloud segment; determine the target point cloud area difference between the point cloud area of the first point cloud segment and that of the second point cloud segment; determine the target point cloud average pitch angle difference between the point cloud average pitch angle of the first point cloud segment and that of the second point cloud segment; obtain the weights set in advance for different differences; and determine the difference value between the first point cloud segment and the second point cloud segment based on the target shape difference, target time difference, target point cloud area difference, target point cloud average pitch angle difference, and the corresponding weights.
  • the parameter determination submodule is used to establish a two-dimensional circumscribed rectangle for the point cloud segment, and determine the first corner point closest to the lidar device in the two-dimensional circumscribed rectangle; determine, in the circumscribed rectangle, the second corner point and the third corner point adjacent to the first corner point; determine the first included angle between the line connecting the first corner point and the lidar device and the line connecting the second corner point and the lidar device; determine the second included angle between the line connecting the first corner point and the lidar device and the line connecting the third corner point and the lidar device; and determine the point cloud shape of the point cloud segment based on the smaller of the first included angle and the second included angle.
  • the estimation module 904 includes:
  • the feature extraction submodule is used to extract target feature points from the target point cloud segment according to the feature extraction method set in advance for the point cloud shape corresponding to the target point cloud segment;
  • the segmentation module 902 includes:
  • the sorting submodule is used to sort the points in the target single-frame point cloud in time order
  • the first judgment sub-module is used to judge whether the interval between the adjacent first point and the second point in the target single frame point cloud exceeds the preset time interval
  • the second judgment submodule is used to judge, when the interval between the first point and the second point exceeds the preset time interval, whether the yaw-angle order (ascending or descending) of the point cloud segment with the first point as its endpoint is consistent with that of the point cloud segment with the second point as its endpoint;
  • the segment splitting submodule is used to split between the first point and the second point when the yaw-angle order of the point cloud segment with the first point as its endpoint is inconsistent with that of the point cloud segment with the second point as its endpoint.
  • the target speed includes the absolute speed of the target object
  • the lidar device is deployed in the target vehicle
  • the acquisition module 901 includes:
  • the ego-vehicle pose acquisition submodule is used to obtain the target ego-vehicle pose of the target vehicle within the target period;
  • the ego-motion compensation submodule is used to perform ego-motion compensation on the frame of point cloud based on the target ego-vehicle pose to obtain the target single-frame point cloud.
  • in the embodiments of the present application, the target single-frame point cloud can be obtained first, where the target single-frame point cloud is generated from one frame of point cloud obtained after the lidar device scans the moving target object multiple times within the target period; the target single-frame point cloud is then segmented into at least two point cloud segments; the difference value between any two of the at least two point cloud segments is determined, and the two point cloud segments corresponding to the minimum difference value are taken as the target point cloud segments; the target speed of the target object is then estimated based on the target point cloud segments.
  • An embodiment of the present application also provides a vehicle, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor.
  • the computer program is executed by the processor, the above method for estimating the speed of a target object is implemented.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • the computer program is executed by a processor, the above method for estimating the speed of a target object is implemented.
  • as for the device embodiment, since it is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding description of the method embodiment.
  • embodiments of the present application may be provided as methods, devices, or computer program products. Therefore, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems) and computer program products according to embodiments of the present application. It will be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal equipment to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, where the instruction means implement the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing terminal equipment, so that a series of operational steps are performed on the computer or other programmable terminal equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal equipment provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present application provide a method and device for estimating the speed of a target object, a vehicle, and a storage medium. The method includes: obtaining a target single-frame point cloud, where the target single-frame point cloud is generated from one frame of point cloud obtained after a lidar device scans a moving target object multiple times within a target period; segmenting the target single-frame point cloud into at least two point cloud segments; determining the difference value between any two of the at least two point cloud segments, and taking the two point cloud segments corresponding to the minimum difference value as the target point cloud segments; and then estimating the target speed of the target object based on the target point cloud segments.

Description

Method and device for estimating the speed of a target object, vehicle, and storage medium
Related Application
This application claims priority to Chinese patent application No. 202210224164.7, filed on March 7, 2022.
Technical Field
The present application relates to the technical field of data processing, and in particular to a method and device for estimating the speed of a target object, a vehicle, and a storage medium.
Background
During autonomous driving, in order to avoid obstacles, a lidar device can be deployed in the vehicle; obstacles are then avoided based on the point clouds collected by the lidar device.
For static obstacles, obstacle avoidance during autonomous driving can be accomplished by determining the position of the static obstacle from the point cloud; for dynamic obstacles, since they are in motion, their speed must be estimated accurately in order to avoid them effectively.
When estimating the speed of a dynamic obstacle, the consecutive multi-frame point clouds collected by the lidar device are usually used; however, if the dynamic obstacle appears in the first few frames of point clouds collected by the lidar device, its speed may not be estimated accurately due to the lack of point clouds.
Summary
In view of the above problems, a method and device for estimating the speed of a target object, a vehicle, and a storage medium are proposed to overcome the above problems or at least partially solve them, including:
A method for estimating the speed of a target object, the method including:
obtaining a target single-frame point cloud, where the target single-frame point cloud is generated from one frame of point cloud obtained after a lidar device scans a moving target object multiple times within a target period;
segmenting the target single-frame point cloud into at least two point cloud segments;
determining the difference value between any two of the at least two point cloud segments, and taking the two point cloud segments corresponding to the minimum difference value as the target point cloud segments;
estimating the target speed of the target object based on the target point cloud segments.
In one embodiment, determining the difference value between any two of the at least two point cloud segments includes:
arbitrarily obtaining two point cloud segments from the at least two point cloud segments as a first point cloud segment and a second point cloud segment;
determining a first point cloud parameter of the first point cloud segment and a second point cloud parameter of the second point cloud segment;
determining the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter;
where the point cloud parameters include at least one of the following:
point cloud shape, point cloud average time, point cloud area, point cloud average pitch angle.
In one embodiment, determining the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter includes:
determining the target shape difference between the point cloud shape of the first point cloud segment and the point cloud shape of the second point cloud segment;
determining the target time difference between the point cloud average time of the first point cloud segment and the point cloud average time of the second point cloud segment;
determining the target point cloud area difference between the point cloud area of the first point cloud segment and the point cloud area of the second point cloud segment;
determining the target point cloud average pitch angle difference between the point cloud average pitch angle of the first point cloud segment and the point cloud average pitch angle of the second point cloud segment;
obtaining weights set in advance for the different differences;
determining the difference value between the first point cloud segment and the second point cloud segment based on the target shape difference, the target time difference, the target point cloud area difference, the target point cloud average pitch angle difference, and the corresponding weights.
In one embodiment, the point cloud shape of a point cloud segment is determined through the following steps:
establishing a two-dimensional circumscribed rectangle for the point cloud segment, and determining the first corner point of the two-dimensional circumscribed rectangle closest to the lidar device;
determining, in the circumscribed rectangle, the second corner point and the third corner point adjacent to the first corner point;
determining the first included angle between the line connecting the first corner point and the lidar device and the line connecting the second corner point and the lidar device;
determining the second included angle between the line connecting the first corner point and the lidar device and the line connecting the third corner point and the lidar device;
determining the point cloud shape of the point cloud segment based on the smaller of the first included angle and the second included angle.
In one embodiment, estimating the target speed of the target object based on the target point cloud segments includes:
extracting target feature points from the target point cloud segments according to a feature extraction method set in advance for the point cloud shape corresponding to the target point cloud segments;
determining the target speed of the target object based on the target feature points.
In one embodiment, segmenting the target single-frame point cloud into at least two point cloud segments includes:
sorting the points in the target single-frame point cloud in chronological order;
judging whether the interval between an adjacent first point and second point in the target single-frame point cloud exceeds a preset time interval;
when the interval between the first point and the second point exceeds the preset time interval, judging whether the yaw-angle order (ascending or descending) of the point cloud segment with the first point as its endpoint is consistent with that of the point cloud segment with the second point as its endpoint;
when the yaw-angle order of the point cloud segment with the first point as its endpoint is inconsistent with that of the point cloud segment with the second point as its endpoint, splitting between the first point and the second point.
In one embodiment, the target speed includes the absolute speed of the target object, the lidar device is deployed in a target vehicle, and obtaining the target single-frame point cloud includes:
obtaining the target ego-vehicle pose of the target vehicle within the target period;
performing ego-motion compensation on the frame of point cloud according to the target ego-vehicle pose to obtain the target single-frame point cloud.
An embodiment of the present application also provides a device for estimating the speed of a target object, the device including:
an acquisition module, configured to obtain a target single-frame point cloud, where the target single-frame point cloud is generated from one frame of point cloud obtained after a lidar device scans a moving target object multiple times within a target period;
a segmentation module, configured to segment the target single-frame point cloud into at least two point cloud segments;
a determination module, configured to determine the difference value between any two of the at least two point cloud segments, and take the two point cloud segments corresponding to the minimum difference value as the target point cloud segments;
an estimation module, configured to estimate the target speed of the target object based on the target point cloud segments.
In one embodiment, the determination module includes:
a selection submodule, configured to arbitrarily obtain two point cloud segments from the at least two point cloud segments as a first point cloud segment and a second point cloud segment;
a parameter determination submodule, configured to determine a first point cloud parameter of the first point cloud segment and a second point cloud parameter of the second point cloud segment;
a difference value determination submodule, configured to determine the difference value between the first point cloud segment and the second point cloud segment based on the first point cloud parameter and the second point cloud parameter;
where the point cloud parameters include at least one of the following:
point cloud shape, point cloud average time, point cloud area, point cloud average pitch angle.
In one embodiment, the difference value determination submodule is configured to determine the target shape difference between the point cloud shape of the first point cloud segment and that of the second point cloud segment; determine the target time difference between the point cloud average time of the first point cloud segment and that of the second point cloud segment; determine the target point cloud area difference between the point cloud area of the first point cloud segment and that of the second point cloud segment; determine the target point cloud average pitch angle difference between the point cloud average pitch angle of the first point cloud segment and that of the second point cloud segment; obtain the weights set in advance for the different differences; and determine the difference value between the first point cloud segment and the second point cloud segment based on the target shape difference, the target time difference, the target point cloud area difference, the target point cloud average pitch angle difference, and the corresponding weights.
In one embodiment, the parameter determination submodule is configured to establish a two-dimensional circumscribed rectangle for the point cloud segment and determine the first corner point of the rectangle closest to the lidar device; determine, in the circumscribed rectangle, the second corner point and the third corner point adjacent to the first corner point; determine the first included angle between the line connecting the first corner point and the lidar device and the line connecting the second corner point and the lidar device; determine the second included angle between the line connecting the first corner point and the lidar device and the line connecting the third corner point and the lidar device; and determine the point cloud shape of the point cloud segment based on the smaller of the first included angle and the second included angle.
In one embodiment, the estimation module includes:
a feature extraction submodule, configured to extract target feature points from the target point cloud segments according to a feature extraction method set in advance for the point cloud shape corresponding to the target point cloud segments;
and to determine the target speed of the target object based on the target feature points.
In one embodiment, the segmentation module includes:
a sorting submodule, configured to sort the points in the target single-frame point cloud in chronological order;
a first judgment submodule, configured to judge whether the interval between an adjacent first point and second point in the target single-frame point cloud exceeds a preset time interval;
a second judgment submodule, configured to judge, when the interval between the first point and the second point exceeds the preset time interval, whether the yaw-angle order of the point cloud segment with the first point as its endpoint is consistent with that of the point cloud segment with the second point as its endpoint;
a segment splitting submodule, configured to split between the first point and the second point when the yaw-angle order of the point cloud segment with the first point as its endpoint is inconsistent with that of the point cloud segment with the second point as its endpoint.
In one embodiment, the target speed includes the absolute speed of the target object, the lidar device is deployed in a target vehicle, and the acquisition module includes:
an ego-vehicle pose acquisition submodule, configured to obtain the target ego-vehicle pose of the target vehicle within the target period;
an ego-motion compensation submodule, configured to perform ego-motion compensation on the frame of point cloud according to the target ego-vehicle pose to obtain the target single-frame point cloud.
An embodiment of the present application also provides a vehicle, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor; when the computer program is executed by the processor, the above method for estimating the speed of a target object is implemented.
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the above method for estimating the speed of a target object is implemented.
The embodiments of the present application have the following advantages:
In the embodiments of the present application, a target single-frame point cloud can be obtained first, where the target single-frame point cloud is generated from one frame of point cloud obtained after a lidar device scans a moving target object multiple times within a target period; the target single-frame point cloud is then segmented into at least two point cloud segments; the difference value between any two of the at least two point cloud segments is determined, and the two point cloud segments corresponding to the minimum difference value are taken as the target point cloud segments; the target speed of the target object is then estimated based on the target point cloud segments. The embodiments of the present application thus estimate the speed of a moving object based on a single frame of point cloud; since no point clouds from adjacent frames are required, the speed of a moving object that appears already in the first few frames of point clouds collected by the lidar device can also be estimated accurately.
Brief Description of the Drawings
To explain the technical solutions of the present application more clearly, the drawings needed in the description of the present application are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a flow chart of the steps of a method for estimating the speed of a target object according to an embodiment of the present application;
Figure 2 is a schematic diagram of a target single-frame point cloud according to an embodiment of the present application;
Figure 3 is a flow chart of the steps of another method for estimating the speed of a target object according to an embodiment of the present application;
Figure 4a is a schematic diagram of a point cloud segment with an L-shaped point cloud shape according to an embodiment of the present application;
Figure 4b is a schematic diagram of a point cloud segment with an I-shaped point cloud shape according to an embodiment of the present application;
Figure 5 is a schematic diagram of the included angle between the lines connecting corner points and a lidar device according to an embodiment of the present application;
Figure 6a is a schematic diagram of the feature point of a point cloud segment with an L-shaped point cloud shape according to an embodiment of the present application;
Figure 6b is a schematic diagram of the feature point of a point cloud segment with an I-shaped point cloud shape according to an embodiment of the present application;
Figure 7 is a flow chart of estimating absolute speed according to an embodiment of the present application;
Figure 8 is a flow chart of estimating relative speed according to an embodiment of the present application;
Figure 9 is a structural block diagram of a device for estimating the speed of a target object according to an embodiment of the present application.
Detailed Description
To make the above objectives, features, and advantages of the present application more apparent and understandable, the present application is described in further detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
参照图1,示出了本申请实施例的一种目标物速度的估计方法的步骤流程图,包括如下步骤:
步骤101、获取目标单帧点云,目标单帧点云由激光雷达设备在目标时段内对移动的目标物进行多次扫描后得到的一帧点云生成;
其中,激光雷达设备可以指能够在一定时间段内对物体进行多次扫描,并基于这多次扫描得到一帧点云的雷达设备。
目标单帧点云可以是激光雷达设备所生成的第一帧的点云,也可以是第二帧的点云、第三帧的点云......,本申请实施例对此不作限制。
目标单帧点云可以是激光雷达设备在目标时段内,对一移动的目标物进行多次扫描后得到的一帧点云生成的,例如:可以是对该一帧点云进行处理后生成的目标单帧点云,本申请实施例对此不作限制。
如图2所示,为一目标单帧点云的示例,其中包括激光雷达设备在目标时段内对移动的目标物进行多次扫描后得到多圈点云。
步骤102、将目标单帧点云分割成至少两个点云片段;
在获取到目标单帧点云后,可以先对目标单帧点云进行分割,以便得到至少两个点云片段;点云片段可以由连续的多个点组成,每个点可以包含有三维坐标、颜色信息、反射强度信息、回波次数信息等。
作为一示例,每个点云片段对应目标时段的一个子时段,例如:目标单帧点云对应的目标时段为第10ms-第20ms,分割得到A、B、C和D四个点云片段,点云片段A对应目标时段中的第10ms-第12ms这一子时段,点云片段B对应目标时段中的第12ms-第14ms这一子时段,点云片段C对应目标时段中的第14ms-第17ms这一子时段,点云片段D对应目标时段中的第17ms-第20ms这一子时段,本申请实施例对此不作限制。
Step 103: determine a difference value between any two of the at least two point cloud segments, and take the two point cloud segments corresponding to the smallest difference value as target point cloud segments.
After the target single-frame point cloud has been segmented, any two of the resulting point cloud segments may be selected and the difference value between them computed, for example an area difference or a shape difference; this is not limited in the embodiments of the present application.
Continuing the above example, the difference values between segments A and B, A and C, A and D, B and C, B and D, and C and D may each be computed.
After the difference value between every two of the at least two point cloud segments has been determined, the two point cloud segments corresponding to the smallest difference value may be taken as the target point cloud segments.
Continuing the example, suppose the difference value is 0.2 between segments A and B, 0.3 between A and C, 0.4 between A and D, 0.3 between B and C, 0.1 between B and D, and 0 between C and D; since 0 is the smallest difference value, segments C and D may be taken as the target point cloud segments.
Step 104: estimate a target velocity of the target object according to the target point cloud segments.
After the two target point cloud segments are obtained, the target velocity of the target object may be estimated from them; specifically, it may be estimated from the positions corresponding to the two target point cloud segments and the times corresponding to the two target point cloud segments.
In the embodiments of the present application, a target single-frame point cloud may first be acquired, the target single-frame point cloud being generated from one frame of point cloud obtained by a lidar device scanning a moving target object multiple times within a target period; the target single-frame point cloud is then segmented into at least two point cloud segments; a difference value between any two of the at least two point cloud segments is determined, and the two point cloud segments corresponding to the smallest difference value are taken as target point cloud segments; and a target velocity of the target object is then estimated according to the target point cloud segments. The embodiments of the present application thereby estimate the velocity of a moving object from a single frame of point cloud; since point clouds of adjacent frames are not required, the velocity of a moving object that already appears in the first few frames collected by the lidar device can also be estimated accurately.
Referring to Fig. 3, a flowchart of the steps of another method for estimating a velocity of a target object according to an embodiment of the present application is shown, including the following steps:
Step 301: acquire a target single-frame point cloud.
In practical applications, when the velocity of the target object needs to be estimated, a target single-frame point cloud may first be acquired.
In one embodiment of the present application, the target velocity may include an absolute velocity of the target object, the lidar device may be deployed in a target vehicle, and step 301 may include the following sub-steps:
Sub-step 11: acquire a target ego pose of the target vehicle within the target period.
Here, the target ego pose may include the position and attitude of the target vehicle within the target period.
In practical applications, the target object may be a moving obstacle outside the target vehicle; while the target object moves, the target vehicle may also be moving. In this case, to estimate the absolute velocity of the target object accurately, compensation may be performed based on the pose of the target vehicle within the target period.
Specifically, the target ego pose of the target vehicle within the target period may first be acquired; the target ego pose may be obtained using visual SLAM (Simultaneous Localization and Mapping), lidar SLAM, GPS (Global Positioning System), an IMU (Inertial Measurement Unit), wheel odometry, and the like, which is not limited in the embodiments of the present application.
Sub-step 12: perform ego-motion compensation on the frame of point cloud according to the target ego pose to obtain the target single-frame point cloud.
After the target ego pose of the target vehicle within the target period is acquired, ego-motion compensation may be performed, according to the target ego pose, on the frame of point cloud generated by the lidar device within the target period, thereby obtaining the target single-frame point cloud.
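By way of illustration only (this sketch is not part of the original disclosure), the following Python code shows one planar simplification of such ego-motion compensation: an ego pose is interpolated for each point's timestamp and the point is re-expressed in the ego frame of the final pose. All names are hypothetical, and the naive yaw interpolation ignores wraparound at ±π.

```python
import numpy as np

def compensate_ego_motion(points, timestamps, pose_times, poses):
    """Map each point into the ego frame at the last known pose.

    points:      (N, 2) x, y of each point in the ego frame at its capture time
    timestamps:  (N,) capture time of each point
    pose_times:  (M,) timestamps of known ego poses
    poses:       (M, 3) ego poses (x, y, yaw) in a fixed world frame
    """
    # Interpolate an ego pose (x, y, yaw) for every point timestamp.
    px = np.interp(timestamps, pose_times, poses[:, 0])
    py = np.interp(timestamps, pose_times, poses[:, 1])
    yaw = np.interp(timestamps, pose_times, poses[:, 2])

    # Point position in the world frame at its capture time.
    cos_y, sin_y = np.cos(yaw), np.sin(yaw)
    wx = px + cos_y * points[:, 0] - sin_y * points[:, 1]
    wy = py + sin_y * points[:, 0] + cos_y * points[:, 1]

    # Express the world-frame point in the ego frame of the final pose.
    x0, y0, yaw0 = poses[-1]
    dx, dy = wx - x0, wy - y0
    c0, s0 = np.cos(yaw0), np.sin(yaw0)
    return np.stack([c0 * dx + s0 * dy, -s0 * dx + c0 * dy], axis=1)
```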
In another embodiment of the present application, if the relative velocity of the target object with respect to the target vehicle is to be estimated, the frame of point cloud obtained by the lidar device scanning the moving target object multiple times within the target period may be used directly as the target single-frame point cloud.
As an example, when scanning the target object, the lidar device may also generate points for other objects; therefore, target object recognition may first be performed on the frame of point cloud obtained by the lidar device scanning the moving target object multiple times within the target period, for example using a deep learning method or a conventional geometric method.
The target single-frame point cloud is then obtained from that frame of point cloud based on the recognition result.
Step 302: segment the target single-frame point cloud into at least two point cloud segments.
After the target single-frame point cloud is acquired, it may first be segmented to obtain at least two point cloud segments.
In one embodiment of the present application, the target single-frame point cloud may be segmented through the following sub-steps:
Sub-step 21: sort the points in the target single-frame point cloud in chronological order.
First, all points in the target single-frame point cloud may be sorted in chronological order to obtain an ordered sequence of points.
Sub-step 22: determine whether the interval between adjacent first and second points in the target single-frame point cloud exceeds a preset time interval.
Then, any two adjacent points may be selected from the sorted target single-frame point cloud as the first point and the second point.
After the first point and the second point are selected, the timestamp of each may be obtained, and the interval between the two points may be determined from their timestamps.
Then, the continuity between the first point and the second point may be judged from the relationship between this interval and the preset time interval.
If the interval between the first point and the second point does not exceed the preset time interval, the two points may be regarded as continuous; in this case, sub-step 21 may be executed again.
If the interval between the first point and the second point exceeds the preset time interval, the two points may be regarded as discontinuous; in this case, the subsequent sub-step 23 may be executed.
Sub-step 23: when the interval between the first point and the second point exceeds the preset time interval, determine whether the yaw-angle ordering of the point cloud segment having the first point as an endpoint is consistent with that of the point cloud segment having the second point as an endpoint.
If the interval between the first point and the second point exceeds the preset time interval, one point cloud segment may first be formed with the first point as an endpoint, and another point cloud segment with the second point as an endpoint.
The yaw-angle ordering of these two segments, i.e., whether the yaw angles are ascending or descending, may then be determined; if the orderings of the two segments are inconsistent, the first point and the second point can be regarded as having been generated in different scans of the target object by the lidar device.
If the yaw-angle orderings of the two segments are consistent, the first point and the second point can be regarded as having been generated in the same scan of the target object by the lidar device.
Sub-step 24: when the yaw-angle ordering of the point cloud segment having the first point as an endpoint is inconsistent with that of the point cloud segment having the second point as an endpoint, split the point cloud between the first point and the second point.
When the two yaw-angle orderings are judged to be inconsistent, a splitting operation may be performed between the first point and the second point to obtain point cloud segments.
When the two yaw-angle orderings are judged to be consistent, sub-step 21 may be executed again.
As an example, by executing the above sub-steps 21 to 24 multiple times, at least two point cloud segments can be obtained.
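A minimal Python sketch of this splitting logic follows; it is illustrative only and not taken from the original disclosure. The 3-point window used to read the yaw trend on each side of a time gap is one possible interpretation of the consistency check, and all names are hypothetical.

```python
import numpy as np

def split_into_segments(points, timestamps, yaws, max_gap):
    """Split a time-ordered single frame into point cloud segments.

    A cut is placed between adjacent points whose time gap exceeds max_gap
    AND whose neighbouring sub-sequences have opposite yaw orderings (one
    ascending, one descending), i.e. points likely produced by different
    sweeps of the lidar over the object.
    """
    order = np.argsort(timestamps)          # sub-step 21: sort by time
    points, timestamps, yaws = points[order], timestamps[order], yaws[order]

    cuts = []
    for i in range(1, len(timestamps)):
        if timestamps[i] - timestamps[i - 1] <= max_gap:
            continue                        # sub-step 22: points continuous
        # Yaw trend of the short segment ending at point i-1 ...
        before = np.sign(np.diff(yaws[max(0, i - 3):i])).sum()
        # ... and of the short segment starting at point i.
        after = np.sign(np.diff(yaws[i:i + 3])).sum()
        if before * after < 0:              # sub-steps 23-24: trends disagree
            cuts.append(i)

    return np.split(points, cuts)
```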
Step 303: take any two of the at least two point cloud segments as a first point cloud segment and a second point cloud segment.
After the target single-frame point cloud has been segmented into at least two point cloud segments, any two of them may be taken, one as the first point cloud segment and the other as the second point cloud segment.
Step 304: determine first point cloud parameters of the first point cloud segment and second point cloud parameters of the second point cloud segment.
Here, the point cloud parameters may include at least one of the following: point cloud shape, average point cloud time, point cloud area, and average point cloud pitch angle.
The point cloud shape characterizes the shape of a point cloud segment; Fig. 4a shows a point cloud segment with an L-shaped point cloud shape, and Fig. 4b shows a point cloud segment with an I-shaped point cloud shape.
The average point cloud time may refer to the average of the timestamps of all points in a point cloud segment.
The point cloud area may refer to the area of the two-dimensional bounding rectangle of a point cloud segment.
The average point cloud pitch angle may refer to the average of the pitch angles of all points in a point cloud segment.
In practical applications, after the first point cloud segment and the second point cloud segment are obtained, the first point cloud parameters of the first segment and the second point cloud parameters of the second segment may be determined. Which of the point cloud shape, average point cloud time, point cloud area, and average point cloud pitch angle are determined can be set according to the actual situation; for example, the point cloud shape, average point cloud time, point cloud area, and average point cloud pitch angle of the first point cloud segment may all be determined, or only its average point cloud time, point cloud area, and average point cloud pitch angle; this is not limited in the embodiments of the present application.
In one embodiment of the present application, the point cloud shape of a point cloud segment may be determined through the following sub-steps:
Sub-step 31: establish a two-dimensional bounding rectangle for the point cloud segment, and determine a first corner of the two-dimensional bounding rectangle closest to the lidar device.
When determining the point cloud shape of a point cloud segment, a two-dimensional bounding rectangle may first be established for the segment; the corner of this rectangle closest to the lidar device may then be determined and taken as the first corner.
Sub-step 32: determine a second corner and a third corner of the bounding rectangle adjacent to the first corner.
Then, of the two corners of the rectangle adjacent to the first corner, one may be taken as the second corner and the other as the third corner.
Sub-step 33: determine a first included angle between the line connecting the first corner with the lidar device and the line connecting the second corner with the lidar device.
Sub-step 34: determine a second included angle between the line connecting the first corner with the lidar device and the line connecting the third corner with the lidar device.
After the second corner and the third corner are determined, the first corner, the second corner, and the third corner may each be connected with the position of the lidar device; the first included angle between the line connecting the first corner with the lidar device and the line connecting the second corner with the lidar device is then determined, as is the second included angle between the line connecting the first corner with the lidar device and the line connecting the third corner with the lidar device.
As shown in Fig. 5, a two-dimensional bounding rectangle is established for point cloud segment 500; the first included angle α lies between line 530 connecting first corner 510 with lidar device 520 and line 550 connecting second corner 540 with lidar device 520, and the second included angle β lies between line 530 and the line connecting third corner 570 with lidar device 520.
Sub-step 35: determine the point cloud shape of the point cloud segment according to the smaller of the first included angle and the second included angle.
After the first included angle and the second included angle are determined, the point cloud shape of the segment may be determined from the smaller of the two; specifically, different point cloud shapes may be preset for different angles, for example an included angle smaller than 1.5° indicating an I-shaped point cloud shape and otherwise an L-shaped point cloud shape; this is not limited in the embodiments of the present application.
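As a sketch of sub-steps 31 to 35 (not part of the original disclosure), the following Python code classifies a segment as I-shaped or L-shaped. It assumes the sensor sits at the origin and, for brevity, uses an axis-aligned bounding rectangle; an oriented rectangle would follow the same corner-and-angle logic. All names and the 1.5° default are illustrative.

```python
import numpy as np

def classify_shape(points, angle_threshold_deg=1.5):
    """Classify a 2-D point cloud segment as 'I' or 'L'."""
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    # Corners in cyclic order, so index neighbours share an edge.
    corners = np.array([[x_min, y_min], [x_min, y_max],
                        [x_max, y_max], [x_max, y_min]])

    # First corner: closest to the sensor at the origin.
    i0 = int(np.argmin(np.linalg.norm(corners, axis=1)))
    c0 = corners[i0]
    # Second and third corners: the two neighbours of the first corner.
    neighbours = [corners[(i0 - 1) % 4], corners[(i0 + 1) % 4]]

    def angle_between(u, v):
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Angles between the sensor->corner lines (alpha and beta in Fig. 5).
    alpha = angle_between(c0, neighbours[0])
    beta = angle_between(c0, neighbours[1])
    return 'I' if min(alpha, beta) < angle_threshold_deg else 'L'
```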
In another embodiment of the present application, the average point cloud time of a point cloud segment may be determined through the following sub-steps:
Sub-step 41: determine the number of points included in the point cloud segment and the timestamp of each point.
First, the number N of points included in the point cloud segment whose average point cloud time is to be determined may be counted, along with the timestamp t_i of each point.
Sub-step 42: determine the average point cloud time of the point cloud segment according to the number of points and the timestamp of each point.
Then, the average point cloud time of the segment may be computed from the number N of points and the timestamps; for example, the average point cloud time T may be computed by the following formula: $T = \frac{1}{N}\sum_{i=1}^{N} t_i$.
In yet another embodiment of the present application, the area of the two-dimensional bounding rectangle created for a point cloud segment may be taken as the point cloud area of that segment.
In still another embodiment of the present application, the average point cloud pitch angle of a point cloud segment may be determined through the following sub-steps:
Sub-step 51: determine the number of points included in the point cloud segment and the pitch angle of each point.
First, the number N of points included in the point cloud segment whose average pitch angle is to be determined may be counted, along with the pitch angle ∠_i of each point.
Sub-step 52: determine the average point cloud pitch angle of the point cloud segment according to the number of points and the pitch angle of each point.
Then, the average point cloud pitch angle of the segment may be computed from the number N of points and the pitch angles ∠_i; for example, the average point cloud pitch angle ∠ may be computed by the following formula: $\angle = \frac{1}{N}\sum_{i=1}^{N} \angle_i$.
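The three scalar parameters above reduce to simple per-segment statistics; a minimal Python sketch (illustrative only, hypothetical names, axis-aligned rectangle as before) is:

```python
import numpy as np

def segment_parameters(points, timestamps, pitches):
    """Per-segment parameters used later in the pairwise difference score."""
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    return {
        'mean_time': timestamps.mean(),             # T = (1/N) * sum(t_i)
        'area': (x_max - x_min) * (y_max - y_min),  # bounding-rectangle area
        'mean_pitch': pitches.mean(),               # average pitch angle
    }
```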
Step 305: determine the difference value between the first point cloud segment and the second point cloud segment according to the first point cloud parameters and the second point cloud parameters.
After the first point cloud parameters and the second point cloud parameters are determined, the difference between the point clouds of the two segments may be determined from these parameters; specifically, the difference between the first point cloud parameters and the second point cloud parameters may be taken as the difference value between the first point cloud segment and the second point cloud segment.
As an example, steps 303 to 305 may be executed repeatedly to determine the difference value between each of the at least two point cloud segments and every other point cloud segment.
In one embodiment of the present application, the difference value between the first point cloud segment and the second point cloud segment may be determined through the following sub-steps:
Sub-step 61: determine a target shape difference between the point cloud shape of the first point cloud segment and that of the second point cloud segment.
In practical applications, the point cloud shape of each of the two segments may first be determined; then a constant shape_i preset for the point cloud shape of the first segment and another constant shape_j preset for the point cloud shape of the second segment may be obtained.
The absolute value of the difference between shape_i and shape_j may then be taken as the target shape difference between the point cloud shape of the first segment and that of the second segment.
Sub-step 62: determine a target time difference between the average point cloud time of the first point cloud segment and that of the second point cloud segment.
Meanwhile, the time difference between the two segments may be determined from their average point cloud times; specifically, the absolute value of the difference between the average point cloud time of the first segment and that of the second segment may be taken as the target time difference.
Sub-step 63: determine a target point cloud area difference between the point cloud area of the first point cloud segment and that of the second point cloud segment.
In addition, the area difference between the two segments may be determined from their point cloud areas; specifically, the absolute value of the difference between the point cloud area of the first segment and that of the second segment may be taken as the target point cloud area difference.
Sub-step 64: determine a target average pitch angle difference between the average point cloud pitch angle of the first point cloud segment and that of the second point cloud segment.
In practical applications, the pitch angle difference between the two segments may also be determined from their average point cloud pitch angles; specifically, the absolute value of the difference between the average point cloud pitch angle of the first segment and that of the second segment may be taken as the target average pitch angle difference.
Sub-step 65: obtain weights preset for the different differences.
While obtaining the target shape difference, target time difference, target point cloud area difference, and target average pitch angle difference, the weights preset for the different differences may be obtained. For example, a larger weight may be set for the shape difference to ensure that the two target point cloud segments finally obtained have the same point cloud shape; the weights set for the time difference, point cloud area difference, and average pitch angle difference can ensure that these differences of the two target point cloud segments finally obtained are on the same order of magnitude; this is not limited in the embodiments of the present application.
Sub-step 66: determine the difference value between the first point cloud segment and the second point cloud segment according to the target shape difference, the target time difference, the target point cloud area difference, the target average pitch angle difference, and the corresponding weights.
Then, the difference value between the first point cloud segment and the second point cloud segment may be computed from the obtained target shape difference, target time difference, target point cloud area difference, and target average pitch angle difference, together with the weights preset for the respective differences.
For example, the difference value score_ij between the first point cloud segment and the second point cloud segment may be computed by the following formula: $score_{ij} = w_{shape}\left|shape_i - shape_j\right| + w_t\left|T_i - T_j\right| + w_{area}\left|area_i - area_j\right| + w_{pitch}\left|pitch_i - pitch_j\right|$
where w_shape is the weight preset for the shape difference, shape_i is the constant corresponding to the point cloud shape of the first point cloud segment, and shape_j is the constant corresponding to the point cloud shape of the second point cloud segment;
w_t is the weight preset for the time difference, T_i is the average point cloud time of the first point cloud segment, and T_j is the average point cloud time of the second point cloud segment;
w_area is the weight preset for the point cloud area difference, area_i is the point cloud area of the first point cloud segment, and area_j is the point cloud area of the second point cloud segment;
w_pitch is the weight preset for the average pitch angle difference, pitch_i is the average point cloud pitch angle of the first point cloud segment, and pitch_j is the average point cloud pitch angle of the second point cloud segment.
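A direct Python rendering of this score follows (illustrative only; the shape constants and weight values are assumptions, not values from the original disclosure). The segment dictionaries combine the outputs of the earlier parameter sketches, and best_pair anticipates the minimum-score selection of step 306.

```python
from itertools import combinations

SHAPE_CODE = {'I': 0.0, 'L': 1.0}   # assumed constants preset per shape

def difference_score(seg_i, seg_j, w_shape=100.0, w_t=1.0,
                     w_area=10.0, w_pitch=1.0):
    """score_ij = w_shape*|shape_i - shape_j| + w_t*|T_i - T_j|
                 + w_area*|area_i - area_j| + w_pitch*|pitch_i - pitch_j|"""
    return (w_shape * abs(SHAPE_CODE[seg_i['shape']] - SHAPE_CODE[seg_j['shape']])
            + w_t * abs(seg_i['mean_time'] - seg_j['mean_time'])
            + w_area * abs(seg_i['area'] - seg_j['area'])
            + w_pitch * abs(seg_i['mean_pitch'] - seg_j['mean_pitch']))

def best_pair(segment_params):
    """Indices of the two segments with the smallest difference score."""
    return min(combinations(range(len(segment_params)), 2),
               key=lambda ij: difference_score(segment_params[ij[0]],
                                               segment_params[ij[1]]))
```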
Step 306: take the two point cloud segments corresponding to the smallest difference value as the target point cloud segments.
After the difference value between every two of the at least two point cloud segments has been determined, the two point cloud segments corresponding to the smallest difference value may be taken as the target point cloud segments.
Step 307: estimate the target velocity of the target object according to the target point cloud segments.
After the two target point cloud segments are obtained, the target velocity of the target object may be estimated from them; specifically, it may be estimated from the positions corresponding to the two target point cloud segments and the times corresponding to the two target point cloud segments.
In one embodiment of the present application, the target velocity of the target object may be estimated through the following sub-steps:
Sub-step 71: extract target feature points from the target point cloud segments according to a feature extraction manner preset for the point cloud shape corresponding to the target point cloud segments.
After the two target point cloud segments are determined, one point may first be extracted from each of them as a target feature point; specifically, a corresponding feature extraction manner may be preset for each point cloud shape.
As shown in Fig. 6a, for a point cloud segment 610 with an L-shaped point cloud shape, the corner of the two-dimensional bounding rectangle 620 created for the target point cloud segment that is closest to the lidar device 630 may be taken as the target feature point 640.
As shown in Fig. 6b, for a point cloud segment 650 with an I-shaped point cloud shape, the midpoint of the line connecting the two corners of the two-dimensional bounding rectangle 660 created for the target point cloud segment that are closest to the lidar device 630 may be taken as the target feature point 670; this is not limited in the embodiments of the present application.
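A minimal sketch of this shape-dependent feature extraction (illustrative only, with the same axis-aligned-rectangle and sensor-at-origin assumptions as before) might read:

```python
import numpy as np

def extract_feature_point(points, shape):
    """Pick one feature point from a segment based on its shape ('L' or 'I')."""
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    corners = np.array([[x_min, y_min], [x_min, y_max],
                        [x_max, y_max], [x_max, y_min]])
    d = np.linalg.norm(corners, axis=1)    # distance to the sensor at origin
    if shape == 'L':
        return corners[np.argmin(d)]       # nearest corner (Fig. 6a)
    # 'I': midpoint of the two corners nearest to the sensor (Fig. 6b)
    return corners[np.argsort(d)[:2]].mean(axis=0)
```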
Sub-step 72: determine the target velocity of the target object according to the target feature points.
Then, the target velocity of the target object may be computed from the distance between the positions of the two target feature points and the average point cloud times corresponding to the target point cloud segments.
In one embodiment of the present application, an ICP (Iterative Closest Point) method may also be used to obtain the transformation matrix between the two target point cloud segments, and the velocity of the target object may then be computed from the transformation matrix and the average point cloud times of the target point cloud segments; this is not limited in the embodiments of the present application.
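The feature-point variant of sub-step 72 is a displacement divided by the time between the two segments' average point cloud times; a sketch (illustrative only, hypothetical names):

```python
import numpy as np

def estimate_velocity(feat_a, feat_b, mean_time_a, mean_time_b):
    """Velocity vector from two matched feature points and segment mean times."""
    dt = mean_time_b - mean_time_a
    if abs(dt) < 1e-9:
        raise ValueError('segments are not separated in time')
    return (np.asarray(feat_b) - np.asarray(feat_a)) / dt  # e.g. (vx, vy)
```

The ICP alternative would replace the feature-point displacement with the translation component of the estimated transformation matrix, divided by the same time difference.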
In practical applications, after the target velocity of the target object is estimated, obstacle avoidance with respect to the target object may be performed based on the target velocity.
Of course, when the lidar device scans the target object, the motion of the target object may severely distort the scanned point cloud; in this case, motion distortion correction may be performed on the point cloud output by the lidar device based on the obtained target velocity.
As shown in Fig. 7, if the absolute velocity of the target object is to be estimated, when acquiring the target single-frame point cloud, ego-motion compensation may first be performed on the acquired frame of point cloud, and target object detection may then be performed on the compensated frame to obtain the target single-frame point cloud; the target single-frame point cloud is then processed through the above steps 302 to 307 to estimate the absolute velocity of the target object.
After the absolute velocity of the target object is obtained, motion distortion correction may be performed on the point cloud output by the lidar device based on the absolute velocity.
As shown in Fig. 8, if the relative velocity of the target object is to be estimated, after a frame of point cloud is acquired, only target object detection is performed on this frame to obtain the target single-frame point cloud, without ego-motion compensation; the target single-frame point cloud is then processed through the above steps 302 to 307 to estimate the velocity of the target object relative to the target vehicle.
In the embodiments of the present application, a target single-frame point cloud may first be acquired; the target single-frame point cloud is then segmented into at least two point cloud segments; any two of the at least two point cloud segments are taken as a first point cloud segment and a second point cloud segment; first point cloud parameters of the first segment and second point cloud parameters of the second segment are determined; the difference value between the two segments is determined from these parameters; the two point cloud segments corresponding to the smallest difference value are taken as the target point cloud segments; and the target velocity of the target object is estimated from the target point cloud segments. The embodiments of the present application thereby estimate the velocity of a moving object from a single frame of point cloud while ensuring that the velocity estimation is both real-time and accurate.
Moreover, by estimating the velocity of a moving object accurately and promptly, the timeliness, comfort, and safety of vehicle obstacle avoidance during autonomous driving can be effectively improved.
It should be noted that, for simplicity of description, the method embodiments are all expressed as a series of action combinations; however, those skilled in the art should understand that the embodiments of the present application are not limited by the described order of actions, because according to the embodiments of the present application, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all optional embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to Fig. 9, a structural schematic diagram of an apparatus for estimating a velocity of a target object according to an embodiment of the present application is shown, including the following modules:
an acquisition module 901, configured to acquire a target single-frame point cloud, the target single-frame point cloud being generated from one frame of point cloud obtained by a lidar device scanning a moving target object multiple times within a target period;
a segmentation module 902, configured to segment the target single-frame point cloud into at least two point cloud segments;
a determination module 903, configured to determine a difference value between any two of the at least two point cloud segments, and take the two point cloud segments corresponding to the smallest difference value as target point cloud segments;
an estimation module 904, configured to estimate a target velocity of the target object according to the target point cloud segments.
In an optional embodiment of the present application, the determination module 903 includes:
a selection sub-module, configured to take any two of the at least two point cloud segments as a first point cloud segment and a second point cloud segment;
a parameter determination sub-module, configured to determine first point cloud parameters of the first point cloud segment and second point cloud parameters of the second point cloud segment;
a difference value determination sub-module, configured to determine the difference value between the first point cloud segment and the second point cloud segment according to the first point cloud parameters and the second point cloud parameters;
wherein the point cloud parameters include at least one of the following:
point cloud shape, average point cloud time, point cloud area, and average point cloud pitch angle.
In an optional embodiment of the present application, the difference value determination sub-module is configured to: determine a target shape difference between the point cloud shape of the first point cloud segment and that of the second point cloud segment; determine a target time difference between the average point cloud time of the first point cloud segment and that of the second point cloud segment; determine a target point cloud area difference between the point cloud area of the first point cloud segment and that of the second point cloud segment; determine a target average pitch angle difference between the average point cloud pitch angle of the first point cloud segment and that of the second point cloud segment; obtain weights preset for the different differences; and determine the difference value between the first point cloud segment and the second point cloud segment according to the target shape difference, the target time difference, the target point cloud area difference, the target average pitch angle difference, and the corresponding weights.
In an optional embodiment of the present application, the parameter determination sub-module is configured to: establish a two-dimensional bounding rectangle for a point cloud segment, and determine a first corner of the two-dimensional bounding rectangle closest to the lidar device; determine a second corner and a third corner of the bounding rectangle adjacent to the first corner; determine a first included angle between the line connecting the first corner with the lidar device and the line connecting the second corner with the lidar device; determine a second included angle between the line connecting the first corner with the lidar device and the line connecting the third corner with the lidar device; and determine the point cloud shape of the point cloud segment according to the smaller of the first included angle and the second included angle.
In an optional embodiment of the present application, the estimation module 904 includes:
a feature extraction sub-module, configured to extract target feature points from the target point cloud segments according to a feature extraction manner preset for the point cloud shape corresponding to the target point cloud segments, and to determine the target velocity of the target object according to the target feature points.
In an optional embodiment of the present application, the segmentation module 902 includes:
a sorting sub-module, configured to sort the points in the target single-frame point cloud in chronological order;
a first judgment sub-module, configured to determine whether the interval between adjacent first and second points in the target single-frame point cloud exceeds a preset time interval;
a second judgment sub-module, configured to, when the interval between the first point and the second point exceeds the preset time interval, determine whether the yaw-angle ordering of the point cloud segment having the first point as an endpoint is consistent with that of the point cloud segment having the second point as an endpoint;
a splitting sub-module, configured to split the point cloud between the first point and the second point when the two yaw-angle orderings are inconsistent.
In an optional embodiment of the present application, the target velocity includes an absolute velocity of the target object, the lidar device is deployed in a target vehicle, and the acquisition module 901 includes:
an ego pose acquisition sub-module, configured to acquire a target ego pose of the target vehicle within the target period;
an ego-motion compensation sub-module, configured to perform ego-motion compensation on the frame of point cloud according to the target ego pose to obtain the target single-frame point cloud.
In the embodiments of the present application, a target single-frame point cloud may first be acquired, the target single-frame point cloud being generated from one frame of point cloud obtained by a lidar device scanning a moving target object multiple times within a target period; the target single-frame point cloud is then segmented into at least two point cloud segments; a difference value between any two of the at least two point cloud segments is determined, and the two point cloud segments corresponding to the smallest difference value are taken as target point cloud segments; and a target velocity of the target object is then estimated according to the target point cloud segments. The embodiments of the present application thereby estimate the velocity of a moving object from a single frame of point cloud; since point clouds of adjacent frames are not required, the velocity of a moving object that already appears in the first few frames collected by the lidar device can also be estimated accurately.
An embodiment of the present application further provides a vehicle, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the method for estimating a velocity of a target object described above.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the method for estimating a velocity of a target object described above.
As the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, such that a series of operational steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although optional embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the optional embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that includes the element.
The method and apparatus for estimating a velocity of a target object, the vehicle, and the storage medium provided above have been described in detail. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for a person of ordinary skill in the art, changes may be made to the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (10)

  1. A method for estimating a velocity of a target object, wherein the method comprises:
    acquiring a target single-frame point cloud, wherein the target single-frame point cloud is generated from one frame of point cloud obtained by a lidar device scanning a moving target object multiple times within a target period;
    segmenting the target single-frame point cloud into at least two point cloud segments;
    determining a difference value between any two point cloud segments of the at least two point cloud segments, and taking the two point cloud segments corresponding to the smallest difference value as target point cloud segments; and
    estimating a target velocity of the target object according to the target point cloud segments.
  2. The method according to claim 1, wherein the determining a difference value between any two point cloud segments of the at least two point cloud segments comprises:
    taking any two point cloud segments of the at least two point cloud segments as a first point cloud segment and a second point cloud segment;
    determining first point cloud parameters of the first point cloud segment and second point cloud parameters of the second point cloud segment; and
    determining the difference value between the first point cloud segment and the second point cloud segment according to the first point cloud parameters and the second point cloud parameters;
    wherein the point cloud parameters comprise at least one of the following:
    point cloud shape, average point cloud time, point cloud area, and average point cloud pitch angle.
  3. The method according to claim 2, wherein the determining the difference value between the first point cloud segment and the second point cloud segment according to the first point cloud parameters and the second point cloud parameters comprises:
    determining a target shape difference between the point cloud shape of the first point cloud segment and the point cloud shape of the second point cloud segment;
    determining a target time difference between the average point cloud time of the first point cloud segment and the average point cloud time of the second point cloud segment;
    determining a target point cloud area difference between the point cloud area of the first point cloud segment and the point cloud area of the second point cloud segment;
    determining a target average pitch angle difference between the average point cloud pitch angle of the first point cloud segment and the average point cloud pitch angle of the second point cloud segment;
    obtaining weights preset for the different differences; and
    determining the difference value between the first point cloud segment and the second point cloud segment according to the target shape difference, the target time difference, the target point cloud area difference, the target average pitch angle difference, and the corresponding weights.
  4. The method according to claim 3, wherein the point cloud shape of a point cloud segment is determined through the following steps:
    establishing a two-dimensional bounding rectangle for the point cloud segment, and determining a first corner of the two-dimensional bounding rectangle closest to the lidar device;
    determining a second corner and a third corner of the bounding rectangle adjacent to the first corner;
    determining a first included angle between the line connecting the first corner with the lidar device and the line connecting the second corner with the lidar device;
    determining a second included angle between the line connecting the first corner with the lidar device and the line connecting the third corner with the lidar device; and
    determining the point cloud shape of the point cloud segment according to the smaller of the first included angle and the second included angle.
  5. The method according to claim 4, wherein the estimating a target velocity of the target object according to the target point cloud segments comprises:
    extracting target feature points from the target point cloud segments according to a feature extraction manner preset for the point cloud shape corresponding to the target point cloud segments; and
    determining the target velocity of the target object according to the target feature points.
  6. The method according to claim 1, wherein the segmenting the target single-frame point cloud into at least two point cloud segments comprises:
    sorting the points in the target single-frame point cloud in chronological order;
    determining whether the interval between adjacent first and second points in the target single-frame point cloud exceeds a preset time interval;
    when the interval between the first point and the second point exceeds the preset time interval, determining whether the yaw-angle ordering of the point cloud segment having the first point as an endpoint is consistent with the yaw-angle ordering of the point cloud segment having the second point as an endpoint; and
    when the yaw-angle ordering of the point cloud segment having the first point as an endpoint is inconsistent with the yaw-angle ordering of the point cloud segment having the second point as an endpoint, splitting the point cloud between the first point and the second point.
  7. The method according to any one of claims 1 to 6, wherein the target velocity comprises an absolute velocity of the target object, the lidar device is deployed in a target vehicle, and the acquiring a target single-frame point cloud comprises:
    acquiring a target ego pose of the target vehicle within the target period; and
    performing ego-motion compensation on the frame of point cloud according to the target ego pose to obtain the target single-frame point cloud.
  8. An apparatus for estimating a velocity of a target object, wherein the apparatus comprises:
    an acquisition module, configured to acquire a target single-frame point cloud, the target single-frame point cloud being generated from one frame of point cloud obtained by a lidar device scanning a moving target object multiple times within a target period;
    a segmentation module, configured to segment the target single-frame point cloud into at least two point cloud segments;
    a determination module, configured to determine a difference value between any two point cloud segments of the at least two point cloud segments, and take the two point cloud segments corresponding to the smallest difference value as target point cloud segments; and
    an estimation module, configured to estimate a target velocity of the target object according to the target point cloud segments.
  9. A vehicle, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method for estimating a velocity of a target object according to any one of claims 1 to 7.
  10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the method for estimating a velocity of a target object according to any one of claims 1 to 7.
PCT/CN2023/079661 2022-03-07 2023-03-03 Method and apparatus for estimating velocity of target object, vehicle, and storage medium WO2023169337A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210224164.7 2022-03-07
CN202210224164.7A CN114581481B (zh) 2022-03-07 2022-03-07 Method and apparatus for estimating velocity of target object, vehicle, and storage medium

Publications (1)

Publication Number Publication Date
WO2023169337A1 true WO2023169337A1 (zh) 2023-09-14

Family

ID=81778216

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/079661 WO2023169337A1 (zh) 2022-03-07 2023-03-03 Method and apparatus for estimating velocity of target object, vehicle, and storage medium

Country Status (2)

Country Link
CN (1) CN114581481B (zh)
WO (1) WO2023169337A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581481B (zh) 2022-03-07 2023-08-25 广州小鹏自动驾驶科技有限公司 Method and apparatus for estimating velocity of target object, vehicle, and storage medium
CN115661220B (zh) 2022-12-28 2023-03-17 深圳煜炜光学科技有限公司 Point cloud data registration method, apparatus, device, and storage medium


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10365650B2 (en) * 2017-05-25 2019-07-30 GM Global Technology Operations LLC Methods and systems for moving object velocity determination
CN109521756B (zh) 2017-09-18 2022-03-08 阿波罗智能技术(北京)有限公司 Obstacle motion information generation method and apparatus for unmanned vehicle
CN108647646B (zh) 2018-05-11 2019-12-13 北京理工大学 Optimized detection method and apparatus for low obstacles based on low-beam lidar
CN111208492B (zh) 2018-11-21 2022-04-19 长沙智能驾驶研究院有限公司 Vehicle-mounted lidar extrinsic parameter calibration method and apparatus, computer device, and storage medium
CN110221603B (zh) 2019-05-13 2020-08-14 浙江大学 Long-range obstacle detection method based on multi-frame lidar point cloud fusion
CN115950440A (zh) 2020-01-03 2023-04-11 御眼视觉技术有限公司 System and method for vehicle navigation
CN111220993B (zh) 2020-01-14 2020-07-28 长沙智能驾驶研究院有限公司 Target scene positioning method and apparatus, computer device, and storage medium
CN112308889B (zh) 2020-10-23 2021-08-31 香港理工大学深圳研究院 Point cloud registration method using rectangle and flatness information, and storage medium
CN113066105B (zh) 2021-04-02 2022-10-21 北京理工大学 Localization and mapping method and system fusing lidar and inertial measurement unit

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190080503A1 (en) * 2017-09-13 2019-03-14 Tata Consultancy Services Limited Methods and systems for surface fitting based change detection in 3d point-cloud
CN113721253A * 2021-08-30 2021-11-30 杭州视光半导体科技有限公司 Moving object velocity detection method based on FMCW lidar
CN114091515A * 2021-09-30 2022-02-25 浙江大华技术股份有限公司 Obstacle detection method and apparatus, electronic device, and storage medium
CN114581481A * 2022-03-07 2022-06-03 广州小鹏自动驾驶科技有限公司 Method and apparatus for estimating velocity of target object, vehicle, and storage medium

Also Published As

Publication number Publication date
CN114581481B (zh) 2023-08-25
CN114581481A (zh) 2022-06-03

Similar Documents

Publication Publication Date Title
WO2023169337A1 (zh) Method and apparatus for estimating velocity of target object, vehicle, and storage medium
CN108152831B (zh) Lidar obstacle recognition method and system
CN110264416B (zh) Sparse point cloud segmentation method and apparatus
US9576367B2 (en) Object detection method and device
US9298990B2 (en) Object tracking method and device
US11379963B2 (en) Information processing method and device, cloud-based processing device, and computer program product
CN110673107B (zh) Road edge detection method and apparatus based on multi-line lidar
CN111209825B (zh) Method and apparatus for dynamic target 3D detection
JPH10143659A (ja) Object detection device
CN113378760A (zh) Method and apparatus for training a target detection model and detecting a target
JP2002352225A (ja) Obstacle detection apparatus and method
KR101628155B1 (ko) Method for real-time detection and tracking of multiple unidentified dynamic objects using CCL
US20230005278A1 (en) Lane extraction method using projection transformation of three-dimensional point cloud map
CN114692720B (zh) Bird's-eye-view-based image classification method, apparatus, device, and storage medium
CN112631266A (zh) Method and apparatus for a mobile robot to perceive obstacle information
CN112904369B (zh) Robot relocalization method, apparatus, robot, and computer-readable storage medium
CN110262487B (zh) Obstacle detection method, terminal, and computer-readable storage medium
KR20200030738A (ko) Mobile robot and position recognition method thereof
CN112017248A (zh) 2D lidar-camera multi-frame single-step calibration method based on point-line features
CN116030130A (zh) Hybrid semantic SLAM method for dynamic environments
JP2002334330A (ja) Vehicle recognition device
CN113734176A (zh) Environment perception system and method for intelligent driving vehicle, vehicle, and storage medium
CN113256709A (zh) Target detection method and apparatus, computer device, and storage medium
WO2021063756A1 (en) Improved trajectory estimation based on ground truth
CN107248171B (zh) Monocular visual odometry scale recovery method based on triangulation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23765915

Country of ref document: EP

Kind code of ref document: A1