WO2022061850A1 - Point cloud motion distortion correction method and device - Google Patents

Point cloud motion distortion correction method and device

Info

Publication number
WO2022061850A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud frame
target object
point
frame
Prior art date
Application number
PCT/CN2020/118270
Other languages
French (fr)
Chinese (zh)
Inventor
宫正
李延召
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/118270
Publication of WO2022061850A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50 Systems of measurement based on relative movement of target
    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808 Evaluating distance, position or velocity data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the technical field of photoelectric measuring instruments, and more particularly, to a method and device for correcting point cloud motion distortion.
  • the present application proposes a point cloud motion distortion correction method and device, which can eliminate the distortion of the moving target to the greatest extent and correct the measurement result.
  • a first aspect provides a point cloud motion distortion correction method, comprising: acquiring a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different times; extracting the same target object from the first point cloud frame and the second point cloud frame; estimating the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame; and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object.
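The four operations above can be sketched in code. This is a minimal illustration under the short-term uniform-motion assumption; the array layout (frames as (N, 3) NumPy arrays) and the helper names (`estimate_speed`, `correct_distortion`) are assumptions of the sketch, not the patent's implementation.

```python
import numpy as np

def estimate_speed(target_t0, target_t1, dt):
    """Estimate velocity from the centroid displacement of the same target
    object observed in two frames (illustrative helper)."""
    return (target_t1.mean(axis=0) - target_t0.mean(axis=0)) / dt

def correct_distortion(points, timestamps, velocity):
    """Shift each point along the estimated motion so that all points are
    expressed at the time of the last point in the frame."""
    t0, tn = timestamps[0], timestamps[-1]
    s = (timestamps - t0) / (tn - t0)        # interpolation coefficient s_i
    return points + np.outer(1.0 - s, velocity) * (tn - t0)

# toy example: a flat target moving at 2 m/s along x, scanned over 0.1 s
ts = np.linspace(0.0, 0.1, 5)
true_shape = np.zeros((5, 3))
s = (ts - ts[0]) / (ts[-1] - ts[0])
distorted = true_shape - np.outer(1.0 - s, [2.0, 0.0, 0.0]) * 0.1
corrected = correct_distortion(distorted, ts, np.array([2.0, 0.0, 0.0]))
print(np.allclose(corrected, true_shape))  # True
```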
  • a point cloud motion distortion correction device is provided, including a memory and a processor; the memory is used for storing program codes; the processor calls the program codes and, when the program codes are executed, is used to perform the following operations: acquire a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different times; extract the same target object from the first point cloud frame and the second point cloud frame; estimate the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame; and perform distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object.
  • a radar including a processor and a memory.
  • the memory is used for storing a computer program
  • the processor is used for calling and running the computer program stored in the memory to execute the method in the above-mentioned first aspect or each implementation manner thereof.
  • a chip is provided for implementing the method in the above-mentioned first aspect or each of its implementation manners.
  • the chip includes: a processor for invoking and running a computer program from a memory, so that a device installed with the chip executes the method in the first aspect or each of its implementations.
  • a computer-readable storage medium for storing a computer program, the computer program comprising instructions for performing the method in the first aspect or any possible implementation of the first aspect.
  • a computer program product comprising computer program instructions, the computer program instructions causing a computer to execute the method in the first aspect or each implementation manner of the first aspect.
  • a computer program which, when run on a computer, causes the computer to execute the method in the first aspect or any possible implementation manner of the first aspect.
  • FIG. 1 is a point cloud motion distortion correction method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a target scene provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a target scene provided by another embodiment of the present application.
  • FIG. 4 is a schematic diagram of a point cloud frame before and after motion distortion correction provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a point cloud frame before and after motion distortion correction provided by another embodiment of the present application.
  • FIG. 6 is a schematic diagram of a scene flow provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of motion estimation based on a scene flow provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram for the target motion estimation shown in FIG. 7 .
  • FIG. 9 is a schematic diagram of motion estimation based on a scene flow provided by another embodiment of the present application.
  • FIG. 10 is a schematic diagram of motion estimation for the target shown in FIG. 9 .
  • FIG. 11 is a schematic diagram of motion estimation based on a scene flow provided by another embodiment of the present application.
  • FIG. 12 is a schematic diagram of motion estimation for the target shown in FIG. 11 .
  • FIG. 13 is a schematic diagram of a target scene provided by another embodiment of the present application.
  • FIG. 14 is a point cloud motion distortion correction method provided by another embodiment of the present application.
  • FIG. 15 is a point cloud motion distortion correction device provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a point cloud motion distortion correction device provided by another embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • this application is mainly applied to scanning lidar scenarios. Since the points in a point cloud frame output by a scanning lidar are not obtained by scanning the moving target at the same time, ghosting occurs in the point cloud frame, which causes motion distortion.
  • the source of point cloud motion distortion is that the points included in the target object are not scanned at the same time.
  • for the scanning of moving targets by scanning lidar, there is currently no motion distortion correction algorithm for scanning radar.
  • the present application proposes a point cloud motion distortion correction method and device, which can eliminate the distortion of the moving target to the greatest extent and correct the measurement result.
  • the embodiments of the present application can be applied to 3S assumption scenarios, that is, Structural Consistency Assumption, Speed Consistency Assumption, and Small Motion Assumption.
  • the target is moving at a low speed (e.g., within 60 km/h).
  • the method can be applied to the scanning radar, and can also be applied to the server connected to the scanning radar. In some embodiments, some of the steps may be performed by a scanning radar and some by a server.
  • a point cloud motion distortion correction method 100 provided by an embodiment of the present application may include steps 110 - 140 .
  • acquire a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different times.
  • the first point cloud frame and the second point cloud frame in this embodiment of the present application may be point cloud frames of the same target scene at different times. The number of points included in the point cloud frames of the same target scene at different times may be basically the same, or may vary greatly.
  • as shown in FIG. 2, it is a schematic diagram of a target scene provided by an embodiment of the present application.
  • as shown in FIG. 3, it is a schematic diagram of a target scene provided by another embodiment of the present application.
  • the first point cloud frame and the second point cloud frame are point cloud frames at adjacent moments of the same target scene.
  • the adjacent moments in this embodiment of the present application may be determined based on time units.
  • the first point cloud frame and the second point cloud frame may be point cloud frames of adjacent seconds, or point cloud frames of adjacent milliseconds, etc., which is not limited.
  • the first point cloud frame may be the point cloud frame at the current moment (in seconds), and the second point cloud frame may be the point cloud frame one second after the current moment; or, the first point cloud frame may be the point cloud frame at the current moment (in milliseconds), and the second point cloud frame may be the point cloud frame one millisecond after the current moment.
  • the target object may be composed of point clouds, and after the first point cloud frame and the second point cloud frame are acquired, the target object may be extracted from the first point cloud frame and the second point cloud frame, respectively.
  • the target object in this embodiment of the present application may include one object, or may include multiple objects, which is not limited.
  • the movement speed of the target object is estimated according to the target object in the first point cloud frame and the target object in the second point cloud frame.
  • performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame can be understood as correcting the coordinate position of the point cloud included in the target object.
  • performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the moving speed of the target object includes: determining the distortion coefficient of the target object according to the movement speed of the target object; and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient.
  • the distortion coefficients include rotational interpolation distortion coefficients and linear motion interpolation distortion coefficients.
  • the distortion coefficient of the target object may be further determined according to the estimated movement speed, so as to perform distortion correction on the target object based on the distortion coefficient.
  • the distortion coefficients in the embodiments of the present application may include rotational interpolation distortion coefficients and linear motion interpolation distortion coefficients.
  • the rotational motion interpolation coefficient can be obtained with the slerp algorithm, that is, the spherical linear interpolation algorithm, which is a linear interpolation operation on quaternions and is mainly used to interpolate smoothly between two quaternions representing rotations.
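The slerp operation can be sketched as follows; this is the standard spherical linear interpolation for unit quaternions, not code from the patent.

```python
import math

def slerp(q0, q1, s):
    """Spherical linear interpolation between unit quaternions
    (w, x, y, z), with interpolation coefficient s in [0, 1]."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)             # angle between the two quaternions
    if theta < 1e-6:                   # nearly identical: fall back to lerp
        return tuple((1 - s) * a + s * b for a, b in zip(q0, q1))
    w0 = math.sin((1 - s) * theta) / math.sin(theta)
    w1 = math.sin(s * theta) / math.sin(theta)
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

identity = (1.0, 0.0, 0.0, 0.0)
quarter_turn = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90 deg about z
half_way = slerp(identity, quarter_turn, 0.5)   # 45 deg about z
```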
  • for the translational motion interpolation coefficient, the motion is directly divided proportionally by the distortion coefficient (short-term uniform motion assumption).
  • the distortion correction can be performed on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient, and the specific implementation process will be described below.
  • performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient includes: for each point of the target object in the first point cloud frame and/or the second point cloud frame, determining the corrected coordinate position of the point as the sum of (i) the product of the rotational interpolation distortion coefficient of the point and the coordinate position of the point and (ii) the linear motion interpolation distortion coefficient of the point.
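The per-point correction just described can be sketched as follows, treating q_point_last as a unit quaternion in (w, x, y, z) order and t_point_last as a translation vector; the helper names are illustrative, not from the patent.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(q, p):
    """Rotate point p by unit quaternion q = (w, x, y, z):
    p' = p + 2w (v x p) + 2 (v x (v x p)), with v = (x, y, z)."""
    w, v = q[0], q[1:]
    t = cross(v, p)
    tt = cross(v, t)
    return tuple(pc + 2 * w * tc + 2 * ttc for pc, tc, ttc in zip(p, t, tt))

def correct_point(p, q_point_last, t_point_last):
    """Corrected position: rotate the point, then add the translation."""
    return tuple(r + t for r, t in zip(quat_rotate(q_point_last, p), t_point_last))

# rotate (1, 0, 0) by 90 deg about z, then translate by (0, 0, 1)
q90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
updated = correct_point((1.0, 0.0, 0.0), q90, (0.0, 0.0, 1.0))  # approx (0, 1, 1)
```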
  • determining the distortion coefficient of the target object according to the movement speed of the target object includes: determining the distortion coefficient according to the movement speed of the target object and an interpolation coefficient, where the interpolation coefficient is calculated based on the timestamps of the point clouds in the first point cloud frame and the second point cloud frame.
  • the motion distortion correction for the target object in the first point cloud frame and the second point cloud frame essentially corrects the coordinate positions of the points included in the target object in the first point cloud frame and the second point cloud frame.
  • the motion distortion correction of the target object can be realized through the following steps.
  • the first step: initialize the read-in point cloud and the corresponding timestamps, as well as the estimated motion, delta_t and delta_q.
  • the process of initializing the read-in point cloud essentially reads in the coordinate positions of the points, the timestamps of the corresponding points, and the speed of the target object.
  • Step 2: Calculate the interpolation coefficient s_i of each point cloud according to the first and last points and the timestamp of each point, as shown in formula (1): s_i = (t_i - t_0) / (t_n - t_0).
  • s_i represents the interpolation coefficient of the i-th point cloud
  • t_i represents the timestamp of the i-th point cloud
  • t_n represents the timestamp of the n-th (last) point cloud
  • t_0 represents the timestamp of the 0th (first) point cloud.
  • for each point cloud, equation (1) can be used to calculate its corresponding interpolation coefficient.
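The interpolation coefficient of equation (1) maps each timestamp into [0, 1] between the first and last points of the frame; a direct sketch (helper name illustrative):

```python
def interpolation_coefficients(timestamps):
    """s_i = (t_i - t_0) / (t_n - t_0) for every point in the frame."""
    t0, tn = timestamps[0], timestamps[-1]
    return [(ti - t0) / (tn - t0) for ti in timestamps]

print(interpolation_coefficients([100.0, 100.25, 100.5, 101.0]))  # [0.0, 0.25, 0.5, 1.0]
```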
  • Step 3: Calculate the corresponding rotational motion interpolation coefficient and translational motion interpolation coefficient according to the interpolation coefficient s_i and the frame-to-frame motion estimates delta_t and delta_q of the point cloud cluster.
  • the translational motion interpolation coefficient can be calculated by formula (3), that is, the motion is directly divided proportionally by the interpolation coefficient (short-term uniform motion assumption): t_point_last_i = s_i · delta_t.
  • t_point_last_i represents the translational motion interpolation coefficient of the i-th point cloud.
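The proportional division of the translation can be sketched directly, assuming delta_t is a 3-vector (helper name illustrative):

```python
def translational_interpolation(s_i, delta_t):
    """Scale the frame-to-frame translation delta_t by the interpolation
    coefficient s_i (short-term uniform-motion assumption)."""
    return tuple(s_i * c for c in delta_t)

# halfway through the frame, half of the translation is attributed
print(translational_interpolation(0.5, (4.0, 0.0, -2.0)))  # (2.0, 0.0, -1.0)
```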
  • take as an example a target object in the first point cloud frame and the second point cloud frame that is composed of 10 point clouds, where the lengths of the corresponding vectors are 3 and 2, respectively, and the angle between them is 30°.
  • for the 0th point cloud, then:
  • Step 4: According to the rotational motion interpolation coefficient and the translational motion interpolation coefficient, each point is restored and aligned to the coordinate system of the last point of the frame, completing the calculation, as shown in formula (4): p_i_update = q_point_last · p_i + t_point_last.
  • p_i_update represents the coordinate position of the i-th point cloud after updating
  • p_i represents the coordinate position of the i-th point cloud before updating
  • q_point_last is the rotational motion interpolation coefficient obtained from the above formula (2)
  • the rotational motion interpolation coefficient q_point_last and the translational motion interpolation coefficient t_point_last of the 0th point cloud obtained according to the above equations (2) and (3) are 2.00 and 10.00, respectively.
  • the corrected coordinate position can be obtained by the above formula (4), that is, the corrected coordinate position is (12, 18, 16).
  • the rotational motion interpolation coefficient q_point_last and translation motion interpolation coefficient t_point_last of the first point cloud obtained according to the above equations (2) and (3) are 2.13 and 9.00, respectively.
  • the corrected coordinate position can be obtained by the above formula (4), that is, the corrected coordinate position is (11.13, 21.78, 15.39).
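Treating the coefficients as scalars, as in the numeric example above, formula (4) can be checked directly; the uncorrected coordinates (1, 4, 3) and (1, 6, 3) used here are inferred from the corrected results and are illustrative only.

```python
def apply_formula4_scalar(p, q, t):
    """Formula (4) with scalar coefficients, as in the numeric example:
    p_update = q * p + t, applied to each coordinate."""
    return tuple(round(q * c + t, 2) for c in p)

point_0 = apply_formula4_scalar((1.0, 4.0, 3.0), 2.00, 10.00)
point_1 = apply_formula4_scalar((1.0, 6.0, 3.0), 2.13, 9.00)
print(point_0)  # (12.0, 18.0, 16.0)
print(point_1)  # (11.13, 21.78, 15.39)
```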
  • as shown in FIG. 4, it is a schematic diagram of a point cloud frame before and after motion distortion correction according to an embodiment of the present application.
  • (a) in FIG. 4 is a schematic diagram of a point cloud frame before motion distortion correction
  • (b) in FIG. 4 is a schematic diagram of a point cloud frame after motion distortion correction.
  • the object formed by the point cloud in this figure may be a car.
  • for the corrected point cloud, refer to the point cloud in the area enclosed by the thicker dashed circle in FIG. 4(a) and FIG. 4(b), that is, the rear of the car. It can be clearly seen from the figure that the rear of the car in (a) in FIG. 4 shows obvious point cloud stacking, and three distinct lines are formed at the rear of the car due to the stacking of point clouds. After the point cloud distortion correction, the coordinate positions of the point cloud at the rear of the car in (b) in FIG. 4 change noticeably, and a single clear line is formed at the rear of the car.
  • as shown in FIG. 5, it is a schematic diagram of a point cloud frame before and after motion distortion correction according to another embodiment of the present application.
  • (a) in FIG. 5 is a schematic diagram of a point cloud frame before motion distortion correction
  • (b) in FIG. 5 is a schematic diagram of a point cloud frame after motion distortion correction.
  • for the corrected point cloud, refer to the point cloud in the area enclosed by the thicker dashed circle in (a) and (b) of FIG. 5, that is, the right part of the target object.
  • the right part of the target object in (a) in FIG. 5 shows obvious point cloud stacking, and three distinct lines are formed on the right part of the target object due to the stacking of point clouds.
  • after correction, the coordinate positions of the points on the right part of the target object in (b) in FIG. 5 change noticeably, and a single clear line is formed on the right part of the target object.
  • by determining the distortion coefficient according to the movement speed of the target object and the interpolation coefficient of the point cloud, and performing distortion correction on the positions of the points included in the target object based on the distortion coefficient, the solution provided by this application can further eliminate the distortion of the moving target and correct the measurement results.
  • the extracting of the same target object from the first point cloud frame and the second point cloud frame includes: using a clustering algorithm to cluster the points in the first point cloud frame and the points in the second point cloud frame into at least one object, respectively; and associating the object in the first point cloud frame with the object in the second point cloud frame.
  • the clustering algorithm in the embodiment of the present application may include an Euclidean clustering algorithm or a generalized Euclidean clustering algorithm, etc., which is not limited.
  • this application can also use other methods to extract at least one object from the first point cloud frame and the second point cloud frame, such as a target detection method.
  • this application does not specifically limit this; any method that can extract objects from a point cloud frame can be applied to the embodiments of the present application.
  • after extraction, the at least one object may be associated; that is, the same objects in the first point cloud frame and the second point cloud frame are mapped to each other.
  • for each object extracted from the first point cloud frame, the corresponding object can be found among the at least one object extracted from the second point cloud frame.
  • the associating of the object in the first point cloud frame with the object in the second point cloud frame includes: associating the objects according to the target point of the object in the first point cloud frame and the target point of the object in the second point cloud frame, where the distance between the target point of the target object in the first point cloud frame and the target point of the target object in the second point cloud frame is smaller than its distance to the target points of other objects in the second point cloud frame.
  • the target point is a center point or a center of gravity point.
  • the target point may also be any point on the extracted target object, which is not limited.
  • the points in the first point cloud frame and the points in the second point cloud frame are respectively clustered into at least one object using the clustering algorithm; that is, one or more objects may be extracted from the first point cloud frame and the second point cloud frame.
  • if the point clouds in the two frames are each clustered into a single object, the one object in each of the two frames can be associated directly; if the point clouds in the first point cloud frame and the second point cloud frame are clustered to form multiple objects, the multiple objects in the two frames need to be associated respectively.
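The clustering step above can be sketched with a simplified greedy Euclidean clustering; this is an illustrative approximation (practical implementations typically use a k-d tree for neighbor search), and the function names are assumptions.

```python
import math

def euclidean_cluster(points, threshold):
    """Greedy Euclidean clustering sketch: a point joins the first cluster
    containing a member closer than `threshold`, else starts a new one."""
    clusters = []
    for p in points:
        for cluster in clusters:
            if any(math.dist(p, q) < threshold for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def centroid(cluster):
    """Cluster center, used later for frame-to-frame association."""
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(3))

frame = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (5.0, 5.0, 0.0), (5.1, 5.0, 0.0)]
clusters = euclidean_cluster(frame, threshold=1.0)
print(len(clusters))  # 2
```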
  • the following process can be referred to to associate the target objects in the first point cloud frame and the second point cloud frame.
  • since K_t0 and K_t1 may not be the same moving target, target association can be performed on K_t0 and K_t1 first:
  • the movement speed of the target can be calculated:
  • the final scene flow can be obtained by traversing and calculating all moving objects:
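The association-then-velocity computation described above can be sketched as follows: each cluster center at t0 is matched to its nearest neighbor at t1, and velocity is taken as displacement over time (helper names illustrative, not from the patent).

```python
import math

def associate_and_estimate(centers_t0, centers_t1, dt):
    """For each cluster center at t0, find its nearest neighbor at t1 and
    estimate the object's velocity as displacement / time."""
    velocities = []
    for c0 in centers_t0:
        c1 = min(centers_t1, key=lambda c: math.dist(c0, c))
        velocities.append(tuple((b - a) / dt for a, b in zip(c0, c1)))
    return velocities

# two objects 10 m apart; each moved 1 m along x between frames 0.1 s apart
centers_t0 = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
centers_t1 = [(1.0, 0.0, 0.0), (11.0, 0.0, 0.0)]
flow = associate_and_estimate(centers_t0, centers_t1, 0.1)
print(flow)  # [(10.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
```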
  • scene flow, the 3D version of optical flow, describes the change of each point between the two consecutive point cloud frames.
  • current algorithms for scene flow estimation using point cloud data include FlowNet3D and HPLFlowNet.
  • FlowNet3D uses PointNet++ as the basic module, extracts the point cloud features of the two consecutive frames, merges and upsamples them, and directly fits the scene flow.
  • HPLFlowNet uses Bilateral Convolutional Layers as the basic module, extracts the point cloud features of the two consecutive frames, merges and upsamples them, and directly fits the scene flow.
  • as shown in FIG. 6, it is a schematic diagram of a scene flow according to an embodiment of the present application.
  • the small white circles represent point cloud 1
  • the small black circles represent point cloud 2
  • point cloud 1 and point cloud 2 may be the point clouds of the same target object in the previous and subsequent frames, respectively, and the scene flow of the point cloud can be obtained by motion estimation between point cloud 1 and point cloud 2, as shown in (b) of FIG. 6.
  • FIG. 7 is a schematic diagram of motion estimation based on a scene flow provided by an embodiment of the present application.
  • (a) in FIG. 7 is the point cloud frame captured at time t0 (that is, the first point cloud frame in this application), and (b) in FIG. 7 is the point cloud frame captured at time t1 (that is, the second point cloud frame in this application).
  • the target object can be obtained from the first point cloud frame and the second point cloud frame respectively.
  • this application takes the Euclidean clustering method as an example. Referring to (a) in FIG. 7, the center points O1 and Q1 are used as cluster points, respectively, for clustering. For the center point O1, the points whose distance to O1 is smaller than the preset threshold may be clustered into one category; for the center point Q1, the points whose distance to Q1 is smaller than the preset threshold may be clustered into one category.
  • the target object A1 and the target object B1 shown in the figure can be obtained by clustering.
  • clustering is performed with center points O2 and Q2 as clustering points, respectively.
  • for the center point O2, the points whose distance to O2 is smaller than the preset threshold may be clustered into one category; for the center point Q2, the points whose distance to Q2 is smaller than the preset threshold may be clustered into one category.
  • target association can be performed on the obtained target object. That is, for the cluster center at time t0, the nearest neighbor search is performed in the cluster target center set at time t1, and the point with the smallest distance is taken as the target correlation point.
  • for the cluster center O1, the nearest neighbor search is performed in the clustering target center set at time t1, i.e., the cluster center O2 (the center point O2 above) and the cluster center Q2 (the center point Q2 above). It can be seen from the figure that the distance between the cluster center O2 and the cluster center O1 is smaller than the distance between the cluster center Q2 and the cluster center O1. Therefore, the cluster center O1 and the cluster center O2 can be associated; that is, the target object A1 at time t0 and the target object A2 at time t1 are the same target object.
  • similarly, for the cluster center Q1, the nearest neighbor search is performed in the clustering target center set at time t1 (that is, the cluster center O2 and the cluster center Q2). It can be seen from the figure that the cluster center Q1 and the cluster center Q2 can be associated; that is, the target object B1 at time t0 and the target object B2 at time t1 are the same target object.
  • since FIG. 7 is a schematic diagram of point cloud frames captured at different times, the target objects in FIG. 7 can be processed first.
  • for example, the same target object in FIG. 7 can be placed on the same horizontal line; that is, the target object A1 in (a) in FIG. 7 and the target object A2 in (b) in FIG. 7 are extracted and placed on the same horizontal line, as shown in (a) in FIG. 8; the target object B1 in (a) in FIG. 7 and the target object B2 in (b) in FIG. 7 are extracted and placed on the same horizontal line, as shown in (b) in FIG. 8.
  • the target object B1 moves from Q1 to the position of Q2 (that is, the target object B2). The movement speed of the target object B1 (including the magnitude and direction of the velocity) can then be obtained as the ratio of the displacement between Q1 and Q2 to the time interval.
  • the preset threshold in this embodiment of the present application may be a fixed value, or may be a continuously adjusted value, which is not specifically limited in the present application.
  • as shown in FIG. 9, it is a schematic diagram of motion estimation based on a scene flow provided by another embodiment of the present application.
  • (a) in FIG. 9 is the point cloud frame captured at time t0 (that is, the first point cloud frame in this application), and (b) in FIG. 9 is the point cloud frame captured at time t1 (that is, the second point cloud frame in this application).
  • the present application takes the Euclidean clustering method as an example; referring to (a) in FIG. 9, the center point O1, the center point Q1, and the center point P1 are respectively used as cluster points to perform clustering.
  • for the center point O1, the points whose distance to O1 is less than the preset threshold can be clustered into one category; for the center point Q1, the points whose distance to Q1 is less than the preset threshold can be clustered into one category; for the center point P1, the points whose distance to P1 is less than the preset threshold are clustered into one category.
  • the target object A1, the target object B1 and the target object C1 in the figure can be obtained.
  • the center point O2 , the center point Q2 , and the center point P2 are respectively used as cluster points to perform clustering.
  • for the center point O2, the points whose distance to O2 is less than the preset threshold can be clustered into one category; for the center point Q2, the points whose distance to Q2 is less than the preset threshold can be clustered into one category; for the center point P2, the points whose distance to P2 is less than the preset threshold are clustered into one category.
  • the target object A2, the target object B2 and the target object C2 in the figure can be obtained. As can be seen from the figure, the target object A2 may not be a complete object.
  • target association can be performed on the obtained target object. That is, for the cluster center at time t0, the nearest neighbor search is performed in the cluster target center set at time t1, and the point with the smallest distance is taken as the target correlation point.
  • the clustering target center set at time t1 ie the cluster center O2 (ie the center point O2 above), the cluster center Q2 (ie the center point O2 above)
  • the nearest neighbor search is performed in the center point Q2) and the cluster center P2 (that is, the center point P2 above)).
  • the distance between the cluster center Q2 and the cluster center O1 is the smallest, but Since the target object moves westward in this scene, for the target object A1, the position of the target object A1 at time t1 should be located on the west side of the position at time t0, while the target object B2 is located at the east side of the target object A1 at time t0. , therefore, the cluster center O1 and the cluster center Q2 cannot be associated.
  • Combining the movement direction and the distance, the cluster center O2 can be associated with the cluster center O1, that is, the target object A1 at time t0 and the target object A2 at time t1 are the same target object.
  • For the cluster center Q1, the nearest neighbor search is performed in the cluster target center set at time t1 (that is, the cluster center O2, the cluster center Q2, and the cluster center P2). It can be seen from the figure that, combining the movement direction and distance of the target object, the cluster center Q2 and the cluster center Q1 can be associated, that is, the target object B1 at time t0 and the target object B2 at time t1 are the same target object.
  • For the cluster center P1, the nearest neighbor search is performed in the cluster target center set at time t1 (that is, the cluster center O2, the cluster center Q2, and the cluster center P2). It can be seen from the figure that, combining the movement direction and distance of the target object, the cluster center P2 and the cluster center P1 can be associated, that is, the target object C1 at time t0 and the target object C2 at time t1 are the same target object.
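The nearest-neighbor association with a direction-consistency check, as described above, can be sketched as follows. The motion direction, toy center coordinates, and the strict dot-product test are simplifying assumptions for illustration:

```python
import math

def associate_centers(centers_t0, centers_t1, motion_dir):
    """For each cluster center at time t0, find its nearest neighbor
    among the cluster centers at time t1, rejecting any candidate whose
    displacement contradicts the known motion direction (the dot product
    of the displacement with motion_dir must be positive)."""
    pairs = {}
    for i, c0 in enumerate(centers_t0):
        best, best_d = None, float("inf")
        for j, c1 in enumerate(centers_t1):
            dx, dy = c1[0] - c0[0], c1[1] - c0[1]
            if dx * motion_dir[0] + dy * motion_dir[1] <= 0.0:
                continue  # candidate lies against the motion direction
            d = math.hypot(dx, dy)
            if d < best_d:
                best, best_d = j, d
        pairs[i] = best
    return pairs

# Toy scene: center O1 at the origin; at t1, O2 lies to the west and
# Q2 to the east. Q2 is closer, but the westward motion rules it out.
pairs = associate_centers([(0.0, 0.0)], [(-1.0, 0.0), (0.5, 0.0)],
                          motion_dir=(-1.0, 0.0))
```

This reproduces the situation in the text where the cluster center O1 is associated with O2 rather than with the nearer but direction-inconsistent Q2.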
  • Since FIG. 9 is a schematic diagram of point cloud frame scenes captured at different times, the target objects in FIG. 9 can be processed first, for example, the same target object in FIG. 9 can be placed on the same horizontal line. That is, the target object A1 in (a) in FIG. 9 and the target object A2 in (b) in FIG. 9 are extracted and placed on the same horizontal line, as shown in (a) in FIG. 10; the target object B1 in (a) in FIG. 9 and the target object B2 in (b) in FIG. 9 are extracted and placed on the same horizontal line, as shown in (b) in FIG. 10; the target object C1 in (a) in FIG. 9 and the target object C2 in (b) in FIG. 9 are extracted and placed on the same horizontal line, as shown in (c) in FIG. 10.
  • In the time period from t0 to t1, the target object B1 moves from Q1 to the position of Q2 (that is, the target object B2). Denote the displacement between Q1 and Q2 as Δs_B; then the movement speed of the target object B1 (including both the magnitude and the direction of the velocity) can be obtained as the ratio of displacement to time, i.e., v_B = Δs_B/(t1 − t0).
  • Likewise, the target object C1 moves from P1 to the position of P2 (that is, the target object C2). Denote the displacement between P1 and P2 as Δs_C; then the movement speed of the target object C1 (including both the magnitude and the direction of the velocity) can be obtained as the ratio of displacement to time, i.e., v_C = Δs_C/(t1 − t0).
  • Although the cluster center at time t0 and the cluster center at time t1 are not strictly the same center (since, for example, the target object A2 may be incomplete), the resulting estimate can still reflect the movement speed of the target object A1 to a certain extent.
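The displacement-over-time velocity estimate used above is a one-liner; the coordinates and timestamps below are invented toy values:

```python
def estimate_velocity(c_t0, c_t1, t0, t1):
    """Movement speed of a target object (magnitude and direction) as
    the ratio of the displacement between its associated cluster
    centers to the elapsed time."""
    dt = t1 - t0
    return tuple((b - a) / dt for a, b in zip(c_t0, c_t1))

# Toy values: an associated center moving 2 m west over 0.5 s.
v = estimate_velocity((2.0, 0.0), (0.0, 0.0), t0=0.0, t1=0.5)
```

The negative x component encodes the westward direction, so the same tuple carries both the speed magnitude and its direction.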
  • FIG. 11 is a schematic diagram of motion estimation based on a scene flow provided by another embodiment of the present application.
  • (a) of FIG. 11 is the point cloud frame captured at time t0 (that is, the first point cloud frame in this application), and (b) of FIG. 11 is the point cloud frame captured at time t1 (that is, the second point cloud frame in this application).
  • Taking the Euclidean clustering method as an example, and referring to (a) in FIG. 11, the center point O1 and the center point Q1 are respectively used as clustering points to perform clustering.
  • For the center point O1, the points whose distance to O1 is less than the preset threshold can be clustered into one class; for the center point Q1, the points whose distance to Q1 is less than the preset threshold can be clustered into one class.
  • the target object A1 and the target object B1 in the figure can be obtained by clustering.
  • clustering is performed with O2 and Q2 as clustering points, respectively.
  • For the center point O2, the points whose distance to O2 is smaller than the preset threshold may be clustered into one class; for the center point Q2, the points whose distance to Q2 is smaller than the preset threshold may be clustered into one class.
  • target association can be performed on the obtained target object. That is, for the cluster center at time t0, the nearest neighbor search is performed in the cluster target center set at time t1, and the point with the smallest distance is taken as the target correlation point.
  • For the cluster center O1, the nearest neighbor search is performed in the cluster target center set at time t1 (that is, the cluster center O2 and the cluster center Q2). It can be seen from the figure that the distance between the cluster center O2 and the cluster center O1 is smaller than the distance between the cluster center Q2 and the cluster center O1. Therefore, the cluster center O1 and the cluster center O2 can be associated, that is, the target object A1 at time t0 and the target object A2 at time t1 are the same target object.
  • For the cluster center Q1, the nearest neighbor search is performed in the cluster target center set at time t1 (that is, the cluster center O2 and the cluster center Q2). It can be seen from the figure that the distance between the cluster center Q2 and the cluster center Q1 is smaller than the distance between the cluster center O2 and the cluster center Q1. Therefore, the cluster center Q1 and the cluster center Q2 can be associated, that is, the target object B1 at time t0 and the target object B2 at time t1 are the same target object.
  • Since FIG. 11 is a schematic diagram of point cloud frame scenes captured at different times, the target objects in FIG. 11 may be processed first, for example, the same target object in FIG. 11 may be placed on the same horizontal line. That is, the target object A1 in (a) in FIG. 11 and the target object A2 in (b) in FIG. 11 are extracted and placed on the same horizontal line, as shown in (a) in FIG. 12; the target object B1 in (a) in FIG. 11 and the target object B2 in (b) in FIG. 11 are extracted and placed on the same horizontal line, as shown in (b) in FIG. 12.
  • In the time period from t0 to t1, the target object B1 moves from Q1 to the position of Q2 (that is, the target object B2). Denote the displacement between Q1 and Q2 as Δs_B; then the movement speed of the target object B1 (including both the magnitude and the direction of the velocity) can be obtained as the ratio of displacement to time, i.e., v_B = Δs_B/(t1 − t0).
  • In the above, the point cloud frame before point cloud distortion correction is used as an example to illustrate the process of extracting target objects and performing target association; in other cases the process is similar to the above and, for brevity, will not be repeated here.
  • In this way, the target objects extracted from the first point cloud frame and the second point cloud frame are associated, and motion estimation is performed based on the associated target objects, that is, motion estimation of the target objects is performed through the scene flow. In this manner, the motion estimation of the target object can be realized efficiently and quickly, and data dependence and empirical risk can be reduced.
  • Optionally, before the extracting of the same target object from the first point cloud frame and the second point cloud frame, the method further includes: performing a preprocessing operation on the first point cloud frame and the second point cloud frame; and the extracting of the same target object from the first point cloud frame and the second point cloud frame includes: extracting the same target object from the first point cloud frame and the second point cloud frame after the preprocessing operation.
  • the preprocessing operations include ground filtering operations and/or downsampling operations.
  • That is, preprocessing operations may be performed on the acquired point cloud frames, such as ground filtering operations and/or downsampling operations, and then at least one object is obtained by clustering based on the preprocessed point cloud frames.
  • The ground filtering operation essentially filters out the ground points from the acquired point cloud, as shown in FIG. 13, which is a schematic diagram of a target scene provided by another embodiment of the present application.
  • The point cloud within the dotted line can be filtered out to obtain the scene point cloud shown in (b) in FIG. 13.
  • When the target object is subsequently extracted from the point cloud frame, the efficiency and the accuracy of target extraction can thereby be improved.
  • The downsampling operation, also known as subsampling, is a multi-rate digital signal processing technique, that is, a process of reducing the signal sampling rate, which can be used to reduce the data transmission rate or the data size.
  • By downsampling the point cloud frame, the efficiency and the extraction accuracy of the target object can be improved.
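A minimal sketch of the two preprocessing operations, assuming a flat ground plane at a known height and a simple voxel-grid downsampling scheme (both are assumptions of this sketch; real ground filtering is typically more elaborate than a height threshold):

```python
def ground_filter(points, ground_z=0.05):
    """Drop points at or below an assumed ground height; a simple
    height-threshold stand-in for the ground filtering operation."""
    return [p for p in points if p[2] > ground_z]

def voxel_downsample(points, voxel=0.5):
    """Keep at most one point per voxel cell to reduce the point
    count before clustering."""
    seen, kept = set(), []
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, z))
    return kept

pts = [(0.2, 0.3, 0.0),                   # ground point, removed by the filter
       (1.0, 1.0, 1.0), (1.1, 1.2, 1.1),  # near-duplicates, merged by the voxel grid
       (3.0, 3.0, 0.6)]
above = ground_filter(pts)
sparse = voxel_downsample(above, voxel=0.5)
```

After both operations, clustering works on far fewer points, which is the efficiency gain described above.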
  • FIG. 14 is a schematic diagram of a point cloud motion distortion correction method 1400 provided by another embodiment of the present application.
  • the method 1400 may include steps 1410-1490.
  • FIG. 15 is a point cloud motion distortion correction apparatus 1500 provided by an embodiment of the present application.
  • the apparatus 1500 may include a memory 1510 and a processor 1520 .
  • the memory 1510 is used to store program codes
  • the processor 1520 calls the program code, and when the program code is executed, is configured to perform the following operations:
  • acquiring a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different moments; extracting the same target object from the first point cloud frame and the second point cloud frame; estimating the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame; and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object.
  • Optionally, the processor 1520 is further configured to: determine a distortion coefficient of the target object according to the movement speed of the target object; and perform distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient.
  • the distortion coefficients include rotational interpolation distortion coefficients and linear motion interpolation distortion coefficients.
  • Optionally, the processor 1520 is further configured to: for a point cloud of the target object in the first point cloud frame and/or the second point cloud frame, determine the corrected coordinate position of the point cloud according to the sum of the product of the rotation interpolation distortion coefficient of the point cloud and the coordinate position of the point cloud, and the linear motion interpolation distortion coefficient of the point cloud.
  • Optionally, the processor 1520 is further configured to: determine the distortion coefficient according to the movement speed of the target object and an interpolation coefficient, where the interpolation coefficient is calculated based on the timestamps of the point clouds in the first point cloud frame and the second point cloud frame.
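One possible reading of the per-point correction described above, in 2D for brevity: the interpolation coefficient is computed from each point's timestamp, the rotation interpolation distortion coefficient acts as a 2x2 rotation applied to the point's coordinates, and the linear motion interpolation distortion coefficient acts as an additive translation. The velocity, angular rate, and the sign convention of "undoing" the motion are assumptions of this sketch, not the patent's exact formulation:

```python
import math

def interp_coeff(t_point, t_start, t_end):
    """Interpolation coefficient computed from the point's timestamp:
    0 at the start of the frame and 1 at its end."""
    return (t_point - t_start) / (t_end - t_start)

def correct_point(p, t_point, t_start, t_end, v, omega):
    """Corrected coordinate = (rotation part applied to p) + (linear
    motion part). v (m/s) and omega (rad/s) are the estimated linear
    and angular speeds of the target; both are assumed inputs here."""
    s = interp_coeff(t_point, t_start, t_end)
    dt = t_end - t_start
    ang = -omega * s * dt                      # rotation accumulated up to this point
    c, si = math.cos(ang), math.sin(ang)
    x, y = p
    xr, yr = c * x - si * y, si * x + c * y    # rotation interpolation part
    return (xr - v[0] * s * dt,                # plus linear motion part
            yr - v[1] * s * dt)

# Toy check: a point sampled at the end of the frame, with the object
# moving east at 1 m/s and no rotation, is shifted back by 1 m.
p_corr = correct_point((1.5, 0.0), t_point=1.0, t_start=0.0, t_end=1.0,
                       v=(1.0, 0.0), omega=0.0)
```

Points sampled early in the frame (small interpolation coefficient) receive almost no correction, while points sampled late receive the full accumulated shift, which is what removes the ghosting.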
  • Optionally, the processor 1520 is further configured to: cluster the point clouds in the first point cloud frame and the point clouds in the second point cloud frame into at least one object respectively by using a clustering algorithm; and associate the object in the first point cloud frame with the object in the second point cloud frame.
  • Optionally, the processor 1520 is further configured to: pair the target point of the object in the first point cloud frame with the target point of the object in the second point cloud frame, so as to associate the object in the first point cloud frame with the object in the second point cloud frame, where the distance between the target point of the target object in the first point cloud frame and the target point of the target object in the second point cloud frame is smaller than the distances from that target point to the target points of other objects in the second point cloud frame.
  • the target point is a center point or a center of gravity point.
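For instance, taking the center point as the target point amounts to the coordinate-wise mean of the clustered points (a plain centroid, shown here as an assumed interpretation of "center point"):

```python
def center_point(cluster):
    """Target point of a clustered object taken as the coordinate-wise
    mean (centroid) of its points."""
    n = len(cluster)
    dims = len(cluster[0])
    return tuple(sum(p[i] for p in cluster) / n for i in range(dims))

# Toy cluster of three points; its centroid serves as the target point.
c = center_point([(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)])
```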
  • Optionally, before extracting the same target object from the first point cloud frame and the second point cloud frame, the processor 1520 is further configured to: perform a preprocessing operation on the first point cloud frame and the second point cloud frame; and the processor is further configured to: extract the same target object from the first point cloud frame and the second point cloud frame after the preprocessing operation.
  • the preprocessing operations include ground filtering operations and/or downsampling operations.
  • the first point cloud frame and the second point cloud frame are point cloud frames at adjacent moments of the same target scene.
  • Embodiments of the present application further provide a computer-readable storage medium for storing a computer program.
  • The computer-readable storage medium can be applied to the point cloud motion distortion correction device in the embodiments of the present application, and the computer program enables a computer to execute the corresponding processes implemented by the point cloud motion distortion correction device in the methods of the embodiments of the present application; for brevity, these processes are not repeated here.
  • Embodiments of the present application also provide a computer program product, including computer program instructions.
  • the computer program product can be applied to the point cloud motion distortion correction device in the embodiments of the present application, and the computer program instructions cause the computer to execute the corresponding methods implemented by the point cloud motion distortion correction device in each method of the embodiments of the present application.
  • For the sake of brevity, the corresponding processes will not be repeated here.
  • the embodiments of the present application also provide a computer program.
  • the computer program can be applied to the device for correcting point cloud motion distortion in the embodiments of the present application.
  • When the computer program runs on a computer, the computer executes the corresponding processes implemented by the point cloud motion distortion correction device in the methods of the embodiments of the present application; for brevity, these will not be repeated here.
  • the embodiments of the present application also provide a radar, the radar includes a memory and a processor, and the processor can call and run a computer program from the memory to implement the methods described in the embodiments of the present application.
  • FIG. 16 is a schematic structural diagram of a point cloud motion distortion correction device provided by another embodiment of the present application.
  • the point cloud motion distortion correction device 1600 shown in FIG. 16 includes a processor 1610, and the processor 1610 can call and run a computer program from a memory to implement the methods described in the embodiments of the present application.
  • the point cloud motion distortion correction apparatus 1600 may further include a memory 1620 .
  • the processor 1610 may call and run a computer program from the memory 1620 to implement the methods in the embodiments of the present application.
  • the memory 1620 may be a separate device independent of the processor 1610, or may be integrated in the processor 1610.
  • Optionally, the point cloud motion distortion correction device 1600 may further include a transceiver 1630, and the processor 1610 may control the transceiver 1630 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices.
  • the point cloud motion distortion correction device may be, for example, a radar, etc., and the point cloud motion distortion correction device 1600 may implement the corresponding processes in each method of the embodiments of the present application, which will not be repeated here for brevity.
  • FIG. 17 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • the chip 1700 shown in FIG. 17 includes a processor 1710, and the processor 1710 can call and run a computer program from a memory, so as to implement the methods in the embodiments of the present application.
  • the chip 1700 may further include a memory 1720 .
  • the processor 1710 may call and run a computer program from the memory 1720 to implement the methods in the embodiments of the present application.
  • the memory 1720 may be a separate device independent of the processor 1710, or may be integrated in the processor 1710.
  • the chip 1700 may further include an input interface 1730 .
  • the processor 1710 can control the input interface 1730 to communicate with other devices or chips, and specifically, can obtain information or data sent by other devices or chips.
  • the chip 1700 may further include an output interface 1740 .
  • the processor 1710 can control the output interface 1740 to communicate with other devices or chips, and specifically, can output information or data to other devices or chips.
  • The chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
  • The processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • The above-mentioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • Volatile memory may be Random Access Memory (RAM), which acts as an external cache.
  • By way of example but not limitation, the memory in the embodiments of the present application may also be a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), a direct Rambus random access memory (Direct Rambus RAM, DR RAM), and so on. That is, the memory in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
  • the memory in the embodiments of the present application may provide instructions and data to the processor.
  • a portion of the memory may also include non-volatile random access memory.
  • the memory may also store device type information.
  • the processor may be configured to execute the instruction stored in the memory, and when the processor executes the instruction, the processor may execute each step corresponding to the terminal device in the foregoing method embodiments.
  • each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor executes the instructions in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • The division of the units is only a logical function division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or may be integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present application.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • The technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, or another medium that can store program code.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A point cloud motion distortion correction method and device, a radar, and a computer readable storage medium. The method comprises: obtaining a first point cloud frame and a second point cloud frame, the first point cloud frame and the second point cloud frame being point cloud frames at different moments of the same target scenario (110); extracting the same target object from the first point cloud frame and the second point cloud frame (120); estimating the motion speed of the target object according to a target object in the first point cloud frame and a target object in the second point cloud frame (130); and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the motion speed of the target object (140). According to the method, the motion speed of the target object is estimated according to a target object in the first point cloud frame and a target object in the second point cloud frame, and distortion correction is performed on the target object on the basis of the estimated motion speed, so that the distortion of a moving target can be eliminated to the greatest extent, and the measurement result is corrected.

Description

Point cloud motion distortion correction method and device
Copyright Notice
The disclosure of this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official records and archives of the Patent and Trademark Office.
Technical Field
The present application relates to the technical field of photoelectric measuring instruments, and more particularly, to a point cloud motion distortion correction method and device.
Background
With the development of radar technology, more and more attention is being paid to the detection of moving targets. Since the points in one point cloud frame output by a scanning lidar are not obtained by scanning the moving target at the same moment, ghosting occurs in the point cloud frame, resulting in motion distortion.
Therefore, how to eliminate motion distortion is a problem that needs to be solved.
Summary of the Invention
The present application proposes a point cloud motion distortion correction method and device, which can eliminate the distortion of a moving target to the greatest extent and correct the measurement result.
In a first aspect, a point cloud motion distortion correction method is provided, comprising: acquiring a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different moments; extracting the same target object from the first point cloud frame and the second point cloud frame; estimating the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame; and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object.
In a second aspect, a point cloud motion distortion correction device is provided, comprising: a memory and a processor; the memory is configured to store program code; the processor calls the program code and, when the program code is executed, is configured to perform the following operations: acquiring a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different moments; extracting the same target object from the first point cloud frame and the second point cloud frame; estimating the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame; and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object.
In a third aspect, a radar is provided, including a processor and a memory. The memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to execute the method in the above first aspect or any of its implementations.
In a fourth aspect, a chip is provided for implementing the method in the above first aspect or any of its implementations.
Specifically, the chip includes: a processor, configured to call and run a computer program from a memory, so that a device installed with the chip executes the method in the above first aspect or any of its implementations.
In a fifth aspect, a computer-readable storage medium is provided for storing a computer program, the computer program comprising instructions for executing the method in the first aspect or any possible implementation of the first aspect.
In a sixth aspect, a computer program product is provided, comprising computer program instructions that cause a computer to execute the method in the first aspect or any of its implementations.
In a seventh aspect, a computer program is provided which, when run on a computer, causes the computer to execute the method in the first aspect or any possible implementation of the first aspect.
In the solution provided by the present application, the movement speed of a target object is estimated according to the target object in the first point cloud frame and the target object in the second point cloud frame, and distortion correction is performed on the target object based on the estimated movement speed, so that the distortion of a moving target can be eliminated to the greatest extent and the measurement result can be corrected.
Description of Drawings
The accompanying drawings used in the embodiments are briefly introduced below.
FIG. 1 is a point cloud motion distortion correction method provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of a target scene provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of a target scene provided by another embodiment of the present application.
FIG. 4 is a schematic diagram of point cloud frames before and after motion distortion correction provided by an embodiment of the present application.
FIG. 5 is a schematic diagram of point cloud frames before and after motion distortion correction provided by another embodiment of the present application.
FIG. 6 is a schematic diagram of a scene flow provided by an embodiment of the present application.
FIG. 7 is a schematic diagram of motion estimation based on a scene flow provided by an embodiment of the present application.
FIG. 8 is a schematic diagram of motion estimation for the targets shown in FIG. 7.
FIG. 9 is a schematic diagram of motion estimation based on a scene flow provided by another embodiment of the present application.
FIG. 10 is a schematic diagram of motion estimation for the targets shown in FIG. 9.
FIG. 11 is a schematic diagram of motion estimation based on a scene flow provided by yet another embodiment of the present application.
FIG. 12 is a schematic diagram of motion estimation for the targets shown in FIG. 11.
FIG. 13 is a schematic diagram of a target scene provided by yet another embodiment of the present application.
FIG. 14 is a point cloud motion distortion correction method provided by another embodiment of the present application.
FIG. 15 is a point cloud motion distortion correction device provided by an embodiment of the present application.
FIG. 16 is a schematic structural diagram of a point cloud motion distortion correction device provided by another embodiment of the present application.
FIG. 17 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below.
Unless otherwise specified, all technical and scientific terms used in the embodiments of the present application have the same meanings as commonly understood by those skilled in the technical field of the present application. The terminology used in this application is for the purpose of describing specific embodiments only and is not intended to limit the scope of the application.
The present application is mainly applied to scanning lidar scenarios. Because the points within one frame of point cloud output by a scanning lidar are not obtained by scanning a moving target at the same moment, ghosting occurs in the point cloud frame, which causes motion distortion.
It should be understood that the above scenario is only an example; the present application may also be applied to other scenarios, which should not be construed as limiting the embodiments of the present application.
In essence, point cloud motion distortion arises because the points belonging to the target object are not scanned at the same moment. Regarding the scanning of moving targets by scanning lidar, there is currently no motion distortion correction algorithm for scanning radar.
Therefore, the present application proposes a point cloud motion distortion correction method and device, which can eliminate the distortion of a moving target to the greatest extent and correct the measurement result.
The embodiments of the present application can be applied under the "3S" assumptions, namely the Structural Consistency Assumption, the Speed Consistency Assumption, and the Small Motion Assumption.
Structural consistency assumption: the same target in adjacent frames has consistent structural features.
Speed consistency assumption: the same target in adjacent frames has the same movement speed and direction.
Small motion assumption: the target moves at a low speed (for example, within 60 km/h).
With the structural consistency assumption, the movement speed can be estimated. With the speed consistency assumption, a speed consistency constraint can be imposed on the same target. With the small motion assumption, data association can be performed, and the motion state of the target can finally be estimated.
The point cloud motion distortion correction method 100 provided by the embodiments of the present application is described in detail below with reference to FIG. 1. The method can be applied to a scanning radar, or to a server communicatively connected to a scanning radar. In some embodiments, some of the steps may be performed by the scanning radar and some by the server.
As shown in FIG. 1, the point cloud motion distortion correction method 100 provided by an embodiment of the present application may include steps 110-140.
110: Acquire a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different moments.
The first point cloud frame and the second point cloud frame in this embodiment of the present application may be point cloud frames of the same target scene at different moments. The numbers of points included in such frames may be essentially the same, or may differ considerably between frames captured at different moments.
For example, FIG. 2 is a schematic diagram of a target scene provided by an embodiment of the present application.
Assume that (a) and (b) in FIG. 2 are, respectively, the first point cloud frame and the second point cloud frame in this embodiment. As can be seen from the figure, the numbers of points included in the two frames are essentially the same.
FIG. 3 is a schematic diagram of a target scene provided by another embodiment of the present application.
Assume that (a) and (b) in FIG. 3 are, respectively, the first point cloud frame and the second point cloud frame in this embodiment. As can be seen from the figure, the numbers of points included in the two frames differ considerably, mainly because the number of points in the leftmost point cloud of (b) in FIG. 3 is reduced.
Optionally, in some embodiments, the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at adjacent moments.
The adjacent moments in this embodiment of the present application may be determined based on a time unit. For example, the first point cloud frame and the second point cloud frame may be point cloud frames of adjacent seconds, or of adjacent milliseconds, etc., without limitation.
For example, the first point cloud frame may be the point cloud frame at the current moment (in seconds), and the second point cloud frame the point cloud frame one second later; or the first point cloud frame may be the point cloud frame at the current moment (in milliseconds), and the second point cloud frame the point cloud frame one millisecond later.
120: Extract the same target object from the first point cloud frame and the second point cloud frame.
Since a target object is composed of points, after the first point cloud frame and the second point cloud frame are acquired, the target object can be extracted from each of the two frames.
It can be understood that the target object in this embodiment of the present application may comprise one object or multiple objects, without limitation.
130: Estimate the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame.
It can be understood that, in this embodiment, for a given target object, its movement speed is estimated from that same object as it appears in the first point cloud frame and in the second point cloud frame.
In other words, when multiple target objects are extracted from the first and second point cloud frames, the speed of each target object is computed with respect to the same object across the two frames.
140: Perform distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object.
In this embodiment, performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame can be understood as correcting the coordinate positions of the points belonging to the target object.
In the solution provided by the present application, the movement speed of the target object is estimated from the target object in the first point cloud frame and in the second point cloud frame, and distortion correction is performed on the target object based on the estimated speed, thereby eliminating the distortion of the moving target to the greatest extent and correcting the measurement result.
Optionally, in some embodiments, performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the movement speed of the target object includes: determining a distortion coefficient of the target object according to the movement speed of the target object; and performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient.
Optionally, in some embodiments, the distortion coefficients include a rotational interpolation distortion coefficient and a linear motion interpolation distortion coefficient.
In this embodiment, after the movement speed of the target object is estimated from the target object in the first and second point cloud frames, the distortion coefficient of the target object can further be determined according to the estimated speed, and distortion correction performed on the target object based on that coefficient.
The distortion coefficients in the embodiments of the present application may include a rotational interpolation distortion coefficient and a linear motion interpolation distortion coefficient.
The rotational motion interpolation coefficient may be computed with the slerp (spherical linear interpolation) algorithm, a linear interpolation operation on quaternions that is mainly used to interpolate smoothly between two quaternions representing rotations.
For the translational motion interpolation coefficient, the motion is split proportionally by the distortion coefficient (short-term uniform motion assumption).
As stated above, distortion correction can be performed on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient; the specific implementation is described below.
Optionally, in some embodiments, performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient includes: for each point of the target object in the first point cloud frame and/or the second point cloud frame, determining the corrected coordinate position of the point as the product of the rotational interpolation distortion coefficient of the point and the coordinate position of the point, plus the linear motion interpolation distortion coefficient of the point.
Optionally, in some embodiments, determining the distortion coefficient of the target object according to the movement speed of the target object includes: determining the distortion coefficient according to the movement speed of the target object and an interpolation coefficient, the interpolation coefficient being calculated based on the timestamps of the points in the first point cloud frame and the second point cloud frame.
As described above, the motion distortion correction of the target object in the first and second point cloud frames is essentially a correction of the coordinate positions of the points belonging to the target object in those frames.
In a specific implementation, the motion distortion correction of the target object can be realized through the following steps.
Step 1: Initialize by reading in the points and their corresponding timestamps, as well as the motion direction delta_t and the speed magnitude delta_q.
The initialization essentially reads in the coordinate position of each point, the timestamp of each point, and the speed of the target object.
Step 2: Calculate the interpolation coefficient s_i of each point from the first and last points and the timestamp of each point, as shown in formula (1):

    s_i = (t_n - t_i) / (t_n - t_0)    (1)
where s_i denotes the interpolation coefficient of the i-th point, s_i ∈ [0, 1]; t_i denotes the timestamp of the i-th point, t_n the timestamp of the n-th point, and t_0 the timestamp of the 0th point.
Assume the first and second point cloud frames include one target object consisting of 10 points; for any of those points, the corresponding interpolation coefficient can be calculated by formula (1).
Assume the 0th point is acquired at the 10th ms and the n-th point at the 20th ms. For the 0th point:
    s_0 = (20 - 10) / (20 - 10) = 1
If the 1st point is acquired at the 11th ms, then:
    s_1 = (20 - 11) / (20 - 10) = 0.9
If the 2nd point is acquired at the 12th ms, then:
    s_2 = (20 - 12) / (20 - 10) = 0.8
...
By analogy, if the 9th point is acquired at the 20th ms, then:

    s_9 = (20 - 20) / (20 - 10) = 0
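As a sketch only (the helper name is hypothetical; the millisecond timestamps are the ones from the example above), formula (1) can be checked as follows:

```python
def interp_coeff(t_i, t_0, t_n):
    """Interpolation coefficient s_i = (t_n - t_i) / (t_n - t_0), formula (1)."""
    return (t_n - t_i) / (t_n - t_0)

# Hypothetical timestamps from the example: 0th point at 10 ms, n-th point at 20 ms.
t_0, t_n = 10, 20
for t_i in (10, 11, 12, 20):
    print(t_i, interp_coeff(t_i, t_0, t_n))  # 1.0, 0.9, 0.8, 0.0
```

Note that the coefficient runs from 1 at the first point down to 0 at the last point, so earlier points receive the largest motion compensation.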
Step 3: Calculate the corresponding rotational motion interpolation coefficient and translational motion interpolation coefficient from the interpolation coefficient s_i and the estimated motion of the point cloud cluster between t_0 and t_n.
Here the slerp function in the Eigen library is called to complete the corresponding calculation, as shown in formula (2):
    q_point_last_i = slerp(q_0, q_n; s_i) = (sin((1 - s_i) * θ) / sin θ) * q_0 + (sin(s_i * θ) / sin θ) * q_n    (2)

where q_point_last_i denotes the rotational motion interpolation coefficient of the i-th point, q_0 and q_n are the head and tail vectors of the interpolation, respectively, and θ is the angle between q_0 and q_n.
The translational motion interpolation coefficient can be calculated by formula (3); that is, the motion is split proportionally by the distortion coefficient (short-term uniform motion assumption):

    t_point_last_i = s_i * delta_t    (3)

where t_point_last_i denotes the translational motion interpolation coefficient of the i-th point.
① Calculation of the rotational motion interpolation coefficient of a point
Continuing the example in which the first and second point cloud frames include one target object consisting of 10 points, suppose the lengths of q_0 and q_n are 3 and 2, respectively, and the angle between them is 30°. For the 0th point (s_0 = 1):

    q_point_last_0 = (sin(0 * 30°) / sin 30°) * 3 + (sin(1 * 30°) / sin 30°) * 2 = 2.00
For the 1st point (s_1 = 0.9):

    q_point_last_1 = (sin(0.1 * 30°) / sin 30°) * 3 + (sin(0.9 * 30°) / sin 30°) * 2 ≈ 2.13

...
By analogy, for the 9th point (s_9 = 0):

    q_point_last_9 = (sin(1 * 30°) / sin 30°) * 3 + (sin(0 * 30°) / sin 30°) * 2 = 3.00
② Calculation of the translational motion interpolation coefficient of a point
Still with the example in which the first and second point cloud frames include one target object consisting of 10 points, and assuming the speed of that target object is 10 m/s, for the 0th point:

    t_point_last_0 = s_0 * delta_t = 1 * 10 = 10.00

For the 1st point:

    t_point_last_1 = s_1 * delta_t = 0.9 * 10 = 9.00

...
By analogy, for the 9th point:

    t_point_last_9 = s_9 * delta_t = 0 * 10 = 0
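The numbers in steps ① and ② can be checked with a short sketch. Here the slerp weights of formula (2) are applied to the head and tail magnitudes only, mirroring the worked example (|q_0| = 3, |q_n| = 2, θ = 30°, delta_t = 10); an actual implementation would apply slerp to the quaternions themselves, e.g. via the Eigen library:

```python
import math

def rot_coeff(s, mag0, magn, theta):
    # Slerp weights from formula (2), applied to the head/tail magnitudes
    # as in the worked example above.
    w0 = math.sin((1 - s) * theta) / math.sin(theta)
    wn = math.sin(s * theta) / math.sin(theta)
    return w0 * mag0 + wn * magn

def trans_coeff(s, delta_t):
    # Translational interpolation coefficient from formula (3).
    return s * delta_t

theta = math.radians(30)
print(round(rot_coeff(1.0, 3, 2, theta), 2))  # 2.0
print(round(rot_coeff(0.9, 3, 2, theta), 2))  # 2.13
print(round(rot_coeff(0.0, 3, 2, theta), 2))  # 3.0
print(trans_coeff(0.9, 10))                   # 9.0
```

The three rotational values reproduce the 2.00, 2.13 and 3.00 of step ①, and the translational value reproduces the 9.00 of step ②.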
Step 4: According to the rotational motion interpolation coefficient and the translational motion interpolation coefficient, restore and align each point, via this motion interpolation, to the coordinate system of the last point of the frame, completing the calculation as shown in formula (4):

    p_i_update = q_point_last * p_i + t_point_last    (4)
where p_i_update denotes the coordinate position of the i-th point after the update, p_i denotes its coordinate position before the update, and q_point_last is the rotational motion interpolation coefficient from formula (2).
As described above, the rotational motion interpolation coefficient q_point_last and the translational motion interpolation coefficient t_point_last of the 0th point obtained from formulas (2) and (3) are 2.00 and 10.00, respectively. For the 0th point, assuming the coordinate position before correction is (1, 4, 3), the corrected coordinate position obtained from formula (4) is (12, 18, 16).
The rotational motion interpolation coefficient q_point_last and the translational motion interpolation coefficient t_point_last of the 1st point obtained from formulas (2) and (3) are 2.13 and 9.00, respectively. For the 1st point, assuming the coordinate position before correction is (1, 6, 3), the corrected coordinate position obtained from formula (4) is (11.13, 21.78, 15.39).
Similarly, the other points can be corrected with the above method; for brevity, this is not repeated here.
It should be understood that the above example uses 10 points for illustration. In practice, the number of acquired points may far exceed this, but the implementation is the same as in the example.
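The two worked corrections above can be reproduced as follows; this is a minimal sketch in which the scalar coefficient is applied to each coordinate, exactly as in the example (in a full implementation q_point_last would be a rotation acting on the point):

```python
def correct_point(p, q_point_last, t_point_last):
    # Formula (4): p_i_update = q_point_last * p_i + t_point_last,
    # applied coordinate-wise as in the worked example.
    return tuple(round(q_point_last * c + t_point_last, 2) for c in p)

print(correct_point((1, 4, 3), 2.00, 10.00))  # (12.0, 18.0, 16.0)
print(correct_point((1, 6, 3), 2.13, 9.00))   # (11.13, 21.78, 15.39)
```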
FIG. 4 is a schematic diagram of point cloud frames before and after motion distortion correction provided by an embodiment of the present application.
(a) in FIG. 4 is a schematic diagram of the point cloud frame before motion distortion correction, and (b) in FIG. 4 after correction. Referring to (a) and (b) in FIG. 4, the object formed by the points may be a car.
For the corrected points, see the region circled with the thicker dashed line in (a) and (b) of FIG. 4, i.e., the rear of the car. The figure clearly shows obvious point stacking at the rear of the car in (a) of FIG. 4, with three distinct lines formed by the stacking. After distortion correction, the coordinate positions of the points at the rear of the car in (b) of FIG. 4 change noticeably, and a single distinct line is formed at the rear.
FIG. 5 is a schematic diagram of point cloud frames before and after motion distortion correction provided by another embodiment of the present application. (a) in FIG. 5 is a schematic diagram of the point cloud frame before motion distortion correction, and (b) in FIG. 5 after correction.
For the corrected points, see the region circled with the thicker dashed line in (a) and (b) of FIG. 5, i.e., the right-hand part of the target object. The figure clearly shows obvious point stacking in the right-hand part of the target object in (a) of FIG. 5, with three distinct lines formed by the stacking. After distortion correction, the coordinate positions of the points in the right-hand part of the target object in (b) of FIG. 5 change noticeably, and a single distinct line is formed there.
In the solution provided by the present application, the distortion coefficient is determined from the movement speed of the target object and the interpolation coefficients of the points, and the positions of the points belonging to the target object are corrected based on the distortion coefficient, further eliminating the distortion of the moving target and correcting the measurement result.
The process of performing distortion correction on the target object in the first and second point cloud frames has been described above; the extraction of the target object from the first and second point cloud frames is described below.
Optionally, in some embodiments, extracting the same target object from the first point cloud frame and the second point cloud frame includes: clustering, with a clustering algorithm, the points in the first point cloud frame and the points in the second point cloud frame into at least one object each; and associating the objects in the first point cloud frame with the objects in the second point cloud frame.
The clustering algorithm in the embodiments of the present application may be a Euclidean clustering algorithm, a generalized Euclidean clustering algorithm, or the like, without limitation.
In addition, other methods, such as target detection, may also be used to extract at least one object from the first and second point cloud frames; the present application does not specifically limit this, and any method that can extract objects from a point cloud frame can be applied to the embodiments of the present application.
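As an illustration of the Euclidean clustering step (a hypothetical region-growing sketch, not the implementation used in the embodiments; the `radius` threshold is an assumed parameter), points within a distance threshold of each other are grown into one cluster:

```python
import math

def euclidean_cluster(points, radius):
    """Group points so that any two points within `radius` of each other,
    directly or through a chain of neighbors, share a cluster."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        frontier = [unvisited.pop()]       # seed a new cluster
        cluster = list(frontier)
        while frontier:                    # grow it breadth-first
            i = frontier.pop()
            neighbors = [j for j in unvisited
                         if math.dist(points[i], points[j]) <= radius]
            unvisited.difference_update(neighbors)
            cluster.extend(neighbors)
            frontier.extend(neighbors)
        clusters.append([points[i] for i in cluster])
    return clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(sorted(len(c) for c in euclidean_cluster(pts, 1.5)))  # [2, 3]
```

With a threshold of 1.5 the five sample points fall into two clusters, matching the intuition that each spatially compact group of points forms one candidate object.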
In this embodiment, after at least one object is extracted from each of the first and second point cloud frames with the clustering algorithm, the extracted objects can be associated, i.e., the same object in the first and second point cloud frames is matched up.
In other words, for each object extracted from the first point cloud frame, its corresponding object is sought among the objects extracted from the second point cloud frame.
Optionally, in some embodiments, associating the objects in the first point cloud frame with the objects in the second point cloud frame includes: associating the objects according to the target point of each object in the first point cloud frame and the target point of each object in the second point cloud frame, where the distance between the target point of the target object in the first point cloud frame and the target point of the target object in the second point cloud frame is smaller than its distance to the target points of the other objects in the second point cloud frame.
Optionally, in some embodiments, the target point is the center point or the center of gravity.
It should be noted that, in some implementations, the target point may also be any point on the extracted target object, without limitation.
As stated above, the clustering algorithm clusters the points in the first point cloud frame and the points in the second point cloud frame into at least one object each; that is, one or more objects may be extracted from each frame.
If the points in each of the first and second point cloud frames cluster into a single object, then, since the present application operates under the small motion assumption, that object in the two frames can be associated directly; if the points in each frame cluster into multiple objects, the multiple objects in the two frames need to be associated individually.
In a specific implementation, the target objects in the first and second point cloud frames can be associated by the following process.
以目标点为中心点为例,假设某一点云场景中,在t 0时刻(可以认为是本申请中的第一点云帧对应时刻)有n个运动目标K t0Taking the target point as the center point as an example, it is assumed that in a certain point cloud scene, there are n moving objects K t0 at time t 0 (which can be considered as the time corresponding to the first point cloud frame in this application),
Figure PCTCN2020118270-appb-000015
Figure PCTCN2020118270-appb-000015
and m moving targets at time t1 (which can be regarded as the time corresponding to the second point cloud frame in this application):

K_t1 = {k_t1^1, k_t1^2, ..., k_t1^m}
Since a target in K_t0 and a target in K_t1 may not be the same moving target, the targets in K_t0 and K_t1 are first associated; for example, each target k_t0^i can be paired with the target at t1 whose center is nearest:

j(i) = argmin_j ||c(k_t1^j) - c(k_t0^i)||

where c(.) denotes the center point of a target.
After target association, the movement speed of each associated target can be calculated from the displacement of its center over the frame interval:

v^i = (c(k_t1^j(i)) - c(k_t0^i)) / (t1 - t0)
Since the scene flow is composed of the different moving targets, the final scene flow F can be obtained by traversing and computing all moving targets:

F = {v^1, v^2, ..., v^n}
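The association-then-velocity procedure above can be sketched as follows, assuming each target is represented by its center point; the nearest-neighbor rule and all sample values are illustrative:

```python
import numpy as np

def scene_flow(centers_t0, centers_t1, dt):
    """Associate each target center at t0 with its nearest center at t1
    and return one velocity vector per target (the scene flow)."""
    flow = []
    for c0 in centers_t0:
        dists = np.linalg.norm(centers_t1 - c0, axis=1)
        c1 = centers_t1[np.argmin(dists)]  # nearest-neighbour association
        flow.append((c1 - c0) / dt)        # velocity = displacement / time
    return np.array(flow)

# Two targets, both shifted by (-1, 0, 0) between the frames.
centers_t0 = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
centers_t1 = np.array([[-1.0, 0.0, 0.0], [9.0, 0.0, 0.0]])
print(scene_flow(centers_t0, centers_t1, dt=1.0))
```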
To facilitate understanding of the solution of the present application, a brief introduction to the scene flow is given first.
Scene flow, the three-dimensional counterpart of optical flow, describes how every point in the point cloud changes between two consecutive frames. Current algorithms that estimate scene flow from point cloud data include FlowNet3D and HPLFlowNet.

The core idea of FlowNet3D is to use PointNet++ as the basic module to extract the point cloud features of the two frames, fuse and upsample them, and directly regress the scene flow. The core idea of HPLFlowNet is the same, except that Bilateral Convolutional Layers are used as the basic module.
FIG. 6 is a schematic diagram of a scene flow provided by an embodiment of the present application.
Referring to (a) in FIG. 6, the small white circles represent point cloud 1 and the small black circles represent point cloud 2, which may be the point clouds of the same target object in two consecutive frames. The scene flow of the point cloud can be obtained by estimating the motion between point cloud 1 and point cloud 2, as shown in (b) of FIG. 6.
FIG. 7 is a schematic diagram of scene-flow-based motion estimation provided by an embodiment of the present application.
In FIG. 7, (a) is the point cloud frame captured at time t0 (i.e., the first point cloud frame in this application) and (b) is the point cloud frame captured at time t1 (i.e., the second point cloud frame in this application).
First, the target objects can be extracted from the first and second point cloud frames respectively. This application takes Euclidean clustering as an example. Referring to (a) in FIG. 7, clustering is performed around the center points O1 and Q1. For center point O1, the points whose distance to O1 is smaller than a preset threshold are clustered into one class; for center point Q1, the points whose distance to Q1 is smaller than the preset threshold are clustered into another class. Through this clustering algorithm, the target objects A1 and B1 shown in the figure are obtained.
Similarly, referring to (b) in FIG. 7, clustering is performed around the center points O2 and Q2. For center point O2, the points whose distance to O2 is smaller than the preset threshold are clustered into one class; for center point Q2, the points whose distance to Q2 is smaller than the preset threshold are clustered into another class. Through this clustering algorithm, the target objects A2 and B2 shown in the figure are obtained.
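A minimal sketch of the Euclidean clustering described above, grouping points whose distance chains stay below the preset threshold; a real implementation would use a KD-tree for the neighbor search, and the sample points are hypothetical:

```python
import numpy as np

def euclidean_cluster(points, threshold):
    """Label each point with a cluster id; points reachable through
    neighbours closer than `threshold` share the same id."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster_id = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster_id
        queue = [seed]
        while queue:
            i = queue.pop()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < threshold) & (labels == -1))[0]:
                labels[j] = cluster_id
                queue.append(j)
        cluster_id += 1
    return labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(euclidean_cluster(pts, threshold=0.5))  # -> [0 0 1 1]
```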
Next, target association can be performed on the extracted target objects: for each cluster center at time t0, a nearest-neighbor search is performed in the set of cluster centers at time t1, and the center with the smallest distance is taken as the associated target point.
For example, for cluster center O1 (i.e., center point O1 above), a nearest-neighbor search is performed in the set of cluster centers at time t1 (cluster centers O2 and Q2). As can be seen from the figure, the distance between O2 and O1 is smaller than the distance between Q2 and O1; therefore, cluster center O1 can be associated with cluster center O2, meaning that target object A1 at time t0 and target object A2 at time t1 are the same target object.
For cluster center Q1 (i.e., center point Q1 above), a nearest-neighbor search is performed in the same set (O2 and Q2). As can be seen from the figure, the distance between Q2 and Q1 is smaller than the distance between O2 and Q1; therefore, cluster center Q1 can be associated with cluster center Q2, meaning that target object B1 at time t0 and target object B2 at time t1 are the same target object.
After target association is completed across the point cloud frames, the motion of each target object can be calculated. Since FIG. 7 shows point cloud frame scenes captured at different times, the target objects in FIG. 7 may be processed first. For example, the same target object can be placed on the same horizontal line: target object A1 in (a) of FIG. 7 and target object A2 in (b) of FIG. 7 are extracted and placed on one horizontal line, as shown in (a) of FIG. 8; target object B1 in (a) of FIG. 7 and target object B2 in (b) of FIG. 7 are extracted and placed on another, as shown in (b) of FIG. 8.
Referring to (a) in FIG. 8, it can be seen that from time t0 to time t1 the target object A1 moves from O1 to the position of O2 (i.e., target object A2). Denoting the displacement from O1 to O2 as d_A1, the velocity v_A1 of target A1 (including both the magnitude and the direction of the velocity) is obtained as the ratio of displacement to time:

v_A1 = d_A1 / t

where t is the duration from time t0 to time t1.
Referring to (b) in FIG. 8, it can be seen that from time t0 to time t1 the target object B1 moves from Q1 to the position of Q2 (i.e., target object B2). Denoting the displacement from Q1 to Q2 as d_B1, the velocity v_B1 of target B1 (including both the magnitude and the direction of the velocity) is likewise:

v_B1 = d_B1 / t
For example, assuming t in the above expressions is 1 s, the measured displacement magnitudes directly give the speeds of target objects A1 and B1 (speed = |d| / t); in this example, both velocity directions are westward.
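The speed computation above can be illustrated with hypothetical coordinates (the measured values in the original are given only in the figures):

```python
import numpy as np

def velocity(p_start, p_end, t):
    """Velocity of a target whose center moved from p_start to p_end in time t."""
    d = np.asarray(p_end) - np.asarray(p_start)  # displacement vector
    return d / t                                 # velocity (magnitude and direction)

# Hypothetical centers: O1 at x = 3 m, O2 at x = 1 m, observed over t = 1 s.
v = velocity([3.0, 0.0], [1.0, 0.0], t=1.0)
print(np.linalg.norm(v))  # speed: 2.0 m/s
print(v)                  # points along -x, i.e. "westward" in the figures
```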
It should be understood that the preset threshold in the embodiments of the present application may be a fixed value or a continuously adjusted value, which is not specifically limited in this application.
FIG. 9 is a schematic diagram of scene-flow-based motion estimation provided by another embodiment of the present application.
In FIG. 9, (a) is the point cloud frame captured at time t0 (i.e., the first point cloud frame in this application) and (b) is the point cloud frame captured at time t1 (i.e., the second point cloud frame in this application).
Taking Euclidean clustering as an example again, referring to (a) in FIG. 9, clustering is performed around the center points O1, Q1 and P1. For each center point, the points whose distance to that center is smaller than the preset threshold are clustered into one class. Through this clustering algorithm, the target objects A1, B1 and C1 in the figure are obtained.
Similarly, referring to (b) in FIG. 9, clustering is performed around the center points O2, Q2 and P2 in the same way, yielding the target objects A2, B2 and C2 in the figure. As can be seen from the figure, target object A2 may not be a complete object.
Next, target association can be performed on the extracted target objects: for each cluster center at time t0, a nearest-neighbor search is performed in the set of cluster centers at time t1, and the center with the smallest distance is taken as the associated target point.
It should be noted that in this embodiment all target objects are assumed to move in the same direction; in the following, all target objects are assumed to move westward.
For example, for cluster center O1 (i.e., center point O1 above), a nearest-neighbor search is performed in the set of cluster centers at time t1 (cluster centers O2, Q2 and P2). As can be seen from the figure, the distance between Q2 and O1 is the smallest. However, since the target objects move westward in this scene, the position of target object A1 at time t1 should lie to the west of its position at time t0, whereas target object B2 lies to the east of target object A1 at time t0; therefore, cluster center O1 must not be associated with cluster center Q2.
Considering the consistency of the motion direction of the target objects, and since the distance between O2 and O1 is the next smallest, cluster center O2 can be associated with cluster center O1; that is, target object A1 at time t0 and target object A2 at time t1 are the same target object.
For cluster center Q1, a nearest-neighbor search is performed in the set of cluster centers at time t1 (O2, Q2 and P2). As can be seen from the figure, combining the motion direction and distance of the target object, cluster center Q2 can be associated with cluster center Q1; that is, target object B1 at time t0 and target object B2 at time t1 are the same target object.
Similarly, for cluster center P1, a nearest-neighbor search is performed in the same set (O2, Q2 and P2). Combining the motion direction and distance of the target object, cluster center P2 can be associated with cluster center P1; that is, target object C1 at time t0 and target object C2 at time t1 are the same target object.
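The direction-consistency check described above can be sketched as follows; restricting candidates by the sign of the dot product with the assumed motion direction is one possible formalization, and all sample values are hypothetical:

```python
import numpy as np

def associate_with_direction(c0, candidates, direction):
    """Nearest-neighbour association restricted to candidate centers that
    lie in the assumed motion direction from c0 (positive dot product)."""
    candidates = np.asarray(candidates)
    offsets = candidates - c0
    valid = offsets @ np.asarray(direction) > 0  # keep consistent direction only
    if not valid.any():
        return None
    dists = np.linalg.norm(offsets, axis=1)
    dists[~valid] = np.inf                       # exclude inconsistent candidates
    return candidates[np.argmin(dists)]

# West is the -x direction here; the nearest candidate lies east and is rejected.
c_O1 = np.array([5.0, 0.0])
centers_t1 = [[6.0, 0.0], [2.0, 0.0]]            # east (dist 1) vs. west (dist 3)
print(associate_with_direction(c_O1, centers_t1, direction=[-1.0, 0.0]))  # -> [2. 0.]
```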
After target association is completed across the point cloud frames, the motion of each target object can be calculated. Since FIG. 9 shows point cloud frame scenes captured at different times, the target objects in FIG. 9 may be processed first, for example by placing the same target object on the same horizontal line: target object A1 in (a) of FIG. 9 and target object A2 in (b) of FIG. 9 are extracted and placed on one horizontal line, as shown in (a) of FIG. 10; target objects B1 and B2 are placed on another, as shown in (b) of FIG. 10; and target objects C1 and C2 are placed on a third, as shown in (c) of FIG. 10.
Referring to (a) in FIG. 10, it can be seen that from time t0 to time t1 the target object A1 moves from O1 to the position of O2 (i.e., target object A2). Denoting the displacement from O1 to O2 as d_A1, the velocity v_A1 of target object A1 (including both the magnitude and the direction of the velocity) is obtained as the ratio of displacement to time:

v_A1 = d_A1 / t

where t is the duration from time t0 to time t1.
Referring to (b) in FIG. 10, it can be seen that from time t0 to time t1 the target object B1 moves from Q1 to the position of Q2 (i.e., target object B2). Denoting the displacement from Q1 to Q2 as d_B1, the velocity v_B1 of target object B1 (including both the magnitude and the direction of the velocity) is likewise:

v_B1 = d_B1 / t
Referring to (c) in FIG. 10, it can be seen that from time t0 to time t1 the target object C1 moves from P1 to the position of P2 (i.e., target object C2). Denoting the displacement from P1 to P2 as d_C1, the velocity v_C1 of target object C1 (including both the magnitude and the direction of the velocity) is likewise:

v_C1 = d_C1 / t
For example, assuming t in the above expressions is 1 s, the measured displacement magnitudes directly give the speeds of target objects A1, B1 and C1; in this example, all velocity directions are westward.
It can be understood that, for target object A1, although the cluster center at time t0 and the cluster center at time t1 are not exactly the same center, the result still reflects the movement speed of target object A1 to a certain extent.
FIG. 11 is a schematic diagram of scene-flow-based motion estimation provided by yet another embodiment of the present application.
In FIG. 11, (a) is the point cloud frame captured at time t0 (i.e., the first point cloud frame in this application) and (b) is the point cloud frame captured at time t1 (i.e., the second point cloud frame in this application).
Taking Euclidean clustering as an example, referring to (a) in FIG. 11, clustering is performed around the center points O1 and Q1: for each center point, the points whose distance to that center is smaller than the preset threshold are clustered into one class. Through this clustering algorithm, the target objects A1 and B1 in the figure are obtained.
Similarly, referring to (b) in FIG. 11, clustering is performed around the center points O2 and Q2 in the same way, yielding the target objects A2 and B2 in the figure.
Next, target association can be performed on the extracted target objects: for each cluster center at time t0, a nearest-neighbor search is performed in the set of cluster centers at time t1, and the center with the smallest distance is taken as the associated target point.
It should be noted that in this embodiment the motion directions of the different target objects are assumed to differ; in the following, target object A1 is assumed to move westward and target object B1 eastward.
For example, for cluster center O1, a nearest-neighbor search is performed in the set of cluster centers at time t1 (O2 and Q2). As can be seen from the figure, the distance between O2 and O1 is smaller than the distance between Q2 and O1; therefore, cluster center O1 can be associated with cluster center O2, meaning that target object A1 at time t0 and target object A2 at time t1 are the same target object.
For cluster center Q1, a nearest-neighbor search is performed in the same set (O2 and Q2). As can be seen from the figure, the distance between Q2 and Q1 is smaller than the distance between O2 and Q1; therefore, cluster center Q1 can be associated with cluster center Q2, meaning that target object B1 at time t0 and target object B2 at time t1 are the same target object.
After target association is completed across the point cloud frames, the motion of each target object can be calculated. Since FIG. 11 shows point cloud frame scenes captured at different times, the target objects in FIG. 11 may be processed first, for example by placing the same target object on the same horizontal line: target object A1 in (a) of FIG. 11 and target object A2 in (b) of FIG. 11 are extracted and placed on one horizontal line, as shown in (a) of FIG. 12; target objects B1 and B2 are placed on another, as shown in (b) of FIG. 12.
Referring to (a) in FIG. 12, it can be seen that from time t0 to time t1 the target object A1 moves from O1 to the position of O2 (i.e., target object A2). Denoting the displacement from O1 to O2 as d_A1, the velocity v_A1 of target object A1 (including both the magnitude and the direction of the velocity) is obtained as the ratio of displacement to time:

v_A1 = d_A1 / t

where t is the duration from time t0 to time t1.
Referring to (b) in FIG. 12, it can be seen that from time t0 to time t1 the target object B1 moves from Q1 to the position of Q2 (i.e., target object B2). Denoting the displacement from Q1 to Q2 as d_B1, the velocity v_B1 of target object B1 (including both the magnitude and the direction of the velocity) is likewise:

v_B1 = d_B1 / t
For example, assuming t in the above expressions is 1 s, the measured displacement magnitudes directly give the speeds; in this example, the velocity direction of target object A1 is westward and that of target object B1 is eastward.
It should be understood that the above values are only examples; in a specific implementation, other values may also be used, and they should not particularly limit the present application.
It should be noted that the above embodiments all use the point cloud frames before point cloud distortion correction as an example to describe the process of extracting target objects and performing target association. For point cloud frames after point cloud distortion correction, the process is similar and, for brevity, is not repeated here.
In the solutions provided by the embodiments of the present application, the target objects extracted from the first and second point cloud frames are associated, and motion estimation is performed based on the associated target objects; that is, motion estimation of the target object is performed through the scene flow. This allows the motion of the target object to be estimated efficiently and quickly, while reducing data dependence and empirical risk.
Optionally, in some embodiments, before the extracting of the same target object from the first point cloud frame and the second point cloud frame, the method further includes: performing a preprocessing operation on the first point cloud frame and the second point cloud frame respectively; and the extracting of the same target object from the first point cloud frame and the second point cloud frame includes: extracting the target object from the first point cloud frame and the second point cloud frame after the preprocessing operation.
Optionally, in some embodiments, the preprocessing operation includes a ground filtering operation and/or a downsampling operation.
In the embodiments of the present application, after the first and second point cloud frames are acquired, a preprocessing operation, such as ground filtering and/or downsampling, may first be performed on the acquired point cloud frames, and at least one object is then clustered from the preprocessed point cloud frames.
It can be understood that the ground filtering operation essentially filters out the acquired point cloud of the ground. FIG. 13 is a schematic diagram of a target scene provided by another embodiment of the present application.
Referring to (a) in FIG. 13, the point cloud within the dotted line can be filtered out to obtain the scene point cloud shown in (b) of FIG. 13. When the target object is subsequently extracted from the point cloud frame, this improves both efficiency and target extraction accuracy.
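A minimal height-threshold version of the ground filtering operation; a plane-fitting method such as RANSAC would be the more robust choice in practice, and the threshold values here are hypothetical:

```python
import numpy as np

def remove_ground(points, z_ground=0.0, margin=0.2):
    """Discard points within `margin` of the assumed ground height."""
    return points[points[:, 2] > z_ground + margin]

cloud = np.array([[1.0, 1.0, 0.05],   # ground
                  [2.0, 0.0, 0.10],   # ground
                  [1.5, 0.5, 1.60]])  # object
print(remove_ground(cloud))           # only the object point remains
```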
The downsampling operation, also called a decimation operation, is a multirate digital signal processing technique, i.e., a process of reducing the sampling rate of a signal; it can be used to increase the data transmission rate or reduce the data size.
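A simple voxel-grid form of the downsampling operation, replacing all points falling into the same voxel with their centroid; the voxel size and sample points are hypothetical:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points in the same voxel with their centroid."""
    keys = np.floor(points / voxel_size).astype(int)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in buckets.values()])

cloud = np.array([[0.01, 0.0, 0.0],
                  [0.03, 0.0, 0.0],     # same voxel as the first point
                  [1.05, 1.05, 1.05]])  # different voxel
print(voxel_downsample(cloud, voxel_size=0.1))
```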
In the solutions provided by the embodiments of the present application, performing a preprocessing operation on the acquired point cloud frames and extracting the target object from the preprocessed point cloud frames improves both efficiency and the extraction accuracy of the target object.
FIG. 14 is a schematic diagram of a point cloud motion distortion correction method 1400 provided by another embodiment of the present application. The method 1400 may include steps 1410 to 1490.
1410. Acquire the point cloud of a scene at time t0.
1420. Acquire the point cloud of the scene at time t1.
1430. Preprocess the acquired point clouds.
1440. Cluster the preprocessed point clouds.
1450. Compute the center of each clustered object.
1460. Perform target association according to the centers.
1470. Perform motion estimation on each target according to the associated targets.
1480. Correct the motion distortion.
1490. Output the scene point cloud.
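Step 1480 can be sketched under a linear motion model: each point is shifted back along the target's estimated velocity according to its own capture time, so that the whole object is expressed at a single reference time. The per-point timestamps and all sample values below are assumptions for illustration:

```python
import numpy as np

def correct_object(points, stamps, v, t_ref):
    """Shift each point back along velocity v according to its capture
    time, expressing the object at the reference time t_ref."""
    return points - v * (stamps - t_ref)[:, None]

# Hypothetical object moving at 2 m/s along +x, scanned over 0.1 s:
# later points appear further along +x than they were at t_ref.
v = np.array([2.0, 0.0, 0.0])
stamps = np.array([0.00, 0.05, 0.10])  # per-point capture times
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.2, 0.0, 0.0]])
print(correct_object(pts, stamps, v, t_ref=0.0))  # all points collapse to the origin
```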
For the specific content of the method 1400, reference may be made to the descriptions of FIG. 1 to FIG. 13 above, which, for brevity, are not repeated here.
The method embodiments of the present application are described in detail above with reference to FIG. 1 to FIG. 14; the apparatus embodiments of the present application are described below with reference to FIG. 15 to FIG. 17. The apparatus embodiments correspond to the method embodiments, so for parts not described in detail, reference may be made to the foregoing method embodiments.
图15为本申请一实施例提供的一种点云运动畸变修正装置1500,该装置1500可以包括存储器1510和处理器1520。FIG. 15 is a point cloud motion distortion correction apparatus 1500 provided by an embodiment of the present application. The apparatus 1500 may include a memory 1510 and a processor 1520 .
所述存储器1510用于存储程序代码;The memory 1510 is used to store program codes;
所述处理器1520,调用所述程序代码,当程序代码被执行时,用于执行以下操作:The processor 1520 calls the program code, and when the program code is executed, is configured to perform the following operations:
获取第一点云帧和第二点云帧,所述第一点云帧和所述第二点云帧是针对同一目标场景的不同时刻的点云帧;Obtain a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at different times;
从所述第一点云帧和所述第二点云帧中提取同一目标对象;extracting the same target object from the first point cloud frame and the second point cloud frame;
根据所述第一点云帧中的目标对象和所述第二点云帧中的目标对象估计所述目标对象的运动速度;Estimate the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame;
根据所述目标对象的运动速度对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正。Distortion correction is performed on the target object in the first point cloud frame and/or the second point cloud frame according to the moving speed of the target object.
可选地,在一些实施例中,所述处理器1520进一步用于:根据所述目标对象的运动速度确定所述目标对象的畸变系数;根据所述畸变系数对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正。Optionally, in some embodiments, the processor 1520 is further configured to: determine a distortion coefficient of the target object according to the movement speed of the target object; and perform distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient.
可选地,在一些实施例中,所述畸变系数包括旋转插值畸变系数和线性运动插值畸变系数。Optionally, in some embodiments, the distortion coefficients include rotational interpolation distortion coefficients and linear motion interpolation distortion coefficients.
可选地,在一些实施例中,所述处理器1520进一步用于:针对所述第一点云帧和/或所述第二点云帧中的目标对象的点云,根据所述点云的旋转插值畸变系数和所述点云的坐标位置的乘积,与所述点云的线性运动插值畸变系数之和确定所述点云修正后的坐标位置。Optionally, in some embodiments, the processor 1520 is further configured to: for a point cloud of the target object in the first point cloud frame and/or the second point cloud frame, determine the corrected coordinate position of the point cloud according to the sum of the product of the rotation interpolation distortion coefficient of the point cloud and the coordinate position of the point cloud, and the linear motion interpolation distortion coefficient of the point cloud.
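A minimal sketch of this per-point correction follows, under the simplifying assumption of a planar (z-axis) rotation: the interpolated rotation matrix plays the role of the rotation interpolation distortion coefficient and the interpolated translation plays the role of the linear motion interpolation distortion coefficient. The angular rate, speed, and frame interval are illustrative assumptions:

```python
import numpy as np

def correct_point(p, s, omega_z, v, dt):
    """Corrected position = R(s) @ p + T(s), for interpolation coefficient s in [0, 1]."""
    theta = omega_z * dt * s              # fraction of the frame-to-frame rotation (about z)
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si, 0.0],
                  [si,  c, 0.0],
                  [0.0, 0.0, 1.0]])       # interpolated rotation term
    T = np.asarray(v) * dt * s            # interpolated linear-motion term
    return R @ np.asarray(p) + T

# Pure-translation case: half the frame interval at 2 m/s along x.
q = correct_point([1.0, 0.0, 0.0], s=0.5, omega_z=0.0, v=[2.0, 0.0, 0.0], dt=0.1)
```

With zero rotation the corrected position reduces to the original point shifted by the interpolated translation, matching the "product plus sum" form described above.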
可选地,在一些实施例中,所述处理器1520进一步用于:根据所述目标对象的运动速度和插值系数确定所述畸变系数,所述插值系数基于所述第一点云帧和所述第二点云帧中的点云的时间戳计算得到。Optionally, in some embodiments, the processor 1520 is further configured to: determine the distortion coefficient according to the movement speed of the target object and an interpolation coefficient, where the interpolation coefficient is calculated based on the timestamps of the point clouds in the first point cloud frame and the second point cloud frame.
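One plausible way to compute such a per-point interpolation coefficient from timestamps is to normalize the point's scan time by the interval between the two frames; this particular normalization is an assumption for illustration, not taken from the disclosure:

```python
def interpolation_coefficient(t_point: float, t_first: float, t_second: float) -> float:
    """Normalized position of a point's timestamp between the two frame times."""
    return (t_point - t_first) / (t_second - t_first)

# A point scanned three quarters of the way through a 0.1 s interval.
s = interpolation_coefficient(0.075, t_first=0.0, t_second=0.1)  # -> 0.75
```

The resulting coefficient can then scale the estimated motion so that points scanned earlier in the frame receive a smaller correction than points scanned later.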
可选地,在一些实施例中,所述处理器1520进一步用于:利用聚类算法,分别将所述第一点云帧中的点云和所述第二点云帧中的点云聚类成至少一个对象;对所述第一点云帧中的对象与所述第二点云帧中的对象进行关联。Optionally, in some embodiments, the processor 1520 is further configured to: use a clustering algorithm to cluster the point clouds in the first point cloud frame and the point clouds in the second point cloud frame into at least one object respectively; and associate the objects in the first point cloud frame with the objects in the second point cloud frame.
可选地,在一些实施例中,所述处理器1520进一步用于:根据所述第一点云帧中的对象的目标点与所述第二点云帧中的对象的目标点对所述第一点云帧中的对象与所述第二点云帧中的对象进行关联,其中,所述第一点云帧中的目标对象的目标点与所述第二点云帧中目标对象的目标点的距离小于与所述第二点云帧中其他对象的目标点的距离。Optionally, in some embodiments, the processor 1520 is further configured to: associate the objects in the first point cloud frame with the objects in the second point cloud frame according to the target points of the objects in the first point cloud frame and the target points of the objects in the second point cloud frame, where the distance between the target point of the target object in the first point cloud frame and the target point of the target object in the second point cloud frame is smaller than the distances to the target points of other objects in the second point cloud frame.
可选地,在一些实施例中,所述目标点为中心点或重心点。Optionally, in some embodiments, the target point is a center point or a center of gravity point.
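Nearest-center association between the objects of two frames can be sketched as follows; the function name and the example coordinates are illustrative assumptions:

```python
import numpy as np

def associate_by_center(centers_a: np.ndarray, centers_b: np.ndarray) -> np.ndarray:
    """For each object center in frame A, return the index of the closest center in frame B."""
    # Pairwise Euclidean distances between every center in A and every center in B.
    d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=2)
    return d.argmin(axis=1)

ca = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])   # frame-A object centers
cb = np.array([[5.2, 0.0, 0.0], [0.1, 0.0, 0.0]])   # frame-B object centers
pairs = associate_by_center(ca, cb)                  # object 0 -> cb[1], object 1 -> cb[0]
```

Each frame-A object is matched to the frame-B object whose target point lies closest, consistent with the distance condition described above.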
可选地,在一些实施例中,在所述从所述第一点云帧和所述第二点云帧中提取同一目标对象之前,所述处理器1520进一步用于:分别对所述第一点云帧和所述第二点云帧进行预处理操作;所述处理器进一步用于:从经过所述预处理操作后的所述第一点云帧和所述第二点云帧中提取所述目标对象。Optionally, in some embodiments, before the extracting of the same target object from the first point cloud frame and the second point cloud frame, the processor 1520 is further configured to: perform a preprocessing operation on the first point cloud frame and the second point cloud frame respectively; and the processor is further configured to: extract the target object from the first point cloud frame and the second point cloud frame after the preprocessing operation.
可选地,在一些实施例中,所述预处理操作包括地面滤除操作和/或降采样操作。Optionally, in some embodiments, the preprocessing operations include ground filtering operations and/or downsampling operations.
可选地,在一些实施例中,所述第一点云帧和所述第二点云帧是针对所述同一目标场景的相邻时刻的点云帧。Optionally, in some embodiments, the first point cloud frame and the second point cloud frame are point cloud frames at adjacent moments of the same target scene.
本申请实施例还提供了一种计算机可读存储介质,用于存储计算机程序。Embodiments of the present application further provide a computer-readable storage medium for storing a computer program.
可选的,该计算机可读存储介质可应用于本申请实施例中的点云运动畸变修正装置,并且该计算机程序使得计算机执行本申请实施例的各个方法中由点云运动畸变修正装置实现的相应流程,为了简洁,在此不再赘述。Optionally, the computer-readable storage medium may be applied to the point cloud motion distortion correction apparatus in the embodiments of the present application, and the computer program causes a computer to execute the corresponding processes implemented by the point cloud motion distortion correction apparatus in the methods of the embodiments of the present application, which are not repeated here for brevity.
本申请实施例还提供了一种计算机程序产品,包括计算机程序指令。Embodiments of the present application also provide a computer program product, including computer program instructions.
可选的,该计算机程序产品可应用于本申请实施例中的点云运动畸变修正装置,并且该计算机程序指令使得计算机执行本申请实施例的各个方法中由点云运动畸变修正装置实现的相应流程,为了简洁,在此不再赘述。Optionally, the computer program product may be applied to the point cloud motion distortion correction apparatus in the embodiments of the present application, and the computer program instructions cause a computer to execute the corresponding processes implemented by the point cloud motion distortion correction apparatus in the methods of the embodiments of the present application, which are not repeated here for brevity.
本申请实施例还提供了一种计算机程序。The embodiments of the present application also provide a computer program.
可选的,该计算机程序可应用于本申请实施例中的点云运动畸变修正装置,当该计算机程序在计算机上运行时,使得计算机执行本申请实施例的各个方法中由点云运动畸变修正装置实现的相应流程,为了简洁,在此不再赘述。Optionally, the computer program may be applied to the point cloud motion distortion correction apparatus in the embodiments of the present application; when the computer program runs on a computer, it causes the computer to execute the corresponding processes implemented by the point cloud motion distortion correction apparatus in the methods of the embodiments of the present application, which are not repeated here for brevity.
本申请实施例还提供了一种雷达,该雷达包括存储器和处理器,处理器可以从存储器中调用并运行计算机程序,以实现本申请实施例中所述的方法。The embodiments of the present application also provide a radar, the radar includes a memory and a processor, and the processor can call and run a computer program from the memory to implement the methods described in the embodiments of the present application.
图16是本申请另一实施例提供的点云运动畸变修正装置的示意性结构图。图16所示的点云运动畸变修正装置1600包括处理器1610,处理器1610可以从存储器中调用并运行计算机程序,以实现本申请实施例中所述的方法。FIG. 16 is a schematic structural diagram of a point cloud motion distortion correction device provided by another embodiment of the present application. The point cloud motion distortion correction device 1600 shown in FIG. 16 includes a processor 1610, and the processor 1610 can call and run a computer program from a memory to implement the methods described in the embodiments of the present application.
可选地,如图16所示,点云运动畸变修正装置1600还可以包括存储器1620。其中,处理器1610可以从存储器1620中调用并运行计算机程序,以实现本申请实施例中的方法。Optionally, as shown in FIG. 16 , the point cloud motion distortion correction apparatus 1600 may further include a memory 1620 . The processor 1610 may call and run a computer program from the memory 1620 to implement the methods in the embodiments of the present application.
其中,存储器1620可以是独立于处理器1610的一个单独的器件,也可以集成在处理器1610中。The memory 1620 may be a separate device independent of the processor 1610, or may be integrated in the processor 1610.
可选地,如图16所示,点云运动畸变修正装置1600还可以包括收发器1630,处理器1610可以控制该收发器1630与其他装置进行通信,具体地,可以向其他装置发送信息或数据,或接收其他装置发送的信息或数据。Optionally, as shown in FIG. 16 , the point cloud motion distortion correction device 1600 may further include a transceiver 1630, and the processor 1610 may control the transceiver 1630 to communicate with other devices, specifically, may send information or data to other devices , or receive information or data sent by other devices.
可选地,点云运动畸变修正装置例如可以是雷达等,并且该点云运动畸变修正装置1600可以实现本申请实施例的各个方法中的相应流程,为了简洁,在此不再赘述。Optionally, the point cloud motion distortion correction device may be, for example, a radar, etc., and the point cloud motion distortion correction device 1600 may implement the corresponding processes in each method of the embodiments of the present application, which will not be repeated here for brevity.
图17是本申请实施例的芯片的示意性结构图。图17所示的芯片1700包括处理器1710,处理器1710可以从存储器中调用并运行计算机程序,以实现本申请实施例中的方法。FIG. 17 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip 1700 shown in FIG. 17 includes a processor 1710, and the processor 1710 can call and run a computer program from a memory, so as to implement the methods in the embodiments of the present application.
可选地,如图17所示,芯片1700还可以包括存储器1720。其中,处理器1710可以从存储器1720中调用并运行计算机程序,以实现本申请实施例中的方法。Optionally, as shown in FIG. 17 , the chip 1700 may further include a memory 1720 . The processor 1710 may call and run a computer program from the memory 1720 to implement the methods in the embodiments of the present application.
其中,存储器1720可以是独立于处理器1710的一个单独的器件,也可以集成在处理器1710中。The memory 1720 may be a separate device independent of the processor 1710, or may be integrated in the processor 1710.
可选地,该芯片1700还可以包括输入接口1730。其中,处理器1710可以控制该输入接口1730与其他装置或芯片进行通信,具体地,可以获取其他装置或芯片发送的信息或数据。Optionally, the chip 1700 may further include an input interface 1730 . The processor 1710 can control the input interface 1730 to communicate with other devices or chips, and specifically, can obtain information or data sent by other devices or chips.
可选地,该芯片1700还可以包括输出接口1740。其中,处理器1710可以控制该输出接口1740与其他装置或芯片进行通信,具体地,可以向其他装置或芯片输出信息或数据。Optionally, the chip 1700 may further include an output interface 1740 . The processor 1710 can control the output interface 1740 to communicate with other devices or chips, and specifically, can output information or data to other devices or chips.
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system-on-chip, a system-on-chip, or a system-on-a-chip, or the like.
应理解,本申请实施例的处理器可能是一种集成电路图像处理系统,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。It should be understood that the processor in the embodiments of the present application may be an integrated circuit image processing system with signal processing capability. In the implementation process, each step of the above method embodiments may be completed by a hardware integrated logic circuit in the processor or by instructions in the form of software. The above-mentioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。It can be understood that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and a direct rambus random access memory (Direct Rambus RAM, DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but not be limited to, these and any other suitable types of memory.
应理解,上述存储器为示例性但不是限制性说明,例如,本申请实施例中的存储器还可以是静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synch Link DRAM,SLDRAM)以及直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)等等。也就是说,本申请实施例中的存储器旨在包括但不限于这些和任意其它适合类型的存储器。It should be understood that the above-mentioned memory is an exemplary but non-limiting description, for example, the memory in the embodiment of the present application may also be a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), Synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous connection Dynamic random access memory (Synch Link DRAM, SLDRAM) and direct memory bus random access memory (Direct Rambus RAM, DR RAM) and so on. That is, the memory in the embodiments of the present application is intended to include but not limited to these and any other suitable types of memory.
本申请实施例中的存储器可以向处理器提供指令和数据。存储器的一部分还可以包括非易失性随机存取存储器。例如,存储器还可以存储设备类型的信息。该处理器可以用于执行存储器中存储的指令,并且该处理器执行该指令时,该处理器可以执行上述方法实施例中与终端设备对应的各个步骤。The memory in the embodiments of the present application may provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory. For example, the memory may also store information on the device type. The processor may be configured to execute the instructions stored in the memory, and when the processor executes the instructions, the processor may perform the steps corresponding to the terminal device in the foregoing method embodiments.
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器执行存储器中的指令,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。In the implementation process, each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software. The steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art. The storage medium is located in the memory, and the processor executes the instructions in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
还应理解,上文对本申请实施例的描述着重于强调各个实施例之间的不同之处,未提到的相同或相似之处可以互相参考,为了简洁,这里不再赘述。It should also be understood that the above description of the embodiments of the present application focuses on emphasizing the differences between the various embodiments, and the unmentioned same or similar points can be referred to each other, and are not repeated here for brevity.
应理解,在本申请实施例中,术语“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系。例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。It should be understood that, in this embodiment of the present application, the term "and/or" is only an association relationship for describing associated objects, indicating that there may be three kinds of relationships. For example, A and/or B can mean that A exists alone, A and B exist at the same time, and B exists alone. In addition, the character "/" in this document generally indicates that the related objects are an "or" relationship.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the above description has generally described the composition and steps of each example in terms of functions. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions using different methods for each particular application, but such implementations should not be considered beyond the scope of the present application.
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working process of the system, device and unit described above may refer to the corresponding process in the foregoing method embodiments, which will not be repeated here.
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、装置或单元的间接耦合或通信连接,也可以是电的,机械的或其它的形式连接。In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may also be electrical, mechanical, or other forms of connection.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本申请实施例方案的目的。The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present application.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以是两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (24)

  1. 一种点云运动畸变修正方法,其特征在于,包括:A point cloud motion distortion correction method, comprising:
    获取第一点云帧和第二点云帧,所述第一点云帧和所述第二点云帧是针对同一目标场景的不同时刻的点云帧;acquiring a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames at different times for the same target scene;
    从所述第一点云帧和所述第二点云帧中提取同一目标对象;extracting the same target object from the first point cloud frame and the second point cloud frame;
    根据所述第一点云帧中的目标对象和所述第二点云帧中的目标对象估计所述目标对象的运动速度;Estimate the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame;
    根据所述目标对象的运动速度对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正。Distortion correction is performed on the target object in the first point cloud frame and/or the second point cloud frame according to the moving speed of the target object.
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述目标对象的运动速度对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正,包括:The method according to claim 1, wherein the performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the moving speed of the target object comprises:
    根据所述目标对象的运动速度确定所述目标对象的畸变系数;Determine the distortion coefficient of the target object according to the moving speed of the target object;
    根据所述畸变系数对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正。Perform distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient.
  3. 根据权利要求2所述的方法,其特征在于,所述畸变系数包括旋转插值畸变系数和线性运动插值畸变系数。The method according to claim 2, wherein the distortion coefficients include rotational interpolation distortion coefficients and linear motion interpolation distortion coefficients.
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述畸变系数对第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正,包括:The method according to claim 3, wherein the performing distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient comprises:
    针对所述第一点云帧和/或所述第二点云帧中的目标对象的点云,根据所述点云的旋转插值畸变系数和所述点云的坐标位置的乘积,与所述点云的线性运动插值畸变系数之和确定所述点云修正后的坐标位置。For the point cloud of the target object in the first point cloud frame and/or the second point cloud frame, determining the corrected coordinate position of the point cloud according to the sum of the product of the rotation interpolation distortion coefficient of the point cloud and the coordinate position of the point cloud, and the linear motion interpolation distortion coefficient of the point cloud.
  5. 根据权利要求2至4中任一项所述的方法,其特征在于,所述根据所述目标对象的运动速度确定所述目标对象的畸变系数,包括:The method according to any one of claims 2 to 4, wherein the determining the distortion coefficient of the target object according to the movement speed of the target object comprises:
    根据所述目标对象的运动速度和插值系数确定所述畸变系数,所述插值系数基于所述第一点云帧和所述第二点云帧中的点云的时间戳计算得到。The distortion coefficient is determined according to the moving speed of the target object and an interpolation coefficient, where the interpolation coefficient is calculated based on the timestamps of the point clouds in the first point cloud frame and the second point cloud frame.
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述从所述第一点云帧和所述第二点云帧中提取同一目标对象,包括:The method according to any one of claims 1 to 5, wherein the extracting the same target object from the first point cloud frame and the second point cloud frame comprises:
    利用聚类算法,分别将所述第一点云帧中的点云和所述第二点云帧中的点云聚类成至少一个对象;Using a clustering algorithm, the point cloud in the first point cloud frame and the point cloud in the second point cloud frame are respectively clustered into at least one object;
    对所述第一点云帧中的对象与所述第二点云帧中的对象进行关联。Associating objects in the first point cloud frame with objects in the second point cloud frame.
  7. 根据权利要求6所述的方法,其特征在于,所述对所述第一点云帧中的对象与所述第二点云帧中的对象进行关联,包括:The method according to claim 6, wherein the associating the objects in the first point cloud frame with the objects in the second point cloud frame comprises:
    根据所述第一点云帧中的对象的目标点与所述第二点云帧中的对象的目标点对所述第一点云帧中的对象与所述第二点云帧中的对象进行关联,其中,所述第一点云帧中的目标对象的目标点与所述第二点云帧中目标对象的目标点的距离小于与所述第二点云帧中其他对象的目标点的距离。Associating the objects in the first point cloud frame with the objects in the second point cloud frame according to the target points of the objects in the first point cloud frame and the target points of the objects in the second point cloud frame, wherein the distance between the target point of the target object in the first point cloud frame and the target point of the target object in the second point cloud frame is smaller than the distances to the target points of other objects in the second point cloud frame.
  8. 根据权利要求7所述的方法,其特征在于,所述目标点为中心点或重心点。The method according to claim 7, wherein the target point is a center point or a center of gravity point.
  9. 根据权利要求1至8中任一项所述的方法,其特征在于,在所述从所述第一点云帧和所述第二点云帧中提取同一目标对象之前,所述方法还包括:The method according to any one of claims 1 to 8, wherein before the extracting of the same target object from the first point cloud frame and the second point cloud frame, the method further comprises:
    分别对所述第一点云帧和所述第二点云帧进行预处理操作;Perform preprocessing operations on the first point cloud frame and the second point cloud frame respectively;
    所述从所述第一点云帧和所述第二点云帧中提取同一目标对象,包括:The extracting the same target object from the first point cloud frame and the second point cloud frame includes:
    从经过所述预处理操作后的所述第一点云帧和所述第二点云帧中提取所述目标对象。The target object is extracted from the first point cloud frame and the second point cloud frame after the preprocessing operation.
  10. 根据权利要求9所述的方法,其特征在于,所述预处理操作包括地面滤除操作和/或降采样操作。The method according to claim 9, wherein the preprocessing operation comprises a ground filtering operation and/or a downsampling operation.
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述第一点云帧和所述第二点云帧是针对所述同一目标场景的相邻时刻的点云帧。The method according to any one of claims 1 to 10, wherein the first point cloud frame and the second point cloud frame are point cloud frames at adjacent moments of the same target scene.
  12. 一种点云运动畸变修正装置,其特征在于,所述装置包括存储器和处理器;A point cloud motion distortion correction device, characterized in that the device includes a memory and a processor;
    所述存储器用于存储程序代码;the memory is used to store program codes;
    所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:The processor calls the program code, and when the program code is executed, is configured to perform the following operations:
    获取第一点云帧和第二点云帧,所述第一点云帧和所述第二点云帧是针对同一目标场景的不同时刻的点云帧;acquiring a first point cloud frame and a second point cloud frame, where the first point cloud frame and the second point cloud frame are point cloud frames at different times for the same target scene;
    从所述第一点云帧和所述第二点云帧中提取同一目标对象;extracting the same target object from the first point cloud frame and the second point cloud frame;
    根据所述第一点云帧中的目标对象和所述第二点云帧中的目标对象估计所述目标对象的运动速度;Estimate the movement speed of the target object according to the target object in the first point cloud frame and the target object in the second point cloud frame;
    根据所述目标对象的运动速度对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正。Distortion correction is performed on the target object in the first point cloud frame and/or the second point cloud frame according to the moving speed of the target object.
  13. 根据权利要求12所述的装置,其特征在于,所述处理器进一步用于:The apparatus of claim 12, wherein the processor is further configured to:
    根据所述目标对象的运动速度确定所述目标对象的畸变系数;Determine the distortion coefficient of the target object according to the moving speed of the target object;
    根据所述畸变系数对所述第一点云帧和/或所述第二点云帧中的目标对象进行畸变修正。Perform distortion correction on the target object in the first point cloud frame and/or the second point cloud frame according to the distortion coefficient.
  14. 根据权利要求13所述的装置,其特征在于,所述畸变系数包括旋转插值畸变系数和线性运动插值畸变系数。The apparatus according to claim 13, wherein the distortion coefficients include rotational interpolation distortion coefficients and linear motion interpolation distortion coefficients.
  15. The device according to claim 14, wherein the processor is further configured to:
    for the point cloud of the target object in the first point cloud frame and/or the second point cloud frame, determine the corrected coordinate position of the point cloud as the sum of the product of the rotational interpolation distortion coefficient of the point cloud and the coordinate position of the point cloud, and the linear motion interpolation distortion coefficient of the point cloud.
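By way of illustration (not part of the claims), claim 15's rule — corrected position = (rotational interpolation coefficient × original position) + linear motion interpolation coefficient — is the shape of a per-point rigid transform. In the sketch below the rotational coefficient is taken to be a 3×3 rotation matrix and the linear coefficient a 3-vector; this concrete reading is an assumption, not stated in the claim:

```python
import numpy as np

def correct_point(rot_coeff, lin_coeff, point):
    """Apply claim 15's rule: corrected = rot_coeff @ point + lin_coeff."""
    return rot_coeff @ point + lin_coeff

# With an identity rotation and a pure translation coefficient,
# the point is simply shifted by the translation.
R = np.eye(3)
t = np.array([0.5, 0.0, 0.0])
p_corrected = correct_point(R, t, np.array([1.0, 2.0, 3.0]))
```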
  16. The device according to any one of claims 13 to 15, wherein the processor is further configured to:
    determine the distortion coefficient according to the movement speed of the target object and an interpolation coefficient, where the interpolation coefficient is calculated based on the timestamps of the point clouds in the first point cloud frame and the second point cloud frame.
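By way of illustration (not part of the claims), claim 16 leaves the exact form of the interpolation coefficient open. A natural choice — an assumption, not stated in the claim — is the normalized position of a point's timestamp within the frame interval, so points sampled later in the scan receive a larger share of the estimated motion:

```python
def interp_coeff(t_point, t_start, t_end):
    """Normalized timestamp in [0, 1] across the frame interval."""
    return (t_point - t_start) / (t_end - t_start)

# A point sampled midway through the sweep gets coefficient 0.5,
# i.e. it would be compensated by half the per-frame displacement.
s = interp_coeff(0.05, 0.0, 0.1)
```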
  17. The device according to any one of claims 12 to 16, wherein the processor is further configured to:
    cluster the point cloud in the first point cloud frame and the point cloud in the second point cloud frame into at least one object each, using a clustering algorithm;
    associate the objects in the first point cloud frame with the objects in the second point cloud frame.
  18. The device according to claim 17, wherein the processor is further configured to:
    associate the objects in the first point cloud frame with the objects in the second point cloud frame according to the target points of the objects in the first point cloud frame and the target points of the objects in the second point cloud frame, wherein the distance between the target point of the target object in the first point cloud frame and the target point of the target object in the second point cloud frame is smaller than its distance to the target points of other objects in the second point cloud frame.
  19. The device according to claim 18, wherein the target point is a center point or a center-of-gravity point.
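By way of illustration (not part of the claims), the association in claims 18–19 amounts to matching each object in the first frame to the second-frame object whose target point (center or centroid) is nearest. A minimal greedy sketch, assuming each object is already reduced to its centroid; the helper name is illustrative:

```python
import numpy as np

def associate(centroids_a, centroids_b):
    """For each object in frame A, pick the frame-B object whose
    target point (centroid) is closest, per claims 18-19."""
    matches = []
    for i, ca in enumerate(centroids_a):
        dists = np.linalg.norm(centroids_b - ca, axis=1)
        matches.append((i, int(np.argmin(dists))))
    return matches

# Two objects per frame; the second frame lists them in swapped order.
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
b = np.array([[9.5, 0.0, 0.0], [0.2, 0.0, 0.0]])
pairs = associate(a, b)  # A0 matches B1, A1 matches B0
```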
  20. The device according to any one of claims 12 to 19, wherein, before extracting the same target object from the first point cloud frame and the second point cloud frame, the processor is further configured to:
    perform a preprocessing operation on the first point cloud frame and the second point cloud frame respectively;
    the processor is further configured to extract the target object from the first point cloud frame and the second point cloud frame after the preprocessing operation.
  21. The device according to claim 20, wherein the preprocessing operation comprises a ground filtering operation and/or a downsampling operation.
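By way of illustration (not part of the claims), the downsampling in claim 21 is commonly realized as voxel-grid filtering: collapse all points falling into the same voxel cell into one representative point. A minimal sketch; the voxel size and averaging strategy are illustrative choices, not specified by the claim:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Average all points that fall into the same voxel cell."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])

# Four points in two tight clusters collapse to two representatives.
pts = np.array([[0.01, 0.0, 0.0], [0.02, 0.0, 0.0],
                [1.00, 1.0, 1.0], [1.01, 1.0, 1.0]])
sampled = voxel_downsample(pts, 0.5)
```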
  22. The device according to any one of claims 12 to 21, wherein the first point cloud frame and the second point cloud frame are point cloud frames of the same target scene at adjacent moments.
  23. A radar, wherein the radar comprises a memory and a processor;
    the memory is configured to store program code;
    the processor invokes the program code and, when the program code is executed, is configured to execute the point cloud motion distortion correction method according to any one of claims 1 to 11.
  24. A computer-readable storage medium, comprising instructions for executing the point cloud motion distortion correction method according to any one of claims 1 to 11.
PCT/CN2020/118270 2020-09-28 2020-09-28 Point cloud motion distortion correction method and device WO2022061850A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/118270 WO2022061850A1 (en) 2020-09-28 2020-09-28 Point cloud motion distortion correction method and device

Publications (1)

Publication Number Publication Date
WO2022061850A1 true WO2022061850A1 (en) 2022-03-31

Family

ID=80844859

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820392A (en) * 2022-06-28 2022-07-29 新石器慧通(北京)科技有限公司 Laser radar detection moving target distortion compensation method, device and storage medium
CN116359938A (en) * 2023-05-31 2023-06-30 未来机器人(深圳)有限公司 Object detection method, device and carrying device
TWI832242B (en) * 2022-05-13 2024-02-11 廣達電腦股份有限公司 Preprocessing method and electronic device for radar point cloud

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584183A (en) * 2018-12-05 2019-04-05 吉林大学 A kind of laser radar point cloud goes distortion method and system
CN109613543A (en) * 2018-12-06 2019-04-12 深圳前海达闼云端智能科技有限公司 Method and device for correcting laser point cloud data, storage medium and electronic equipment
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN110235027A (en) * 2017-04-28 2019-09-13 深圳市大疆创新科技有限公司 More object trackings based on LIDAR point cloud
CN110703229A (en) * 2019-09-25 2020-01-17 禾多科技(北京)有限公司 Point cloud distortion removal method and external reference calibration method for vehicle-mounted laser radar reaching IMU
CN110888120A (en) * 2019-12-03 2020-03-17 华南农业大学 Method for correcting laser radar point cloud data motion distortion based on integrated navigation system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI RONG-HUA;LI JIN-MING;CHEN FENG;XIAO YU-ZHI: "A Method of Relative Pose Measurement by Single Load for GEO Instability Target", JOURNAL OF ASTRONAUTICS, vol. 38, no. 10, 30 October 2017 (2017-10-30), pages 1105 - 1113, XP055916562, ISSN: 1000-1328 *
ZHANG BIAO; ZHANG XIAOYUAN; WEI BAOCHEN; QI CHENKUN: "A Point Cloud Distortion Removing and Mapping Algorithm based on Lidar and IMU UKF Fusion", 2019 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 8 July 2019 (2019-07-08), pages 966 - 971, XP033629972, DOI: 10.1109/AIM.2019.8868647 *

Similar Documents

Publication Publication Date Title
WO2022061850A1 (en) Point cloud motion distortion correction method and device
CN109816011B (en) Video key frame extraction method
WO2020103647A1 (en) Object key point positioning method and apparatus, image processing method and apparatus, and storage medium
WO2023193670A1 (en) Pulse neural network target tracking method and system based on event camera
WO2022156626A1 (en) Image sight correction method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN111160232B (en) Front face reconstruction method, device and system
JP7064257B2 (en) Image depth determination method and creature recognition method, circuit, device, storage medium
JP7116262B2 (en) Image depth estimation method and apparatus, electronic device, and storage medium
CN112308866A (en) Image processing method, image processing device, electronic equipment and storage medium
US20200380250A1 (en) Image processing method and apparatus, and computer storage medium
US11783447B2 (en) Methods and apparatus for optimized stitching of overcapture content
JP7113910B2 (en) Image processing method and apparatus, electronic equipment, and computer-readable storage medium
US10929982B2 (en) Face pose correction based on depth information
US20210118172A1 (en) Target detection method, target detection apparatus, and unmanned aerial vehicle
WO2021185036A1 (en) Point cloud data generation and real-time display method and apparatus, device, and medium
CN112509003A (en) Method and system for solving target tracking frame drift
WO2024051591A1 (en) Method and apparatus for estimating rotation of video, and electronic device and storage medium
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN113920023A (en) Image processing method and device, computer readable medium and electronic device
CN113435367A (en) Social distance evaluation method and device and storage medium
WO2023142886A1 (en) Expression transfer method, model training method, and device
CN117152330A (en) Point cloud 3D model mapping method and device based on deep learning
WO2023109069A1 (en) Image retrieval method and apparatus
TWI823491B (en) Optimization method of a depth estimation model, device, electronic equipment and storage media
CN109328459B (en) Intelligent terminal, 3D imaging method thereof and 3D imaging system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20954701; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20954701; Country of ref document: EP; Kind code of ref document: A1