CN115147612B - Processing method for estimating vehicle size in real time based on accumulated point cloud - Google Patents
- Publication number: CN115147612B (application number CN202210897030.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- vehicle
- detection frame
- size
- generate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/422—Global feature extraction by analysis of the whole pattern, for representing the structure of the pattern or shape of an object
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06V10/764—Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects
- G06T2207/10028—Range image; depth image; 3D point clouds
- G06T2207/20221—Image fusion; image merging
- G06V2201/07—Target detection
- G06V2201/08—Detecting or categorising vehicles
Abstract
The embodiment of the invention relates to a processing method for estimating vehicle size in real time based on an accumulated point cloud, comprising the following steps: acquiring a first detection frame and a first vehicle point cloud of a first vehicle at an initial moment; initializing the vehicle accumulated point cloud to generate a first accumulated point cloud; performing vehicle size estimation processing to generate a first vehicle size; performing detection frame size adjustment processing; taking the first accumulated point cloud as the historical accumulated point cloud for the next moment; acquiring a second detection frame and a second vehicle point cloud of the first vehicle at the next moment; updating the vehicle accumulated point cloud to generate a new first accumulated point cloud; performing vehicle size estimation processing to generate a new first vehicle size; performing detection frame size adjustment processing; and taking the new first accumulated point cloud as the new historical accumulated point cloud. Based on the accumulated point cloud, the invention eliminates fluctuation of the detection frame size parameters between consecutive moments.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a processing method for estimating the size of a vehicle in real time based on accumulated point clouds.
Background
The automatic driving system is provided with a sensing module, a map planning module and a track planning module.
The perception module has multi-target tracking (multiple object tracking, MOT) capability. When executing an MOT task, it splits the task into two subtasks: a target detection classification task and a target association task. In the target detection classification task, the module runs a target detection classification model on the environmental point cloud generated by the vehicle-mounted laser radar at the current moment to obtain a group of detection frames (bounding boxes), each corresponding to one target type (such as vehicle, pedestrian, animal, plant, bicycle, tricycle or building) and one group of detection frame parameters (detection frame center point, detection frame depth/length, detection frame width, detection frame height and detection frame orientation angle). In the target association task, the module associates the two detection frames that belong to the same target across two adjacent moments based on a detection frame similarity comparison. By executing MOT tasks, the perception module can obtain the detection frame of any target at any moment.
The map planning module has simultaneous localization and mapping (SLAM) capability. When executing a SLAM task, it constructs an ego-vehicle-centered map based on known ego-vehicle positioning information, map information, lane information and so on; it then obtains the positions and detection frame parameters of all vehicle targets around the ego vehicle from the MOT results of the perception module, and calibrates the positions and sizes of the vehicle obstacles on the ego-vehicle map accordingly. Once calibration is complete, the map planning module sends the ego-vehicle map to the trajectory planning module as one of its reference inputs for ego-vehicle trajectory planning. In other words, the more accurate the obstacle sizes calibrated on the ego-vehicle map, the more accurate the trajectory output by the downstream trajectory planning module.
In practical applications, however, the detection frame size parameters (detection frame depth/length, detection frame width, detection frame height) of the same vehicle target fluctuate between the MOT results output by the perception module at consecutive moments. This is caused by frame-to-frame variation in the point cloud shape of that target, which in turn stems from the changing relative relationship (relative speed, relative angle, relative position) between the ego vehicle and the surrounding vehicle and is almost unavoidable; in some extreme cases (such as partial occlusion of the surrounding vehicle) the fluctuation can be even larger. This fluctuation of the detection frame size parameters increases the error between the calibrated size and the true size of the vehicle obstacle on the ego-vehicle map, which degrades the accuracy of the trajectory output by the downstream trajectory planning module.
Summary of the Invention
The invention aims to overcome the defects of the prior art, and provides a processing method, an electronic device and a computer-readable storage medium for estimating vehicle size in real time based on an accumulated point cloud. A new real-time vehicle size estimation task is added each time the perception module executes an MOT task. In this task, the perception module maintains, for each vehicle target, a dynamically updated accumulated point cloud built from the target's current detection frame and the point cloud inside it (the current vehicle point cloud); it estimates the vehicle size (length, width, height) of the target in real time from the currently updated accumulated point cloud, and adjusts the size parameters of the target's detection frame in real time based on the estimated size. Based on the accumulated point cloud, the invention eliminates the fluctuation of the detection frame size parameters caused by frame-to-frame variation of the point cloud shape, thereby reducing the error between the calibrated size and the true size of vehicle obstacles on the ego-vehicle map.
To achieve the above object, a first aspect of the present invention provides a processing method for estimating a vehicle size in real time based on an accumulated point cloud, where the method includes:
acquiring a first detection frame of a first vehicle at an initial moment and a first vehicle point cloud; initializing the vehicle accumulation point cloud of the first vehicle according to the first detection frame and the first vehicle point cloud to generate a corresponding first accumulation point cloud; performing vehicle size estimation processing on the first vehicle according to the first accumulation point cloud to generate a corresponding first vehicle size; performing detection frame size adjustment processing according to the first vehicle size and the first detection frame; if the detection frame size adjustment processing is successful, the first accumulated point cloud is used as a historical accumulated point cloud at the next moment;
acquiring a second detection frame and a second vehicle point cloud of the first vehicle at the next moment; updating the vehicle accumulated point cloud according to the second detection frame, the second vehicle point cloud and the historical accumulated point cloud to generate a new first accumulated point cloud; performing vehicle size estimation processing on the first vehicle according to the new first accumulated point cloud to generate a new first vehicle size; performing detection frame size adjustment processing according to the new first vehicle size and the second detection frame; and if the detection frame size adjustment processing is successful, taking the new first accumulated point cloud as the new historical accumulated point cloud.
Preferably, the first detection frame and the second detection frame each comprise a detection frame center point, a detection frame depth, a detection frame width and a detection frame height.
Preferably, the initializing the vehicle accumulation point cloud of the first vehicle according to the first detection frame and the first vehicle point cloud to generate a corresponding first accumulation point cloud specifically includes:
a right-handed coordinate system is constructed with the detection frame center point of the first detection frame as the origin and the depth direction of the detection frame as the positive y-axis, to serve as the corresponding vehicle coordinate system, and coordinate conversion processing is performed on the first vehicle point cloud based on this vehicle coordinate system to generate a corresponding first conversion point cloud;
performing point cloud downsampling processing on the first conversion point cloud to generate a corresponding first downsampled point cloud;
and recording the first downsampled point cloud as the vehicle accumulated point cloud of the first vehicle at the initial moment, i.e. the corresponding first accumulated point cloud, and storing it.
Preferably, the updating the vehicle accumulation point cloud according to the second detection frame, the second vehicle point cloud and the history accumulation point cloud to generate a new first accumulation point cloud specifically includes:
a right-handed coordinate system is constructed with the detection frame center point of the second detection frame as the origin and the depth direction of the detection frame as the positive y-axis, to serve as the corresponding vehicle coordinate system, and coordinate conversion processing is performed on the second vehicle point cloud based on this vehicle coordinate system to generate a corresponding second conversion point cloud;
performing point cloud registration processing on the second conversion point cloud by using the historical accumulated point cloud to generate a corresponding first registration point cloud;
performing point cloud fusion processing on the history accumulated point cloud and the first registration point cloud to generate a corresponding first fusion point cloud;
performing point cloud downsampling on the first fused point cloud to generate a corresponding second downsampled point cloud;
and taking the second downsampled point cloud as the new first accumulated point cloud.
Preferably, the vehicle size estimation processing for the first vehicle specifically includes:
performing outlier elimination processing on the first accumulated point cloud input by the current vehicle size estimation processing to generate a corresponding first point cloud;
carrying out statistics of the three-dimensional extrema of the first point cloud under the corresponding coordinate system to generate a corresponding first extremum combination; the first extremum combination comprises the X-axis minimum X_min, the X-axis maximum X_max, the Y-axis minimum Y_min, the Y-axis maximum Y_max, the Z-axis minimum Z_min and the Z-axis maximum Z_max;
calculating the length L, the width W and the height H of the first vehicle from the first extremum combination: L = |Y_max - Y_min|, W = |X_max - X_min|, H = |Z_max - Z_min|;
And the obtained length L, width W and height H form corresponding first vehicle size output.
Preferably, the detection frame size adjustment process specifically includes:
taking the first vehicle size input by the current detection frame size adjustment processing as the current vehicle size, and taking the first detection frame or the second detection frame input by the current detection frame size adjustment processing as the current detection frame;
setting the depth of the detection frame of the current detection frame according to the length L of the current vehicle size; setting the width of the detection frame of the current detection frame according to the width W of the current vehicle size; setting the height of the detection frame of the current detection frame according to the height H of the current vehicle size; and after the length, width and height are set, confirming that the size adjustment of the detection frame is successful.
A second aspect of an embodiment of the present invention provides an electronic device, including: memory, processor, and transceiver;
the processor is configured to couple to the memory, and read and execute the instructions in the memory, so as to implement the method steps described in the first aspect;
the transceiver is coupled to the processor and is controlled by the processor to transmit and receive messages.
A third aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer instructions that, when executed by a computer, cause the computer to perform the method of the first aspect described above.
The embodiments of the invention provide a processing method, an electronic device and a computer-readable storage medium for estimating vehicle size in real time based on an accumulated point cloud. A new real-time vehicle size estimation task is added each time the perception module executes an MOT task: the perception module maintains, for each vehicle target, a dynamically updated accumulated point cloud built from the target's current detection frame and the point cloud inside it (the current vehicle point cloud); it estimates the vehicle size (length, width, height) of the target in real time from the currently updated accumulated point cloud, and adjusts the detection frame size parameters of the target in real time based on the estimated size. Based on the accumulated point cloud, the invention eliminates the fluctuation of the detection frame size parameters caused by frame-to-frame variation of the point cloud shape, thereby reducing the error between the calibrated size and the true size of vehicle obstacles on the ego-vehicle map.
Drawings
Fig. 1 is a schematic diagram of a processing method for estimating a vehicle size in real time based on an accumulated point cloud according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
After each MOT task, the perception module extracts the detection frames of all vehicle targets and takes the sub-point-cloud inside each detection frame as the corresponding vehicle point cloud. With the processing method for estimating vehicle size in real time based on an accumulated point cloud provided by the first embodiment of the invention, at the initial moment the accumulated point cloud of a vehicle target is initialized from the corresponding vehicle point cloud, the vehicle size of the target is estimated from the resulting accumulated point cloud, and the detection frame size of the target is adjusted based on the estimate; at each subsequent moment, the accumulated point cloud is updated from the target's historical accumulated point cloud of the previous moment and the current vehicle point cloud, the vehicle size is re-estimated from the updated accumulated point cloud, and the detection frame size is adjusted based on the new estimate. Fig. 1 is a schematic diagram of the processing method for estimating vehicle size in real time based on an accumulated point cloud according to the first embodiment of the present invention; as shown in fig. 1, the method mainly includes the following steps:
step 1, acquiring a first detection frame and first vehicle point cloud of a first vehicle at an initial moment; initializing a vehicle accumulation point cloud of a first vehicle according to a first detection frame and the first vehicle point cloud to generate a corresponding first accumulation point cloud; the vehicle size estimation processing is carried out on the first vehicle according to the first accumulation point cloud to generate a corresponding first vehicle size; performing detection frame size adjustment processing according to the first vehicle size and the first detection frame; if the detection frame size adjustment processing is successful, the first accumulated point cloud is used as a historical accumulated point cloud at the next moment;
the first detection frame comprises a detection frame center point, a detection frame depth, a detection frame width and a detection frame height;
the method specifically comprises the following steps: step 11, acquiring a first detection frame of a first vehicle at an initial moment and a first vehicle point cloud;
the first detection frame is the target detection frame of the first vehicle obtained when the perception module performs target detection classification processing on the environmental point cloud of the current moment; its parameters include the detection frame center point, detection frame depth, detection frame width and detection frame height; the first vehicle point cloud is the sub-point-cloud of the current environmental point cloud that lies inside the first detection frame;
step 12, initializing a vehicle accumulation point cloud of a first vehicle according to a first detection frame and the first vehicle point cloud to generate a corresponding first accumulation point cloud;
the method specifically comprises the following steps: step 121, a right-handed coordinate system is constructed with the detection frame center point of the first detection frame as the origin and the depth direction of the detection frame as the positive y-axis, to serve as the corresponding vehicle coordinate system; coordinate conversion processing is then performed on the first vehicle point cloud based on this vehicle coordinate system to generate a corresponding first conversion point cloud;
here, the conversion to the vehicle coordinate system of the first vehicle eliminates the depth difference that the distance between the vehicle target and the ego vehicle introduces into the original point cloud; point clouds with this depth difference removed can then be fused with one another in subsequent steps;
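To make the conversion concrete, a minimal sketch (not part of the patent text; the yaw-angle convention and function names are assumptions, since the patent states only the construction of the frame) of expressing a vehicle point cloud in the box-centered right-handed frame:

```python
import numpy as np

def to_vehicle_frame(points, box_center, box_yaw):
    """Express sensor-frame points in the detection-box frame: the box
    center becomes the origin and the box depth (heading) direction
    becomes the +y axis of a right-handed system.  box_yaw is assumed
    to be the rotation of the box heading about +z relative to the
    sensor's +y axis (an assumed convention)."""
    c, s = np.cos(-box_yaw), np.sin(-box_yaw)
    # Rotate about z so the box heading aligns with +y, after centering.
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return (points - box_center) @ rot.T
```

Because the transform is rigid, distances between points are preserved; only the origin and orientation change.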
step 122, performing point cloud downsampling processing on the first conversion point cloud to generate a corresponding first downsampled point cloud;
here, the embodiment of the present invention performs the downsampling of the first conversion point cloud in a voxel-grid-based manner: according to a preset voxel size, the point cloud space of the vehicle coordinate system is divided into a grid of unit cells of volume Δx × Δy × Δz; each unit cell is then sampled according to a preset downsampling mode: in the first mode, a random point is extracted from each cell and used as that cell's sampling point; in the second mode, the center point of each cell is computed and the point in the cell closest to that center is used as the cell's sampling point; when cell sampling is finished, all obtained sampling points form the corresponding first downsampled point cloud;
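The two sampling modes above can be sketched as follows (an illustrative numpy implementation, not the patented code; "center point" is read here as the centroid of the points in each cell, which is one possible interpretation of the text):

```python
import numpy as np

def voxel_downsample(points, voxel_size, mode="nearest", rng=None):
    """Keep one sampling point per occupied voxel of size (dx, dy, dz).

    mode="random"  -> a random member of each cell (first mode in the text).
    mode="nearest" -> the member closest to the cell's point centroid
                      (second mode; interpretation of 'center point').
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    cells = np.floor(points / np.asarray(voxel_size, dtype=float)).astype(np.int64)
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    kept = []
    for cell_id in range(inverse.max() + 1):
        members = points[inverse == cell_id]
        if mode == "random":
            kept.append(members[rng.integers(len(members))])
        else:
            centroid = members.mean(axis=0)
            kept.append(members[np.argmin(np.linalg.norm(members - centroid, axis=1))])
    return np.asarray(kept)
```

Both modes keep exactly one representative per occupied cell, so the output size is bounded by the number of occupied voxels regardless of input density.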
step 123, recording the first downsampled point cloud as the vehicle accumulated point cloud of the first vehicle at the initial moment, i.e. the corresponding first accumulated point cloud, and storing it;
step 13, performing vehicle size estimation processing on the first vehicle according to the first accumulation point cloud to generate a corresponding first vehicle size;
the method specifically comprises the following steps: step 131, performing outlier elimination processing on the first accumulated point cloud input by the current vehicle size estimation processing to generate a corresponding first point cloud;
here, the embodiment of the present invention traverses each point in the first accumulated point cloud, computes the average distance from the current point to all of its neighbouring points, and marks the current point as an outlier if this average distance exceeds a preset distance range; when the traversal is finished, all outliers are deleted from the first accumulated point cloud to obtain the first point cloud;
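A brute-force sketch of this outlier elimination (the neighbour count k and the distance threshold are assumed parameters standing in for the "preset distance range" the patent mentions):

```python
import numpy as np

def remove_outliers(points, k=4, max_mean_dist=1.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds max_mean_dist.  Brute-force O(n^2); a k-d tree would be
    used for large clouds."""
    n = len(points)
    k = min(k, n - 1)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude the point itself
    knn = np.sort(dists, axis=1)[:, :k]      # k smallest neighbour distances
    keep = knn.mean(axis=1) <= max_mean_dist
    return points[keep]
```

A lone point far from the accumulated cluster has a large mean neighbour distance and is discarded, while densely supported surface points survive.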
step 132, counting three-dimensional extremum of the first point cloud under the corresponding coordinate system to generate a corresponding first extremum combination;
wherein the first extremum combination comprises the X-axis minimum X_min, the X-axis maximum X_max, the Y-axis minimum Y_min, the Y-axis maximum Y_max, the Z-axis minimum Z_min and the Z-axis maximum Z_max;
Step 133, calculating the length L, the width W and the height H of the first vehicle from the first extremum combination: L = |Y_max - Y_min|, W = |X_max - X_min|, H = |Z_max - Z_min|;
Step 134, composing the corresponding first vehicle size from the obtained length L, width W and height H;
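Steps 131-134 amount to taking an axis-aligned bounding box of the accumulated cloud in the vehicle frame; a minimal sketch:

```python
import numpy as np

def estimate_size(cloud):
    """Return (L, W, H) from the per-axis extrema of an accumulated
    point cloud already expressed in the vehicle coordinate system
    (x: width, y: depth/length, z: height)."""
    mins, maxs = cloud.min(axis=0), cloud.max(axis=0)
    length = abs(maxs[1] - mins[1])   # L = |Y_max - Y_min|
    width = abs(maxs[0] - mins[0])    # W = |X_max - X_min|
    height = abs(maxs[2] - mins[2])   # H = |Z_max - Z_min|
    return length, width, height
```

This works because the cloud is in the box-aligned frame; on raw sensor-frame points the axis-aligned extent would overestimate the size of a rotated vehicle.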
step 14, performing detection frame size adjustment processing according to the first vehicle size and the first detection frame;
the method specifically comprises the following steps: step 141, taking the first vehicle size input by the current detection frame size adjustment process as the current vehicle size, and taking the first detection frame input by the current detection frame size adjustment process as the current detection frame;
step 142, setting the depth of the detection frame of the current detection frame according to the length L of the current vehicle size; setting the width of the detection frame of the current detection frame according to the width W of the current vehicle size; setting the height of the detection frame of the current detection frame according to the height H of the current vehicle size; after the length, width and height are set, the success of the size adjustment of the detection frame is confirmed;
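The adjustment itself merely overwrites the three size parameters of the detection frame; an illustrative sketch (the field names are hypothetical, not from the patent):

```python
def adjust_box(box, vehicle_size):
    """Overwrite a detection frame's size parameters with the estimated
    vehicle size; returns (adjusted_box, success_flag)."""
    length, width, height = vehicle_size
    adjusted = dict(box)             # leave the input box untouched
    adjusted["depth"] = length       # detection frame depth  <- vehicle length L
    adjusted["width"] = width        # detection frame width  <- vehicle width W
    adjusted["height"] = height      # detection frame height <- vehicle height H
    return adjusted, True
```

The center point and orientation angle are deliberately left alone: only the size parameters fluctuate between frames, so only they are replaced.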
and 15, if the detection frame size adjustment processing is successful, taking the first accumulated point cloud as the historical accumulated point cloud of the next moment.
Step 2, acquiring a second detection frame and a second vehicle point cloud of the first vehicle at the next moment; updating the vehicle accumulated point cloud according to the second detection frame, the second vehicle point cloud and the historical accumulated point cloud to generate a new first accumulated point cloud; performing vehicle size estimation processing on the first vehicle according to the new first accumulated point cloud to generate a new first vehicle size; performing detection frame size adjustment processing according to the new first vehicle size and the second detection frame; and if the detection frame size adjustment processing is successful, taking the new first accumulated point cloud as the new historical accumulated point cloud;
the second detection frame comprises a detection frame center point, a detection frame depth, a detection frame width and a detection frame height;
the method specifically comprises the following steps: step 21, acquiring a second detection frame and a second vehicle point cloud of the first vehicle at the next moment;
the second detection frame is the target detection frame of the first vehicle at the current moment, obtained after the perception module performs target detection classification and target association on the environmental point cloud of the current moment; its parameters include the detection frame center point, detection frame depth, detection frame width and detection frame height; the second vehicle point cloud is the sub-point-cloud of the current environmental point cloud that lies inside the second detection frame;
step 22, updating the vehicle accumulation point cloud according to the second detection frame, the second vehicle point cloud and the historical accumulation point cloud to generate a new first accumulation point cloud;
the method specifically comprises the following steps: step 221, constructing a right-handed coordinate system with the detection frame center point of the second detection frame as the origin and the depth direction of the detection frame as the positive y-axis, which serves as the corresponding vehicle coordinate system; performing coordinate conversion processing on the second vehicle point cloud based on this vehicle coordinate system to generate a corresponding second conversion point cloud;
here, the conversion into the vehicle coordinate system of the first vehicle eliminates the depth differences that the distance between the vehicle target and the ego vehicle causes in the raw point cloud, and the point clouds with this depth difference removed can then be fused with the historical accumulated point cloud;
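The conversion in step 221 can be sketched as follows. This is an illustrative implementation, not the patent's own code; it assumes the detection frame supplies its center point and the yaw angle of its depth direction in the world frame (the function name and parameters are hypothetical):

```python
import numpy as np

def to_vehicle_frame(points, center, yaw):
    """Transform Nx3 world-frame points into the vehicle (detection frame)
    coordinate system: box center becomes the origin, the box depth
    direction becomes +y of a right-handed frame with z kept upward.
    `yaw` is the world-frame heading of the depth direction (radians)."""
    y_axis = np.array([np.cos(yaw), np.sin(yaw), 0.0])  # depth direction
    z_axis = np.array([0.0, 0.0, 1.0])
    x_axis = np.cross(y_axis, z_axis)                   # right-handed: x = y × z
    R = np.stack([x_axis, y_axis, z_axis], axis=1)      # columns = vehicle axes
    # p_local = R^T (p - center), done row-wise for the whole cloud
    return (np.asarray(points, float) - np.asarray(center, float)) @ R
```

With yaw = 0 the world x-axis maps onto the vehicle depth (y) axis, so a point one meter ahead of the box center lands at y = 1 in the vehicle frame.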
step 222, performing point cloud registration processing on the second conversion point cloud by using the history accumulated point cloud to generate a corresponding first registration point cloud;
here, although the historical accumulated point cloud and the second conversion point cloud are in the same vehicle coordinate system, external factors such as road surface bumps and signal interference leave a certain misalignment between the two point clouds, so the second conversion point cloud needs to be registered against the historical accumulated point cloud as the reference; during point cloud registration processing, a pose transformation matrix T between the historical accumulated point cloud and the second conversion point cloud is first solved with the iterative closest point (ICP) algorithm, and the points of the second conversion point cloud are then coordinate-transformed by the obtained matrix T to produce the first registration point cloud; implementations of solving the two-frame pose transformation matrix T with ICP can be found in the related technical literature and are not further described here;
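A minimal point-to-point ICP, in the spirit of step 222, can be sketched as below. This is a pedagogical version with brute-force nearest-neighbor search and a Kabsch/SVD solve per iteration; a production system would use a KD-tree or a library implementation, and the 20-iteration cap is an arbitrary choice for illustration:

```python
import numpy as np

def icp(source, target, iters=20):
    """Estimate a rigid transform (R, t) such that source @ R.T + t
    approximately aligns with target (both Nx3 arrays)."""
    src = np.asarray(source, float)
    tgt = np.asarray(target, float)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force closest-point correspondences
        d2 = ((moved[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        matched = tgt[d2.argmin(axis=1)]
        # Kabsch/SVD: best rigid motion for the current correspondences
        mu_m, mu_t = moved.mean(0), matched.mean(0)
        H = (moved - mu_m).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                 # reflection-safe rotation
        R, t = dR @ R, dR @ t + mu_t - dR @ mu_m
    return R, t
```

Applying the returned transform to the second conversion point cloud yields the first registration point cloud described above.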
step 223, performing point cloud fusion processing on the history accumulated point cloud and the first registration point cloud to generate a corresponding first fusion point cloud;
step 224, performing a point cloud downsampling process on the first fused point cloud to generate a corresponding second downsampled point cloud;
here, the processing procedure of the current step is similar to that of the foregoing step 122, and will not be further described herein;
step 225, taking the second downsampled point cloud as a new first accumulated point cloud;
step 23, performing vehicle size estimation processing on the first vehicle according to the new first accumulation point cloud to generate a new first vehicle size;
the method specifically comprises the following steps: step 231, performing outlier elimination processing on the first accumulated point cloud input by the current vehicle size estimation processing to generate a corresponding first point cloud;
here, the processing procedure of the current step is similar to the foregoing step 131, and will not be further described herein;
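The patent does not specify which outlier criterion step 231 uses; a statistical outlier removal filter (thresholding each point's mean distance to its k nearest neighbors) is a common choice and is sketched below as an assumption, with brute-force neighbor search for self-containment:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than `std_ratio` standard deviations above the cloud average."""
    pts = np.asarray(points, float)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)
    mean_d = d[:, 1:k + 1].mean(axis=1)   # column 0 is the point itself
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return pts[keep]
```

Stray returns far from the vehicle body inflate the axis-aligned extremes of step 232, so removing them first keeps the size estimate from overshooting.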
step 232, computing the three-dimensional extrema of the first point cloud in the corresponding coordinate system to generate a corresponding first extremum combination;
wherein the first extremum combination comprises the X-axis minimum X_min, X-axis maximum X_max, Y-axis minimum Y_min, Y-axis maximum Y_max, Z-axis minimum Z_min and Z-axis maximum Z_max;
Here, the processing of the present step is similar to the foregoing step 132;
step 233, calculating the length L, width W and height H of the first vehicle according to the first extremum combination: L = |Y_max - Y_min|, W = |X_max - X_min|, H = |Z_max - Z_min|;
Here, the processing of the present step is similar to the foregoing step 133;
step 234, composing the obtained length L, width W and height H into the corresponding first vehicle size and outputting it;
here, the processing of the present step is similar to that of the aforementioned step 134;
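Steps 232–234 reduce to taking the axis-aligned extent of the accumulated cloud in the vehicle frame, which can be sketched as:

```python
import numpy as np

def estimate_size(cloud):
    """Vehicle size from the axis-aligned extremes in the vehicle frame:
    L = |Y_max - Y_min| (depth axis), W = |X_max - X_min|,
    H = |Z_max - Z_min|, for an Nx3 point cloud."""
    cloud = np.asarray(cloud, float)
    x_ext, y_ext, z_ext = np.abs(cloud.max(axis=0) - cloud.min(axis=0))
    return y_ext, x_ext, z_ext   # (L, W, H)
```

Because the accumulated cloud covers more of the vehicle body than any single frame, these extremes stabilize after a few accumulations, which is the effect the patent relies on.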
step 24, performing detection frame size adjustment processing according to the new first vehicle size and the second detection frame;
the method specifically comprises the following steps: step 241, taking the first vehicle size input by the current detection frame size adjustment process as the current vehicle size, and taking the second detection frame input by the current detection frame size adjustment process as the current detection frame;
here, the processing procedure of the present step is similar to the foregoing step 141;
step 242, setting the detection frame depth of the current detection frame according to the length L of the current vehicle size; setting the detection frame width of the current detection frame according to the width W of the current vehicle size; setting the detection frame height of the current detection frame according to the height H of the current vehicle size; once the length, width and height have all been set, confirming that the detection frame size adjustment succeeded;
here, the processing of the present step is similar to the aforementioned step 142;
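The size write-back of step 242 is a direct field assignment; the sketch below uses a plain dictionary with hypothetical field names (the patent does not prescribe a data structure), and note that only the size fields change while the detection frame center point is untouched:

```python
def resize_box(box, vehicle_size):
    """Overwrite the detection frame's size fields with the estimated
    vehicle size (L, W, H); field names here are illustrative."""
    L, W, H = vehicle_size
    box["depth"] = L     # detection frame depth  <- vehicle length L
    box["width"] = W     # detection frame width  <- vehicle width W
    box["height"] = H    # detection frame height <- vehicle height H
    return box
```

Once all three fields are set, the adjustment is reported as successful and the accumulated cloud is carried forward as the next moment's history.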
step 25, if the detection frame size adjustment processing succeeds, taking the new first accumulated point cloud as the new historical accumulated point cloud.
In summary, the perception module completes the initialization of the vehicle accumulated point cloud, the first vehicle size estimation and the first detection frame size adjustment through step 1 above, and then continuously performs accumulated point cloud updating, vehicle size estimation and detection frame size adjustment by repeatedly executing step 2. Because the accumulated point cloud loses no point cloud information from any moment, its shape is closer to the real shape of the vehicle than the single-frame vehicle point cloud obtained at each moment; and because the three-dimensional extrema of the accumulated point cloud no longer change noticeably after a certain number of accumulations, the perception module is guaranteed to output a stable detection frame size. Once the perception module stably outputs the detection frame size, the corresponding map planning module can stably calibrate the size of each vehicle obstacle on the ego-vehicle map, which in turn improves the accuracy of the downstream trajectory planning module.
Fig. 2 is a schematic structural diagram of an electronic device according to a second embodiment of the present invention. The electronic device may be the aforementioned terminal device or server, or may be a terminal device or server that is connected to them and implements the method of the embodiment of the present invention. As shown in fig. 2, the electronic device may include: a processor 301 (e.g., a CPU), a memory 302 and a transceiver 303; the transceiver 303 is coupled to the processor 301, and the processor 301 controls the transmitting and receiving actions of the transceiver 303. The memory 302 may store various instructions for performing the various processing functions and implementing the processing steps described in the foregoing method embodiments. Preferably, the electronic device according to the embodiment of the present invention further includes: a power supply 304, a system bus 305 and a communication port 306. The system bus 305 is used to implement communication connections between the elements. The communication port 306 is used for connection and communication between the electronic device and other peripheral devices.
The system bus 305 referred to in fig. 2 may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The system bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 2, but this does not mean there is only one bus or one type of bus. The communication interface is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries and read-only libraries). The memory may comprise random access memory (Random Access Memory, RAM) and may also include non-volatile memory (Non-Volatile Memory), such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a graphics processing unit (Graphics Processing Unit, GPU) and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It should be noted that the embodiments of the present invention also provide a computer readable storage medium storing instructions which, when run on a computer, cause the computer to perform the methods and processes provided in the above embodiments.
The embodiment of the invention also provides a chip for running the instructions, and the chip is used for executing the processing steps described in the embodiment of the method.
The embodiment of the invention provides a processing method, an electronic device and a computer readable storage medium for estimating vehicle size in real time based on an accumulated point cloud. A new real-time vehicle size estimation task is added each time the perception module executes the MOT task: the perception module generates a dynamically updated accumulated point cloud for each vehicle target based on the target's current detection frame and the point cloud inside it (the current vehicle point cloud), estimates the vehicle size of the target in real time from the currently updated accumulated point cloud to obtain a dynamically updated vehicle size (length, width and height), and adjusts the detection frame size parameters of the target in real time based on the currently estimated vehicle size. Based on the accumulated point cloud, the invention eliminates the detection frame size fluctuations caused by the fluctuating point cloud shape between consecutive moments, thereby reducing the error between the calibrated size and the real size of vehicle obstacles on the ego-vehicle map.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; to clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their function. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments has been provided to illustrate the general principles of the invention and is not meant to limit the invention to the particular embodiments disclosed; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (5)
1. A processing method for estimating a vehicle size in real time based on an accumulated point cloud, the method comprising:
acquiring a first detection frame of a first vehicle at an initial moment and a first vehicle point cloud; initializing the vehicle accumulation point cloud of the first vehicle according to the first detection frame and the first vehicle point cloud to generate a corresponding first accumulation point cloud; performing vehicle size estimation processing on the first vehicle according to the first accumulation point cloud to generate a corresponding first vehicle size; performing detection frame size adjustment processing according to the first vehicle size and the first detection frame; if the detection frame size adjustment processing is successful, the first accumulated point cloud is used as a historical accumulated point cloud at the next moment;
acquiring a second detection frame and a second vehicle point cloud of the first vehicle at the next moment; updating the vehicle accumulation point cloud according to the second detection frame, the second vehicle point cloud and the historical accumulation point cloud to generate a new first accumulation point cloud; performing vehicle size estimation processing on the first vehicle according to the new first accumulation point cloud to generate a new first vehicle size; performing detection frame size adjustment processing according to the new first vehicle size and the second detection frame; if the detection frame size adjustment processing is successful, taking the new first accumulation point cloud as the new historical accumulation point cloud;
the first detection frame and the second detection frame comprise a detection frame center point, a detection frame depth, a detection frame width and a detection frame height;
initializing the vehicle accumulation point cloud of the first vehicle according to the first detection frame and the first vehicle point cloud to generate a corresponding first accumulation point cloud, specifically including:
a right-hand coordinate system is constructed in a forward direction by taking a detection frame center point of the first detection frame as an origin and a depth direction of the detection frame as a y-axis to serve as a corresponding vehicle coordinate system, and coordinate conversion processing is carried out on the first vehicle point cloud based on the current vehicle coordinate system to generate a corresponding first conversion point cloud;
performing point cloud downsampling processing on the first conversion point cloud to generate a corresponding first downsampled point cloud;
the first downsampling point cloud is used as a vehicle accumulation point cloud of the first vehicle at the initial moment and is recorded as the corresponding first accumulation point cloud and stored;
the updating the vehicle accumulation point cloud according to the second detection frame, the second vehicle point cloud and the history accumulation point cloud to generate a new first accumulation point cloud specifically includes:
a right-hand coordinate system is constructed in a forward direction by taking a detection frame center point of the second detection frame as an origin and a depth direction of the detection frame as a y-axis to serve as a corresponding vehicle coordinate system, and coordinate conversion processing is carried out on the second vehicle point cloud based on the current vehicle coordinate system to generate a corresponding second conversion point cloud;
performing point cloud registration processing on the second conversion point cloud by using the historical accumulated point cloud to generate a corresponding first registration point cloud;
performing point cloud fusion processing on the history accumulated point cloud and the first registration point cloud to generate a corresponding first fusion point cloud;
performing point cloud downsampling on the first fused point cloud to generate a corresponding second downsampled point cloud;
and taking the second downsampled point cloud as the new first accumulated point cloud.
2. The method for real-time estimating vehicle size based on accumulated point cloud according to claim 1, wherein the estimating vehicle size for the first vehicle specifically comprises:
performing outlier elimination processing on the first accumulated point cloud input by the current vehicle size estimation processing to generate a corresponding first point cloud;
carrying out statistics on the three-dimensional extrema of the first point cloud in a corresponding coordinate system to generate a corresponding first extremum combination; the first extremum combination comprises the X-axis minimum X_min, X-axis maximum X_max, Y-axis minimum Y_min, Y-axis maximum Y_max, Z-axis minimum Z_min and Z-axis maximum Z_max;
calculating the length L, width W and height H of the first vehicle according to the first extremum combination: L = |Y_max - Y_min|, W = |X_max - X_min|, H = |Z_max - Z_min|;
And the obtained length L, width W and height H form corresponding first vehicle size output.
3. The method for real-time estimating a vehicle size based on the accumulated point cloud according to claim 2, wherein the detecting frame size adjusting process specifically includes:
taking the first vehicle size input by the current detection frame size adjustment processing as the current vehicle size, and taking the first detection frame or the second detection frame input by the current detection frame size adjustment processing as the current detection frame;
setting the depth of the detection frame of the current detection frame according to the length L of the current vehicle size; setting the width of the detection frame of the current detection frame according to the width W of the current vehicle size; setting the height of the detection frame of the current detection frame according to the height H of the current vehicle size; and after the length, width and height are set, confirming that the size adjustment of the detection frame is successful.
4. An electronic device, comprising: memory, processor, and transceiver;
the processor being adapted to be coupled to the memory, read and execute the instructions in the memory to carry out the method steps of any one of claims 1-3;
the transceiver is coupled to the processor and is controlled by the processor to transmit and receive messages.
5. A computer readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the instructions of the method of any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210897030.1A CN115147612B (en) | 2022-07-28 | 2022-07-28 | Processing method for estimating vehicle size in real time based on accumulated point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115147612A CN115147612A (en) | 2022-10-04 |
CN115147612B true CN115147612B (en) | 2024-03-29 |
Family
ID=83414200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210897030.1A Active CN115147612B (en) | 2022-07-28 | 2022-07-28 | Processing method for estimating vehicle size in real time based on accumulated point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115147612B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109188457A (en) * | 2018-09-07 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Generation method, device, equipment, storage medium and the vehicle of object detection frame |
WO2022016311A1 (en) * | 2020-07-20 | 2022-01-27 | 深圳元戎启行科技有限公司 | Point cloud-based three-dimensional reconstruction method and apparatus, and computer device |
WO2022022694A1 (en) * | 2020-07-31 | 2022-02-03 | 北京智行者科技有限公司 | Method and system for sensing automated driving environment |
Non-Patent Citations (1)
Title |
---|
Vehicle detection in traffic environment based on fusion of laser point cloud and image information; Zheng Shaowu; Li Weihua; Hu Jianyao; Chinese Journal of Scientific Instrument; 2019-12-15 (No. 12); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022022694A1 (en) | Method and system for sensing automated driving environment | |
CN111797734B (en) | Vehicle point cloud data processing method, device, equipment and storage medium | |
CN111583369B (en) | Laser SLAM method based on facial line angular point feature extraction | |
CN107632308B (en) | Method for detecting contour of obstacle in front of vehicle based on recursive superposition algorithm | |
Weon et al. | Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle | |
US20230386076A1 (en) | Target detection method, storage medium, electronic device, and vehicle | |
CN112051575B (en) | Method for adjusting millimeter wave radar and laser radar and related device | |
CN115436910B (en) | Data processing method and device for performing target detection on laser radar point cloud | |
EP3324210A1 (en) | Self-calibrating sensor system for a wheeled vehicle | |
WO2022179094A1 (en) | Vehicle-mounted lidar external parameter joint calibration method and system, medium and device | |
CN114485698B (en) | Intersection guide line generation method and system | |
CN114705121B (en) | Vehicle pose measurement method and device, electronic equipment and storage medium | |
CN112581613A (en) | Grid map generation method and system, electronic device and storage medium | |
CN117590362B (en) | Multi-laser radar external parameter calibration method, device and equipment | |
CN114966736A (en) | Processing method for predicting target speed based on point cloud data | |
CN115147612B (en) | Processing method for estimating vehicle size in real time based on accumulated point cloud | |
CN114648639B (en) | Target vehicle detection method, system and device | |
CN116844124A (en) | Three-dimensional object detection frame labeling method, three-dimensional object detection frame labeling device, electronic equipment and storage medium | |
CN115236643A (en) | Sensor calibration method, system, device, electronic equipment and medium | |
CN113409376A (en) | Method for filtering laser radar point cloud based on depth estimation of camera | |
CN113537161B (en) | Obstacle identification method, system and device | |
CN114648576B (en) | Target vehicle positioning method, device and system | |
CN115511975A (en) | Distance measurement method of monocular camera and computer program product | |
CN115527034B (en) | Vehicle end point cloud dynamic and static segmentation method, device and medium | |
CN114239706A (en) | Target fusion method and system based on multiple cameras and laser radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||