WO2022000857A1 - Method for establishing a data set, vehicle, and storage medium - Google Patents


Info

Publication number
WO2022000857A1
WO2022000857A1 · PCT/CN2020/121496
Authority
WO
WIPO (PCT)
Prior art keywords
millimeter-wave radar
lidar
image
target
Prior art date
Application number
PCT/CN2020/121496
Other languages
English (en)
French (fr)
Inventor
王澎洛
董旭
刘兰个川
Original Assignee
广东小鹏汽车科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东小鹏汽车科技有限公司 filed Critical 广东小鹏汽车科技有限公司
Priority to EP20900681.6A priority Critical patent/EP3961256A4/en
Publication of WO2022000857A1 publication Critical patent/WO2022000857A1/zh

Classifications

    • G01S13/865 Combination of radar systems with lidar systems
    • G01S13/89 Radar or analogous systems specially adapted for mapping or imaging
    • G01S13/931 Radar or analogous systems for anti-collision purposes of land vehicles
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/931 Lidar systems for anti-collision purposes of land vehicles
    • G01S7/417 Target characterisation by analysis of the echo signal, involving the use of neural networks
    • G01S7/4817 Constructional features of lidar relating to scanning
    • G01S7/40 Means for monitoring or calibrating (radar)
    • G01S7/497 Means for monitoring or calibrating (lidar)
    • G06N3/08 Learning methods for neural networks

Definitions

  • the present invention relates to the technical field of automatic driving, in particular to a method for establishing a data set, a vehicle and a storage medium.
  • Millimeter-wave radar has become an indispensable part of advanced driver assistance systems (ADAS) in automobiles.
  • When a vehicle drives in severe environments such as heavy rain or snow, or under harsh lighting such as strong glare or darkness, the millimeter-wave radar can still work normally and is highly robust.
  • Traditional millimeter-wave radar processing uses digital signal processing algorithms to extract the position and velocity of targets, and builds data sets from the resulting series of radar points or point clouds.
  • the generated radar points or point clouds do not have information such as the size and attitude of the target.
  • In autonomous driving, pose information including the target's position, depth, size, and angle is usually needed for environmental perception; but because the millimeter-wave radar data sets used in autonomous driving are usually annotated manually, labeling efficiency is extremely low.
  • Although other sensors, such as cameras, can also be used to annotate millimeter-wave radar data, cameras estimate pose information with low accuracy.
  • the embodiments of the present invention are proposed to provide a method for establishing a data set, a vehicle and a storage medium that overcome the above-mentioned problems or at least partially solve the above-mentioned problems.
  • an embodiment of the present invention discloses a method for establishing a data set, which is applied to a vehicle equipped with a millimeter-wave radar and a lidar, and is characterized in that the method includes:
  • the target estimation result is projected onto the matching millimeter-wave radar image as a pseudo-annotation of the millimeter-wave radar image
  • a millimeter-wave radar dataset is established based on target confidence and pseudo-annotation.
  • acquiring a millimeter-wave radar image includes:
  • the initial image of the millimeter-wave radar is obtained by collecting the millimeter-wave radar signal and performing digital signal processing;
  • the initial image of the millimeter-wave radar is transformed into the coordinate system to obtain the millimeter-wave radar image in the Cartesian coordinate system and the time stamp corresponding to the millimeter-wave radar image.
  • acquiring a lidar image includes:
  • the lidar image after frame reconstruction is transformed into the Cartesian coordinate system, and the time stamp corresponding to the lidar image is obtained.
  • frame reconstruction is performed on the initial image of the lidar, including:
  • spatial calibration is performed on millimeter-wave radar and lidar, and the target calibration attitude information is obtained, including:
  • according to the preset calibration target, match the millimeter-wave radar signal and the lidar signal of the calibration target; calculate the spatial transformation matrix to obtain the target calibration attitude information.
  • the time stamp matching is performed on the millimeter-wave radar image and the lidar image to complete the time calibration of the millimeter-wave radar and the lidar, including:
  • the time offset of the millimeter-wave radar and the lidar is calculated to complete the time calibration, and the calibrated timestamp is generated.
  • building a deep neural network for inferring lidar images and using the deep neural network to generate target inference results for lidar including:
  • the model integration method is used to build a deep neural network for inferring lidar images, and the deep neural network is used to generate the target inference results of lidar.
  • the radar target confidence is generated according to the local millimeter-wave radar signal in the area where the pseudo-annotation is located, including:
  • the embodiment of the present invention also discloses a vehicle, which is characterized by comprising:
  • a first image acquisition module for acquiring millimeter-wave radar images
  • a second image acquisition module used for acquiring lidar images
  • Spatial calibration module used for spatial calibration of millimeter-wave radar and lidar, to obtain target calibration attitude information
  • the time calibration module is used to match the time stamp of the millimeter-wave radar image and the lidar image, and complete the time calibration of the millimeter-wave radar and the lidar;
  • the target inference module is used to build a deep neural network for inferring lidar images and use the deep neural network to generate the target inference results of the lidar;
  • the pseudo-labeling module is used to project the target estimation result onto the matching millimeter-wave radar image according to the target calibration attitude information as the pseudo-labeling of the millimeter-wave radar image;
  • the confidence level generation module is used to generate the radar target confidence level according to the local signal of the millimeter-wave radar in the area where the pseudo-label is located;
  • the data set building module is used to build a millimeter-wave radar data set according to the target confidence and pseudo-annotation.
  • An embodiment of the present invention further discloses a vehicle, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for establishing a data set as described above.
  • An embodiment of the present invention further discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for establishing a data set described above.
  • Compared with traditional manual annotation, the present invention achieves automatic annotation with the help of lidar, improving labeling efficiency; in high-speed driving, a lidar frame reconstruction algorithm eliminates the jelly (rolling-shutter) effect of the lidar signal in a relatively simple way; and the model-ensemble deep neural network achieves target recognition with a high recall rate, filters out false-positive detection boxes, and improves annotation quality.
  • Fig. 1 is a flow chart of steps of an embodiment of a method for establishing a data set of the present invention
  • FIG. 2 is a structural block diagram of a vehicle embodiment of the present invention.
  • FIG. 3 is a comparison diagram of the effect of performing frame reconstruction on a lidar image in the present invention.
  • One of the core concepts of the embodiments of the present invention is to use lidar to establish a millimeter-wave radar data set.
  • Referring to FIG. 1, a flowchart of the steps of an embodiment of the method for establishing a data set of the present invention is shown, which may specifically include the following steps:
  • S3 perform spatial calibration on millimeter-wave radar and lidar, and obtain target calibration attitude information
  • the present invention also proposes a vehicle, and the above-mentioned method for establishing a data set can be implemented by using the vehicle as an action execution subject.
  • the vehicle includes:
  • a first image acquisition module for acquiring millimeter-wave radar images
  • a second image acquisition module used for acquiring lidar images
  • Spatial calibration module used for spatial calibration of millimeter-wave radar and lidar, to obtain target calibration attitude information
  • the time calibration module is used to match the time stamp of the millimeter-wave radar image and the lidar image, and complete the time calibration of the millimeter-wave radar and the lidar;
  • the target inference module is used to build a deep neural network for inferring lidar images and use the deep neural network to generate the target inference results of the lidar;
  • the pseudo-labeling module is used to project the target estimation result onto the matching millimeter-wave radar image according to the target calibration attitude information as the pseudo-labeling of the millimeter-wave radar image;
  • the confidence level generation module is used to generate the radar target confidence level according to the local signal of the millimeter-wave radar in the area where the pseudo-label is located;
  • the data set building module is used to build a millimeter-wave radar data set according to the target confidence and pseudo-annotation.
  • S1 can be implemented by the first image acquisition module
  • S2 can be implemented by the second image acquisition module
  • S3 can be implemented by the spatial calibration module
  • S4 can be realized by the time calibration module
  • S5 can be realized by the target estimation module
  • S6 can be realized by the pseudo-annotation module
  • S7 can be realized by the confidence generation module
  • S8 can be realized by the data set establishment module.
  • the positions of the two sensors, the millimeter-wave radar and the lidar can be measured, and their respective rough calibration attitude information can be obtained as the initial calibration attitude information.
  • the first image acquisition module acquires a millimeter-wave radar image, including:
  • the first image acquisition module includes:
  • the first acquisition and processing unit is used for acquiring the initial image of the millimeter-wave radar after collecting the millimeter-wave radar signal and performing digital signal processing;
  • the first coordinate system transformation unit is configured to perform coordinate system transformation on the initial image of the millimeter-wave radar to obtain the time stamp corresponding to the millimeter-wave radar image and the millimeter-wave radar image in the Cartesian coordinate system.
  • S11 and S12 may be implemented by the first image acquisition module, or may be implemented by units within the module. Specifically, S11 is realized by the acquisition processing unit, and S12 is realized by the coordinate system transformation unit.
  • the first acquisition and processing unit first acquires radar signals in real cities and high-speed road scenarios, and performs digital signal processing to obtain an initial image of the millimeter-wave radar.
  • Digital signal processing can be implemented in a variety of ways, for example with the Fast Fourier Transform (FFT); super-resolution algorithms such as Multiple Signal Classification (MUSIC) can also be used to generate a super-resolution initial image of the millimeter-wave radar.
  • The first coordinate system transformation unit performs a coordinate system transformation on the initial image of the millimeter-wave radar: the image produced by digital signal processing lies in a polar coordinate system, so to enable subsequent spatial calibration and label-information fusion it must be transformed into the Cartesian coordinate system, yielding the millimeter-wave radar image in the Cartesian coordinate system and its corresponding timestamp.
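The polar-to-Cartesian step can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the initial image is a 2-D range-azimuth magnitude array and uses simple nearest-bin lookup; the function name and grid parameters are hypothetical.

```python
import numpy as np

def polar_to_cartesian(img_polar, r_max, grid):
    """Resample a range-azimuth radar image (shape [n_range, n_azimuth],
    azimuth spanning -pi..pi) onto a square Cartesian grid by bin lookup."""
    n_r, n_a = img_polar.shape
    xx, yy = np.meshgrid(grid, grid, indexing="ij")
    r = np.hypot(xx, yy)                       # range of each (x, y) cell
    a = np.arctan2(yy, xx)                     # azimuth of each (x, y) cell
    ri = np.minimum((r / r_max * (n_r - 1)).astype(int), n_r - 1)
    ai = np.minimum(((a + np.pi) / (2 * np.pi) * (n_a - 1)).astype(int), n_a - 1)
    out = img_polar[ri, ai].astype(float)
    out[r > r_max] = 0.0                       # cells outside radar coverage
    return out
```

Cells beyond the maximum radar range are zeroed so the Cartesian image covers only the sensor's field of view.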
  • acquiring the lidar image by the second image acquisition module includes:
  • the second image acquisition module includes:
  • the second acquisition and processing unit is used for acquiring the lidar signal, and performing digital signal processing on the lidar signal to obtain an initial lidar image;
  • a frame reconstruction unit configured to perform frame reconstruction on the initial image of the lidar when the vehicle speed exceeds a preset speed threshold
  • the second coordinate system transformation unit is used to transform the reconstructed lidar image into a Cartesian coordinate system according to the initial calibration attitude information when the lidar is installed, and obtain a timestamp corresponding to the lidar image.
  • S21 to S23 may be implemented by the second image acquisition module, or may be implemented by each unit in the module. Specifically, S21 may be implemented by a second acquisition and processing unit, S22 may be implemented by a frame reconstruction unit, and S23 may be implemented by a second coordinate system transformation unit.
  • The lidar signal exhibits a rolling-shutter (jelly) effect that deforms certain targets, while the millimeter-wave radar signal does not have this problem, so the lidar signal needs to be processed.
  • the jelly effect of lidar is not obvious, and frame reconstruction can be omitted.
  • FIG. 3 a comparison of effects before and after frame reconstruction in the present invention is shown.
  • The line segment extending from the center of the image is the initial scanning angle of the lidar in the current frame, denoted θ and θ', where θ is the initial scanning angle before frame reconstruction and θ' is the target initial scanning angle after reconstruction.
  • the present invention proposes a frame reconstruction algorithm, which solves the jelly effect in a low-complexity manner by reconstructing the initial scanning angle of the lidar (shown by the line segment in the figure).
  • When the frame reconstruction unit determines that the driving speed of the vehicle exceeds the preset speed threshold, i.e. the vehicle is driving at high speed, frame reconstruction is performed on the initial image of the lidar, including:
  • the target initial scanning angle θ' is calculated according to the coverage of the millimeter-wave radar field of view on the initial lidar image; θ' may lie outside the coverage of the radar field of view.
  • the lidar image after frame reconstruction is transformed into the same Cartesian coordinate system as that of the millimeter-wave radar, and the time stamp corresponding to the lidar image is obtained.
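The frame reconstruction above amounts to merging two consecutive sweeps, split at the target scan angle θ'. The following is an illustrative sketch under the assumption that each sweep is delivered as a point array carrying a per-point azimuth; the array layout and function name are hypothetical.

```python
import numpy as np

def reconstruct_frame(f0, f1, theta_target):
    """Rebuild one lidar frame starting at the target scan angle
    theta_target (radians). f0 and f1 are consecutive sweeps, each an
    (N, 4) array of [x, y, z, azimuth] with azimuth in [0, 2*pi) measured
    from the sensor's native frame-start angle. Points of f0 scanned after
    theta_target (between t'0 and t1) are merged with points of f1 scanned
    before it (between t1 and t'1), giving one sweep covering t'0 to t'1."""
    tail = f0[f0[:, 3] >= theta_target]   # end of the current sweep
    head = f1[f1[:, 3] < theta_target]    # start of the next sweep
    return np.vstack([tail, head])
```

Splitting by azimuth rather than by wall-clock time is valid here because a spinning lidar's scan angle is a monotonic function of time within a sweep.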
  • the space calibration module matches the millimeter-wave radar signal and the lidar signal of the calibration target according to the preset calibration target; calculates the space transformation matrix to obtain the target calibration attitude information.
  • For example, trihedral corner reflectors can be used as calibration targets: match the millimeter-wave radar signal and the lidar signal of each reflector, then calculate the spatial transformation matrix to obtain accurate calibration attitude information as the target calibration attitude information.
  • the time calibration module calculates the time offset between the millimeter-wave radar and the lidar according to the preset segments of the millimeter-wave radar signal and the lidar signal to complete the time calibration, and generates a calibrated time stamp.
  • the preset segment may be a segment with a high-speed target passing by. According to the matching of the millimeter-wave radar response and the lidar response of the high-speed target on the image, the time offset of the two sensors is calculated to complete the time calibration.
  • After calibration, a high-speed target in timestamp-matched millimeter-wave radar and lidar images overlaps completely, and the overlaid images are used for annotation and quality assessment.
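The time offset between the two sensors can be estimated, for example, by cross-correlating the per-frame responses of a fast-moving target. The patent does not name a specific estimator, so the following is an illustrative sketch with hypothetical names:

```python
import numpy as np

def estimate_time_offset(radar_trace, lidar_trace, dt):
    """Estimate the lidar-vs-radar clock offset from the per-frame response
    of a fast-moving target. Both traces are 1-D intensity sequences
    sampled every dt seconds; a positive result means the lidar sees the
    target later than the radar."""
    x_r = radar_trace - np.mean(radar_trace)
    x_l = lidar_trace - np.mean(lidar_trace)
    corr = np.correlate(x_l, x_r, mode="full")
    lag = int(np.argmax(corr)) - (len(x_r) - 1)  # lag index of the peak
    return lag * dt
```

The estimated offset is then subtracted from one sensor's timestamps to produce the calibrated timestamps mentioned above.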
  • the model ensemble method is used to build a deep neural network for inferring the lidar image, and the target inference result of the lidar is generated, which specifically includes:
  • The lidar image is flipped horizontally and vertically and rotated to generate a total of eight groups of data to be inferred, with rotations differing by 90°. Of course, other rotation angles can also be used, or points of the lidar point cloud can be randomly dropped (dropout), etc.
  • S52: input the augmented lidar images into two or more different neural network models for inference, so that multiple groups of inference results from different angles and models are obtained from one group of original images.
  • the present invention adopts two different state-of-the-art models, Complex-YOLO and PointRCNN, so that sixteen groups of inference results can be obtained from one set of original images.
  • S53 screening the sixteen groups of prediction results, if the number of prediction results for the target at the same position is less than a certain threshold, it is determined as a false positive (false positive) and filtered out.
  • the threshold is set to 2 in order to ensure the recall rate.
  • S54 Integrate multiple sets of inference results to obtain target inference results based on lidar, including objectness score, type, position, size, and angle.
  • Other integration methods can also be used according to requirements, which are not limited here.
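The vote-and-filter integration of S53/S54 might look like the following simplified sketch, which operates on 2-D detection centers with objectness scores rather than full 3-D boxes; the grouping radius, tuple layout, and function name are assumptions:

```python
import numpy as np

def ensemble_detections(detections, min_votes=2, merge_dist=1.0):
    """Greedy ensembling of detections gathered from several models and
    augmentations, already mapped back to the original frame. Each entry
    is (x, y, score). Detections within merge_dist of a seed are grouped;
    groups with fewer than min_votes members are dropped as false
    positives, the rest are averaged, weighted by objectness score."""
    remaining = list(detections)
    merged = []
    while remaining:
        seed = remaining.pop(0)
        group, keep = [seed], []
        for d in remaining:
            if np.hypot(d[0] - seed[0], d[1] - seed[1]) <= merge_dist:
                group.append(d)
            else:
                keep.append(d)
        remaining = keep
        if len(group) >= min_votes:
            g = np.array(group)
            w = g[:, 2] / g[:, 2].sum()          # score-weighted average
            merged.append((float(w @ g[:, 0]), float(w @ g[:, 1]),
                           float(g[:, 2].mean())))
    return merged
```

With min_votes=2, as in the patent, a detection must be confirmed by at least two of the model/augmentation runs to survive, which trades a little precision for the high recall the pseudo-labels need.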
  • the lidar estimation result is projected onto the matching millimeter-wave radar image as a pseudo-label of the millimeter-wave radar image.
  • S7 includes:
  • the radar target confidence will be used as an important parameter for millimeter-wave radar annotation.
  • radar targets with low confidence will be removed to prevent the neural network from overfitting to the background noise of the millimeter-wave radar signal.
  • the preset ratio can be set to 120%, and then the target detection frame corresponding to the local millimeter-wave radar signal in the area where the pseudo-label is located is enlarged to 120%, so as to fully cover the radar signal generated on the surface of the target object.
  • the numerical value of the area under the curve can reflect the strength of the millimeter-wave radar signal of the object corresponding to the target detection frame.
  • false positive target detection frames can be filtered out by setting the threshold of the area under the curve to 0.1.
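The histogram/CDF/area-under-curve procedure can be sketched as below. The patent leaves the exact mapping from the area under the normalized CDF to a confidence value implicit; this sketch assumes a stronger local response should score higher, so it returns one minus the CDF area. The function name, normalization constant, and bin count are hypothetical.

```python
import numpy as np

def radar_confidence(patch, max_mag, n_bins=32):
    """Score how strong the local millimeter-wave response is inside an
    (already enlarged) detection box. Histogram the signal magnitudes over
    [0, max_mag], form the normalised empirical CDF, and integrate it: a
    strong response concentrates mass in high bins, so one minus the area
    under the CDF serves as the confidence."""
    mags = np.asarray(patch, dtype=float).ravel()
    hist, _ = np.histogram(mags, bins=n_bins, range=(0.0, max_mag))
    cdf = np.cumsum(hist) / hist.sum()   # normalised to [0, 1]
    auc = cdf.mean()                     # Riemann approximation of the area
    return 1.0 - auc
```

Under this convention, a box containing only background noise scores near zero and is removed by the 0.1 threshold mentioned above.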
  • A millimeter-wave radar data set is established according to the target confidence and the pseudo-annotations. Depending on the vehicle usage scenario or the design of the deep neural network, various forms of millimeter-wave radar image may be required, for example FFT images or images processed by a super-resolution algorithm, in the polar or the Cartesian coordinate system. When the final data set is established, the appropriate image form is selected according to the target confidence, and the pseudo-labels are transformed accordingly to generate the pseudo-label data set as the millimeter-wave radar data set.
  • the present invention can realize automatic labeling with the help of lidar, improve labeling efficiency, and eliminate the jelly effect of lidar signal in a relatively simple way by the frame reconstruction algorithm of lidar in a high-speed environment.
  • the deep neural network based on model ensemble can realize target recognition with high recall rate, filter out false positive target detection frames, and improve the quality of annotation.
  • the embodiment of the present invention also provides a vehicle, including:
  • Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program realizes each process of the above embodiment of the method for establishing a data set, with the same technical effect; to avoid repetition, details are not repeated here.
  • Embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
  • Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present invention. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal equipment to produce a machine, such that the instructions executed by that processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal equipment to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

A method for establishing a data set, and a vehicle, relating to the technical field of autonomous driving. The method comprises: acquiring millimeter-wave radar images and lidar images; performing spatial and time calibration of the millimeter-wave radar and the lidar; building a deep neural network for inference on lidar images and using it to generate target inference results for the lidar; projecting the target inference results onto the matching millimeter-wave radar images as pseudo-annotations of the millimeter-wave radar images; generating a radar target confidence from the local millimeter-wave radar signal in the region of each pseudo-annotation; and establishing a millimeter-wave radar data set according to the target confidence and the pseudo-annotations. Lidar enables automatic annotation and improves labeling efficiency, and the model-ensemble deep neural network achieves target recognition with a high recall rate, filters out false-positive detection boxes, and improves annotation quality.

Description

Method for establishing a data set, vehicle, and storage medium

Technical Field

The present invention relates to the technical field of autonomous driving, and in particular to a method for establishing a data set, a vehicle, and a storage medium.

Background Art

Millimeter-wave radar has become an indispensable part of advanced driver assistance systems (ADAS) in automobiles. When a vehicle drives in severe environments such as heavy rain or snow, or under harsh lighting conditions such as strong glare or darkness, millimeter-wave radar still works normally and is highly robust. Traditional millimeter-wave radar processing uses digital signal processing algorithms to extract the position and velocity of targets and builds data sets from the resulting series of radar points or point clouds. However, these radar points or point clouds carry no information about a target's size or attitude. Autonomous driving usually requires pose information, including the target's position, depth, size, and angle, for environmental perception; but because the millimeter-wave radar data sets used in autonomous driving are usually annotated manually, labeling efficiency is extremely low.

Although other sensors, such as cameras, can also be used to annotate millimeter-wave radar data, cameras estimate pose information with low accuracy.
Summary of the Invention

In view of the above problems, embodiments of the present invention are proposed to provide a method for establishing a data set, a vehicle, and a storage medium that overcome, or at least partially solve, the above problems.

To solve the above problems, an embodiment of the present invention discloses a method for establishing a data set, applied to a vehicle equipped with a millimeter-wave radar and a lidar, the method comprising:

acquiring a millimeter-wave radar image;

acquiring a lidar image;

performing spatial calibration on the millimeter-wave radar and the lidar to obtain target calibration attitude information;

matching the timestamps of the millimeter-wave radar image and the lidar image to complete the time calibration of the millimeter-wave radar and the lidar;

building a deep neural network for inference on lidar images and using it to generate target inference results for the lidar;

according to the target calibration attitude information, projecting the target inference results onto the matching millimeter-wave radar image as pseudo-annotations of the millimeter-wave radar image;

generating a radar target confidence from the local millimeter-wave radar signal in the region of each pseudo-annotation;

establishing a millimeter-wave radar data set according to the target confidence and the pseudo-annotations.
In a specific embodiment, acquiring a millimeter-wave radar image includes:

collecting the millimeter-wave radar signal and performing digital signal processing to obtain an initial millimeter-wave radar image;

performing a coordinate system transformation on the initial millimeter-wave radar image to obtain the millimeter-wave radar image in the Cartesian coordinate system and its corresponding timestamp.

In a specific embodiment, acquiring a lidar image includes:

collecting the lidar signal and performing digital signal processing on it to obtain an initial lidar image;

performing frame reconstruction on the initial lidar image when the vehicle speed exceeds a preset speed threshold;

according to the initial calibration attitude information recorded when the lidar was installed, transforming the frame-reconstructed lidar image into the Cartesian coordinate system and obtaining its corresponding timestamp.

In a specific embodiment, performing frame reconstruction on the initial lidar image includes:

calculating the target initial scanning angle according to the coverage of the millimeter-wave radar field of view on the initial lidar image;

acquiring the image f0 at the current time t0 and the image f1 at the next time t1; calculating the times t'0 and t'1 at which the lidar scans the target initial scanning angle; merging the part of f0 scanned between t'0 and t1 with the part of f1 scanned between t1 and t'1 to reconstruct a single scan image covering t'0 to t'1 as the frame-reconstructed lidar image.
In a specific embodiment, performing spatial calibration on the millimeter-wave radar and the lidar to obtain target calibration attitude information includes:

according to a preset calibration target, matching the millimeter-wave radar signal and the lidar signal of the calibration target; calculating the spatial transformation matrix to obtain the target calibration attitude information.

In a specific embodiment, matching the timestamps of the millimeter-wave radar image and the lidar image to complete the time calibration of the millimeter-wave radar and the lidar includes:

calculating the time offset between the millimeter-wave radar and the lidar from preset segments of the millimeter-wave radar signal and the lidar signal to complete the time calibration, and generating calibrated timestamps.

In a specific embodiment, building a deep neural network for inference on lidar images and using it to generate target inference results for the lidar includes:

building the deep neural network with a model-ensemble method and using it to generate the target inference results of the lidar.

In a specific embodiment, generating the radar target confidence from the local millimeter-wave radar signal in the region of the pseudo-annotation includes:

enlarging the target detection box corresponding to the local millimeter-wave radar signal in the region of the pseudo-annotation to a preset ratio; computing the signal-strength frequency histogram of the millimeter-wave radar signal inside the enlarged detection box and deriving the corresponding cumulative distribution function; normalizing the cumulative distribution function and computing the area under the curve; and generating the radar target confidence from the value of that area.
An embodiment of the present invention further discloses a vehicle, comprising:
a first image acquisition module, configured to acquire a millimeter-wave radar image;
a second image acquisition module, configured to acquire a lidar image;
a spatial calibration module, configured to spatially calibrate the millimeter-wave radar and the lidar and obtain target calibration pose information;
a temporal calibration module, configured to match the millimeter-wave radar image and the lidar image by timestamp and complete the temporal calibration of the millimeter-wave radar and the lidar;
a target inference module, configured to build a deep neural network for inference on lidar images and use it to generate lidar target inference results;
a pseudo-labeling module, configured to project the target inference results, according to the target calibration pose information, onto the matching millimeter-wave radar image as pseudo-labels of that image;
a confidence generation module, configured to generate a radar target confidence score from the local millimeter-wave radar signal in the region of each pseudo-label;
a data set establishment module, configured to establish a millimeter-wave radar data set from the target confidence scores and pseudo-labels.
An embodiment of the present invention further discloses a vehicle comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for establishing a data set described above.
An embodiment of the present invention further discloses a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method for establishing a data set described above.
Embodiments of the present invention include the following advantages:
Compared with traditional manual annotation, the present invention uses lidar to automate annotation and improve annotation efficiency. In high-speed driving, the lidar frame-reconstruction algorithm removes the rolling-shutter effect of the lidar signal in a relatively simple way, and the model-ensemble-based deep neural network achieves high-recall target recognition and filters out false-positive detection boxes, improving annotation quality.
Brief description of the drawings
Figure 1 is a flow chart of the steps of an embodiment of the method for establishing a data set of the present invention;
Figure 2 is a structural block diagram of a vehicle embodiment of the present invention;
Figure 3 is a before-and-after comparison of lidar image frame reconstruction in the present invention.
Detailed description
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
One of the core ideas of the embodiments of the present invention is to use lidar to establish a millimeter-wave radar data set.
Referring to Figure 1, a flow chart of the steps of an embodiment of the method for establishing a data set of the present invention is shown. The method may comprise the following steps:
S1, acquiring a millimeter-wave radar image;
S2, acquiring a lidar image;
S3, spatially calibrating the millimeter-wave radar and the lidar to obtain target calibration pose information;
S4, matching the millimeter-wave radar image and the lidar image by timestamp to complete the temporal calibration of the millimeter-wave radar and the lidar;
S5, building a deep neural network for inference on lidar images and using it to generate lidar target inference results;
S6, projecting the target inference results, according to the target calibration pose information, onto the matching millimeter-wave radar image as pseudo-labels of that image;
S7, generating a radar target confidence score from the local millimeter-wave radar signal in the region of each pseudo-label;
S8, establishing a millimeter-wave radar data set from the target confidence scores and pseudo-labels.
Referring to Figure 2, the present invention further proposes a vehicle; the above method for establishing a data set may be carried out with this vehicle as the executing subject.
Specifically, the vehicle comprises:
a first image acquisition module, configured to acquire a millimeter-wave radar image;
a second image acquisition module, configured to acquire a lidar image;
a spatial calibration module, configured to spatially calibrate the millimeter-wave radar and the lidar and obtain target calibration pose information;
a temporal calibration module, configured to match the millimeter-wave radar image and the lidar image by timestamp and complete the temporal calibration of the millimeter-wave radar and the lidar;
a target inference module, configured to build a deep neural network for inference on lidar images and use it to generate lidar target inference results;
a pseudo-labeling module, configured to project the target inference results, according to the target calibration pose information, onto the matching millimeter-wave radar image as pseudo-labels of that image;
a confidence generation module, configured to generate a radar target confidence score from the local millimeter-wave radar signal in the region of each pseudo-label;
a data set establishment module, configured to establish a millimeter-wave radar data set from the target confidence scores and pseudo-labels.
Steps S1 to S8 may also be implemented by the corresponding modules of the vehicle: specifically, S1 by the first image acquisition module, S2 by the second image acquisition module, S3 by the spatial calibration module, S4 by the temporal calibration module, S5 by the target inference module, S6 by the pseudo-labeling module, S7 by the confidence generation module, and S8 by the data set establishment module.
Once the millimeter-wave radar and the lidar are mounted on the vehicle, the positions of the two sensors can be measured to obtain rough calibration pose information for each, which serves as the initial calibration pose information.
In step S1, the first image acquisition module acquires the millimeter-wave radar image, comprising:
S11, collecting millimeter-wave radar signals and performing digital signal processing to obtain an initial millimeter-wave radar image;
S12, performing a coordinate-system transformation on the initial millimeter-wave radar image to obtain a millimeter-wave radar image in Cartesian coordinates together with its corresponding timestamp.
Correspondingly, the first image acquisition module comprises:
a first collection and processing unit, configured to collect millimeter-wave radar signals and perform digital signal processing to obtain the initial millimeter-wave radar image;
a first coordinate transformation unit, configured to perform the coordinate-system transformation on the initial millimeter-wave radar image and obtain the Cartesian-coordinate millimeter-wave radar image and its corresponding timestamp.
S11 and S12 may be implemented by the first image acquisition module or by the units within it. Specifically, S11 is implemented by the first collection and processing unit and S12 by the first coordinate transformation unit.
In S11, the first collection and processing unit first collects radar signals in real urban and highway scenes and performs digital signal processing to obtain the initial millimeter-wave radar image. The digital signal processing can take multiple forms, for example a Fast Fourier Transform (FFT), or a super-resolution algorithm such as Multiple Signal Classification (MUSIC) to generate a super-resolution initial millimeter-wave radar image.
In S12, the first coordinate transformation unit transforms the coordinate system of the initial millimeter-wave radar image. Because the digitally processed millimeter-wave radar image lies in polar coordinates, the initial image must be transformed into Cartesian coordinates to enable the subsequent spatial calibration and label fusion, yielding the Cartesian-coordinate millimeter-wave radar image and its corresponding timestamp.
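As a toy illustration of the FFT route just mentioned, the sketch below recovers a single target's range from an idealised FMCW beat signal. The chirp slope, sample rate, and target range are invented for the demo and are not values from the patent.

```python
import numpy as np

# Illustrative FMCW parameters (assumptions, not values from the patent).
c = 3e8            # speed of light, m/s
S = 30e12          # chirp slope, Hz/s
fs = 10e6          # ADC sample rate, Hz
n_samples = 256

target_range = 20.0                    # metres, ground truth for the demo
f_beat = 2 * target_range * S / c      # beat frequency produced by that range

t = np.arange(n_samples) / fs
beat = np.cos(2 * np.pi * f_beat * t)  # idealised, noise-free beat signal

# Range FFT: the peak bin of the beat spectrum maps back to a range.
spectrum = np.abs(np.fft.rfft(beat))
peak_bin = int(np.argmax(spectrum[1:])) + 1       # skip the DC bin
bin_width = c * fs / (2 * S * n_samples)          # metres per range bin
estimated_range = peak_bin * bin_width
```

A second FFT across chirps would add the velocity axis, and a MUSIC-style super-resolution method would replace the FFT peak search entirely.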
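The polar-to-Cartesian step can be sketched as a nearest-neighbour resampling of the (range, azimuth) image onto an (x, y) grid. The forward-facing layout and the assumed azimuth span of 180 degrees are illustrative conventions, not ones fixed by the patent.

```python
import numpy as np

def polar_to_cartesian(img_polar, r_max, grid_n, x_span, y_span):
    """Nearest-neighbour resampling of a (range, azimuth) radar image onto
    a Cartesian (x, y) grid. Azimuth is assumed to span [-pi/2, +pi/2]
    in front of the sensor, with 0 rad pointing straight ahead."""
    n_r, n_az = img_polar.shape
    xs = np.linspace(-x_span, x_span, grid_n)   # lateral axis
    ys = np.linspace(0.0, y_span, grid_n)       # forward axis
    X, Y = np.meshgrid(xs, ys)
    R = np.hypot(X, Y)
    A = np.arctan2(X, Y)                        # 0 rad = straight ahead
    r_idx = np.clip(np.round(R / r_max * (n_r - 1)).astype(int), 0, n_r - 1)
    a_idx = np.clip(np.round((A + np.pi / 2) / np.pi * (n_az - 1)).astype(int),
                    0, n_az - 1)
    out = img_polar[r_idx, a_idx]               # gather nearest polar cell
    out[R > r_max] = 0.0                        # outside the measured range
    return out
```

Bilinear interpolation would give smoother images; nearest-neighbour keeps the sketch short.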
In step S2, the second image acquisition module acquires the lidar image, comprising:
S21, collecting lidar signals and performing digital signal processing on them to obtain an initial lidar image;
S22, performing frame reconstruction on the initial lidar image when the vehicle speed exceeds a preset speed threshold;
S23, transforming the frame-reconstructed lidar image into Cartesian coordinates according to the initial calibration pose information recorded when the lidar was installed, and acquiring the timestamp corresponding to the lidar image.
Correspondingly, the second image acquisition module comprises:
a second collection and processing unit, configured to collect lidar signals and perform digital signal processing on them to obtain the initial lidar image;
a frame reconstruction unit, configured to perform frame reconstruction on the initial lidar image when the vehicle speed exceeds the preset speed threshold;
a second coordinate transformation unit, configured to transform the frame-reconstructed lidar image into Cartesian coordinates according to the initial calibration pose information recorded when the lidar was installed, and acquire the timestamp corresponding to the lidar image.
S21 to S23 may be implemented by the second image acquisition module or by the units within it. Specifically, S21 may be implemented by the second collection and processing unit, S22 by the frame reconstruction unit, and S23 by the second coordinate transformation unit.
The millimeter-wave radar and the lidar work differently: the lidar signal suffers from a rolling-shutter effect that deforms certain targets, while the millimeter-wave radar signal does not, so the lidar signal requires extra processing. At non-highway speeds the rolling-shutter effect is minor and frame reconstruction can be skipped. Referring to Figure 3, a before-and-after comparison of frame reconstruction in the present invention is shown. The line segments extending outward from the center of each image mark the lidar's initial scan angle in the current frame, denoted θ and θ', where θ is the initial scan angle before frame reconstruction and θ' is the initial scan angle to be used for frame reconstruction. Near the initial scan angle in the left image, a passing target vehicle is ghosted, distorting its position and shape; after frame reconstruction, the right image shows the target's shape corrected. Such distortion causes erroneous neural network inference results, for example of target type, position, and size, which in turn makes the millimeter-wave radar pseudo-labels and the target confidence computation inaccurate during label fusion. The cause of the problem is the lidar's rolling-shutter effect. The traditional remedy is velocity compensation, which corrects each target but is complex and requires knowing the target vehicle's relative velocity, making it hard to implement. The present invention instead proposes a frame-reconstruction algorithm that solves the rolling-shutter problem with low complexity by reconstructing the lidar's initial scan angle (the line segments in the figure).
Specifically, in step S22, when the frame reconstruction unit determines that the vehicle speed exceeds the preset speed threshold, i.e. the vehicle is traveling at high speed, it performs frame reconstruction on the initial lidar image, comprising:
computing the target initial scan angle θ' from the coverage of the millimeter-wave radar's field of view on the initial lidar image; θ' only needs to lie outside the radar's field of view;
based on the continuity of the lidar scan, acquiring the image f0 at the current time t0 and the image f1 at the next time t1; computing the times t'0 and t'1 at which the lidar scan reaches the target initial scan angle θ'; and merging the portion of f0 scanned from t'0 to t1 with the portion of f1 scanned from t1 to t'1, reconstructing a single scan image covering t'0 to t'1 as the frame-reconstructed lidar image.
After frame reconstruction, the lidar image is transformed, according to the initial calibration pose information recorded at installation, into the same Cartesian coordinate system as the millimeter-wave radar, and the corresponding lidar timestamp is acquired.
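A stripped-down sketch of the merge at the heart of the frame reconstruction: keep the part of frame f0 at azimuths past the new start angle and the part of f1 before it. The (azimuth, x, y, z) row layout is an assumption for illustration; a real implementation would split by per-point timestamps and handle azimuth wrap-around.

```python
import numpy as np

def reconstruct_frame(points_f0, points_f1, theta_new):
    """Rebuild one lidar revolution so that it starts at azimuth theta_new.
    points_f0 and points_f1 are (N, 4) arrays of (azimuth_rad, x, y, z)
    rows from two consecutive frames that both start at azimuth 0. The
    merged frame is the tail of f0 (azimuth >= theta_new) followed by
    the head of f1 (azimuth < theta_new)."""
    tail = points_f0[points_f0[:, 0] >= theta_new]
    head = points_f1[points_f1[:, 0] < theta_new]
    return np.vstack([tail, head])
```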
Specifically, in step S3, the spatial calibration module matches the millimeter-wave radar signal and the lidar signal of a preset calibration target and computes a spatial transformation matrix to obtain the target calibration pose information. For example, trihedral radar reflectors can be used as calibration targets: the reflector's millimeter-wave radar signal and lidar signal are matched, and the spatial transformation matrix is then computed to obtain accurate calibration pose information, which serves as the target calibration pose information.
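The patent does not spell out how the spatial transformation matrix is computed. One standard choice, sketched here under that assumption, is a least-squares rigid alignment (the Kabsch algorithm) between matched reflector centres seen by the two sensors; the pairing of points is taken as given.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    from N matched 3-D points (Kabsch algorithm). Here src could be
    reflector centres in the radar frame and dst the same reflectors in
    the lidar frame."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```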
Specifically, in step S4, the temporal calibration module computes the time offset between the millimeter-wave radar and the lidar from preset segments of their signals to complete the temporal calibration, and generates calibrated timestamps. A preset segment can be one in which a fast-moving target passes by: matching the millimeter-wave radar response and the lidar response of that target in the images yields the time offset between the two sensors and completes the temporal calibration.
After the spatial calibration of S3 and the temporal calibration of S4, a fast-moving target in timestamp-matched millimeter-wave radar and lidar images overlaps completely, and the overlaid images are used for annotation and quality assessment.
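One common way to recover such a time offset, assuming each sensor's response to the passing target has been reduced to a scalar time series sampled at the same rate, is cross-correlation:

```python
import numpy as np

def estimate_time_offset(sig_a, sig_b, dt):
    """Estimate the delay of sig_b relative to sig_a, in seconds, via
    cross-correlation of two equally sampled scalar responses (sampling
    interval dt). Positive result means sig_b lags sig_a."""
    corr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(),
                        mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag * dt
```

Reducing each frame sequence to a scalar response (e.g. peak intensity of the tracked target) is an assumption of this sketch, not a step specified by the patent.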
In step S5, a model-ensemble approach is used to build the deep neural network that performs inference on lidar images and to generate the lidar target inference results, specifically comprising:
S51, applying test-time augmentation to the input lidar image. Many schemes are possible; the present invention flips the lidar image horizontally and vertically and rotates it, generating eight groups of data for inference, each 90° apart. Other rotation angles could also be used, or random drop-out of lidar points, and so on.
S52, feeding the augmented lidar images into two or more different neural network models for inference, so that one group of original images yields multiple groups of inference results from different angles and models. The present invention uses two different state-of-the-art models, Complex-YOLO and PointRCNN, so one group of original images yields sixteen groups of inference results.
S53, filtering the sixteen groups of predictions: if fewer predictions than a given threshold agree on a target at the same location, the target is judged a false positive and removed. To preserve recall, this scheme sets the threshold to 2.
S54, ensembling the remaining groups of inference results into the lidar-based target inference results, including objectness score, type, position, size, and angle. Many ensembling methods exist; the present invention uses a non-maximum-suppression-based algorithm, and other ensembling methods may be used as required, without limitation here.
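The eight flip-and-rotate variants described in S51 form the dihedral symmetries of a square image; a minimal sketch for a bird's-eye-view array:

```python
import numpy as np

def tta_variants(img):
    """The eight dihedral variants of a square BEV image: the four
    90-degree rotations of the image and of its horizontal flip."""
    variants = []
    for base in (img, np.fliplr(img)):
        for k in range(4):
            variants.append(np.rot90(base, k))
    return variants
```

After inference, each prediction must be rotated and flipped back into the original frame before the results are pooled.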
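Steps S53 and S54 can be sketched together: pool the boxes from all models and augmentations, drop locations supported by fewer than two boxes, then reduce the survivors with non-maximum suppression. Axis-aligned boxes and the IoU threshold of 0.5 are simplifying assumptions; a lidar detector would output rotated boxes.

```python
def iou(a, b):
    """Axis-aligned IoU of boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ensemble_detections(boxes, scores, vote_thresh=2, iou_thresh=0.5):
    """Keep a box only if at least vote_thresh pooled boxes (itself
    included) agree on its location, then apply greedy NMS to the
    survivors. Returns the indices of the kept boxes."""
    votes = [sum(iou(b, o) >= iou_thresh for o in boxes) for b in boxes]
    keep = [i for i, v in enumerate(votes) if v >= vote_thresh]
    keep.sort(key=lambda i: scores[i], reverse=True)
    out = []
    for i in keep:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in out):
            out.append(i)
    return out
```

Because a box votes for itself, the threshold of 2 means at least one other model or augmentation must agree, matching the false-positive rule of S53.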
In step S6, according to the target calibration pose information obtained in S3, i.e. the accurate calibration pose information, the lidar inference results are projected onto the matching millimeter-wave radar image as pseudo-labels of that image.
Specifically, S7 comprises:
enlarging the detection box corresponding to the local millimeter-wave radar signal in the pseudo-labeled region to a preset ratio; computing the signal-strength frequency histogram of the millimeter-wave radar signal within the enlarged detection box; plotting the corresponding cumulative distribution function from the histogram; normalizing the cumulative distribution function and computing the area under the curve; and generating the radar target confidence score from the magnitude of that area. The radar target confidence score serves as an important parameter of the millimeter-wave radar labels: when a neural network is trained on the data set, low-confidence radar targets are removed to keep the network from overfitting to the background noise of the millimeter-wave radar signal.
In an embodiment of the present invention, the response strength is computed from the local millimeter-wave radar signal in the pseudo-labeled region to judge whether a valid millimeter-wave radar signal is present and to generate the radar target confidence score. For example, the preset ratio may be set to 120%, and the detection box corresponding to the local millimeter-wave radar signal in the pseudo-labeled region is enlarged to 120% so as to fully cover the radar signal produced by the target object's surface. The signal-strength frequency histogram of the millimeter-wave radar signal within the enlarged detection box is computed; the corresponding Cumulative Distribution Function is then plotted from the histogram; finally, the cumulative distribution function is normalized and the Area Under Curve is computed.
The magnitude of this area reflects the strength of the millimeter-wave radar signal of the object in the detection box. The larger the value, the stronger the object's millimeter-wave radar signal and the more likely the detection box is a true positive; the smaller the value, the more likely it is a false positive. In the present invention, false-positive detection boxes can be filtered out by setting the area-under-curve threshold to 0.1.
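A sketch of this confidence computation. The patent does not fix the normalisation of the signal strengths, so this version scales them by a hypothetical `noise_floor` reference and integrates the complementary CDF, which grows with signal strength and thus matches the stated behaviour (larger area means stronger target); the literal axis conventions may differ from the patent's.

```python
import numpy as np

def radar_confidence(patch, noise_floor, bins=64):
    """Confidence score for one pseudo-labelled region of a radar image.
    patch is the detection box already enlarged to 120%. Build a
    histogram of normalised signal strengths, form its cumulative
    distribution, and integrate the complementary CDF over [0, 1]."""
    x = np.clip(patch.ravel() / noise_floor, 0.0, 1.0)
    hist, edges = np.histogram(x, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / max(hist.sum(), 1)
    # Rectangle-rule area under (1 - CDF); equals the mean clipped strength.
    return float(np.sum((1.0 - cdf) * np.diff(edges)))
```

With a threshold like the patent's 0.1, a box whose patch contains only weak background noise scores near zero and is filtered out.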
In S8, the millimeter-wave radar data set is established from the target confidence scores and pseudo-labels. Depending on the vehicle's use cases or the design of the deep neural network, multiple forms of millimeter-wave radar image may be needed, such as FFT images or images processed by a super-resolution algorithm, in polar or in Cartesian coordinates. When the final data set is built, a suitable image form is selected according to the target confidence, the pseudo-labels are transformed accordingly, and the resulting pseudo-labeled data set is generated as the millimeter-wave radar data set.
In summary, compared with traditional manual annotation, the present invention uses lidar to automate annotation and improve annotation efficiency; in high-speed driving, the lidar frame-reconstruction algorithm removes the rolling-shutter effect of the lidar signal in a relatively simple way; and the model-ensemble-based deep neural network achieves high-recall target recognition and filters out false-positive detection boxes, improving annotation quality.
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations; however, those skilled in the art will appreciate that the embodiments of the present invention are not limited by the order of actions described, since according to these embodiments some steps may be performed in other orders or simultaneously. Those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
An embodiment of the present invention further provides a vehicle, comprising:
a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements each process of the above embodiment of the method for establishing a data set and achieves the same technical effect, which is not repeated here to avoid redundancy.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program; when executed by a processor, the computer program implements each process of the above embodiment of the method for establishing a data set and achieves the same technical effect, which is not repeated here to avoid redundancy.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to one another.
Those skilled in the art will understand that embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, causing a series of operational steps to be performed on the computer or other programmable terminal device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that relational terms such as first and second are used herein only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise" and "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device comprising that element.
The method for establishing a data set, the vehicle, and the storage medium provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, those of ordinary skill in the art may, in accordance with the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (11)

  1. A method for establishing a data set, applied to a vehicle equipped with a millimeter-wave radar and a lidar, the method comprising:
    acquiring a millimeter-wave radar image;
    acquiring a lidar image;
    spatially calibrating the millimeter-wave radar and the lidar to obtain target calibration pose information;
    matching the millimeter-wave radar image and the lidar image by timestamp to complete the temporal calibration of the millimeter-wave radar and the lidar;
    building a deep neural network for inference on lidar images and using it to generate lidar target inference results;
    projecting the target inference results, according to the target calibration pose information, onto the matching millimeter-wave radar image to serve as pseudo-labels of that image;
    generating a radar target confidence score from the local millimeter-wave radar signal in the region of each pseudo-label;
    establishing a millimeter-wave radar data set from the target confidence scores and pseudo-labels.
  2. The method according to claim 1, wherein acquiring a millimeter-wave radar image comprises:
    collecting millimeter-wave radar signals and performing digital signal processing to obtain an initial millimeter-wave radar image;
    performing a coordinate-system transformation on the initial millimeter-wave radar image to obtain a millimeter-wave radar image in Cartesian coordinates together with its corresponding timestamp.
  3. The method according to claim 2, wherein acquiring a lidar image comprises:
    collecting lidar signals and performing digital signal processing on them to obtain an initial lidar image;
    performing frame reconstruction on the initial lidar image when the vehicle speed exceeds a preset speed threshold;
    transforming the frame-reconstructed lidar image into Cartesian coordinates according to the initial calibration pose information recorded when the lidar was installed, and acquiring the timestamp corresponding to the lidar image.
  4. The method according to claim 3, wherein performing frame reconstruction on the initial lidar image comprises:
    computing a target initial scan angle from the coverage of the millimeter-wave radar's field of view on the initial lidar image;
    acquiring the image f0 at the current time t0 and the image f1 at the next time t1; computing the times t'0 and t'1 at which the lidar scan reaches the target initial scan angle; and merging the portion of f0 scanned from t'0 to t1 with the portion of f1 scanned from t1 to t'1, reconstructing a single scan image covering t'0 to t'1 as the frame-reconstructed lidar image.
  5. The method according to claim 1, wherein spatially calibrating the millimeter-wave radar and the lidar to obtain target calibration pose information comprises:
    matching the millimeter-wave radar signal and the lidar signal of a preset calibration target, and computing a spatial transformation matrix to obtain the target calibration pose information.
  6. The method according to claim 5, wherein matching the millimeter-wave radar image and the lidar image by timestamp to complete the temporal calibration of the millimeter-wave radar and the lidar comprises:
    computing the time offset between the millimeter-wave radar and the lidar from preset segments of their signals to complete the temporal calibration, and generating calibrated timestamps.
  7. The method according to claim 1, wherein building a deep neural network for inference on lidar images and using it to generate lidar target inference results comprises:
    building the deep neural network with a model-ensemble approach and using it to generate the lidar target inference results.
  8. The method according to claim 1, wherein generating a radar target confidence score from the local millimeter-wave radar signal in the region of a pseudo-label comprises:
    enlarging the detection box corresponding to the local millimeter-wave radar signal in the pseudo-labeled region to a preset ratio; computing the signal-strength frequency histogram of the millimeter-wave radar signal within the enlarged detection box; plotting the corresponding cumulative distribution function from the histogram; normalizing the cumulative distribution function and computing the area under the curve; and generating the radar target confidence score from the magnitude of that area.
  9. A vehicle, comprising:
    a first image acquisition module, configured to acquire a millimeter-wave radar image;
    a second image acquisition module, configured to acquire a lidar image;
    a spatial calibration module, configured to spatially calibrate the millimeter-wave radar and the lidar and obtain target calibration pose information;
    a temporal calibration module, configured to match the millimeter-wave radar image and the lidar image by timestamp and complete the temporal calibration of the millimeter-wave radar and the lidar;
    a target inference module, configured to build a deep neural network for inference on lidar images and use it to generate lidar target inference results;
    a pseudo-labeling module, configured to project the target inference results, according to the target calibration pose information, onto the matching millimeter-wave radar image as pseudo-labels of that image;
    a confidence generation module, configured to generate a radar target confidence score from the local millimeter-wave radar signal in the region of each pseudo-label;
    a data set establishment module, configured to establish a millimeter-wave radar data set from the target confidence scores and pseudo-labels.
  10. A vehicle, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for establishing a data set according to any one of claims 1 to 8.
  11. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method for establishing a data set according to any one of claims 1 to 8.
PCT/CN2020/121496 2020-06-30 2020-10-16 Method for establishing a data set, vehicle, and storage medium WO2022000857A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20900681.6A EP3961256A4 (en) 2020-06-30 2020-10-16 DATA SET ESTABLISHMENT METHOD AND STORAGE MEDIA

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010618892.7 2020-06-30
CN202010618892.7A CN111830502B (zh) 2020-06-30 2020-06-30 数据集的建立方法、车辆和存储介质

Publications (1)

Publication Number Publication Date
WO2022000857A1 true WO2022000857A1 (zh) 2022-01-06

Family

ID=72899947

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121496 WO2022000857A1 (zh) 2020-06-30 2020-10-16 数据集的建立方法、车辆和存储介质

Country Status (3)

Country Link
EP (1) EP3961256A4 (zh)
CN (1) CN111830502B (zh)
WO (1) WO2022000857A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638262A (zh) * 2022-03-09 2022-06-17 西北工业大学 基于时频二维特征学习的雷达对海杂波智能抑制方法
CN114924274A (zh) * 2022-04-08 2022-08-19 苏州大学 一种基于固定网格的高动态铁路环境雷达感知方法
CN115144843A (zh) * 2022-06-28 2022-10-04 海信集团控股股份有限公司 一种物体位置的融合方法及装置
CN115393680A (zh) * 2022-08-08 2022-11-25 武汉理工大学 雾天场景下多模态信息时空融合的3d目标检测方法及系统

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112419717B (zh) * 2020-11-13 2022-03-11 中国第一汽车股份有限公司 目标管理方法、装置、车辆及存储介质
CN117036868B (zh) * 2023-10-08 2024-01-26 之江实验室 一种人体感知模型的训练方法、装置、介质及电子设备

Citations (6)

Publication number Priority date Publication date Assignee Title
US20160274589A1 (en) * 2012-09-26 2016-09-22 Google Inc. Wide-View LIDAR With Areas of Special Attention
CN108229366A (zh) * 2017-12-28 2018-06-29 北京航空航天大学 基于雷达和图像数据融合的深度学习车载障碍物检测方法
CN108694731A (zh) * 2018-05-11 2018-10-23 武汉环宇智行科技有限公司 基于低线束激光雷达和双目相机的融合定位方法及设备
CN108872991A (zh) * 2018-05-04 2018-11-23 上海西井信息科技有限公司 目标物检测与识别方法、装置、电子设备、存储介质
CN110363820A (zh) * 2019-06-28 2019-10-22 东南大学 一种基于激光雷达、图像前融合的目标检测方法
CN111077506A (zh) * 2019-12-12 2020-04-28 苏州智加科技有限公司 对毫米波雷达进行标定的方法、装置及系统

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP3052286B2 (ja) * 1997-08-28 2000-06-12 防衛庁技術研究本部長 飛行システムおよび航空機用擬似視界形成装置
CN108509972A (zh) * 2018-01-16 2018-09-07 天津大学 一种基于毫米波和激光雷达的障碍物特征提取方法
CN110018470A (zh) * 2019-03-01 2019-07-16 北京纵目安驰智能科技有限公司 基于多传感器前融合的实例标注方法、模型、终端和存储介质
CN110794396B (zh) * 2019-08-05 2021-08-17 上海埃威航空电子有限公司 基于激光雷达和导航雷达的多目标识别方法及系统
CN111339876B (zh) * 2020-02-19 2023-09-01 北京百度网讯科技有限公司 用于识别场景中各区域类型的方法和装置
CN111310840B (zh) * 2020-02-24 2023-10-17 北京百度网讯科技有限公司 数据融合处理方法、装置、设备和存储介质


Non-Patent Citations (1)

Title
See also references of EP3961256A4 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN114638262A (zh) * 2022-03-09 2022-06-17 西北工业大学 基于时频二维特征学习的雷达对海杂波智能抑制方法
CN114638262B (zh) * 2022-03-09 2024-02-27 西北工业大学 基于时频二维特征学习的雷达对海杂波智能抑制方法
CN114924274A (zh) * 2022-04-08 2022-08-19 苏州大学 一种基于固定网格的高动态铁路环境雷达感知方法
CN114924274B (zh) * 2022-04-08 2023-06-30 苏州大学 一种基于固定网格的高动态铁路环境雷达感知方法
CN115144843A (zh) * 2022-06-28 2022-10-04 海信集团控股股份有限公司 一种物体位置的融合方法及装置
CN115393680A (zh) * 2022-08-08 2022-11-25 武汉理工大学 雾天场景下多模态信息时空融合的3d目标检测方法及系统

Also Published As

Publication number Publication date
EP3961256A1 (en) 2022-03-02
CN111830502B (zh) 2021-10-12
EP3961256A4 (en) 2022-08-17
CN111830502A (zh) 2020-10-27

Similar Documents

Publication Publication Date Title
WO2022000857A1 (zh) 数据集的建立方法、车辆和存储介质
Wang et al. RODNet: A real-time radar object detection network cross-supervised by camera-radar fused object 3D localization
Wang et al. Fusing bird’s eye view lidar point cloud and front view camera image for 3d object detection
Lim et al. Radar and camera early fusion for vehicle detection in advanced driver assistance systems
CN109949372B (zh) 一种激光雷达与视觉联合标定方法
Dong et al. Probabilistic oriented object detection in automotive radar
CN111126152A (zh) 一种基于视频的多目标行人检测与跟踪的方法
CN111553859A (zh) 一种激光雷达点云反射强度补全方法及系统
US11210801B1 (en) Adaptive multi-sensor data fusion method and system based on mutual information
Strbac et al. YOLO multi-camera object detection and distance estimation
CN111222395A (zh) 目标检测方法、装置与电子设备
US20170039727A1 (en) Methods and Systems for Detecting Moving Objects in a Sequence of Image Frames Produced by Sensors with Inconsistent Gain, Offset, and Dead Pixels
CN105335955A (zh) 对象检测方法和对象检测装置
CN111696196B (zh) 一种三维人脸模型重建方法及装置
CN103679167A (zh) 一种ccd图像处理的方法
CN111080784A (zh) 一种基于地面图像纹理的地面三维重建方法和装置
US11436452B2 (en) System and method for label augmentation in video data
CN113139602A (zh) 基于单目相机和激光雷达融合的3d目标检测方法及系统
CN114445310A (zh) 一种3d目标检测方法、装置、电子设备和介质
CN111856445B (zh) 一种目标检测方法、装置、设备及系统
CN114663598A (zh) 三维建模方法、装置和存储介质
CN113191427B (zh) 一种多目标车辆跟踪方法及相关装置
WO2020253764A1 (zh) 确定行驶区域信息的方法及装置
JP2018124963A (ja) 画像処理装置、画像認識装置、画像処理プログラム、及び画像認識プログラム
CN113895482B (zh) 基于轨旁设备的列车测速方法及装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020900681

Country of ref document: EP

Effective date: 20210621

NENP Non-entry into the national phase

Ref country code: DE