CN116203577A - Target detection method and device based on multi-signal fusion - Google Patents


Info

Publication number
CN116203577A
Authority
CN
China
Prior art keywords
data
radar
point cloud
target
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111452399.3A
Other languages
Chinese (zh)
Inventor
杨炎龙
李娟娟
孟凡志
吴雷
邓永强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wanji Technology Co Ltd
Original Assignee
Beijing Wanji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wanji Technology Co Ltd
Priority to CN202111452399.3A
Publication of CN116203577A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865 Combination of radar systems with lidar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

An embodiment of the present application provides a target detection method and device based on multi-signal fusion, belonging to the technical field of road perception. The method comprises the following steps: acquiring initial echo data collected by a millimeter wave radar and initial point cloud data collected by a laser radar; processing the initial echo data to obtain radar cube data, wherein the radar cube data comprises information in three dimensions: speed, azimuth and distance; acquiring integrated data based on the initial point cloud data and the radar cube data; and inputting the integrated data into a target neural network model to obtain a target detection result. By processing and then fusing the laser radar signals and the millimeter wave signals, and feeding the fused data into a neural network model to obtain the target detection result, the method addresses the low accuracy and poor detection effect of target detection that relies on a single sensor signal.

Description

Target detection method and device based on multi-signal fusion
Technical Field
The application belongs to the technical field of road perception, and particularly relates to a target detection method and device based on multi-signal fusion.
Background
With the development of autonomous driving technology, road perception based on the fused information of multiple sensors has become the dominant approach. Fusing signals from multiple sensor types overcomes the weakness of a single sensor signal, which is easily affected by environmental factors such as weather and illumination; and because different sensor types have different acquisition strengths, they can compensate for one another's limitations when used together. For example, current lidar perception is mainly point cloud based: it offers strong stereoscopic sensing and distinguishes target types easily, but its detection accuracy is low for targets whose point cloud data is sparse, and point cloud detection is strongly affected by target occlusion. As another example, millimeter wave radar is better suited to detecting moving targets, and exhibits larger errors when detecting static targets.
Therefore, how to improve the poor detection effect obtained when target detection relies only on the signal acquired by a single sensor is a problem to be solved.
Disclosure of Invention
The present application provides a method and device for target detection based on multi-signal fusion. Laser radar signals and millimeter wave signals are processed and then fused, and the fused data is input into a neural network model for processing to obtain a target detection result, which can solve the problems of low accuracy and poor detection effect when target detection relies only on a single sensor signal.
In a first aspect, a method for detecting a target based on multi-signal fusion is provided, including:
acquiring initial echo data acquired by a millimeter wave radar and initial point cloud data acquired by a laser radar;
processing the initial echo data to obtain radar cube data; the radar cube data comprises information of three dimensions of speed, azimuth and distance;
acquiring integration data based on the initial point cloud data and the radar cube data;
and inputting the integrated data into a target neural network model to obtain a target detection result.
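As a rough end-to-end illustration of the four steps above, the following is a hypothetical sketch with simulated inputs: the array shapes, the way radar energy is attached to each lidar point, and the stubbed-out network are all assumptions for illustration, not the patented implementation.

```python
import numpy as np

def echo_to_radar_cube(echo):
    """3D-FFT over the (range, Doppler, angle) axes -- a common way to
    obtain a speed/azimuth/distance cube from raw mmWave samples."""
    return np.abs(np.fft.fftn(echo, axes=(0, 1, 2)))

def integrate(point_cloud, cube):
    """Hypothetical integration: attach to each lidar point the radar
    energy sampled at the cube cell nearest that point's coordinates."""
    idx = np.clip(point_cloud[:, :3].astype(int), 0, np.array(cube.shape) - 1)
    energy = cube[idx[:, 0], idx[:, 1], idx[:, 2]]
    return np.column_stack([point_cloud, energy])

# Simulated inputs: raw echo samples and an N x 4 (x, y, z, intensity) cloud.
echo = np.random.default_rng(0).standard_normal((8, 8, 8))
cloud = np.abs(np.random.default_rng(1).standard_normal((5, 4))) * 4

fused = integrate(cloud, echo_to_radar_cube(echo))
print(fused.shape)  # each point now carries an extra radar energy feature
```

In the patented method, the fused array would then be fed to the target neural network model; this sketch stops at the fused features.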
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes:
transmitting pulses to the millimeter wave radar through the laser radar, so that the millimeter wave radar and the laser radar respectively acquire the initial echo data and the initial point cloud data synchronously; and/or,
and adjusting the frame rate at which the millimeter wave radar or the laser radar transmits signals, and aligning the time stamps of the initial echo data and the initial point cloud data, so that the millimeter wave radar and the laser radar respectively and synchronously acquire the initial echo data and the initial point cloud data.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes:
calibrating the millimeter wave radar space and the laser radar space, and determining space synchronization parameters of the millimeter wave radar and the laser radar;
and carrying out rotation-translation transformation in space on the initial echo data and the initial point cloud data according to the space synchronization parameters, so as to obtain the initial echo data and the initial point cloud data in the same space.
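A minimal sketch of the rotation-translation step, assuming the space synchronization parameters take the usual extrinsic-calibration form of a rotation matrix R and a translation vector t (the particular values below are invented for illustration):

```python
import numpy as np

def to_common_frame(points, R, t):
    """Apply the rotation-translation (extrinsic) transform that maps
    points from one sensor's coordinate frame into the other's."""
    return points @ R.T + t

# Hypothetical calibration result: a 90-degree yaw plus a 1 m offset in x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

radar_points = np.array([[1.0, 0.0, 0.0]])
print(to_common_frame(radar_points, R, t))  # ~ [[1., 1., 0.]]
```

Once both data sets are expressed in the same frame, the per-coordinate integration described later can key on shared positions.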
With reference to the first aspect, in some implementations of the first aspect, the processing the initial echo data to obtain radar cube data specifically includes:
performing a three-dimensional fast Fourier transform (3D-FFT) on the initial echo data to obtain radar cube data, wherein the radar cube data is used for indicating energy values corresponding to different speeds, azimuths and distances;
the acquiring integrated data based on the initial point cloud data and the radar cube data specifically includes:
taking the part of the radar cube data, the energy value of which is larger than a first threshold value, as target radar cube data;
and acquiring integrated data based on the initial point cloud data and the target radar cube data.
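The 3D-FFT and first-threshold selection might look roughly as follows; the axis ordering (samples, chirps, antennas), the squared-magnitude energy, and the threshold of five times the mean are all illustrative assumptions:

```python
import numpy as np

def radar_cube(echo_samples):
    """3D fast Fourier transform over the sample, chirp and antenna axes,
    yielding an energy value per (distance, speed, azimuth) cell."""
    return np.abs(np.fft.fftn(echo_samples, axes=(0, 1, 2))) ** 2

def select_targets(cube, first_threshold):
    """Keep only the cells whose energy exceeds the first threshold."""
    mask = cube > first_threshold
    return np.argwhere(mask), cube[mask]

# Simulated echo: weak noise plus one tone, i.e. one strong scatterer.
rng = np.random.default_rng(42)
n, m, p = np.ogrid[:16, :8, :4]
echo = 0.1 * rng.standard_normal((16, 8, 4)) \
       + np.cos(2 * np.pi * (3 * n / 16 + 2 * m / 8 + 1 * p / 4))

cube = radar_cube(echo)
cells, energies = select_targets(cube, first_threshold=5 * cube.mean())
print(cells)  # the tone appears at bin (3, 2, 1) and its mirror (13, 6, 3)
```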
With reference to the first aspect, in certain implementations of the first aspect, acquiring integration data based on the initial point cloud data and the radar cube data includes:
denoising the initial point cloud data to obtain target point cloud data;
and acquiring integrated data based on the radar cube data and the target point cloud data.
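The denoising scheme is not fixed by the claim; as one hypothetical choice, a brute-force statistical outlier removal (dropping points unusually far from their nearest neighbours) could look like:

```python
import numpy as np

def denoise(points, k=4, std_ratio=1.0):
    """Simple statistical outlier removal: drop points whose mean distance
    to their k nearest neighbours is unusually large."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self (0)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

# A tight cluster of 30 points plus one far-away spurious return.
cluster = np.random.default_rng(0).normal(0.0, 0.1, size=(30, 3))
outlier = np.array([[50.0, 50.0, 50.0]])
cloud = np.vstack([cluster, outlier])
clean = denoise(cloud)
print(len(cloud), "->", len(clean))  # the far-away outlier is removed
```

This O(N^2) pairwise-distance form is only for small clouds; a KD-tree would be used at lidar scale.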
With reference to the first aspect, in certain implementation manners of the first aspect, the radar cube data is used to indicate energy values corresponding to different speeds, directions and distances;
correspondingly, the acquiring the integration data based on the initial point cloud data and the radar cube data specifically includes:
taking the part of the radar cube data, the energy value of which is larger than a first threshold value, as target radar cube data;
denoising the initial point cloud data to obtain target point cloud data;
carrying out regularization processing on the target point cloud data and the target radar cube data to obtain, respectively, first regularized data and second regularized data with a uniform signal amplitude range;
and integrating the first regularized data and the second regularized data to obtain integrated data.
With reference to the first aspect, in certain implementation manners of the first aspect, the first regularized data comprises a plurality of three-dimensional space coordinates and the energy values corresponding to the three-dimensional space coordinates; the second regularized data comprises a plurality of the three-dimensional space coordinates and, for each coordinate, the corresponding speed, radar cross section energy and initial echo data at the position of that coordinate;
The integrating the first regularized data and the second regularized data to obtain the integrated data specifically includes:
and merging the first regularized data and the second regularized data corresponding to the same three-dimensional space coordinates to obtain the integrated data, wherein the integrated data comprises the three-dimensional space coordinates and, for each coordinate, the corresponding energy value, speed, radar cross section energy and initial echo data.
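The regularization scheme and the merge keying are not pinned down by the claim; the sketch below assumes min-max scaling into [0, 1] as the "uniform amplitude range" and uses invented feature names (`energy`, `velocity`, `rcs`) keyed on shared integer (x, y, z) coordinates:

```python
import numpy as np

def regularize(values):
    """Scale one feature channel into a uniform [0, 1] amplitude range
    (a min-max sketch; the patent does not fix the exact scheme)."""
    lo, hi = values.min(), values.max()
    return (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values)

# Hypothetical per-coordinate features keyed by (x, y, z).
lidar = {(1, 2, 0): {"energy": 10.0}, (3, 1, 0): {"energy": 30.0}}
radar = {(1, 2, 0): {"velocity": 5.0, "rcs": 2.0},
         (3, 1, 0): {"velocity": 15.0, "rcs": 6.0}}

energy = regularize(np.array([lidar[c]["energy"] for c in lidar]))
merged = {c: {"energy": e, **radar[c]}
          for (c, e) in zip(lidar, energy) if c in radar}
print(merged[(1, 2, 0)])
```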
With reference to the first aspect, in certain implementations of the first aspect, the target detection result includes a speed, a location, and a category of the target.
In a second aspect, there is provided a roadside apparatus comprising:
one or more memories;
one or more processors;
the one or more memories store computer program instructions that, when executed by the one or more processors, cause the roadside device to:
acquiring initial echo data acquired by a millimeter wave radar and initial point cloud data acquired by a laser radar;
processing the initial echo data to obtain radar cube data; the radar cube data comprises information of three dimensions of speed, azimuth and distance;
Acquiring integration data based on the initial point cloud data and the radar cube data;
and inputting the integrated data into a target neural network model to obtain a target detection result.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors, when executing the computer program instructions, cause the roadside device to implement the steps of:
transmitting pulses to the millimeter wave radar through the laser radar, so that the millimeter wave radar and the laser radar respectively acquire the initial echo data and the initial point cloud data synchronously; and/or,
and adjusting the frame rate of the millimeter wave radar or the laser radar transmitting signal, and aligning the time stamps of the initial echo data and the initial point cloud data, so that the millimeter wave radar and the laser radar respectively and synchronously acquire the initial echo data and the initial point cloud data.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors, when executing the computer program instructions, cause the roadside device to implement the steps of:
calibrating the millimeter wave radar space and the laser radar space, and determining space synchronization parameters of the millimeter wave radar and the laser radar;
and carrying out rotation-translation transformation in space on the initial echo data and the initial point cloud data according to the space synchronization parameters, so as to obtain the initial echo data and the initial point cloud data in the same space.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors, when executing the computer program instructions, cause the roadside device to implement the steps of:
performing a three-dimensional fast Fourier transform (3D-FFT) on the initial echo data to obtain radar cube data, wherein the radar cube data is used for indicating energy values corresponding to different speeds, azimuths and distances;
the acquiring integrated data based on the initial point cloud data and the radar cube data specifically includes:
taking the part of the radar cube data, the energy value of which is larger than a first threshold value, as target radar cube data;
and acquiring integrated data based on the initial point cloud data and the target radar cube data.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors, when executing the computer program instructions, cause the roadside device to implement the steps of:
Denoising the initial point cloud data to obtain target point cloud data;
and acquiring integrated data based on the radar cube data and the target point cloud data.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors, when executing the computer program instructions, cause the roadside device to implement the steps of:
the acquiring the integrated data based on the initial point cloud data and the radar cube data specifically includes:
taking the part of the radar cube data, the energy value of which is larger than a first threshold value, as target radar cube data;
denoising the initial point cloud data to obtain target point cloud data;
carrying out regularization processing on the target point cloud data and the target radar cube data to obtain, respectively, first regularized data and second regularized data with a uniform signal amplitude range;
and integrating the first regularized data and the second regularized data to obtain integrated data.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors, when executing the computer program instructions, cause the roadside device to implement the steps of:
the first regularized data comprises a plurality of three-dimensional space coordinates and the energy values corresponding to the three-dimensional space coordinates; the second regularized data comprises a plurality of the three-dimensional space coordinates and, for each coordinate, the corresponding speed, radar cross section energy and initial echo data at the position of that coordinate;
the integrating the first regularized data and the second regularized data to obtain the integrated data specifically includes:
and merging the first regularized data and the second regularized data corresponding to the same three-dimensional space coordinates to obtain the integrated data, wherein the integrated data comprises the three-dimensional space coordinates and, for each coordinate, the corresponding energy value, speed, radar cross section energy and initial echo data.
In a third aspect, there is provided a roadside system comprising:
road side equipment;
millimeter wave radar;
a laser radar;
the roadside device includes one or more memories, and one or more processors;
the millimeter wave radar is used for collecting the initial echo data, the laser radar is used for collecting the initial point cloud data, the one or more memories store computer program instructions, and when the one or more processors execute the computer program instructions, the roadside device is caused to execute the method according to any implementation manner of the first aspect.
In a fourth aspect, a computer readable storage medium is provided, comprising computer program instructions which, when executed, cause the method according to any of the above-mentioned first aspect implementations to be implemented.
In a fifth aspect, there is provided a computer program product comprising computer instructions which, when executed in a computer, cause the method described in any of the implementations of the first aspect above to be implemented.
In a sixth aspect, a chip is provided, the chip comprising computer instructions which, when executed in a computer, cause the method described in any of the implementations of the first aspect described above to be implemented.
Drawings
Fig. 1 is a schematic diagram of a system architecture to which a method for target detection based on multi-signal fusion is applicable in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a computing system in a roadside device according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a method for target detection based on multi-signal fusion according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a millimeter wave radar and lidar mounting arrangement provided by an embodiment of the present application.
Fig. 5A to 5C are schematic flow diagrams of some target detection by a neural network according to embodiments of the present application.
Fig. 6 is a schematic structural diagram of a road side device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings in the embodiments of the present application.
It should be noted that the terms used in the implementation section of the embodiments of the present application are only used to explain the specific embodiments of the present application, and are not intended to limit the present application. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B both exist, or B exists alone. In addition, in the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more, and "at least one" or "one or more" means one, two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a definition of "a first", "a second" feature may explicitly or implicitly include one or more of such features.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In order to facilitate understanding of the solutions described in the embodiments of the present application, the following explains some terms that may be related to the embodiments of the present application:
1. feature layer information fusion
Feature layer information fusion combines feature vectors extracted from the raw data and processes them jointly. Its advantage is that it reduces the amount of raw data to be processed, improving the system's processing speed and real-time performance. Its disadvantage is that compressing the raw observation data loses part of the usable information, reducing the system's accuracy. Common feature layer information fusion algorithms include genetic algorithms, search tree algorithms, and the like.
2. Decision layer information fusion
In decision layer information fusion, each sensor has independent data-processing capability for the same observed target: it preprocesses its raw observation data, extracts features, performs target identification, and reaches a preliminary conclusion from its own measurements; the conclusions of all sensors are then fused into the final judgment about the observed target. Its advantages are that sensor results can be selected flexibly, improving the system's fault tolerance; the capacity to accommodate multi-source heterogeneous sensors is enhanced; and the amount of information to fuse is reduced, improving real-time performance. Common decision layer information fusion algorithms include voting, Bayesian methods, and the like.
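As a toy illustration of decision layer fusion, a majority vote over per-sensor conclusions (the class labels are invented for the example):

```python
from collections import Counter

def vote(decisions):
    """Decision-layer fusion by majority vote over per-sensor conclusions."""
    return Counter(decisions).most_common(1)[0][0]

# Each sensor reaches its own conclusion; the fused result is the majority.
print(vote(["vehicle", "vehicle", "pedestrian"]))  # -> vehicle
```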
3. Data layer information fusion
Data layer information fusion performs statistical analysis directly on the raw observation data of each sensor. Its advantages are that the raw data is fully preserved and the correlations among the raw data are emphasized, making the measurement result more accurate. Its disadvantages are that the data volume is large, which reduces the system's real-time performance, while the uncertainty and instability of the observed data increase the system's processing difficulty. Common data layer information fusion algorithms include weighted averaging, Kalman filtering, Bayesian estimation, and the like.
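The simplest of the listed data layer algorithms, weighted averaging, can be shown in a few lines; the range values and weights here are invented, with the weight intended to stand in for something like inverse measurement variance:

```python
def weighted_average(measurements, weights):
    """Data-layer fusion: combine raw range measurements from several
    sensors, weighting each by (for example) its inverse variance."""
    total = sum(weights)
    return sum(m * w for m, w in zip(measurements, weights)) / total

# A lidar range and a radar range for the same target, lidar trusted more.
fused_range = weighted_average([25.2, 24.8], weights=[0.8, 0.2])
print(fused_range)  # biased toward the more heavily weighted lidar value
```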
4. LiDAR (Light Detection and Ranging)
Lidar is a target detection technology: a laser emits a beam that undergoes diffuse reflection upon striking a target object; a detector receives the reflected beam, and characteristic quantities of the target object such as distance, azimuth, height, speed, attitude and shape are determined from the emitted and reflected beams.
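The distance part of that determination follows directly from time of flight: the beam travels out and back, so the one-way range is half the round-trip time multiplied by the speed of light. A one-line sketch:

```python
def lidar_range(round_trip_seconds, c=299_792_458.0):
    """One-way range from laser time of flight: the beam travels out and
    back, so the distance is c * t / 2."""
    return c * round_trip_seconds / 2

print(lidar_range(1e-6))  # ~149.9 m for a 1 microsecond round trip
```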
5. Millimeter wave radar (millimeter-wave radar)
A millimeter wave radar is a radar that performs detection in the millimeter wave band. Millimeter waves generally refer to electromagnetic waves in the 30-300 GHz band (wavelength 1-10 mm). Current millimeter wave radars are limited by low angular resolution and a limited field of view; achieving higher angular resolution requires more antennas, and the additional antennas multiply cost, size and power while limiting commercial viability.
6. Echo signal
In the embodiments of the present application, the signal received by the millimeter wave radar detector is referred to as the echo signal. The echo signal may include reflections of the signal transmitted by the millimeter wave radar detection device, and may also include signals related to ambient sources (for example, a signal coming directly from an ambient source, or a reflection of an ambient source's signal). The ambient sources may include one or more of natural sources (e.g., the sun) or artificial sources (e.g., street lamps, vehicle lights), and the like.
Compared with the signals collected by a single sensor, the fused signals of multiple sensors adapt more strongly to conditions such as weather and illumination, remain more robust in adverse conditions (such as rain, snow and darkness), and help maximize both detection accuracy and measurement precision. Detecting targets based on the fused signals of multiple sensors can therefore achieve a better detection effect.
In view of this, an embodiment of the present application provides a target detection method based on multi-signal fusion, which fuses the point cloud data acquired by a laser radar with the echo signals acquired by a millimeter wave radar and uses a deep learning network to extract features for target detection, so that the data characteristics of the laser radar signals and the millimeter wave signals can be combined and the target detection result is more accurate.
The target detection method based on the multi-signal fusion can be applied to various target detection scenes, and particularly can be applied to application scenes of target detection of traffic participation objects (such as vehicles, pedestrians and the like) in roads in the field of road traffic.
By way of example, fig. 1 shows a schematic system architecture to which the method for target detection based on multi-signal fusion according to an embodiment of the present application is applicable. The system architecture may include a roadside system 100 and at least one detection target, where the detection target may include, for example, a vehicle, a pedestrian, etc.; for ease of understanding, the description here takes the target being a vehicle 200 in a road as an example.
In some embodiments, the roadside system 100 may include roadside devices (not shown in fig. 1), radar sensors, and the like. The radar sensors may specifically include a laser radar 101 and a millimeter wave radar 102. The roadside device may also be described as a roadside base station or smart base station, and may include a computing system 103 (not shown in fig. 1). Optionally, the roadside system 100 may also include visual sensors, such as cameras and video cameras, although embodiments of the present application are not limited in this regard.
In some embodiments, lidar 101 may be used to collect initial point cloud data and millimeter wave radar 102 may be used to collect initial echo data (or initial echo signals). The principle and process of the laser radar 101 collecting initial point cloud data, and the principle and process of the millimeter wave radar 102 collecting initial echo signals can refer to the existing mode, and will not be described herein.
In some embodiments, the roadside device may be specifically implemented as an apparatus (such as a terminal, a server, etc.) including a processor, a controller, and the like. The roadside device may be disposed on the road side or at another location, which is not limited in the embodiments of the present application.
In one implementation, the roadside device may have a communication function, and may be capable of performing communication connection with the lidar 101 and the millimeter wave radar 102 in a wired or wireless communication manner, so as to implement information interaction between the roadside device and the lidar 101 and the millimeter wave radar 102, respectively. By way of example, the process of information interaction between the road side device and the lidar 101 may include, for example: the roadside device sends indication information (such as indication information for starting point cloud data acquisition, etc.) to the laser radar 101, or the roadside device receives point cloud data sent by the laser radar 101 (such as receiving initial point cloud data sent by the laser radar 101). Similarly, the process of information interaction between the roadside device and millimeter wave radar 102 may include, for example: the roadside device transmits indication information (such as indication information for starting echo signal acquisition, etc.) to the millimeter wave radar 102, or the roadside device receives echo data transmitted by the millimeter wave radar (such as initial echo data transmitted by the millimeter wave radar 102, etc.).
In another implementation, the roadside device may also perform a variety of computing tasks through the computing system 103. For example, the computing system 103 may be used to denoise the initial point cloud data; for another example, the computing system 103 may be further configured to perform Fourier transforms on the initial echo data (such as a three-dimensional fast Fourier transform (3D-FFT), a Doppler FFT, a range FFT, an angle FFT and the like) to obtain radar cube data; and for another example, the computing system 103 may be further configured to integrate the preprocessed point cloud data and echo data, and perform neural-network-based deep learning processing to obtain a target detection result, and the like.
In some embodiments, the vehicle 200 may be of multiple types, such as an ordinary human-driven vehicle or an autonomous vehicle (also called an unmanned vehicle, a computer-driven vehicle, or a wheeled mobile robot), and the embodiments of the present application do not limit the specific type of the vehicle 200.
It should be noted that the system architecture shown in fig. 1 is described by taking a vehicle as the object to be detected only as an example; in practical application, the object may be any of various other types of traffic participants, such as pedestrians, passengers, or other objects in the environment, which is not limited in the embodiments of the present application.
It should be noted that, although the above description has been made taking the roadside system 100 as the execution subject of the target detection as an example, in practical application, the method for target detection based on multi-signal fusion provided in the embodiment of the present application may be applied to other devices (such as an autonomous vehicle, etc.), and the embodiment of the present application is not limited to this.
By way of example, fig. 2 shows a schematic structural diagram of a computing system according to an embodiment of the present application. The computing system may correspond to computing system 103 in the roadside device of fig. 1. The computing system 103 may include, for example, a preprocessing module 1031, a fourier transform module 1032, a time synchronization module 1033, a spatial synchronization module 1034, a signal fusion module 1035, and a target detection module 1036.
In some embodiments, the preprocessing module 1031 may be used to denoise the initial point cloud data.
In some embodiments, fourier transform module 1032 may be configured to transform the initial echo data to obtain radar cube data.
In some embodiments, the time synchronization module 1033 may be used to enable the synchronized output of initial point cloud data and initial echo data. By way of example, the manner in which the time synchronization module 1033 implements the time synchronization function may include: (1) a hardware synchronization scheme (or hard synchronization scheme); (2) software synchronization mode (or soft synchronization mode). Specifically, the hardware synchronization method may include: one of the laser radar and the millimeter wave radar transmits a trigger pulse to the other party at fixed time intervals, and the other party receives the trigger pulse and then performs electromagnetic wave acquisition and data (initial point cloud data or initial echo data) transmission. The software synchronization means may include: and according to the inherent output frame rate of the laser radar and the millimeter wave radar, carrying out frame number alignment and translation alignment on each frame output by the laser radar and the millimeter wave radar under the same time coordinate axis.
The above description is given by taking the software synchronization and the hardware synchronization as examples, and the time synchronization is described, but the embodiments of the present application are not limited thereto.
In some embodiments, the spatial synchronization module 1034 may be configured to spatially calibrate the lidar and the millimeter wave radar such that initial point cloud data collected by the lidar is in the same spatial coordinate system as initial echo data collected by the millimeter wave radar. By way of example, the process of spatially calibrating the lidar and the millimeter wave radar may be: firstly, setting experimental environments for space calibration of a millimeter wave radar and a laser radar, wherein the experimental environments can comprise the steps of placing a millimeter wave sensitive metal marker in a millimeter wave darkroom as a device capable of reflecting or refracting millimeter waves, and setting other positions except the millimeter wave sensitive metal marker as devices (or materials) in which millimeter wave scanning electromagnetic waves are absorbed and echoes cannot be generated; similarly, for lidar, the lidar markers may be set to white, with the other locations uniformly set to light-absorbing black material; then, respectively calculating the specific positions of the markers in echoes of the laser radar and the millimeter wave radar; and then, after aligning the markers one by one, solving a preset equation set to calculate the relative position parameters of the laser radar and the millimeter wave radar, namely the spatial synchronization parameters of the two radar sensors.
It should be noted that, the spatial synchronization process of the laser radar and the millimeter wave radar may be performed before the data is collected, or may also be performed during the data collection process, and the specific time for performing the spatial synchronization process is not limited in the embodiment of the present application.
It should be noted that, although the spatial synchronization module 1034 is described herein as an example of one module in the computing system 103 of the roadside system 100, in practical application, the spatial synchronization module 1034 may be a module in another independent device other than the roadside system 100, which is not limited in this embodiment of the present application.
In some embodiments, the signal fusion module 1035 may be configured to further process and integrate the denoised point cloud data and radar cube data. Optionally, the signal fusion module 1035 may also perform regularization processing on the data before fusing the data, so as to unify the signal amplitude ranges of the laser radar data and the millimeter wave radar data.
In some embodiments, the target detection module 1036 may utilize a convolutional neural network (convolutional neural network, CNN) to perform target detection based on the processed point cloud data and echo data.
By way of example, fig. 3 shows a schematic flowchart of a method for target detection based on multi-signal fusion according to an embodiment of the present application. The method may be performed by the above-described roadside system, and the method may include the following steps:
s301, acquiring initial echo data acquired by the millimeter wave radar and initial point cloud data acquired by the laser radar.
In some embodiments, the millimeter wave radar in the embodiments of the present application may, for example, be a radar with a 24 GHz fundamental frequency, and the laser radar may, for example, be an 8-line, 16-line, or 32-line vehicle-mounted lidar; however, the specific types of the millimeter wave radar and the laser radar are not limited in the embodiments of the present application.
In some embodiments, the millimeter wave radar and the lidar need to be first installed prior to data acquisition by the millimeter wave radar and the lidar. For example, the embodiments of the present application are described with reference to the installation of the millimeter wave radar and the laser radar on the road side (as shown in fig. 4), but in practical application, the millimeter wave radar and the laser radar may be disposed at other positions, such as the roof of the vehicle, etc., which is not limited in the embodiments of the present application.
It should be noted that, since the data acquisition ranges (such as the angle ranges) and the distances, etc. of the millimeter wave radar and the laser radar may be different, when the millimeter wave radar and the laser radar are installed, the field matching problem of the millimeter wave radar and the laser radar needs to be considered, that is, the field ranges of the millimeter wave radar and the laser radar need to be made to cover the target detection area (such as the target detection area shown in fig. 4) at least simultaneously.
As an example, a millimeter wave radar with a commonly used area array antenna covers a forward field of view of up to 180°, while a lidar typically has a more limited field of view, often above 90°. To ensure that the laser radar and the millimeter wave radar can cover the same target detection area at the same time, one possible arrangement is to use a forward-scanning lidar in combination with a millimeter wave radar to complete scanning imaging of the forward field of view (see the field-of-view ranges and target detection area of the lidar and millimeter wave radar shown in fig. 4). Another possible arrangement is to use multiple (at least 2) panoramic lidars, each with a field of view above 90°, together with one millimeter wave radar, where the fields of view of the lidars overlap at least pairwise, so that the combined detection range of the lidars and the detection range of the millimeter wave radar cover the target detection area simultaneously.
In some embodiments, the roadside device may instruct the lidar and the millimeter wave radar to detect objects within the target detection region, after which the roadside device may acquire initial point cloud data transmitted by the lidar and may acquire echo data transmitted by the millimeter wave radar. The initial point cloud data refers to original collected data sent to the road side equipment by the laser radar, namely point cloud data which is not processed (such as denoising) by the road side equipment; the initial echo data refers to an original acquisition signal sent to the roadside device by the millimeter wave radar, namely, the echo data which is not processed (such as Fourier transform) by the roadside device.
It should be noted that the millimeter wave radar may use a frequency modulated continuous wave (FMCW) for spatial measurement. The principle of the frequency modulated continuous wave is as follows: the transmitted wave of the millimeter wave radar is a high-frequency continuous wave whose frequency changes over time according to a triangular wave pattern; the frequency of the echo received by the millimeter wave radar follows the same triangular pattern as the transmitted frequency, but with a small time delay, and this time difference can be used to calculate the target distance. By way of example, the millimeter wave radar may transmit and receive continuously modulated electromagnetic waves from a radio frequency antenna at a fundamental frequency of 24 GHz.
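As a minimal illustration of this ranging principle, the sketch below recovers a target's distance from the beat (difference) frequency between the transmitted and received chirps. The chirp bandwidth, duration, and beat frequency are hypothetical values chosen for the example, not parameters from this application.

```python
# Illustrative FMCW ranging sketch: for a chirp with bandwidth B swept over
# duration T, a target at range R produces a beat frequency f_b = 2*R*B/(c*T),
# so R = c * f_b / (2 * slope), with slope = B / T.
C = 3.0e8  # speed of light, m/s (approximate)

def range_from_beat(f_beat_hz: float, bandwidth_hz: float, chirp_time_s: float) -> float:
    """Recover target range from the beat frequency of an FMCW chirp."""
    slope = bandwidth_hz / chirp_time_s  # chirp slope, Hz per second
    return C * f_beat_hz / (2.0 * slope)

# Hypothetical numbers: a 250 MHz bandwidth swept in 1 ms; a 100 kHz beat
# frequency then corresponds to a target at roughly 60 m.
print(range_from_beat(100e3, 250e6, 1e-3))
```

The same relation is what the small time difference in the triangular-wave description encodes: the delay of the echo maps linearly to a frequency offset, which the radar measures instead of measuring time directly.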
In order to ensure that the data acquired by the millimeter wave radar and the laser radar can be aligned in time, the two radar sensors need to be synchronized in data acquisition time and/or in the time of transmitting data to the roadside device before data acquisition begins.
In some embodiments, the manner in which the lidar and millimeter-wave radar acquisition data are time synchronized may include: transmitting pulses to the millimeter wave radar through the laser radar, so that the millimeter wave radar and the laser radar respectively acquire the initial echo data and the initial point cloud data synchronously; and/or adjusting the frame rate of the millimeter wave radar or the laser radar transmitting signal, and aligning the time stamps of the initial echo data and the initial point cloud data, so that the millimeter wave radar and the laser radar respectively acquire the initial echo data and the initial point cloud data synchronously.
In other words, the manner of time synchronizing the millimeter wave radar and the lidar to collect data and/or transmit data to the roadside device may include both a hardware synchronization manner (or hard synchronization manner) and a software synchronization manner (or soft synchronization manner). The hardware synchronization method may include: one of the laser radar and the millimeter wave radar transmits a trigger pulse to the other party at fixed time intervals, and the other party receives the trigger pulse and then performs electromagnetic wave acquisition and data (initial point cloud data or initial echo data) transmission. The software synchronization means may include: and according to the inherent output frame rate of the laser radar and the millimeter wave radar, carrying out frame number alignment and translation alignment on each frame output by the laser radar and the millimeter wave radar under the same time coordinate axis.
Taking the hardware synchronization manner as an example, a specific implementation may be: when the laser radar performs rotational scanning, a zero point of the lidar motor rotation is set, and reaching it triggers the laser radar to send a synchronization pulse to the millimeter wave radar; upon receiving the synchronization pulse, the millimeter wave radar starts transmitting continuously frequency-modulated electromagnetic waves (the carrier may be 24 GHz), its receiving antenna then receives the echo signals, and reception is essentially complete once transmission of the frequency-modulated waves ends; within the same duration, the laser radar can rotationally scan the field of view corresponding to the target detection area. Then, the millimeter wave radar and the laser radar each sample the signals they have collected, and the sampled signals of both sensors can be output to the roadside device as the content of the same frame.
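The software ("soft") synchronization manner described above can be illustrated by pairing frames on a shared time axis. The sketch below matches each lidar frame to the nearest millimeter wave frame by timestamp; the frame rates, timestamps, and tolerance are illustrative assumptions, not values from this application.

```python
# Hedged sketch of soft synchronization: align frames output at each sensor's
# inherent frame rate by pairing every lidar frame with the millimeter wave
# frame whose timestamp is closest on a common time axis.
def align_frames(lidar_ts, radar_ts, max_offset):
    """Return (lidar_idx, radar_idx) pairs whose timestamps differ by <= max_offset."""
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = min(range(len(radar_ts)), key=lambda k: abs(radar_ts[k] - t))
        if abs(radar_ts[j] - t) <= max_offset:
            pairs.append((i, j))
    return pairs

# Hypothetical rates: lidar at 10 Hz, radar at 20 Hz; every lidar frame finds
# a radar frame within a 25 ms tolerance.
lidar_ts = [0.00, 0.10, 0.20, 0.30]
radar_ts = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
print(align_frames(lidar_ts, radar_ts, 0.025))
```

In practice the translation-alignment step mentioned above would first subtract any fixed clock offset between the two sensors before this nearest-neighbour pairing.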
In some embodiments, after receiving an echo signal, the millimeter wave radar may perform analog-to-digital (AD) sampling on the echo signal, store the sampled echo data in a register, and, after completing reception of one frame of echo signals, send the echo data to the roadside device via a network interface.
S302, processing the initial echo data to obtain radar cube data, wherein the radar cube data comprises information of three dimensions of speed, azimuth and distance.
In some embodiments, after receiving the initial echo data, the roadside device may perform a three-dimensional fast Fourier transform (3D FFT) on the initial echo data and arrange the result into a three-dimensional matrix, obtaining the radar cube data. The 3D FFT performs three FFT operations over three dimensions; specifically, the roadside device may perform ranging, velocity measurement, and angle measurement on the target using the corresponding range Fourier transform (range FFT), Doppler Fourier transform (Doppler FFT), and angle Fourier transform (angle FFT), so that the obtained radar cube data includes information in the three dimensions of velocity, azimuth, and distance.
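A minimal sketch of how the three FFT operations produce radar cube data, assuming the raw AD samples are organized as (chirps x receive antennas x samples per chirp); the array sizes and axis assignment are illustrative assumptions, not a format specified by this application.

```python
import numpy as np

def radar_cube(adc):
    """adc shape: (n_chirps, n_antennas, n_samples) of raw AD samples."""
    r = np.fft.fft(adc, axis=2)   # range FFT over fast-time samples
    d = np.fft.fft(r, axis=0)     # Doppler (velocity) FFT over chirps
    a = np.fft.fft(d, axis=1)     # angle FFT across receive antennas
    return np.abs(a)              # energy per (velocity, azimuth, range) cell

# Hypothetical frame: 32 chirps, 8 receive antennas, 64 samples per chirp.
cube = radar_cube(np.random.randn(32, 8, 64))
print(cube.shape)  # one energy value per (velocity, azimuth, range) cell
```

Each cell of the resulting three-dimensional matrix holds an energy value indexed by a velocity bin, an azimuth bin, and a range bin, which is exactly the structure the following steps threshold and fuse.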
S303, acquiring integration data based on the initial point cloud data and the radar cube data.
In some embodiments, after acquiring the radar cube data, the roadside device may integrate the radar cube data and the initial point cloud data to acquire the integrated data.
In some embodiments, after acquiring the initial point cloud data, the roadside device may perform denoising processing on the initial point cloud data to obtain the target point cloud data.
In some embodiments, after acquiring the radar cube data, the roadside device may select target radar cube data according to the energy value corresponding to each cell, and integrate the target radar cube data based on the initial point cloud data. For example, the process of selecting the target radar cube data may include: the roadside device performs a three-dimensional fast Fourier transform (3D-FFT) on the initial echo data to obtain radar cube data, where the radar cube data indicates the energy values corresponding to different speeds, azimuths, and distances; then, the portion of the radar cube data whose energy value is greater than a first threshold is taken as the target radar cube data.
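The selection of target radar cube data by the first threshold can be sketched as follows; the cube contents and the threshold value are illustrative, not taken from this application.

```python
import numpy as np

def select_target_cells(cube, threshold):
    """Return (indices, energies) of radar-cube cells whose energy exceeds threshold."""
    mask = cube > threshold
    idx = np.argwhere(mask)        # (velocity, azimuth, range) indices of kept cells
    return idx, cube[mask]

# Hypothetical cube with a single strong reflection; everything else is noise floor.
cube = np.zeros((4, 4, 4))
cube[1, 2, 3] = 9.0
idx, vals = select_target_cells(cube, threshold=5.0)
print(idx.tolist(), vals.tolist())
```

Only the surviving cells (the "target radar cube data") are carried forward into the integration step, which keeps the fused input sparse.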
In some embodiments, obtaining the integration data based on the initial point cloud data and the radar cube data may include: integrating the initial point cloud data with the target radar cube data to obtain integrated data; or integrating the target point cloud data and the radar cube data to obtain integrated data; or integrating the target point cloud data and the target radar cube data to obtain integrated data.
In some embodiments, prior to integrating the initial point cloud data and the radar cube data, the initial point cloud data and the radar cube data may also be spatially synchronized so that they correspond to the same spatial coordinate system. Specifically, the process of spatially synchronizing the initial point cloud data and the radar cube data may include: spatially calibrating the millimeter wave radar and the laser radar, and determining spatial synchronization parameters of the millimeter wave radar and the laser radar; and performing a rotation-translation transformation in space on the initial echo data and the point cloud data according to the spatial synchronization parameters, so as to obtain the initial echo data and the initial point cloud data in the same space.
For example, one way for spatially calibrating the millimeter wave radar and the laser radar to obtain the spatial synchronization parameter may include: firstly, setting experimental environments for space calibration of a millimeter wave radar and a laser radar, wherein the experimental environments can comprise the steps of placing a millimeter wave sensitive metal marker in a millimeter wave darkroom as a device capable of reflecting or refracting millimeter waves, and setting other positions except the millimeter wave sensitive metal marker as devices (or materials) in which millimeter wave scanning electromagnetic waves are absorbed and echoes cannot be generated; similarly, for lidar, the lidar markers may be set to white, with the other locations uniformly set to light-absorbing black material; then, respectively calculating the specific positions of the markers in echoes of the laser radar and the millimeter wave radar; and then, after aligning the markers one by one, calculating relative position parameters of the laser radar and the millimeter wave radar according to a preset equation set, namely, the spatial synchronization parameters of the two radar sensors.
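One plausible form of the "preset equation set" above is a rigid alignment between the matched marker positions seen by the two sensors. The sketch below estimates the rotation R and translation t with the SVD-based Kabsch/Procrustes method; the application does not specify the solving method, so this is an assumption, and the marker coordinates are synthetic.

```python
import numpy as np

def solve_pose(P, Q):
    """Find R, t with R @ P + t ~= Q for matched marker columns P, Q of shape (3, n)."""
    cp = P.mean(axis=1, keepdims=True)            # centroid of lidar markers
    cq = Q.mean(axis=1, keepdims=True)            # centroid of radar markers
    H = (P - cp) @ (Q - cq).T                     # cross-covariance of centered markers
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: markers rotated 90 degrees about Z and shifted; the solver
# should recover exactly that pose.
P = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)   # 4 markers
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([[2.0], [0.5], [1.0]])
R, t = solve_pose(P, Rz @ P + t_true)
print(np.allclose(R, Rz), np.allclose(t, t_true))
```

The recovered (R, t) pair is the spatial synchronization parameter set: applying it to one sensor's data expresses both sensors' measurements in a single coordinate system.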
The data acquired by the millimeter wave radar and the laser radar can be unified into the same space coordinate system through the space calibration process of the millimeter wave radar and the laser radar; moreover, the data collected by the millimeter wave radar and the laser radar can be unified to the same time axis by performing time synchronization on the data collected by the millimeter wave radar and the laser radar and/or transmitting the data to the road side equipment. At this time, only the difference in amplitude exists between the target point cloud data and the target radar cube data, and therefore regularization processing can be performed on the target point cloud data and the target radar cube data before integrating the data.
In some embodiments, the roadside device may regularize the target point cloud data and the target radar cube data to unify their signal amplitude ranges. At this time, the process of acquiring the integrated data based on the initial point cloud data and the radar cube data may include: taking the portion of the radar cube data whose energy value is greater than a first threshold as target radar cube data; denoising the initial point cloud data to obtain target point cloud data; regularizing the target point cloud data and the target radar cube data to obtain, respectively, first regularized data and second regularized data with a unified signal amplitude range; and integrating the first regularized data and the second regularized data to obtain the integrated data. The first regularized data is the data obtained after regularization of the target point cloud data; the second regularized data is the data obtained after regularization of the target radar cube data.
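As one hedged interpretation of this regularization step, the sketch below min-max scales both signal families into a common [0, 1] amplitude range; the application does not fix the scaling method, and the intensity and energy values are made up.

```python
import numpy as np

def regularize(values):
    """Min-max scale a 1-D set of amplitudes into the common range [0, 1]."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

# Hypothetical amplitudes: lidar intensities and radar cell energies live on
# very different scales before regularization, and on the same scale after.
lidar_intensity = [10.0, 55.0, 100.0]
radar_energy = [0.002, 0.011, 0.020]
print(regularize(lidar_intensity), regularize(radar_energy))
```

After scaling, the only remaining difference between the two data sources is what each value means (intensity vs. echo energy), so they can be concatenated into one input tensor without one modality dominating the other numerically.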
The first regularized data may include a plurality of three-dimensional spatial coordinates in the spatial coordinate system corresponding to the target point cloud data, and the energy value corresponding to each three-dimensional spatial coordinate. Illustratively, the first regularized data may be represented as (x_n1, y_n1, z_n1, I), where x_n1, y_n1, z_n1 are the three-dimensional spatial coordinate parameters corresponding to the target point cloud data, and I is the energy value corresponding to those coordinates, which may refer to the intensity value of the lidar point cloud.
The second regularized data may include a plurality of three-dimensional spatial coordinates corresponding to the target radar cube data, the speed and radar cross-section energy corresponding to each three-dimensional spatial coordinate, and the initial echo data (AD sampled data) at the position corresponding to that coordinate. Illustratively, the second regularized data may be represented as (x_n2, y_n2, z_n2, v, rcs, [V_1, V_2, …, V_N]), where x_n2, y_n2, z_n2 are the three-dimensional spatial coordinate parameters (x_n2 is the value on the X axis, y_n2 the value on the Y axis, and z_n2 the value on the Z axis of the three-dimensional spatial coordinate system), v is the speed corresponding to the three-dimensional spatial coordinate, rcs is the radar cross-section energy corresponding to the three-dimensional spatial coordinate, and [V_1, V_2, …, V_N] is the initial echo data (AD sampled data) at the position corresponding to the three-dimensional spatial coordinate.
In some embodiments, integrating the first regularized data and the second regularized data to obtain the integrated data specifically includes: combining the first regularized data and the second regularized data that correspond to the same three-dimensional spatial coordinates, to obtain integrated data that includes the three-dimensional spatial coordinates and the energy value, speed, radar cross-section energy, and initial echo data respectively corresponding to those coordinates. For example, if x_n1 = x_n2, y_n1 = y_n2, and z_n1 = z_n2, the first regularized data and the second regularized data corresponding to that position are combined to obtain the integrated data, which includes the three-dimensional spatial coordinate parameters together with the corresponding energy value, speed, radar cross-section energy, and initial echo data. For example, the integrated data may be expressed as: (x, y, z, v, I, rcs, [V_1, V_2, …, V_N]).
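The combination of first and second regularized data at identical three-dimensional coordinates can be sketched as a join keyed on (x, y, z); all field values below are illustrative, not measurements from this application.

```python
def integrate(lidar_points, radar_cells):
    """lidar_points: {(x, y, z): I};  radar_cells: {(x, y, z): (v, rcs, samples)}.

    Returns records of the form (x, y, z, v, I, rcs, [V_1, ..., V_N]) for every
    coordinate present in both sensors' regularized data.
    """
    merged = []
    for xyz, intensity in lidar_points.items():
        if xyz in radar_cells:                    # same (x, y, z) in both sensors
            v, rcs, samples = radar_cells[xyz]
            merged.append((*xyz, v, intensity, rcs, samples))
    return merged

# Hypothetical fused frame: one coordinate is seen by both sensors, one only
# by the lidar; only the shared coordinate yields an integrated record.
lidar = {(1.0, 2.0, 0.5): 0.8, (3.0, 1.0, 0.0): 0.4}
radar = {(1.0, 2.0, 0.5): (5.2, 0.9, [0.1, 0.3, 0.2])}
print(integrate(lidar, radar))
```

A production system would match coordinates within a tolerance (grid cells) rather than requiring exact equality, since the two sensors quantize space differently.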
S304, inputting the integrated data into a target neural network model to obtain a target detection result.
The target neural network model performs target recognition using the characteristics of millimeter waves in the speed dimension. For moving targets (such as walking pedestrians, cyclists, and running vehicles), the data in the Doppler dimension exhibit different peak characteristics. Thus, the input to the target neural network is the raw signal data extracted from the millimeter wave radar cube (corresponding to [V_1, V_2, …, V_N] described above), combined with the high-dimensional information (x, y, z, I, v, rcs) at the position of the signal peak, which together serve as the input data of the network model. Here, x, y, and z are provided by both the millimeter wave radar and the laser radar and indicate the three-dimensional position in the spatial coordinate system; I is the intensity information of the lidar point cloud; v and rcs are provided by the millimeter wave radar, where v is the speed corresponding to the (x, y, z) position, and rcs is the radar cross section (Radar Cross Section, RCS), a measure of the target's ability to reflect radar signals back in the direction of the radar.
It should be understood that the target neural network designed in the embodiments of the present application extracts features from the raw millimeter wave data: it may perform spatial convolution on the waveform features using two-dimensional (2D) convolution kernels, and perform velocity-dimension convolution on the waveforms using one-dimensional (1D) convolution kernels. Through training, the characteristics of the data can be fully mined.
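As a minimal illustration of the velocity-dimension convolution, the sketch below slides a 1D kernel along a cell's Doppler samples (implemented, as in most deep learning frameworks, as cross-correlation); the kernel and samples are illustrative and not the trained network of this application.

```python
def conv1d_velocity(waveform, kernel):
    """Valid-mode 1D sliding-window filtering along the velocity/Doppler samples."""
    n, k = len(waveform), len(kernel)
    return [sum(waveform[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A moving target shows a peak in its Doppler samples; a simple difference
# kernel responds strongly where the peak rises and falls.
doppler_samples = [0.0, 0.1, 0.1, 2.0, 0.1, 0.0]
responses = conv1d_velocity(doppler_samples, [-1.0, 1.0])
print(responses)
```

In the actual network the kernel weights are learned during training rather than fixed, but the sliding computation along the Doppler axis is the same.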
The process of performing the target detection by the target neural network according to the integrated data in this step will be described below with reference to the accompanying drawings, which will not be described in detail herein.
According to the target detection method based on multi-signal fusion, point cloud data acquired by the laser radar and echo signals acquired by the millimeter wave radar are fused, and the target detection is performed by utilizing deep learning network extraction characteristics, so that the data characteristics of the laser radar signals and the millimeter wave signals can be synthesized, and the target detection result is more accurate.
The following describes a process of performing target detection on the target neural network according to the integrated data with reference to fig. 5A to 5C.
By way of example, fig. 5A to 5C show schematic flowcharts of another method for target detection based on multi-signal fusion according to an embodiment of the present application.
First, as shown in fig. 5A, in some embodiments, after the roadside device performs a Fourier transform on the initial echo data, the obtained radar cube may include a plurality of small cubes (as shown in the cube on the upper left of fig. 5A). Each small cube may correspond to a three-dimensional coordinate (x, y, z) in the spatial coordinate system and to an energy value, which may be, for example, the echo signal energy at that three-dimensional position. For example, the radar cube may contain a target, whose position may be as shown in the radar cube on the upper left of fig. 5A.
In some embodiments, after the roadside device processes the initial point cloud data, the point cloud data corresponding to the obtained target may be shown as a target schematic diagram on the lower left side of fig. 5A.
In some embodiments, the roadside device may determine, based on the energy value corresponding to each small cube of the radar cube, the portion whose energy value is greater than a first threshold as the target radar cube data. The target radar cube may, for example, correspond to the blocks around the target shown on the right side of fig. 5A.
In some embodiments, the roadside device may also perform global motion compensation (ego-motion compensation) according to point cloud data corresponding to a target acquired by the laser radar, obtain an angle (azimuth), a distance (range), a speed (such as an absolute speed), and the like of the target.
As shown in fig. 5B, after preprocessing is completed, the target radar cube data may be output to a real-time contrast (RCT) neural network; the target radar cube data may be flattened, so that two dimensions of the cube (such as the dimension corresponding to the X axis and the dimension corresponding to the Z axis) are reduced to 1. Then, the flattened output is integrated with the target point cloud data (such as through FC128), and processed and stored in a deep network input format for subsequent target detection analysis.
As shown in fig. 5C, after the target point cloud data and the target radar cube data are integrated, post processing may be performed on the integrated data to extract data features and implement target classification and clustering; finally, a classified object list may be output, yielding the target detection result.
In the embodiments of the present application, the integrated data is input into a deep CNN network to extract data features: after convolution, pooling, and activation functions, the CNN layers feed into fully connected layers, which perform category classification and position regression for the target. The CNN convolution process may include: selecting a region of the same size as the convolution kernel, starting from the upper-left corner of the original image; multiplying the selected region and the convolution kernel element by element and summing, to obtain the value of one pixel of the new image; and moving the selected region horizontally and vertically across the original image, repeating the above calculation. The movement step (stride) may be 1 or greater than 1 (if the stride is greater than 1, the new image size is reduced; in this case, padding may be added to the original image to ensure that the newly generated image does not shrink).
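The convolution procedure just described can be sketched directly; the image, kernel, and stride below are illustrative values, not the network trained in this application.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide a kernel-sized window over `image`, multiplying element-wise with
    `kernel` and summing each window into one output pixel (no padding)."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # element-wise multiply, then sum
    return out

# Hypothetical 4x4 input and a 2x2 mean-filter kernel; a stride of 2 halves
# the output size, matching the shrinkage noted in the text.
image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2)) / 4.0
print(conv2d(image, kernel, stride=2))
```

Zero-padding the input before calling this routine is the standard way to keep the output the same size as the input when the stride is 1.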
According to the target detection method based on multi-signal fusion, point cloud data acquired by the laser radar and echo signals acquired by the millimeter wave radar are fused, and the target detection is performed by utilizing deep learning network extraction characteristics, so that the data characteristics of the laser radar signals and the millimeter wave signals can be synthesized, and the target detection result is more accurate.
By way of example, fig. 6 shows a schematic structural diagram of a roadside system according to an embodiment of the present application. The roadside system 600 includes a millimeter wave radar 601, a laser radar 602, and a roadside device 603, where the roadside device 603 may in turn include one or more memories 6031 and one or more processors 6032. The millimeter wave radar 601 is used to collect initial echo data, the laser radar 602 is used to collect initial point cloud data, and the one or more memories store computer program instructions.
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
acquiring initial echo data acquired by a millimeter wave radar and initial point cloud data acquired by a laser radar;
Processing the initial echo data to obtain radar cube data; the radar cube data comprises information of three dimensions of speed, azimuth and distance;
acquiring integration data based on the initial point cloud data and the radar cube data;
and inputting the integrated data into a target neural network model to obtain a target detection result.
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
transmitting pulses to the millimeter wave radar through the laser radar, so that the millimeter wave radar and the laser radar respectively acquire the initial echo data and the initial point cloud data synchronously; and/or the number of the groups of groups,
and adjusting the frame rate of the millimeter wave radar or the laser radar transmitting signal, and aligning the time stamps of the initial echo data and the initial point cloud data, so that the millimeter wave radar and the laser radar respectively and synchronously acquire the initial echo data and the initial point cloud data.
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
calibrating the millimeter wave radar and the laser radar spatially, and determining spatial synchronization parameters of the millimeter wave radar and the laser radar;
and performing a rotation-translation transformation in space on the initial echo data and the initial point cloud data according to the spatial synchronization parameters, so as to obtain the initial echo data and the initial point cloud data in the same spatial coordinate system.
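A minimal sketch of the rotation-translation step, assuming for simplicity that the spatial synchronization parameters reduce to a yaw angle about the z-axis plus a translation (a real extrinsic calibration would use a full 3-D rotation matrix):

```python
import math

def apply_extrinsics(points, yaw, tx, ty, tz):
    """Rotate each (x, y, z) point about the z-axis by `yaw` radians,
    then translate by (tx, ty, tz), mapping it into the common frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    out = []
    for x, y, z in points:
        out.append((c * x - s * y + tx, s * x + c * y + ty, z + tz))
    return out
```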
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
performing a three-dimensional fast Fourier transform (3D-FFT) on the initial echo data to obtain the radar cube data, wherein the radar cube data is used to indicate energy values corresponding to different speeds, azimuths, and distances;
taking the part of the radar cube data whose energy value is larger than a first threshold as target radar cube data;
and acquiring integrated data based on the initial point cloud data and the target radar cube data.
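A hedged NumPy sketch of the 3D-FFT and first-threshold steps; the axis layout of the raw samples (chirps × antennas × range bins) and the function names are assumptions, as the patent does not fix the data layout:

```python
import numpy as np

def radar_cube(adc_samples):
    """Form a radar cube from raw ADC samples arranged as
    (chirps, rx_antennas, range_bins): a 3D FFT turns the axes into
    (velocity, azimuth, range). Returns the energy magnitude per bin."""
    cube = np.fft.fftn(adc_samples, axes=(0, 1, 2))
    return np.abs(cube)

def threshold_cube(cube, first_threshold):
    """Keep only cells whose energy exceeds the first threshold (the
    'target radar cube data'); all other cells are zeroed out."""
    return np.where(cube > first_threshold, cube, 0.0)
```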
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
denoising the initial point cloud data to obtain target point cloud data;
and acquiring integrated data based on the radar cube data and the target point cloud data.
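The patent does not name a denoising algorithm; one common choice, shown here purely as an assumption, is a radius-outlier filter that drops isolated points:

```python
def denoise_point_cloud(points, radius=1.0, min_neighbors=1):
    """Radius-outlier filter: keep a point only if at least
    `min_neighbors` other points lie within `radius` of it.
    The radius and neighbor count are illustrative defaults."""
    def near(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
    kept = []
    for i, p in enumerate(points):
        neighbors = sum(1 for j, q in enumerate(points) if j != i and near(p, q))
        if neighbors >= min_neighbors:
            kept.append(p)
    return kept
```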
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
taking the part of the radar cube data whose energy value is larger than a first threshold as target radar cube data;
denoising the initial point cloud data to obtain target point cloud data;
performing regularization processing on the target point cloud data and the target radar cube data to obtain, respectively, first regularized data and second regularized data with a uniform signal amplitude range;
and integrating the first regularized data and the second regularized data to obtain the integrated data.
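As one plausible reading of the regularization step, min-max scaling maps both channels into a uniform amplitude range; the formula and the [0, 1] range are assumptions, since the patent does not specify them:

```python
def regularize(values, lo=0.0, hi=1.0):
    """Rescale signal amplitudes into a uniform [lo, hi] range so the
    point cloud channel and the radar channel become comparable."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        # degenerate case: constant signal maps to the lower bound
        return [lo for _ in values]
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
```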
In some embodiments, the one or more processors, when executing the computer program instructions, cause the roadside device to perform the steps of:
the first regularized data comprises a plurality of three-dimensional space coordinates and an energy value corresponding to each three-dimensional space coordinate; the second regularized data comprises the plurality of three-dimensional space coordinates and, for each three-dimensional space coordinate, the corresponding speed, radar cross section energy, and the initial echo data at the corresponding position;
and merging the first regularized data and the second regularized data that correspond to the same three-dimensional space coordinates to obtain the integrated data, wherein the integrated data comprises the three-dimensional space coordinates and, for each coordinate, the corresponding energy value, speed, radar cross section energy, and initial echo data.
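The merge on shared three-dimensional coordinates can be sketched as a dictionary join; the field names (`energy`, `speed`, `rcs`) are illustrative assumptions:

```python
def integrate(first_reg, second_reg):
    """Merge the two regularized channels on shared 3-D coordinates.
    Each input maps (x, y, z) -> dict of fields; the output carries the
    union of fields for coordinates present in both channels."""
    merged = {}
    for coord in first_reg.keys() & second_reg.keys():
        record = dict(first_reg[coord])   # energy value from channel one
        record.update(second_reg[coord])  # speed, RCS, echo from channel two
        merged[coord] = record
    return merged
```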
Embodiments of the present application also provide a computer-readable storage medium comprising computer instructions that, when executed on a computer, cause the above-described method to be implemented.
Embodiments of the present application also provide a computer program product comprising computer instructions that, when executed on a computer, cause the above-described method to be implemented.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer readable storage medium or transmitted via a computer readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave). The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the above-described method embodiments may be implemented by a computer program instructing associated hardware. The program is stored on a computer readable storage medium and, when executed, carries out the processes of the above-described method embodiments. The aforementioned storage medium includes: ROM, random access memory (RAM), magnetic disk, optical disk, and the like.
The foregoing is merely a specific implementation of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for target detection based on multi-signal fusion, comprising:
acquiring initial echo data acquired by a millimeter wave radar and initial point cloud data acquired by a laser radar;
processing the initial echo data to obtain radar cube data; the radar cube data comprises information in three dimensions: speed, azimuth, and distance;
acquiring integrated data based on the initial point cloud data and the radar cube data;
and inputting the integrated data into a target neural network model to obtain a target detection result.
2. The method according to claim 1, wherein the method further comprises:
transmitting a pulse from the laser radar to the millimeter wave radar, so that the millimeter wave radar and the laser radar synchronously acquire the initial echo data and the initial point cloud data, respectively; and/or,
adjusting the frame rate at which the millimeter wave radar or the laser radar transmits signals, and aligning the timestamps of the initial echo data and the initial point cloud data, so that the millimeter wave radar and the laser radar synchronously acquire the initial echo data and the initial point cloud data, respectively.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
calibrating the millimeter wave radar and the laser radar spatially, and determining spatial synchronization parameters of the millimeter wave radar and the laser radar;
and performing a rotation-translation transformation in space on the initial echo data and the initial point cloud data according to the spatial synchronization parameters, so as to obtain the initial echo data and the initial point cloud data in the same three-dimensional space coordinate system.
4. The method according to claim 1, wherein the processing the initial echo data to obtain radar cube data specifically comprises:
performing a three-dimensional fast Fourier transform (3D-FFT) on the initial echo data to obtain the radar cube data, wherein the radar cube data is used to indicate energy values corresponding to different speeds, azimuths, and distances;
the acquiring integrated data based on the initial point cloud data and the radar cube data specifically includes:
taking the part of the radar cube data whose energy value is larger than a first threshold as target radar cube data;
and acquiring integrated data based on the initial point cloud data and the target radar cube data.
5. The method of claim 1, wherein the acquiring integrated data based on the initial point cloud data and the radar cube data comprises:
denoising the initial point cloud data to obtain target point cloud data;
and acquiring integrated data based on the radar cube data and the target point cloud data.
6. The method of claim 5, wherein the radar cube data is used to indicate energy values corresponding to different speeds, azimuths, and distances;
Correspondingly, the acquiring integrated data based on the initial point cloud data and the radar cube data specifically includes:
taking the part of the radar cube data whose energy value is larger than a first threshold as target radar cube data;
denoising the initial point cloud data to obtain target point cloud data;
performing regularization processing on the target point cloud data and the target radar cube data to obtain, respectively, first regularized data and second regularized data with a uniform signal amplitude range;
and integrating the first regularized data and the second regularized data to obtain integrated data.
7. The method of claim 6, wherein the first regularized data comprises a plurality of three-dimensional space coordinates and an energy value corresponding to each three-dimensional space coordinate; and the second regularized data comprises the plurality of three-dimensional space coordinates and, for each three-dimensional space coordinate, the corresponding speed, radar cross section energy, and the initial echo data at the corresponding position;
the integrating the first regularized data and the second regularized data to obtain the integrated data specifically includes:
and merging the first regularized data and the second regularized data that correspond to the same three-dimensional space coordinates to obtain the integrated data, wherein the integrated data comprises the three-dimensional space coordinates and, for each coordinate, the corresponding energy value, speed, radar cross section energy, and initial echo data.
8. The method of any one of claims 1-7, wherein the target detection result includes a speed, a location, and a category of the target.
9. A roadside apparatus, comprising:
one or more memories;
one or more processors;
the one or more memories store computer program instructions that, when executed by the one or more processors, cause the roadside device to implement the method of any of claims 1 to 8.
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises computer program instructions which, when executed, cause the method of any of claims 1 to 8 to be implemented.
CN202111452399.3A 2021-11-30 2021-11-30 Target detection method and device based on multi-signal fusion Pending CN116203577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452399.3A CN116203577A (en) 2021-11-30 2021-11-30 Target detection method and device based on multi-signal fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111452399.3A CN116203577A (en) 2021-11-30 2021-11-30 Target detection method and device based on multi-signal fusion

Publications (1)

Publication Number Publication Date
CN116203577A true CN116203577A (en) 2023-06-02

Family

ID=86517922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111452399.3A Pending CN116203577A (en) 2021-11-30 2021-11-30 Target detection method and device based on multi-signal fusion

Country Status (1)

Country Link
CN (1) CN116203577A (en)

Similar Documents

Publication Publication Date Title
CN110456343B (en) Instant positioning method and system based on FMCW millimeter wave radar
US11899099B2 (en) Early fusion of camera and radar frames
US10353053B2 (en) Object detection using radar and machine learning
US11276189B2 (en) Radar-aided single image three-dimensional depth reconstruction
US10739438B2 (en) Super-resolution radar for autonomous vehicles
KR20210096607A (en) Radar Deep Learning
US11479262B2 (en) Geographically disparate sensor fusion for enhanced target detection and identification in autonomous vehicles
CN113359097B (en) Millimeter wave radar and camera combined calibration method
CN110568433A (en) High-altitude parabolic detection method based on millimeter wave radar
US11921213B2 (en) Non-line-of-sight correction for target detection and identification in point clouds
US11587204B2 (en) Super-resolution radar for autonomous vehicles
Cui et al. 3D detection and tracking for on-road vehicles with a monovision camera and dual low-cost 4D mmWave radars
CN103487810A (en) Method for detecting terrain obstacles with unmanned vehicle-borne radar based on echo characteristics
CN113627373A (en) Vehicle identification method based on radar-vision fusion detection
CN111736613A (en) Intelligent driving control method, device and system and storage medium
Gao et al. Dc-loc: Accurate automotive radar based metric localization with explicit doppler compensation
Ram Fusion of inverse synthetic aperture radar and camera images for automotive target tracking
Zhou A review of LiDAR sensor technologies for perception in automated driving
CN109870685B (en) Indoor distance direction moving SAR imaging method based on improved RD algorithm
Phippen et al. 3D Images of Pedestrians at 300GHz
CN116203577A (en) Target detection method and device based on multi-signal fusion
WO2022083529A1 (en) Data processing method and apparatus
Hoffmann et al. Filter-based segmentation of automotive SAR images
CN116027288A (en) Method and device for generating data, electronic equipment and storage medium
CN111123260B (en) Method for identifying state of environmental object by using millimeter wave radar and visible light camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination