CN117452392B - Radar data processing system and method for vehicle-mounted auxiliary driving system - Google Patents

Radar data processing system and method for vehicle-mounted auxiliary driving system

Info

Publication number
CN117452392B
CN117452392B (application CN202311798523.0A)
Authority
CN
China
Prior art keywords
radar
vehicle
identification data
moving object
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311798523.0A
Other languages
Chinese (zh)
Other versions
CN117452392A (en)
Inventor
Wang Yang (汪洋)
Sun Chenyang (孙晨阳)
Guo Junqi (郭俊琪)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology (Shenzhen); Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology (Shenzhen); Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology (Shenzhen) and Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology
Priority to CN202311798523.0A
Publication of CN117452392A
Application granted
Publication of CN117452392B
Legal status: Active

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/865 - Combination of radar systems with lidar systems
    • G01S 13/867 - Combination of radar systems with cameras
    • G01S 13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S 13/91 - Radar or analogous systems specially adapted for specific applications for traffic control
    • G01S 7/00 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/003 - Transmission of data between radar, sonar or lidar systems and remote stations
    • G01S 7/02 - Details of systems according to group G01S 13/00
    • G01S 7/40 - Means for monitoring or calibrating

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a radar data processing system and method for a vehicle-mounted auxiliary driving system. First, radar identification data from an off-vehicle radar and surrounding image data from the vehicle's viewing angle are acquired. Then a preset multi-perception fusion algorithm establishes, from the surrounding image data and the radar identification data, a correspondence between the moving objects imaged in the surrounding image data and the spatial three-dimensional information of those objects in the radar identification data, identifying the spatial three-dimensional information corresponding to the vehicle in the radar identification data. Finally, the radar identification data with the vehicle identified is displayed graphically for driving assistance. Because an off-vehicle radar is applied to the driving assistance system of a radar-less vehicle, the sensing accuracy is higher and the sensing range is larger.

Description

Radar data processing system and method for vehicle-mounted auxiliary driving system
Technical Field
The invention relates to the technical field of vehicle-mounted driving assistance, and in particular to a radar data processing system and method for a vehicle-mounted auxiliary driving system.
Background
Vehicle driving assistance technology based on image sensors has been widely applied in road traffic, but vehicle-mounted radar has not become standard equipment on vehicles because of cost, service-life limitations and other factors, which greatly restricts the adoption of radar-based auxiliary driving technology.
Disclosure of Invention
The invention mainly addresses the technical problem of how to improve the sensing range of a vehicle's driving assistance system.
According to a first aspect, there is provided in one embodiment a radar data processing system for an in-vehicle driving assist system, comprising:
the vehicle-mounted wireless communication device is used for acquiring radar identification data; the radar identification data is spatial three-dimensional information and/or two-dimensional boundary information, acquired by an off-vehicle radar, of all moving objects within a preset spatial range; the off-vehicle radar is a radar that is not mounted on the vehicle;
a vehicle-mounted image acquisition device for acquiring surrounding image data of the vehicle viewing angle;
the multi-perception fusion device is used for applying a preset multi-perception fusion algorithm, and establishing a corresponding relation between a moving object imaged in the surrounding image data and space three-dimensional information of the moving object in the radar identification data according to the surrounding image data and the radar identification data so as to identify the corresponding space three-dimensional information of the vehicle in the radar identification data;
and the auxiliary driving prompt device is used for graphically displaying the radar identification data in which the vehicle has been identified, for use in auxiliary driving.
In one embodiment, the off-vehicle radar is disposed on other vehicles, roadsides and/or drones; the types of the off-vehicle radar include microwave radar, millimeter wave radar and/or laser radar.
In an embodiment, the radar identification data only includes spatial three-dimensional information and/or two-dimensional boundary information of all moving objects within the preset spatial range, the spatial three-dimensional information includes a size, a position and/or a direction of the moving object, and the two-dimensional boundary information is projection boundary information of the moving object on a preset plane.
In an embodiment, the preset multi-perception fusion algorithm includes a dual-stage metric matching fusion algorithm, and the process of applying the dual-stage metric matching fusion algorithm includes:
extracting an image target feature set A of the imaged moving object in the surrounding image data;
extracting a radar target feature set B of a moving object in the radar identification data;
screening a common moving target pair set S of an imaging moving object in the surrounding image data and a moving object in the radar identification data through the similarity of each element in the image target feature set A and the radar target feature set B;
extracting, from the common moving target pair set S, the set of eight image vertices v_m_d of an element a_v_m in the image target feature set A that satisfies a preset vertex extraction condition in the surrounding image data;
extracting, from the common moving target pair set S, the set of eight radar vertices r_theta_d of the element b_r_theta in the radar target feature set B corresponding to the element a_v_m;
calculating, by a point cloud registration method, a rotation-translation rigid transformation matrix for converting between the eight image vertices v_m_d of the element a_v_m and the eight radar vertices r_theta_d of the element b_r_theta;
and acquiring a three-dimensional coordinate corresponding relation between the imaged moving object in the surrounding image data and the corresponding moving object in the radar identification data according to the rotation translation rigid transformation matrix.
In one embodiment, the preset vertex extraction condition includes:
the moving object corresponding to the element a_v_m in the image target feature set a is in a preset central area of the surrounding image data, and the distance between the moving object and the vehicle is not greater than a preset value.
According to a second aspect, in one embodiment, there is provided a radar data processing method for an in-vehicle driving support system, including:
acquiring radar identification data; the radar identification data is spatial three-dimensional information and/or two-dimensional boundary information, acquired by an off-vehicle radar, of all moving objects (including the vehicle) within a preset spatial range; the off-vehicle radar is a radar that is not mounted on the vehicle;
when the vehicle is provided with a vehicle-mounted radar, acquiring vehicle-mounted radar data of the vehicle, and converting the vehicle-mounted radar data into vehicle-mounted identification data with the same radar identification data format;
performing accuracy verification on the vehicle-mounted radar according to the radar identification data and the vehicle-mounted identification data;
and when the accuracy of the vehicle-mounted radar does not meet the preset minimum accuracy requirement of the auxiliary driving, the radar identification data is used for the auxiliary driving of the vehicle.
In one embodiment, the radar data processing method further includes:
and when the vehicle-mounted radar precision meets the minimum precision requirement preset by the auxiliary driving, broadcasting the vehicle-mounted identification data outwards in a wireless communication mode so as to be used for the auxiliary driving of other vehicles.
In one embodiment, the radar data processing method further includes:
when the vehicle does not have a vehicle-mounted radar, acquiring surrounding image data of the vehicle visual angle;
a preset multi-perception fusion algorithm is applied, a corresponding relation between a moving object imaged in the surrounding image data and space three-dimensional information of the moving object in the radar identification data is established according to the surrounding image data and the radar identification data, and the corresponding space three-dimensional information of the vehicle in the radar identification data is identified;
the radar identification data identifying the vehicle is graphically displayed for assisted driving of the vehicle.
In an embodiment, the radar identification data only includes spatial three-dimensional information and/or two-dimensional boundary information of all moving objects including the vehicle in the preset spatial range, the spatial three-dimensional information includes a size, a position and/or a direction of the moving object, and the two-dimensional boundary information is projection boundary information of the moving object on a preset plane.
According to the radar data processing system of the embodiment, radar data from outside the own vehicle is applied to the own vehicle's auxiliary driving system, so that a vehicle that previously realized auxiliary driving based only on image data can also obtain radar data; the sensing accuracy and sensing range of the vehicle-mounted auxiliary driving system are therefore higher.
Drawings
FIG. 1 is a block diagram of a radar data processing system in one embodiment;
FIG. 2 is a schematic algorithm flow diagram of a two-stage metric matching fusion algorithm in one embodiment;
FIG. 3 is a flow chart of a radar data processing method in an embodiment;
FIG. 4 is an illustration of an example of surrounding image data in one embodiment;
FIG. 5 is a two-dimensional boundary information map of a vehicle driving surface in one embodiment;
FIG. 6 is a schematic diagram of a radar data display in one embodiment;
fig. 7 is a simplified schematic diagram of radar data in one embodiment.
Detailed Description
The invention will be described in further detail below with reference to the drawings by means of specific embodiments, wherein like elements in different embodiments share like reference numbers. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials or methods, in different situations. In some instances, certain operations related to the present application are not shown or described in the specification, to avoid obscuring the core of the present application; a detailed description of such operations is not necessary, since a person skilled in the art can understand them from the description herein and general knowledge in the art.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
The numbering of the components itself, e.g. "first", "second", etc., is used herein merely to distinguish between the described objects and does not have any sequential or technical meaning. The terms "coupled" and "connected," as used herein, are intended to encompass both direct and indirect coupling (coupling), unless otherwise indicated.
While current image-based three-dimensional object detection methods have achieved many results, the following problems remain:
1) Perception of near vehicle targets is relatively accurate, but accuracy for distant targets is lower.
2) These methods assume that the field of view of the vehicle sensor is parallel to the ground, and cannot be directly applied to the sensing equipment of roadside infrastructure, whose field of view differs greatly from the vehicle's.
However, roadside infrastructure sensors are typically mounted on light poles several meters above the ground, enjoy a good view and are not easily occluded, and therefore have strong complementary potential with respect to vehicle-side sensing.
To avoid increasing the cost of vehicle-side sensors as much as possible while optimizing the vehicle-side sensing result, in the embodiments of the application, radar identification data recognized by an off-vehicle radar is first acquired over wireless communication, and surrounding image data from the vehicle's viewing angle is acquired by a vehicle-mounted image acquisition device. A multi-perception fusion algorithm then establishes, from the surrounding image data and the radar identification data, a correspondence between the moving objects imaged in the surrounding image data and the spatial three-dimensional information of those objects in the radar identification data, so that the spatial three-dimensional information corresponding to the own vehicle is identified in the radar identification data. Finally, the radar identification data with the vehicle identified is displayed graphically for auxiliary driving of the vehicle. Because the radar identification data of the off-vehicle radar is combined with vehicle re-identification on the vehicle-mounted image data, the sensing accuracy and sensing range of the vehicle-mounted auxiliary driving system can be improved, and spatial positioning is achieved without the assistance of GPS or a high-precision map.
Embodiment one:
referring to fig. 1, a block diagram of a radar data processing system for a vehicle-mounted driving assistance system according to an embodiment includes a vehicle-mounted wireless communication device 10, a vehicle-mounted image acquisition device 20, a multi-perception fusion device 30 and a driving assistance prompting device 40. The vehicle-mounted wireless communication device 10 is used for acquiring radar identification data, wherein the radar identification data is space three-dimensional information and two-dimensional boundary information of all moving objects of the vehicle in a preset space range, which are acquired by an off-board radar, and the off-board radar is a radar which is not loaded on the vehicle. In one embodiment, the off-board radar is located on other vehicles, drive tests and/or drones. In one embodiment, the types of off-board radar include microwave radar, millimeter wave radar, and/or lidar. The in-vehicle image acquisition device 20 is configured to acquire surrounding image data of the vehicle viewing angle. The multi-perception fusion device 30 is configured to apply a preset multi-perception fusion algorithm, and establish a corresponding relationship between a moving object imaged in the surrounding image data and spatial three-dimensional information of the moving object in the radar identification data according to the surrounding image data and the radar identification data, so as to identify the spatial three-dimensional information corresponding to the vehicle in the radar identification data. The driving assistance presenting device 40 is for graphically displaying radar identification data identifying the vehicle for driving assistance. In an embodiment, the radar identification data only includes spatial three-dimensional information and two-dimensional boundary information of all moving objects including the vehicle in a preset spatial range, the spatial three-dimensional information includes a size, a position and a direction of the moving objects, and the two-dimensional boundary information is projection boundary information of the moving objects on a preset plane.
In one embodiment, the moving object imaged in the surrounding image data is a moving object referenced to the host vehicle, the moving object including surrounding vehicles, road markers (e.g., a guideboard or street light), and/or a predetermined identifiable marker.
In an embodiment, the preset multi-sensing fusion algorithm includes a dual-stage metric matching fusion algorithm, please refer to fig. 2, which is a schematic algorithm flow chart of the multi-sensing fusion algorithm in an embodiment, and the process of applying the dual-stage metric matching fusion algorithm includes:
step 101, acquiring an image target feature set.
An image target feature set A = {a_v_1, a_v_2, ..., a_v_n} of the moving objects imaged in the surrounding image data is extracted.
Step 102, acquiring a radar target feature set.
A radar target feature set B = {b_r_α, b_r_β, ..., b_r_λ} of the moving objects in the radar identification data is extracted.
Step 103, obtaining a common moving object pair set.
A common moving target pair set S = {(v_1, r_α), (v_2, r_β), ...} of the moving objects imaged in the surrounding image data and the moving objects in the radar identification data is screened out through the similarity of each element in the image target feature set A and the radar target feature set B.
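As a sketch of steps 101 to 103 under stated assumptions (binary re-identification features and a greedy one-to-one screening with an assumed similarity threshold), the screening of the common moving target pair set S might look as follows:

```python
import numpy as np

def hamming_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [0, 1] between two equal-length binary feature vectors."""
    return 1.0 - np.count_nonzero(a != b) / a.size

def screen_common_targets(A: dict, B: dict, threshold: float = 0.8):
    """Greedy screening of the common moving-target pair set S.

    A maps image target ids to binary features; B does the same for radar
    targets. The 0.8 threshold is an assumed tuning value, not from the patent.
    """
    S, used = [], set()
    for v_id, a_feat in A.items():
        best_id, best_sim = None, threshold
        for r_id, b_feat in B.items():
            if r_id in used:
                continue
            s = hamming_similarity(a_feat, b_feat)
            if s > best_sim:
                best_id, best_sim = r_id, s
        if best_id is not None:
            S.append((v_id, best_id))
            used.add(best_id)
    return S
```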
Step 104, acquiring eight vertex sets of the image.
The set of eight image vertices v_m_d = {d1, d2, d3, d4, d5, d6, d7, d8} of an element a_v_m in the image target feature set A that satisfies a preset vertex extraction condition in the surrounding image data is extracted from the common moving target pair set S. The preset vertex extraction condition includes:
the moving object corresponding to the element a_v_m in the image target feature set a is in a preset central area of the surrounding image data, and the distance between the moving object and the vehicle is not greater than a preset value.
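A minimal Python check of this preset vertex extraction condition might look as follows; the central-area ratio and the distance limit are assumed stand-ins for the patent's unspecified "preset central area" and "preset value".

```python
def satisfies_vertex_condition(bbox_2d, distance_m, image_w, image_h,
                               centre_ratio=0.5, max_distance_m=30.0):
    """True if the image target lies in the central area and close enough.

    bbox_2d: (x_min, y_min, x_max, y_max) of the target in the image.
    """
    cx = (bbox_2d[0] + bbox_2d[2]) / 2.0
    cy = (bbox_2d[1] + bbox_2d[3]) / 2.0
    in_centre = (abs(cx - image_w / 2.0) <= image_w * centre_ratio / 2.0 and
                 abs(cy - image_h / 2.0) <= image_h * centre_ratio / 2.0)
    return in_centre and distance_m <= max_distance_m
```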
Step 105, a radar eight set of vertices is acquired.
The set of eight radar vertices r_theta_d = {dα, dβ, dγ, dδ, dε, dζ, dη, dθ} of the element b_r_theta in the radar target feature set B corresponding to the element a_v_m is extracted from the common moving target pair set S.
And 106, acquiring a rotation translation rigid transformation matrix.
A rotation-translation rigid transformation matrix for converting between the eight image vertices v_m_d of the element a_v_m and the eight radar vertices r_theta_d of the element b_r_theta is calculated using a point cloud registration method.
And 107, acquiring a three-dimensional coordinate corresponding relation.
The three-dimensional coordinate correspondence between the moving objects imaged in the surrounding image data and the corresponding moving objects in the radar identification data is then obtained from the rotation-translation rigid transformation matrix.
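Steps 106 and 107 can be sketched with the standard SVD (Kabsch) solution for a rigid transform between two ordered eight-point sets. The patent names only "a point cloud registration method" (the detailed embodiment later mentions iterative closest point), so this is an illustrative choice rather than the claimed implementation:

```python
import numpy as np

def rigid_transform(v_pts: np.ndarray, r_pts: np.ndarray):
    """Least-squares rotation R and translation t with r_pts ≈ R @ v_pts + t.

    v_pts, r_pts: (8, 3) arrays of corresponding bounding-box vertices
    (the sets v_m_d and r_theta_d above), assumed to be in matching order.
    """
    cv, cr = v_pts.mean(axis=0), r_pts.mean(axis=0)
    H = (v_pts - cv).T @ (r_pts - cr)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cr - R @ cv
    return R, t

# Step 107 then maps any point p_radar in the radar frame into the vehicle
# frame via the inverse transform: p_vehicle = R.T @ (p_radar - t).
```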
In one embodiment, the radar data processing system further includes a vehicle-mounted radar verification device 50. When the vehicle is equipped with a vehicle-mounted radar, the vehicle-mounted radar verification device 50 acquires the vehicle-mounted radar data and converts it into vehicle-mounted identification data in the same format as the radar identification data. Accuracy verification is then performed on the vehicle-mounted radar according to the radar identification data and the vehicle-mounted identification data, and when the accuracy of the vehicle-mounted radar does not meet a preset value, the radar identification data is used for auxiliary driving of the vehicle.
A vehicle-mounted radar is a mobile three-dimensional radar scanning system. Its working principle is to continuously transmit detection signals (such as laser beams) to surrounding objects and receive the returned signals (object echoes), from which information about the measured physical state, such as object distance, azimuth, altitude, attitude and shape, is calculated, achieving dynamic 3D scanning. In the prior art, accuracy testing of a vehicle-mounted radar in a running vehicle has to be carried out in a professional whole-vehicle in-the-loop test system; the actual driving environment is complex, and the stability and accuracy of the vehicle-mounted radar cannot be guaranteed in real time. In an embodiment of the application, accuracy verification is performed on the vehicle-mounted radar against the radar identification data acquired by the off-vehicle radar, so that the reliability and stability of the vehicle-mounted radar are monitored; when the accuracy of the vehicle-mounted radar is insufficient, the radar identification data provided by the off-vehicle radar serves as the radar data of the vehicle-mounted auxiliary driving system, which improves the accuracy and reliability of the system.
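A rough sketch of such an accuracy verification, assuming the two data sources have already been associated target-by-target in a common coordinate frame, and with an assumed error bound and pass ratio in place of the patent's unspecified preset value:

```python
import numpy as np

def verify_onboard_radar(onboard_positions, offboard_positions,
                         max_error_m=0.5, min_pass_ratio=0.9):
    """Compare matched target positions from the on-board identification data
    and the off-vehicle radar identification data. Returns True if the
    on-board radar passes verification.
    """
    if not onboard_positions:
        return False
    errors = [np.linalg.norm(np.asarray(p) - np.asarray(q))
              for p, q in zip(onboard_positions, offboard_positions)]
    passed = sum(e <= max_error_m for e in errors)
    return passed / len(errors) >= min_pass_ratio
```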
The embodiment of the application discloses a radar data processing system for a vehicle-mounted auxiliary driving system, including a vehicle-mounted wireless communication device, a vehicle-mounted image acquisition device, a multi-perception fusion device and an auxiliary driving prompt device. The system first acquires the radar identification data of an off-vehicle radar and the surrounding image data from the vehicle's viewing angle; a preset multi-perception fusion algorithm then establishes, from the surrounding image data and the radar identification data, a correspondence between the moving objects imaged in the surrounding image data and the spatial three-dimensional information of those objects in the radar identification data, identifying the spatial three-dimensional information corresponding to the vehicle in the radar identification data; finally, the radar identification data with the vehicle identified is displayed graphically for auxiliary driving. Because an off-vehicle radar is applied to the driving assistance system of a radar-less vehicle, the sensing accuracy is higher and the sensing range is larger.
Referring to fig. 3, a flow chart of a radar data processing method according to an embodiment is shown; the method is applied to the radar data processing system described above and specifically includes:
step 201, radar data is acquired.
The radar identification data is space three-dimensional information and two-dimensional boundary information of all moving objects comprising the vehicle in a preset space range, which are acquired by an off-board radar, and the off-board radar is a radar which is not loaded on the vehicle.
Step 202, verifying the vehicle-mounted radar.
When the vehicle is equipped with a vehicle-mounted radar, the vehicle-mounted radar data of the vehicle is acquired and converted into vehicle-mounted identification data in the same format as the radar identification data; accuracy verification is then performed on the vehicle-mounted radar according to the radar identification data and the vehicle-mounted identification data.
Step 203, radar identification data is applied.
When the accuracy of the vehicle-mounted radar does not meet the preset value, the radar identification data acquired by the off-vehicle radar is used for auxiliary driving of the vehicle. When the accuracy of the vehicle-mounted radar meets the preset value, the vehicle-mounted identification data is broadcast outward by wireless communication for the auxiliary driving of other vehicles.
Step 204, applying the data to auxiliary driving.
When the vehicle has no vehicle-mounted radar, surrounding image data from the vehicle's viewing angle is first acquired. Then a preset multi-perception fusion algorithm establishes, from the surrounding image data and the radar identification data, a correspondence between the moving objects imaged in the surrounding image data and the spatial three-dimensional information of those objects in the radar identification data, identifying the spatial three-dimensional information corresponding to the vehicle in the radar identification data. Finally, the radar identification data with the vehicle identified is displayed graphically for auxiliary driving of the vehicle.
In an embodiment, the radar identification data only includes the spatial three-dimensional information and two-dimensional boundary information of all moving objects (including the vehicle) within the preset spatial range; the spatial three-dimensional information includes the size, position and direction of each moving object, and the two-dimensional boundary information is the projection boundary information of the moving object on a preset plane. Retaining only the three-dimensional information and the two-dimensional boundary information greatly reduces the data volume of the radar identification data.
The core problem in fusing vehicle-mounted image data with off-vehicle radar data is identifying the own vehicle's position information within the off-vehicle radar data. The fusion principle is discussed below through a simple embodiment, which specifically includes:
1) Acquiring surrounding image data at the vehicle-mounted end.
The vehicle-mounted image acquisition device is generally a camera. A monocular three-dimensional target sensing network extracts and detects the three-dimensional targets in the surrounding image data, retaining the three-dimensional information (size, position and/or direction) of each target vehicle together with the target's two-dimensional bounding-box information on the image; the latter can be defined as two-dimensional boundary information, i.e. the projection boundary information of the moving object on a preset plane (the vehicle view-angle plane or a preset vehicle driving plane). Fig. 4 and fig. 5 show an example of the surrounding image data and a two-dimensional boundary information map of the vehicle driving surface in one embodiment.
2) The radar data is converted into radar identification data.
Referring to fig. 6 and fig. 7, which show a radar data display schematic and a simplified radar data schematic in an embodiment, raw radar data contains a relatively large amount of information. To reduce the data transmission volume, only the three-dimensional information (size, position and/or direction) of each moving target and the target's two-dimensional bounding-box information on the sensing plane (corresponding to a preset vehicle driving surface) need to be retained.
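A minimal sketch of this data reduction, assuming an upstream radar tracker that yields size, position and heading per target (the dictionary interface and the axis-aligned projection are simplifying assumptions for the example):

```python
def simplify_radar_frame(detections):
    """Reduce one frame of radar detections to the retained fields only.

    `detections` is a list of dicts with 'size' (l, w, h), 'position'
    (x, y, z) and 'heading' from an upstream tracker (an assumed interface).
    The 2D box below is an axis-aligned projection that ignores heading,
    a deliberate simplification for the sketch.
    """
    records = []
    for det in detections:
        x, y, _ = det["position"]
        length, width, _ = det["size"]
        records.append({
            "size": det["size"],
            "position": det["position"],
            "heading": det["heading"],
            "bbox_2d": (x - length / 2.0, y - width / 2.0,   # projection on the
                        x + length / 2.0, y + width / 2.0),  # driving plane
        })
    return records
```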
3) Performing image verification according to the consistency of object spatial distance and size within the common perception range, so as to establish the correspondence.
In an embodiment, target spatial-size and distance features are extracted from the two types of data (one image data, the other radar data) respectively. Because the image data and the radar data share a common space and can both capture the same moving targets within that space, the extracted feature data are necessarily similar or identical. The correspondence can therefore be established from the feature similarity of each moving target, and the own vehicle's position can be identified in the radar identification data, realizing the fusion of the vehicle-mounted image data and the off-vehicle radar data.
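A small sketch of this consistency-based correspondence, pairing targets whose size and ego-distance features agree within an assumed relative tolerance (the inputs and the tolerance value are assumptions for the example):

```python
import numpy as np

def feature_consistency_match(img_targets, radar_targets, tol=0.2):
    """Pair targets whose (size, ego-distance) features agree within `tol`.

    Both inputs are lists of (size_vector, distance_m) tuples extracted
    from the image data and the radar identification data respectively.
    """
    pairs = []
    for i, (s_i, d_i) in enumerate(img_targets):
        for j, (s_j, d_j) in enumerate(radar_targets):
            size_ok = (np.linalg.norm(np.asarray(s_i) - np.asarray(s_j))
                       <= tol * np.linalg.norm(np.asarray(s_j)))
            dist_ok = abs(d_i - d_j) <= tol * max(d_j, 1.0)
            if size_ok and dist_ok:
                pairs.append((i, j))
    return pairs
```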
The following describes the application of the dual-stage metric matching fusion algorithm (Hamming-Registration-Hamming Fusion, HRHF) through a specific embodiment, which specifically includes:
In order to enhance the effect of vehicle-side perception, it is not enough simply to make a quick association with the common targets in the vehicle-road cooperative view; the result must also be optimized according to the characteristics of the vehicle-side perceived targets. Specifically, for the vehicle's monocular three-dimensional perception (the surrounding image data), three-dimensional targets near the center of the field of view are perceived accurately, while perception accuracy at a distance is lower. Based on these characteristics, the application discloses a dual-stage metric matching fusion algorithm that exploits both the high-accuracy sensing results generated by the multiple sensors outside the vehicle and the characteristics of vehicle-side perception.
First, vehicle re-identification features are extracted from the vehicle target images (surrounding image data) detected by the vehicle-side sensor, and their similarity to the vehicle re-identification features in the list of radar identification data generated at the road side is calculated using the Hamming distance.
Then, based on the re-identification similarity and the distance information in the three-dimensional information of the vehicle targets detected at the vehicle side, the three-dimensional bounding-box vertices of highly ranked vehicle targets close to the vehicle-side sensor are selected; a rotation-translation rigid transformation matrix between the vertices of the same vehicle's three-dimensional bounding box under the vehicle-side and road-side sensors is calculated using iterative closest point; and all the three-dimensional target information detected at the road side is converted into the vehicle-side coordinate system with the obtained transformation matrix.
Next, for each vehicle three-dimensional target detected at the vehicle side, similarity is calculated again using the Hamming distance on the re-identification features against the vehicles within 3 m of its position (road-side detections converted into vehicle-side three-dimensional bounding boxes), and the road-side vehicle three-dimensional bounding box with the highest similarity replaces the bounding box detected at the vehicle side.
Finally, the fused three-dimensional traffic environment is obtained, and the vehicle-road cooperative sensing result is displayed.
By calculating the similarity between targets on the vehicle and road sides with the Hamming distance, computing the rotation-translation rigid transformation matrix with a registration algorithm, then computing the similarity around each target detected at the vehicle side and fusing by bounding-box replacement, vehicle-road cooperative perception fusion is finally realized. The dual-stage metric matching fusion algorithm has a small computational load and high speed, and can effectively enhance the vehicle-side perception capability.
In one embodiment, the target set detected at the vehicle side is {v_1, v_2, ..., v_n} and the target set detected at the road side is {r_α, r_β, ..., r_λ}. The specific steps of the HRHF algorithm include:
1) First Hamming calculation: a vehicle re-identification feature set A = {a_v_1, a_v_2, ..., a_v_n} is extracted from the images of the vehicle targets detected by the vehicle-side sensor; the similarity to each element in the road-side vehicle re-identification feature set B = {b_r_α, b_r_β, ..., b_r_λ} is calculated using the Hamming distance; and the common vehicle target pair set S = {(v_1, r_α), (v_2, r_β), ...} in the vehicle-side and road-side views is screened out.
2) Corresponding vertex registration: from the set S, a vehicle target v_m near the center of the vehicle-mounted sensor's field of view and at a relatively close spatial distance to the sensor is selected; the rotation-translation rigid transformation matrix between the set of eight vertices v_m_d = {d1, d2, d3, d4, d5, d6, d7, d8} of its three-dimensional bounding box and the set of eight bounding-box vertices r_θ_d = {dα, dβ, dγ, dδ, dε, dζ, dη, dθ} of the common vehicle target r_θ in the road-side view is calculated using a point cloud registration method. With the resulting transformation matrix, the three-dimensional information of all the targets {r_α, r_β, ..., r_λ} detected at the road side is converted into the vehicle-side coordinate system as {vr_α, vr_β, ..., vr_λ}.
3) Second Hamming calculation: for the three-dimensional information of the targets {v_1, v_2, ..., v_n} detected at the vehicle side, the Hamming distance is used again to calculate the similarity of each element v_i to the re-identification features of the vehicles within the surrounding 3-meter range (the elements of {vr_α, vr_β, ..., vr_λ} whose three-dimensional distance from v_i is less than 3 m). The converted road-side vehicle three-dimensional bounding box with the highest similarity replaces the vehicle three-dimensional bounding box detected at the vehicle side (other three-dimensional targets are replaced analogously), forming the final three-dimensional target fusion result {v_r_α, v_r_β, ..., v_r_λ, v_δ, ..., v_η}.
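Putting the three HRHF steps together, a compact end-to-end sketch might read as follows. It assumes binary re-identification features, a single well-observed common pair for registration, and assumed threshold values; it is an illustration of the procedure above, not the patent's exact implementation.

```python
import numpy as np

def hrhf(vehicle_targets, road_targets, sim_thresh=0.8, radius_m=3.0):
    """Sketch of the three HRHF steps. Each target is a dict with a binary
    re-identification feature 'feat' (np.ndarray), the (8, 3) bounding-box
    vertices 'verts', and the 3D center 'pos' (np.ndarray of shape (3,)).
    Assumes at least one common target pair is found in stage 1.
    """
    sim = lambda a, b: 1.0 - np.count_nonzero(a != b) / a.size  # Hamming similarity

    # 1) First Hamming stage: screen the common target pair set S.
    S = [(v, r) for v in vehicle_targets for r in road_targets
         if sim(v["feat"], r["feat"]) > sim_thresh]

    # 2) Registration stage: Kabsch fit on the 8 box vertices of one
    #    well-observed common pair, giving the road-to-vehicle transform.
    v, r = S[0]
    cv, cr = v["verts"].mean(axis=0), r["verts"].mean(axis=0)
    U, _, Vt = np.linalg.svd((r["verts"] - cr).T @ (v["verts"] - cv))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cv - R @ cr
    for rt in road_targets:           # convert all road-side targets
        rt["pos_v"] = R @ rt["pos"] + t

    # 3) Second Hamming stage: within radius_m of each vehicle-side target,
    #    replace its detection with the most similar road-side detection.
    fused = []
    for vt in vehicle_targets:
        near = [rt for rt in road_targets
                if np.linalg.norm(rt["pos_v"] - vt["pos"]) < radius_m]
        best = max(near, key=lambda rt: sim(vt["feat"], rt["feat"]), default=None)
        fused.append(best if best is not None else vt)
    return fused
```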
The fused three-dimensional traffic environment can be obtained through the HRHF algorithm, and the vehicle-road cooperative sensing result is displayed. By calculating the similarity between targets on the vehicle and road sides with the Hamming distance, computing the rigid transformation matrix with a registration algorithm, then computing the similarity around each target detected at the vehicle side and fusing by bounding-box replacement, vehicle-road cooperative perception fusion is finally realized. The HRHF algorithm has a small computational load and high speed, and can effectively enhance the vehicle-side perception capability.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random access memory, a magnetic disk, an optical disc, a hard disk, and the like; the functions are realized when the program is executed by a computer. For example, the program may be stored in the memory of a device, and all or part of the functions described above are realized when the program in the memory is executed by a processor. The program may also be stored on a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash disk or a removable hard disk and be downloaded or copied into the memory of a local device, or be used to update the local device's system version; the functions of the above embodiments are likewise realized when the program in that memory is executed by a processor.
The foregoing description of the invention has been presented for purposes of illustration and description, and is not intended to be limiting. Several simple deductions, modifications or substitutions may also be made by a person skilled in the art to which the invention pertains, based on the idea of the invention.

Claims (7)

1. A radar data processing system for an in-vehicle driving assist system, comprising:
the vehicle-mounted wireless communication device is used for acquiring radar identification data; the radar identification data is space three-dimensional information and/or two-dimensional boundary information of all moving objects in a preset space range, which are acquired by an off-board radar; the off-board radar is a radar that is not loaded on the vehicle;
a vehicle-mounted image acquisition device for acquiring surrounding image data of the vehicle viewing angle;
the multi-perception fusion device is used for applying a preset multi-perception fusion algorithm, and establishing a corresponding relation between a moving object imaged in the surrounding image data and space three-dimensional information of the moving object in the radar identification data according to the surrounding image data and the radar identification data so as to identify the corresponding space three-dimensional information of the vehicle in the radar identification data;
a driving assistance prompting device for graphically displaying the radar identification data identifying the vehicle for driving assistance;
the preset multi-perception fusion algorithm comprises a double-stage metric matching fusion algorithm, and the process of applying the double-stage metric matching fusion algorithm comprises the following steps:
extracting an image target feature set A of the imaged moving object in the surrounding image data;
extracting a radar target feature set B of a moving object in the radar identification data;
screening a common moving target pair set S of an imaging moving object in the surrounding image data and a moving object in the radar identification data through the similarity of each element in the image target feature set A and the radar target feature set B;
extracting, from the common moving target pair set S, the set of eight image vertices v_m_d of an element a_v_m in the image target feature set A that satisfies a preset vertex extraction condition in the surrounding image data;
extracting, from the common moving target pair set S, the set of eight radar vertices r_theta_d of the element b_r_theta in the radar target feature set B corresponding to the element a_v_m;
calculating, by a point cloud registration method, a rotation-translation rigid transformation matrix for converting between the eight image vertices v_m_d of the element a_v_m and the eight radar vertices r_theta_d of the element b_r_theta;
acquiring a three-dimensional coordinate corresponding relation between the moving object imaged in the surrounding image data and the corresponding moving object in the radar identification data according to the rotation translation rigid transformation matrix;
the preset vertex extraction conditions include:
the moving object corresponding to the element a_v_m in the image target feature set a is in a preset central area of the surrounding image data, and the distance between the moving object and the vehicle is not greater than a preset value.
2. The radar data processing system of claim 1, wherein the off-board radar is disposed on other vehicles, roadsides, and/or unmanned aerial vehicles; the types of the radar outside the vehicle include microwave radar, millimeter wave radar and/or laser radar.
3. The radar data processing system according to claim 1, wherein the radar identification data only includes spatial three-dimensional information and/or two-dimensional boundary information of all moving objects within the preset spatial range, the spatial three-dimensional information includes a size, a position and/or a direction of the moving object, and the two-dimensional boundary information is projection boundary information of the moving object on a preset plane.
4. A radar data processing method for an in-vehicle driving assist system, comprising:
acquiring radar identification data; the radar identification data is space three-dimensional information and/or two-dimensional boundary information of all moving objects including vehicles in a preset space range, which are acquired by an off-board radar; the vehicle exterior radar is a radar which is not loaded on the vehicle;
when the vehicle is provided with a vehicle-mounted radar, acquiring vehicle-mounted radar data of the vehicle, and converting the vehicle-mounted radar data into vehicle-mounted identification data with the same radar identification data format;
performing accuracy verification on the vehicle-mounted radar according to the radar identification data and the vehicle-mounted identification data;
when the accuracy of the vehicle-mounted radar does not meet the minimum accuracy requirement preset by the auxiliary driving, the radar identification data is used for the auxiliary driving of the vehicle;
when the vehicle does not have a vehicle-mounted radar, acquiring surrounding image data of the vehicle visual angle;
a preset multi-perception fusion algorithm is applied, a corresponding relation between a moving object imaged in the surrounding image data and space three-dimensional information of the moving object in the radar identification data is established according to the surrounding image data and the radar identification data, and the corresponding space three-dimensional information of the vehicle in the radar identification data is identified;
graphically displaying said radar identification data identifying the vehicle for assisted driving of the vehicle;
the preset multi-perception fusion algorithm comprises a double-stage metric matching fusion algorithm, and the process of applying the double-stage metric matching fusion algorithm comprises the following steps:
extracting an image target feature set A of the imaged moving object in the surrounding image data;
extracting a radar target feature set B of a moving object in the radar identification data;
screening a common moving target pair set S of an imaging moving object in the surrounding image data and a moving object in the radar identification data through the similarity of each element in the image target feature set A and the radar target feature set B;
extracting, from the common moving target pair set S, the set of eight image vertices v_m_d of an element a_v_m in the image target feature set A that satisfies a preset vertex extraction condition in the surrounding image data;
extracting, from the common moving target pair set S, the set of eight radar vertices r_theta_d of the element b_r_theta in the radar target feature set B corresponding to the element a_v_m;
calculating, by a point cloud registration method, a rotation-translation rigid transformation matrix for converting between the eight image vertices v_m_d of the element a_v_m and the eight radar vertices r_theta_d of the element b_r_theta;
acquiring a three-dimensional coordinate corresponding relation between the moving object imaged in the surrounding image data and the corresponding moving object in the radar identification data according to the rotation translation rigid transformation matrix;
the preset vertex extraction conditions include:
the moving object corresponding to the element a_v_m in the image target feature set a is in a preset central area of the surrounding image data, and the distance between the moving object and the vehicle is not greater than a preset value.
5. The radar data processing method of claim 4, further comprising:
and when the vehicle-mounted radar precision meets the minimum precision requirement preset by the auxiliary driving, broadcasting the vehicle-mounted identification data outwards in a wireless communication mode so as to be used for the auxiliary driving of other vehicles.
6. The method according to claim 4, wherein the radar identification data includes only spatial three-dimensional information and/or two-dimensional boundary information of all moving objects including the vehicle within the predetermined spatial range, the spatial three-dimensional information includes a size, a position and/or a direction of the moving object, and the two-dimensional boundary information is projected boundary information of the moving object on a predetermined plane.
7. A computer readable storage medium, characterized in that the medium has stored thereon a program, which is executable by a processor to implement the method according to any of claims 4-6.
CN202311798523.0A 2023-12-26 2023-12-26 Radar data processing system and method for vehicle-mounted auxiliary driving system Active CN117452392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311798523.0A CN117452392B (en) 2023-12-26 2023-12-26 Radar data processing system and method for vehicle-mounted auxiliary driving system


Publications (2)

Publication Number Publication Date
CN117452392A 2024-01-26
CN117452392B 2024-03-08

Family

ID=89591350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311798523.0A Active CN117452392B (en) 2023-12-26 2023-12-26 Radar data processing system and method for vehicle-mounted auxiliary driving system

Country Status (1)

Country Link
CN (1) CN117452392B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012089114A (en) * 2010-09-24 2012-05-10 Toyota Motor Corp Obstacle recognition device
CN106651926A (en) * 2016-12-28 2017-05-10 华东师范大学 Regional registration-based depth point cloud three-dimensional reconstruction method
CN110675431A (en) * 2019-10-08 2020-01-10 中国人民解放军军事科学院国防科技创新研究院 Three-dimensional multi-target tracking method fusing image and laser point cloud
CN111429514A (en) * 2020-03-11 2020-07-17 浙江大学 Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
WO2022001618A1 (en) * 2020-07-01 2022-01-06 华为技术有限公司 Lane keep control method, apparatus, and system for vehicle
CN116685873A (en) * 2021-01-01 2023-09-01 同济大学 Vehicle-road cooperation-oriented perception information fusion representation and target detection method
CN115402347A (en) * 2021-05-27 2022-11-29 北京万集科技股份有限公司 Method for identifying a drivable region of a vehicle and driving assistance method
CN113537362A (en) * 2021-07-20 2021-10-22 中国第一汽车股份有限公司 Perception fusion method, device, equipment and medium based on vehicle-road cooperation
CN114972941A (en) * 2022-05-11 2022-08-30 燕山大学 Decision fusion method and device for three-dimensional detection of shielded vehicle and electronic equipment
CN116524311A (en) * 2023-03-22 2023-08-01 北京博宇通达科技有限公司 Road side perception data processing method and system, storage medium and electronic equipment thereof
CN116433737A (en) * 2023-04-26 2023-07-14 吉林大学 Method and device for registering laser radar point cloud and image and intelligent terminal
CN116778448A (en) * 2023-04-26 2023-09-19 北京定邦科技有限公司 Vehicle safe driving assistance method, device, system, equipment and storage medium
CN117111055A (en) * 2023-06-19 2023-11-24 山东高速集团有限公司 Vehicle state sensing method based on thunder fusion
CN116572995A (en) * 2023-07-11 2023-08-11 小米汽车科技有限公司 Automatic driving method and device of vehicle and vehicle
CN117111085A (en) * 2023-08-25 2023-11-24 河南科技大学 Automatic driving automobile road cloud fusion sensing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Real-time Detection of 3D Objects Based on Multi-Sensor Information Fusion; Xie Desheng et al.; Automotive Engineering; 2022-03-31; Vol. 44, No. 3; pp. 340-350 *

Also Published As

Publication number Publication date
CN117452392A (en) 2024-01-26

Similar Documents

Publication Publication Date Title
JP7297017B2 (en) Method and apparatus for calibrating external parameters of on-board sensors and related vehicles
CN109461211B (en) Semantic vector map construction method and device based on visual point cloud and electronic equipment
US10863166B2 (en) Method and apparatus for generating three-dimensional (3D) road model
US10240934B2 (en) Method and system for determining a position relative to a digital map
US11971274B2 (en) Method, apparatus, computer program, and computer-readable recording medium for producing high-definition map
EP4016457A1 (en) Positioning method and apparatus
US9201424B1 (en) Camera calibration using structure from motion techniques
JP2020500290A (en) Method and system for generating and using location reference data
CN110906954A (en) High-precision map test evaluation method and device based on automatic driving platform
EP3644013B1 (en) Method, apparatus, and system for location correction based on feature point correspondence
CN112740225A (en) Method and device for determining road surface elements
CN111353453B (en) Obstacle detection method and device for vehicle
CN111145248B (en) Pose information determining method and device and electronic equipment
US10949707B2 (en) Method, apparatus, and system for generating feature correspondence from camera geometry
CN112444258A (en) Method for judging drivable area, intelligent driving system and intelligent automobile
CN116997771A (en) Vehicle, positioning method, device, equipment and computer readable storage medium thereof
CN111323029B (en) Navigation method and vehicle-mounted terminal
CN112907659B (en) Mobile equipment positioning system, method and equipment
CN117452392B (en) Radar data processing system and method for vehicle-mounted auxiliary driving system
CN112150576B (en) High-precision vector map acquisition system and method
CN114127511A (en) Method and communication system for assisting at least partially automatic vehicle control
CN117470254B (en) Vehicle navigation system and method based on radar service
CN117452407B (en) Radar data service system and method for vehicle-mounted auxiliary driving system
CN117471461B (en) Road side radar service device and method for vehicle-mounted auxiliary driving system
WO2021056185A1 (en) Systems and methods for partially updating high-definition map based on sensor data matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant