CN113156440B - Defense method and system based on radar and image data fusion detection


Info

Publication number
CN113156440B
Authority
CN
China
Prior art keywords
radar
attack
data
autonomous driving
image data
Prior art date
Legal status
Active
Application number
CN202110457163.2A
Other languages
Chinese (zh)
Other versions
CN113156440A (en)
Inventor
傅晨波
姚虹蛟
冯婷婷
徐倩
宣琦
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology (ZJUT)
Priority to CN202110457163.2A
Publication of CN113156440A
Application granted
Publication of CN113156440B
Legal status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/006 Theoretical aspects
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for mapping or imaging
    • G01S13/91 Radar or analogous systems specially adapted for traffic control
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to group G01S13/00
    • G01S7/36 Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G01S7/41 Target characterisation using analysis of echo signal; target signature; target cross-section
    • G01S7/414 Discriminating targets with respect to background clutter
    • G01S7/417 Target characterisation involving the use of neural networks
    • G01S7/418 Theoretical aspects

Abstract

A defense method based on radar and image data fusion detection comprises the following steps: S1, during normal operation of autonomous driving, a blinding attack is launched against the camera; S2, the autonomous-driving offline platform acquires data and performs attack detection; S3, after the main system confirms that an attack exists, tracking redeployment is carried out on the offline platform; S4, the autonomous-driving online platform performs target detection with the new model, thereby achieving the defensive effect. The invention also comprises a defense system based on radar and image data fusion detection, consisting of an attack detection module, a tracking redeployment module, and a target detection module connected in sequence. The invention simulates the blinding attack with a light spot whose size, position, and brightness are adjustable; the simulation results show that blinding attacks pose a potential safety hazard to autonomous driving and that the proposed defense method can resist such attacks and improve the safety of autonomous driving.

Description

Defense method and system based on radar and image data fusion detection
Technical Field
The invention relates to the fields of target detection for autonomous driving and artificial-intelligence attack and defense, and in particular to a deep-learning-based method and system for defending radar and image data fusion detection.
Background
In recent years, in the field of target detection for autonomous driving, convolutional neural networks applied to camera images have been the most effective approach. However, even though the visual representation in a camera image is closely related to human visual perception, visibility degrades under severe weather such as heavy rain or heavy fog, and safe driving cannot be guaranteed by the camera alone. New methods have therefore emerged that fuse radar and camera sensor data in neural networks to improve target detection accuracy, since radar sensors are more robust than camera sensors to environmental conditions (e.g., changes in illumination, rain, fog). In addition, published papers have described blinding attacks against cameras. Combining these observations with the idea of data fusion, we propose a deep-learning-based defense method and system for radar and image data fusion detection that can resist blinding attacks.
The paper "A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection" proposes a radar and camera sensor fusion architecture for deep-learning-based target detection. The study shows that fusing radar and camera data inside the neural network improves the accuracy of the target detection network; however, it proposes no method for detecting or defending against attacks.
The patent with application number CN201711459314.8 discloses a vehicle-mounted obstacle detection method that combines radar and image data through deep learning. It presents a target detection algorithm based on intelligent-device sensor data fusion and deep learning, enriching the feature types perceivable by the detection model through fused radar point cloud data and camera data. The fused data are processed with a YOLO deep convolutional neural network model to detect target obstacles in road scenes. The method improves the accuracy of target detection, but it does not consider the influence of blinding attacks on target detection.
Disclosure of Invention
The invention provides a defense method and system for radar and image data fusion detection that can cope with the influence of blinding attacks on target detection and make autonomous driving safer.
In order to achieve the above object, the present invention provides the following solution:
The invention provides a defense method based on radar and image data fusion detection, comprising the following steps:
S1, during normal operation of autonomous driving, a blinding attack is launched against the camera;
S2, the autonomous-driving offline platform acquires data and performs attack detection;
S3, after the main system confirms that an attack exists, tracking redeployment is carried out on the autonomous-driving offline platform;
S4, the autonomous-driving online platform performs target detection with the new defense model, thereby achieving the defensive effect.
Further, the step S1 specifically comprises:
S1.1, during normal operation of autonomous driving, collecting relevant data with the vehicle-mounted sensors; the simulation experiment uses the nuScenes dataset, specifically the 5 millimeter-wave radar data streams and 6 camera image streams of the officially provided v1.0-mini dataset;
S1.2, attacking the camera with an LED lamp; the simulation experiment uses a Python program to add light spots to the 6 cameras' image data.
Further, S1.2 specifically comprises:
S1.2.1, setting, as desired, the ratio of the spot radius to the shortest side of the image, together with the size and brightness of the spot center;
S1.2.2, traversing the camera image data, attacking each image by adding a light spot at a random position generated by a random function, and producing the attacked camera image dataset.
Further, the step S2 specifically comprises:
S2.1, performing attack detection on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, sending an error signal to the main system; in the simulation experiment, the newly acquired image data are the attacked camera image dataset;
S2.2, if no classification error is found, sending a normal-driving signal to the main system, with no subsequent steps needed.
Further, the step S3 specifically comprises:
S3.1, the offline platform increases the proportion of the radar data in the training of the data-fusion neural network model;
S3.2, retraining the model with the attacked image data and the radar data, thereby improving the target detection and classification accuracy.
Further, S3.1 specifically comprises:
S3.1.1, when not under attack, preprocessing the radar data and mapping the multi-cycle radar data from two-dimensional point clouds onto the vertical plane of the image, wherein the radar data comprise azimuth angle and distance and the radar height is uniformly set to 3 meters; the radar echo features are stored as pixel values in the enhanced image; at image pixel locations without a radar return, the projected radar channel value is set to 0;
S3.1.2, when an attack is detected, amplifying the radar echo features 5-fold and storing them as pixel values in the enhanced image; at image pixel locations without a radar return, the projected radar channel value is set to 0.
Further, the step S3.2 specifically comprises:
S3.2.1, the input of the neural network is a four-channel tensor consisting of the three camera image channels (red, green, and blue) plus the radar channel, and the output is the two-dimensional regression coordinates of each target detection box together with its classification score;
S3.2.2, the deep neural network adopts the convolution blocks of VGG16 as the main processing units and downscales the input by the corresponding ratio through max-pooling;
S3.2.3, fusion is performed at each layer, and a Feature Pyramid Network (FPN) is attached to the deep layers to generate the classification and regression results; through continued training and loss optimization, the model finally reaches a stable state;
S3.2.4, the trained new defense model is saved.
The system implementing the defense method based on radar and image data fusion detection comprises the following modules: an attack detection module, a tracking redeployment module, and a target detection module.
The attack detection module performs attack detection on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, the offline platform sends an error notification to the main system; if no classification error is found, driving continues normally and no subsequent steps are needed.
The tracking redeployment module carries out tracking redeployment on the autonomous-driving offline platform and saves the new defense model.
The target detection module loads the model saved by the tracking redeployment module onto the autonomous-driving online platform and resumes autonomous driving.
The attack detection module, the tracking redeployment module, and the target detection module are connected in sequence.
The technical concept of the invention: the invention provides a defense method for deep-learning-based radar and image data fusion detection that considers both the online and offline platform designs of autonomous driving. First, the method defends autonomous driving as a whole. Second, when the blinding attack is simulated, the position of the light spot added to each image is random and the spot size is adjustable. Furthermore, because the effect of an LED lamp attack on a camera is influenced by the weather, the spot brightness is adjustable and fades from the center outward. Finally, the simulation experiments conducted confirm the feasibility of the method.
The invention has the beneficial effects that:
1. In the simulation experiments, the millimeter-wave radar data and camera image data need not be collected manually, and no prior knowledge about the autonomous-driving dataset is required.
2. In the simulation experiments, the blinding attack is realized by adding light spots to the image data at random positions, with adjustable spot size and brightness, which better matches real conditions.
3. The simulation experiments verify that the added light spots genuinely affect the camera's target detection results, showing that in the real world an autonomous vehicle may fail to operate normally after its camera is attacked with an LED lamp.
4. The proposed defense method effectively resists blinding attacks and improves the safety of autonomous driving.
Drawings
FIG. 1 is a flow chart of the method and system of the invention;
FIG. 2 is a block diagram of the method and system of the invention;
FIG. 3 is a framework diagram of the deep-learning-based radar and image data fusion neural network;
FIG. 4 illustrates the simulated blinding attack.
Detailed Description
The following describes embodiments of the present invention in further detail with reference to the drawings.
The autonomous-driving online platform is the in-vehicle environment: it analyzes data in real time and realizes the autonomous driving of the vehicle. The autonomous-driving offline platform handles vehicle operation, data management and analysis, model training, and offline simulation. During normal operation, the online platform performs target detection with the model provided by the offline platform, thereby ensuring safe driving. The following embodiment takes a blinding attack during autonomous driving as an example.
Referring to FIGS. 1 to 4, the defense method and system based on radar and image data fusion detection comprise the following steps:
S1, during normal operation of autonomous driving, a blinding attack is launched against the camera.
S1.1, during normal operation of autonomous driving, relevant data are collected with the vehicle-mounted sensors. The simulation experiment uses the nuScenes dataset, the first large-scale dataset to provide the full sensor suite of an autonomous vehicle: 6 cameras, 1 lidar, 5 millimeter-wave radars, plus GPS and IMU. The dataset consists of 1000 scenes, each 20 seconds long, covering a wide variety of situations. Each scene holds 40 keyframes (2 keyframes per second); the other frames are sweeps. The experiments use the 5 millimeter-wave radar data streams and 6 camera image streams in the officially provided v1.0-mini dataset.
S1.2, the camera is attacked with an LED lamp; the simulation experiment uses a Python program to add light spots to the 6 cameras' image data.
S1.2.1, the ratio of the spot radius to the shortest side of the image, the size of the spot center, and the brightness are set as desired.
S1.2.2, the camera image data are traversed, each image is attacked by adding a light spot at a random position generated by a random function, and the attacked camera image dataset is produced.
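A possible Python sketch of this spot-adding step follows; the linear falloff profile, the default parameter values, and the file name are illustrative assumptions rather than details taken from the patent.

import random
import numpy as np
import cv2

def add_light_spot(image, radius_ratio=0.3, center_brightness=255, core_ratio=0.2):
    # Add one radially decaying light spot at a random position.
    h, w = image.shape[:2]
    radius = int(min(h, w) * radius_ratio)   # spot radius vs. shortest image side
    cx, cy = random.randint(0, w - 1), random.randint(0, h - 1)

    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    core = radius * core_ratio               # fully saturated center region
    # Full brightness inside the core, linear falloff out to the spot edge.
    weight = np.clip((radius - dist) / max(radius - core, 1.0), 0.0, 1.0)
    weight[dist <= core] = 1.0

    spot = weight[..., None] * center_brightness
    return np.clip(image.astype(np.float32) + spot, 0, 255).astype(np.uint8)

# Hypothetical usage on one camera frame:
attacked = add_light_spot(cv2.imread('cam_front.jpg'))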
S2, the autonomous-driving offline platform acquires data and performs attack detection.
S2.1, attack detection is performed on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, an error signal is sent to the main system. In the simulation experiment, the newly acquired image data are the attacked camera image dataset.
S2.2, if no classification error is found, a normal-driving signal is sent to the main system and no subsequent steps are needed.
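As an illustrative sketch of this detection step, the snippet below trains a binary SVM on clean versus attacked frames; scikit-learn and the downsampled-grayscale feature are assumptions, since the feature representation is not specified here.

import numpy as np
import cv2
from sklearn.svm import SVC

def image_features(img, size=(32, 32)):
    # Crude feature vector: downsampled grayscale intensities in [0, 1].
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).flatten() / 255.0

def train_detector(clean_imgs, attacked_imgs):
    X = np.stack([image_features(i) for i in clean_imgs + attacked_imgs])
    y = np.array([0] * len(clean_imgs) + [1] * len(attacked_imgs))
    return SVC(kernel='rbf').fit(X, y)

def check_frame(detector, frame):
    # True means "classified as attacked", which would trigger the error
    # signal to the main system; False corresponds to the normal-driving signal.
    return bool(detector.predict(image_features(frame)[None, :])[0])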
S3, after the main system confirms that an attack exists, tracking redeployment is carried out on the autonomous-driving offline platform.
S3.1, the offline platform increases the proportion of the radar data in the training of the data-fusion neural network model.
S3.1.1, when not under attack, the radar data are preprocessed and the multi-cycle radar data are mapped from two-dimensional point clouds onto the vertical plane of the image; the radar data comprise azimuth angle and distance, and the radar height is uniformly set to 3 meters. The radar echo features are stored as pixel values in the enhanced image; at image pixel locations without a radar return, the projected radar channel value is set to 0.
S3.1.2, when an attack is detected, the radar echo features are amplified 5-fold before being stored as pixel values in the enhanced image; at image pixel locations without a radar return, the projected radar channel value is again set to 0.
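The radar-channel construction can be sketched as follows; the helper project_to_image (the calibrated radar-to-camera projection) and the feature choice are assumptions introduced for illustration.

import numpy as np

RADAR_HEIGHT_M = 3.0  # vertical extension assumed for every radar return

def build_radar_channel(points_2d, echo_features, image_shape,
                        project_to_image, attacked=False):
    h, w = image_shape[:2]
    channel = np.zeros((h, w), dtype=np.float32)  # stays 0 where no radar return
    scale = 5.0 if attacked else 1.0              # 5-fold amplification under attack
    for point, feature in zip(points_2d, echo_features):
        u, v = project_to_image(point, RADAR_HEIGHT_M)  # hypothetical projection
        if 0 <= u < w and 0 <= v < h:
            channel[v, u] = feature * scale       # echo feature as pixel value
    return channel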
S3.2, the model is retrained with the attacked image data and the radar data, thereby improving the target detection and classification accuracy.
S3.2.1, the input of the neural network is a four-channel tensor consisting of the three camera image channels (red, green, and blue) plus the radar channel; the output is the two-dimensional regression coordinates of each target detection box together with its classification score.
S3.2.2, the deep neural network adopts the convolution blocks of VGG16 as the main processing units and downscales the input by the corresponding ratio through max-pooling.
S3.2.3, fusion is performed at each layer, and a Feature Pyramid Network (FPN) is attached to the deep layers to generate the classification and regression results; through continued training and loss optimization, the model finally reaches a stable state.
S3.2.4, the trained new defense model is saved.
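A rough sketch of such a fusion network is given below: a 4-channel input (RGB plus radar) passed through VGG16-style convolution blocks with max-pooling and an FPN-style top-down pathway that emits classification and box-regression outputs per level. PyTorch, the reduced number of blocks, and all layer sizes are illustrative assumptions; these details are not fixed by the patent.

import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs=2):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))  # halve the resolution, as in VGG16
    return nn.Sequential(*layers)

class FusionDetector(nn.Module):
    def __init__(self, num_classes=10, num_anchors=9):
        super().__init__()
        self.blocks = nn.ModuleList([
            vgg_block(4, 64), vgg_block(64, 128),
            vgg_block(128, 256, 3), vgg_block(256, 512, 3),
        ])
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, 256, 1) for c in (64, 128, 256, 512)])
        self.cls_head = nn.Conv2d(256, num_anchors * num_classes, 3, padding=1)
        self.reg_head = nn.Conv2d(256, num_anchors * 4, 3, padding=1)

    def forward(self, x):  # x: (B, 4, H, W) = RGB + radar channel
        feats, out = [], x
        for block in self.blocks:
            out = block(out)
            feats.append(out)
        # FPN top-down pathway: upsample deeper maps and add lateral features.
        pyramid = [self.laterals[-1](feats[-1])]
        for lat, f in zip(self.laterals[-2::-1], feats[-2::-1]):
            up = nn.functional.interpolate(pyramid[-1], size=f.shape[-2:])
            pyramid.append(lat(f) + up)
        # One (classification, regression) output pair per pyramid level.
        return [(self.cls_head(p), self.reg_head(p)) for p in pyramid]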
S4, the autonomous-driving online platform performs target detection with the new defense model, thereby achieving the defensive effect.
The system implementing the defense method based on radar and image data fusion detection comprises an attack detection module, a tracking redeployment module, and a target detection module.
The attack detection module performs attack detection on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, the offline platform sends an error notification to the main system; if no classification error is found, driving continues normally and no subsequent steps are needed. Specifically:
S2.1, attack detection is performed on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, an error signal is sent to the main system. In the simulation experiment, the newly acquired image data are the attacked camera image dataset.
S2.2, if no classification error is found, a normal-driving signal is sent to the main system and no subsequent steps are needed.
The tracking redeployment module carries out tracking redeployment on the autonomous-driving offline platform and saves the new defense model. Specifically:
S3.1, the offline platform increases the proportion of the radar data in the training of the data-fusion neural network model.
S3.1.1, when not under attack, the radar data are preprocessed and the multi-cycle radar data are mapped from two-dimensional point clouds onto the vertical plane of the image; the radar data comprise azimuth angle and distance, and the radar height is uniformly set to 3 meters. The radar echo features are stored as pixel values in the enhanced image; at image pixel locations without a radar return, the projected radar channel value is set to 0.
S3.1.2, once attacked, the radar echo features are amplified 5-fold before being stored as pixel values in the enhanced image; at image pixel locations without a radar return, the projected radar channel value is again set to 0.
S3.2, the model is retrained with the attacked image data and the radar data, thereby improving the target detection and classification accuracy.
S3.2.1, the input of the neural network is a four-channel tensor consisting of the three camera image channels (red, green, and blue) plus the radar channel; the output is the two-dimensional regression coordinates of each target detection box together with its classification score.
S3.2.2, the deep neural network adopts the convolution blocks of VGG16 as the main processing units and downscales the input by the corresponding ratio through max-pooling.
S3.2.3, fusion is performed at each layer, and a Feature Pyramid Network (FPN) is attached to the deep layers to generate the classification and regression results; through continued training and loss optimization, the model finally reaches a stable state.
S3.2.4, the trained new defense model is saved.
The target detection module loads the model saved by the tracking redeployment module onto the autonomous-driving online platform and resumes autonomous driving.
The attack detection module, the tracking redeployment module, and the target detection module are connected in sequence.
The invention simulates the blinding attack with a light spot whose size, position, and brightness are adjustable. The simulation results show that blinding attacks pose a potential safety hazard to autonomous driving, and that the proposed defense method can resist such attacks and improve the safety of autonomous driving.
The embodiments described in this specification are merely examples of implementations of the inventive concept. The scope of protection of the present invention should not be construed as limited to the specific forms set forth in the embodiments; it also covers equivalent technical means that those skilled in the art can conceive based on the inventive concept.

Claims (2)

1. A defense method based on radar and image data fusion detection, characterized by comprising the following steps:
S1, during normal operation of autonomous driving, a blinding attack is launched against the camera;
S2, the autonomous-driving offline platform acquires data and performs attack detection;
S3, after the main system confirms that an attack exists, tracking redeployment is carried out on the autonomous-driving offline platform;
S4, the autonomous-driving online platform performs target detection with the new defense model, thereby achieving the defensive effect;
the step S1 specifically comprises:
S1.1, during normal operation of autonomous driving, collecting relevant data with the vehicle-mounted sensors; the simulation experiment uses the nuScenes dataset, specifically the 5 millimeter-wave radar data streams and 6 camera image streams of its v1.0-mini dataset;
S1.2, attacking the camera with an LED lamp; the simulation experiment uses a Python program to add light spots to the 6 cameras' image data;
S1.2 specifically comprises:
S1.2.1, setting, as desired, the ratio of the spot radius to the shortest side of the image, together with the size and brightness of the spot center;
S1.2.2, traversing the camera image data, attacking each image by adding a light spot at a random position generated by a random function, and producing the attacked camera image dataset;
the step S2 specifically comprises:
S2.1, performing attack detection on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, sending an error signal to the main system; in the simulation experiment, the newly acquired image data are the attacked camera image dataset;
S2.2, if no classification error is found, sending a normal-driving signal to the main system, with no subsequent steps needed;
the step S3 specifically comprises:
S3.1, the offline platform increases the proportion of the radar data in the training of the data-fusion neural network model;
S3.2, retraining the model with the attacked image data and the radar data, thereby improving the target detection and classification accuracy;
the step S3.1 specifically comprises:
S3.1.1, when not under attack, preprocessing the radar data and mapping the multi-cycle radar data from two-dimensional point clouds onto the vertical plane of the image, wherein the radar data comprise azimuth angle and distance and the radar height is uniformly set to 3 meters; storing the radar echo features as pixel values in the enhanced image; at image pixel locations without a radar return, setting the projected radar channel value to 0;
S3.1.2, when an attack is detected, amplifying the radar echo features 5-fold and storing them as pixel values in the enhanced image; at image pixel locations without a radar return, setting the projected radar channel value to 0;
the step S3.2 specifically comprises:
S3.2.1, the input of the neural network is a four-channel tensor consisting of the three camera image channels (red, green, and blue) plus the radar channel, and the output is the two-dimensional regression coordinates of each target detection box together with its classification score;
S3.2.2, the deep neural network adopts the convolution blocks of VGG16 as the main processing units and downscales the input by the corresponding ratio through max-pooling;
S3.2.3, fusion is performed at each layer, and a Feature Pyramid Network (FPN) is attached to the deep layers to generate the classification and regression results; through continued training and loss optimization, the model finally reaches a stable state;
S3.2.4, the trained new defense model is saved.
2. A system for implementing the defense method based on radar and image data fusion detection according to claim 1, characterized by comprising the following modules: an attack detection module, a tracking redeployment module, and a target detection module;
the attack detection module performs attack detection on the newly acquired image data against the original data with an SVM classification algorithm; if a classification error is found, the offline platform sends an error notification to the main system; if no classification error is found, driving continues normally and no subsequent steps are needed;
the tracking redeployment module carries out tracking redeployment on the autonomous-driving offline platform and saves the new defense model;
the target detection module loads the model saved by the tracking redeployment module onto the autonomous-driving online platform and resumes autonomous driving;
the attack detection module, the tracking redeployment module, and the target detection module are connected in sequence.
Application CN202110457163.2A, priority and filing date 2021-04-27: Defense method and system based on radar and image data fusion detection. Granted as CN113156440B (en); status Active.

Priority Applications (1)

Application Number: CN202110457163.2A
Priority Date / Filing Date: 2021-04-27
Title: Defense method and system based on radar and image data fusion detection (granted as CN113156440B)


Publications (2)

Publication Number Publication Date
CN113156440A CN113156440A (en) 2021-07-23
CN113156440B (en) 2024-03-26

Family

ID=76871274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110457163.2A Active CN113156440B (en) 2021-04-27 2021-04-27 Defense method and system based on radar and image data fusion detection

Country Status (1)

Country Link
CN (1) CN113156440B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9712553B2 (en) * 2015-03-27 2017-07-18 The Boeing Company System and method for developing a cyber-attack scenario

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105842683A (en) * 2016-05-27 2016-08-10 南京博驰光电科技有限公司 Unmanned aerial vehicle integrated defense system and method
CN106094823A (en) * 2016-06-29 2016-11-09 北京奇虎科技有限公司 The processing method of vehicle hazard driving behavior and system
CN108413815A (en) * 2018-01-17 2018-08-17 上海鹰觉科技有限公司 A kind of anti-unmanned plane defence installation and method
CN108549940A (en) * 2018-03-05 2018-09-18 浙江大学 Intelligence defence algorithm based on a variety of confrontation sample attacks recommends method and system
CN108692617A (en) * 2018-03-30 2018-10-23 黄鑫 A kind of special vehicle defence method and system
CN211452855U (en) * 2019-12-16 2020-09-08 国汽(北京)智能网联汽车研究院有限公司 Intelligent networking automobile test platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Attack and Defense Technology for Unmanned Vehicle Perception Systems Based on Matrix Completion; Li Huiyun et al.; Journal of Integration Technology; Vol. 9, No. 5; pp. 3-14 *

Also Published As

Publication number Publication date
CN113156440A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
TWI703064B (en) Systems and methods for positioning vehicles under poor lighting conditions
CN109086668B (en) Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
Khammari et al. Vehicle detection combining gradient analysis and AdaBoost classification
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
US11620522B2 (en) Vehicular system for testing performance of headlamp detection systems
US20110184895A1 (en) Traffic object recognition system, method for recognizing a traffic object, and method for setting up a traffic object recognition system
CN107480676B (en) Vehicle color identification method and device and electronic equipment
Kum et al. Lane detection system with around view monitoring for intelligent vehicle
Spinneker et al. Fast fog detection for camera based advanced driver assistance systems
CN114418895A (en) Driving assistance method and device, vehicle-mounted device and storage medium
CN112330915B (en) Unmanned aerial vehicle forest fire prevention early warning method and system, electronic equipment and storage medium
CN113052159A (en) Image identification method, device, equipment and computer storage medium
Ren et al. Environment influences on uncertainty of object detection for automated driving systems
CN113160272B (en) Target tracking method and device, electronic equipment and storage medium
CN113156440B (en) Defense method and system based on radar and image data fusion detection
US20210165093A1 (en) Vehicles, Systems, and Methods for Determining an Entry of an Occupancy Map of a Vicinity of a Vehicle
CN107885231B (en) Unmanned aerial vehicle capturing method and system based on visible light image recognition
CN110727269B (en) Vehicle control method and related product
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN113435232A (en) Object detection method, device, equipment and storage medium
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
CN115690224A (en) External parameter calibration method for radar and camera, electronic device and storage medium
Beresnev et al. Automated Driving System based on Roadway and Traffic Conditions Monitoring.
DE102021131480A1 (en) SENSOR ATTACK SIMULATION SYSTEM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant