CN113221957A - Radar information fusion characteristic enhancement method based on Centernet - Google Patents

Radar information fusion characteristic enhancement method based on Centernet

Info

Publication number
CN113221957A
CN113221957A
Authority
CN
China
Prior art keywords
prediction
channels
picture
target
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110414757.5A
Other languages
Chinese (zh)
Other versions
CN113221957B (en)
Inventor
郝岩 (Hao Yan)
周冠 (Zhou Guan)
李祥 (Li Xiang)
闫鹏飞 (Yan Pengfei)
王鑫 (Wang Xin)
梁帅 (Liang Shuai)
王琪 (Wang Qi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202110414757.5A priority Critical patent/CN113221957B/en
Publication of CN113221957A publication Critical patent/CN113221957A/en
Application granted granted Critical
Publication of CN113221957B publication Critical patent/CN113221957B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a radar information fusion characteristic enhancement method based on Centernet, relates to the field of computer vision, and mainly relates to a network model that fuses radar information and camera information to detect targets. The method significantly improves the accuracy of Centernet target detection, reduces the probability of targets going undetected, and improves the performance of the target detection network. Step 1: collect a data set of road vehicles and pedestrians; Step 2: build a convolutional neural network; Step 3: predict on the feature image; Step 4: train the neural network; Step 5: in actual use, process the camera pictures through the trained neural network, fusing the actually acquired radar information in the process to obtain enhanced target features. The accuracy of center point prediction is improved, and the problem that small targets, or targets with low contrast against the background, cannot be detected is avoided.

Description

Radar information fusion characteristic enhancement method based on Centernet
Technical Field
The invention relates to the field of computer vision, in particular to a network model that fuses radar information and camera information to detect targets; the target detection accuracy after processing by this network model is significantly improved. The method is mainly applied to target recognition and positioning in automatic driving.
Background
In recent years, automatic driving has become an increasingly popular research direction in the automotive field, and researchers have actively explored it from many angles. Among these, target detection is an important basis for realizing intelligent control of automobiles, and more and more networks are emerging for road target detection and classification.
Deep learning based object detection in camera images has developed rapidly in recent years, but under severe weather conditions the camera sensor is limited: pictures taken in dimly lit regions or at night are difficult to recognize well. At such times the information provided by the camera is not reliable enough, and a more reliable identification mode is needed to provide accurate information for driving.
Compared with a camera, the information provided by radar is more robust in scenes such as rapid light changes, rain and fog. The radar acquires the distance and speed of a target; by fusing the data of the radar and the camera sensor through a neural network and locating the target, the network enhances the target recognition effect.
Centernet is a target detection network proposed in recent years that can handle object detection, pose detection and monocular 3D detection. It innovatively regards a target as a single point: the picture is converted into a heatmap, the target center point is predicted by detecting the peak points of the heatmap, and the position of the target in the picture is thereby determined, achieving excellent results.
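The peak detection itself is conventionally implemented with a pooling trick; the sketch below is an assumption based on the standard Centernet decode step rather than text from this patent. It keeps a heatmap point only if it equals the maximum of its 3 × 3 neighbourhood (a pooling-based non-maximum suppression), then takes the top-k peaks; the function name and the top-k cutoff are illustrative.

```python
import torch
import torch.nn.functional as F

def heatmap_peaks(heat: torch.Tensor, k: int = 100):
    """Pooling-based NMS: keep a heatmap point only if it equals the
    maximum of its 3x3 neighbourhood, then take the top-k peaks."""
    batch, n_classes, h, w = heat.shape
    hmax = F.max_pool2d(heat, kernel_size=3, stride=1, padding=1)
    peaks = heat * (hmax == heat).float()            # zero out non-peaks
    scores, flat_idx = torch.topk(peaks.view(batch, -1), k)
    cls = torch.div(flat_idx, h * w, rounding_mode="floor")        # class
    ys = torch.div(flat_idx % (h * w), w, rounding_mode="floor")   # row
    xs = flat_idx % w                                              # column
    return scores, cls, ys, xs
```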
Therefore, how to use Centernet to fuse photographs with radar information and effectively improve the accuracy of Centernet target detection has become a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
Aiming at the above problems, the invention provides a radar information fusion characteristic enhancement method based on Centernet, which fuses photos with radar information, significantly improves the accuracy of Centernet target detection, reduces the probability of targets going undetected, and improves the performance of the target detection network.
The technical scheme of the invention is as follows: the fusion is carried out according to the following steps:
Step 1: collect a data set of road vehicles and pedestrians, including camera photos, label information and the corresponding millimeter wave radar information, and preprocess the pictures;
Step 2: build a convolutional neural network comprising two parts, feature extraction and radar data fusion;
Step 3: predict on the feature image; the prediction is divided into three parts: heatmap prediction, center point prediction and width-height prediction;
Step 4: train the neural network;
Step 5: after training is finished, use the network: camera pictures acquired in actual use are processed by the neural network obtained in Step 4, and the actually acquired radar information is fused in the process to obtain enhanced target features.
In Step 1, road pictures and the corresponding millimeter wave radar information are collected as the network training data set; data sources include online open-source databases and a self-made data set, and all pictures are preprocessed. The preprocessing in Step 1 uses the transforms module in PyTorch to resize the picture data to 512 × 512; to avoid unbalancing the target proportions by stretching or compressing the picture during resizing, the picture is first padded with gray bars to make it square.
In Step 2, the feature extraction part adopts Resnet50 as the backbone network. A picture of size 512 × 512 with 3 channels is input; a convolution with stride 2 first produces a 256 × 256 picture with 64 channels.
After batch normalization and an activation function, the radar data and the picture are fused for the first time; the fused picture then undergoes a max pooling operation with stride 2, producing a 128 × 128 picture with 64 channels.
After max pooling, ConvBlock and IdentityBlock are used to expand the dimensions and deepen the network: ConvBlock expands the dimension of the feature picture, while IdentityBlock deepens the network without affecting the feature dimensions. The final feature layer is 16 × 16 with 2048 channels. To obtain a high-resolution feature picture, three deconvolution operations with stride 2 are applied to the final feature layer, yielding a 128 × 128 high-resolution feature image with 64 channels; the radar data and the high-resolution feature picture are then fused a second time, giving the fused feature image.
In Step 3, the feature map is divided into 128 × 128 regions, and the center point of an object falling in a region is determined by the feature point of that region. The heatmap prediction uses n convolution channels; after convolution a 128 × 128 image with n channels is obtained, indicating for each heatmap point whether an object exists and its class, where n equals the number of target classes. Heatmap prediction thus yields whether an object exists, together with its class and confidence.
The center point prediction has 2 channels; its 128 × 128 convolution result represents the offset of each heatmap point relative to the target center point. Center prediction yields the object's center coordinates, which are then adjusted.
The width-height prediction has 2 channels and predicts the width and height of each target.
Heatmap prediction, center point prediction and width-height prediction are all performed by convolution: the feature layer is first convolved with a 3 × 3 kernel, normalized and passed through an activation function; a 1 × 1 convolution then adjusts the number of channels, to n for heatmap prediction and to 2 each for width-height prediction and center point prediction.
Step 4 is specifically: the 512 × 512, 3-channel training pictures and the radar data are input into the fusion neural network, in which the radar information is fused twice so that target features become more prominent; the loss functions of heatmap prediction, center point prediction and width-height prediction are regressed during prediction.
The invention has the following beneficial effects:
First, the radar data is fused with the pictures, improving on the Centernet network and raising the accuracy of center point prediction.
Second, the radar data is fused twice in the backbone network, which enhances the target features, increases the probability of detecting targets, and avoids the problem that small targets, or targets with low contrast against the background, cannot be detected.
Third, exploiting Centernet's characteristic of predicting the target position by detecting the target center point, the radar data is processed, converted into pixel points, and fused with the picture.
Fourth, the target size and the target center position are estimated from the radar data; pixel points are generated at the estimated center position and scaled according to the estimated target size, so that the radar information features remain prominent after data fusion.
Drawings
FIG. 1 is a schematic diagram of a model of an information fusion neural network according to the present invention;
FIG. 2 is a schematic diagram of a road image after radar information is fused according to the present invention;
FIG. 3 is a schematic structural diagram of a ConvBlock neural network according to the present invention;
FIG. 4 is a schematic structural diagram of an IdentityBlock in the neural network of the present invention.
Detailed Description
In order to clearly explain the technical features of the present patent, the following detailed description of the present patent is provided in conjunction with the accompanying drawings.
As shown in FIGS. 1 to 4, the fusion method comprises the following steps:
Step 1: collect a data set of road vehicles and pedestrians, including camera photos, label information and the corresponding millimeter wave radar information, and preprocess the pictures;
Step 2: build a convolutional neural network comprising two parts, feature extraction and radar data fusion;
Step 3: predict on the feature image; the prediction is divided into three parts: heatmap prediction, center point prediction and width-height prediction;
Step 4: train the neural network;
Step 5: after training is finished, use the network: camera pictures acquired in actual use are processed by the neural network obtained in Step 4, and the actually acquired radar information is fused in the process to obtain enhanced target features.
Specifically, the radar information is processed into pixel points and combined with the image. The size of a pixel point depends on the size of the target, and radar data processing places the pixel point approximately at the target center, assisting the prediction of the target center point, as sketched below.
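A minimal sketch of this conversion, assuming the radar targets have already been projected into pixel coordinates as (u, v, estimated size) triples; the blob value 255 and the square blob shape are illustrative assumptions not fixed by the text.

```python
import numpy as np

def overlay_radar_points(image: np.ndarray, radar_targets, value: int = 255):
    """Draw one bright square blob per radar target onto the picture.
    `radar_targets` is assumed to be already projected into pixel
    coordinates as (u, v, est_size) triples; the blob side length scales
    with the estimated target size, as described above."""
    h, w = image.shape[:2]
    for u, v, est_size in radar_targets:
        r = max(1, int(est_size) // 2)                 # blob half-size
        y0, y1 = max(0, int(v) - r), min(h, int(v) + r + 1)
        x0, x1 = max(0, int(u) - r), min(w, int(u) + r + 1)
        # blob values exceed the surrounding pixels so that the later
        # max pooling operation preserves the radar feature
        image[y0:y1, x0:x1] = value
    return image
```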
Fusing the radar data with the picture improves on the Centernet network and significantly raises the accuracy of center point prediction. Fusing the radar data twice in the backbone network enhances the target features, increases the probability of detecting targets, and avoids the situation where small targets, or targets with low contrast against the background, cannot be detected. The method thus significantly improves the accuracy of Centernet target detection, reduces the probability of targets going undetected, and improves the performance of the target detection network.
In Step 1, road pictures and the corresponding millimeter wave radar information are collected as the network training data set; data sources include online open-source databases and a self-made data set, and all pictures are preprocessed. The preprocessing in Step 1 uses the transforms module in PyTorch to resize the picture data to 512 × 512; to avoid unbalancing the target proportions by stretching or compressing the picture during resizing, the picture is first padded with gray bars to make it square.
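A minimal sketch of this preprocessing, using PIL and torchvision (the packages behind the PyTorch transforms module); the gray value 128 and the function name letterbox_512 are illustrative assumptions.

```python
from PIL import Image, ImageOps
from torchvision import transforms

def letterbox_512(path: str):
    """Pad the picture to a square with gray bars, then resize to 512x512,
    so targets are not stretched or compressed."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = max(w, h)
    pad_w, pad_h = side - w, side - h
    # center the picture on a gray square canvas (gray bars = 128,128,128)
    img = ImageOps.expand(img,
                          border=(pad_w // 2, pad_h // 2,
                                  pad_w - pad_w // 2, pad_h - pad_h // 2),
                          fill=(128, 128, 128))
    return transforms.Compose([transforms.Resize((512, 512)),
                               transforms.ToTensor()])(img)
```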
In Step 2, the feature extraction part adopts Resnet50 as the backbone network. A picture of size 512 × 512 with 3 channels is input; a convolution with stride 2 first produces a 256 × 256 picture with 64 channels.
After batch normalization and the activation function, the radar data and the picture are fused for the first time: the radar data is processed into pixel points of appropriate size and projected onto the picture. Notably, the values of these pixel points are larger than the values around the projected position, so the feature information provided by the radar data survives the max pooling operation. The fused picture then undergoes max pooling with stride 2, producing a 128 × 128 picture with 64 channels.
After max pooling, ConvBlock and IdentityBlock are used to expand the dimensions and deepen the network: ConvBlock expands the dimension of the feature picture, while IdentityBlock deepens the network without affecting the feature dimensions. The final feature layer is 16 × 16 with 2048 channels.
Both ConvBlock and IdentityBlock consist of a main branch and a residual branch. The ConvBlock main branch is a convolution layer followed by a normalization layer and then an activation function for feature extraction; this structure is stacked three times on the main branch. Its residual branch consists of one convolution layer and one normalization layer and is used to change the dimension of the feature layer. The IdentityBlock main branch is the same as that of ConvBlock, but its residual branch performs no convolution; IdentityBlock therefore does not affect the dimensions and can be stacked many times to deepen the network.
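The two blocks follow the familiar Resnet50 bottleneck pattern; a minimal PyTorch sketch is given below, where the intermediate channel count c_mid and the placement of the stride are assumptions, since the text fixes only the branch structure.

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """Main branch: three conv -> BN (-> ReLU) stages; residual branch:
    one conv + BN that changes the dimension of the feature layer."""
    def __init__(self, c_in: int, c_mid: int, c_out: int, stride: int = 1):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1, stride=stride, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
            nn.Conv2d(c_mid, c_mid, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
            nn.Conv2d(c_mid, c_out, 1, bias=False),
            nn.BatchNorm2d(c_out))
        self.residual = nn.Sequential(
            nn.Conv2d(c_in, c_out, 1, stride=stride, bias=False),
            nn.BatchNorm2d(c_out))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.main(x) + self.residual(x))

class IdentityBlock(nn.Module):
    """Same main branch as ConvBlock; the residual branch is the identity,
    so the block can be stacked many times without changing dimensions."""
    def __init__(self, c: int, c_mid: int):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(c, c_mid, 1, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
            nn.Conv2d(c_mid, c_mid, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
            nn.Conv2d(c_mid, c, 1, bias=False),
            nn.BatchNorm2d(c))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.main(x) + x)
```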
Specifically, after max pooling a ConvBlock is connected to obtain a 128 × 128 × 256 feature layer, followed by two IdentityBlocks to deepen the network; then a ConvBlock with stride 2 gives a 64 × 64 × 512 feature layer, followed by three IdentityBlocks; then a ConvBlock with stride 2 gives a 32 × 32 × 1024 feature layer, followed by four IdentityBlocks; finally a ConvBlock with stride 2 gives a 16 × 16 × 2048 feature layer, followed by two IdentityBlocks.
To obtain a high-resolution feature image, three deconvolution operations with stride 2 are applied to the final feature layer, yielding a 128 × 128 feature image with 64 channels; the radar data and this high-resolution feature image are fused a second time to obtain the fused feature image.
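A minimal sketch of the three stride-2 deconvolutions; the intermediate channel counts (256, 128) and the kernel size are assumptions, since the text fixes only the 2048-channel input and the 64-channel, 128 × 128 output.

```python
import torch
import torch.nn as nn

# Three stride-2 deconvolutions: 16x16x2048 -> 32x32x256 -> 64x64x128 -> 128x128x64.
deconv_head = nn.Sequential(
    nn.ConvTranspose2d(2048, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))

# sanity check: the final 16x16x2048 feature layer is upsampled to 128x128x64
x = torch.randn(1, 2048, 16, 16)
assert deconv_head(x).shape == (1, 64, 128, 128)
```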
FIG. 2 shows the fusion effect. The position and size information of existing targets is obtained from the millimeter wave radar data, the target center position is estimated, pixel points are generated at that center position and, after projection transformation, fused with the corresponding picture taken by the camera.
In Step 3, prediction on the feature image is divided into three parts: heatmap prediction, center point prediction and width-height prediction. The feature map is divided into 128 × 128 regions, and the center point of an object falling in a region is determined by the feature point of that region.
The heatmap prediction uses n convolution channels; after convolution a 128 × 128 feature image with n channels is obtained, indicating for each heatmap point whether an object exists and its class, where n equals the number of target classes. Target retrieval is performed by applying a convolution on the heatmap: a 3 × 3 convolution kernel followed by a 1 × 1 convolution with n channels that adjusts the channel count. Heatmap prediction yields whether an object exists, together with its class and confidence.
The center point prediction has 2 channels with a 128 × 128 convolution result, representing the offset of each heatmap point relative to the target center point; center prediction yields the object's center coordinates, which are then adjusted. The width-height prediction has 2 channels and predicts the width and height of each target. Both are likewise obtained by a 3 × 3 convolution followed by a 2-channel 1 × 1 convolution. Through these predictions, the position of the prediction box on the original image is obtained.
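A minimal sketch of the three prediction heads as described (3 × 3 convolution, normalization and activation, then a 1 × 1 convolution that sets the channel count). The 64 input channels come from the fused 128 × 128 × 64 feature image; n = 2 is an illustrative class count.

```python
import torch.nn as nn

def head(n_out: int) -> nn.Sequential:
    """3x3 conv -> BN -> ReLU, then a 1x1 conv that sets the channel count."""
    return nn.Sequential(
        nn.Conv2d(64, 64, 3, padding=1, bias=False),
        nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        nn.Conv2d(64, n_out, 1))

n = 2                      # e.g. two target classes: vehicle and pedestrian
heatmap_head = head(n)     # n channels: class confidence per heatmap point
offset_head  = head(2)     # 2 channels: center point offset (dx, dy)
wh_head      = head(2)     # 2 channels: target width and height
```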
Step 4 is specifically: during training, the relation between the real boxes and the feature points is established, and after a prediction is obtained it is compared with the real result. The 512 × 512, 3-channel training pictures and the radar data are input into the fusion neural network, in which the radar information is fused twice with the corresponding pictures, once after the first activation function and once after the three deconvolution (upsampling) operations, so that target features become more prominent; the loss functions of heatmap prediction, center point prediction and width-height prediction are regressed during prediction.
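The patent does not specify the form of the three loss functions; the sketch below follows the conventional Centernet choice of a focal-style loss for the heatmap and L1 losses for the offset and width-height regressions, applied only where objects exist (mask). It is an assumption, not text from the patent.

```python
import torch
import torch.nn.functional as F

def centernet_losses(pred_heat, true_heat, pred_off, true_off,
                     pred_wh, true_wh, mask):
    """Sketch of the conventional Centernet losses: a focal-style heatmap
    loss plus L1 offset and width-height losses; `mask` is 1 at feature
    points that contain an object center and 0 elsewhere."""
    pred_heat = pred_heat.clamp(1e-4, 1 - 1e-4)      # avoid log(0)
    pos = true_heat.eq(1).float()                    # ground-truth centers
    neg = 1.0 - pos
    pos_loss = -((1 - pred_heat) ** 2) * torch.log(pred_heat) * pos
    neg_loss = (-((1 - true_heat) ** 4) * (pred_heat ** 2)
                * torch.log(1 - pred_heat) * neg)
    n_pos = pos.sum().clamp(min=1)
    heat_loss = (pos_loss.sum() + neg_loss.sum()) / n_pos
    off_loss = F.l1_loss(pred_off * mask, true_off * mask,
                         reduction="sum") / n_pos
    wh_loss = F.l1_loss(pred_wh * mask, true_wh * mask,
                        reduction="sum") / n_pos
    return heat_loss + off_loss + 0.1 * wh_loss
```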
While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (5)

1. A radar information fusion characteristic enhancement method based on Centernet, characterized by comprising the following steps:
Step 1: collecting a data set of road vehicles and pedestrians, the data set comprising camera photos, label information and the corresponding millimeter wave radar information, and preprocessing the pictures;
Step 2: building a convolutional neural network, the built neural network comprising two parts, feature extraction and radar data fusion;
Step 3: predicting on the feature image, the prediction being divided into three parts: heatmap prediction, center point prediction and width-height prediction;
Step 4: training the neural network;
Step 5: after training is finished, using the network: camera pictures acquired by the camera in actual use are processed by the neural network obtained in Step 4, and the actually acquired radar information is fused in the process to obtain a picture with enhanced target features.
2. The radar information fusion characteristic enhancement method based on Centernet according to claim 1, wherein in Step 1 road pictures and the corresponding millimeter wave radar information are collected as the network training data set, data sources comprise online open-source databases and a self-made data set, and all pictures are preprocessed; the preprocessing in Step 1 comprises using the transforms module in PyTorch to resize the picture data to 512 × 512, and, to avoid unbalancing the target proportions by stretching or compressing the picture during resizing, the picture is first padded with gray bars to make it square.
3. The radar information fusion characteristic enhancement method based on Centernet according to claim 1, wherein the feature extraction part in Step 2 adopts Resnet50 as the backbone network; a picture of size 512 × 512 with 3 channels is input, and a convolution with stride 2 first produces a 256 × 256 picture with 64 channels;
after batch normalization and an activation function, the radar data and the picture are fused for the first time; the fused picture undergoes a max pooling operation with stride 2, producing a 128 × 128 picture with 64 channels;
after max pooling, ConvBlock and IdentityBlock are used to expand the dimensions and deepen the network, ConvBlock expanding the dimension of the feature picture and IdentityBlock deepening the network without affecting the feature dimensions; the final feature layer is 16 × 16 with 2048 channels; to obtain a high-resolution feature picture, three deconvolution operations with stride 2 are applied to the final feature layer, yielding a 128 × 128 high-resolution feature image with 64 channels, and the radar data and the high-resolution feature picture are fused a second time to obtain the fused feature image.
4. The radar information fusion characteristic enhancement method based on Centernet according to claim 1, wherein the feature map in Step 3 is divided into 128 × 128 regions, and the center point of an object falling in a region is determined by the feature point of that region; the heatmap prediction uses n convolution channels; after convolution a 128 × 128 image with n channels is obtained, indicating for each heatmap point whether an object exists and its class, where n equals the number of target classes; heatmap prediction yields whether an object exists, together with its class and confidence;
the center point prediction has 2 channels, its 128 × 128 convolution result representing the offset of each heatmap point relative to the target center point; center prediction yields the object's center coordinates, which are then adjusted;
the width-height prediction has 2 channels and predicts the width and height of each target;
heatmap prediction, center point prediction and width-height prediction are performed by convolution: the feature layer is first convolved with a 3 × 3 kernel, normalized and passed through an activation function; a 1 × 1 convolution then adjusts the number of channels, to n for heatmap prediction and to 2 each for width-height prediction and center point prediction.
5. The radar information fusion characteristic enhancement method based on Centernet according to claim 1, wherein Step 4 is specifically: the 512 × 512, 3-channel training pictures and the radar data are input into the fusion neural network, in which the radar information is fused twice so that target features become more prominent; the loss functions of heatmap prediction, center point prediction and width-height prediction are regressed during prediction.
CN202110414757.5A 2021-04-17 2021-04-17 Method for enhancing radar information fusion characteristics based on Centernet Active CN113221957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110414757.5A CN113221957B (en) 2021-04-17 2021-04-17 Method for enhancing radar information fusion characteristics based on Centernet

Publications (2)

Publication Number Publication Date
CN113221957A true CN113221957A (en) 2021-08-06
CN113221957B CN113221957B (en) 2024-04-16

Family

ID=77087953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110414757.5A Active CN113221957B (en) 2021-04-17 2021-04-17 Method for enhancing radar information fusion characteristics based on Centernet

Country Status (1)

Country Link
CN (1) CN113221957B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419572A (en) * 2022-03-31 2022-04-29 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium
CN117172411A (en) * 2023-09-06 2023-12-05 江苏省气候中心 All-weather cyanobacteria bloom real-time automatic identification early warning method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110363151A (en) * 2019-07-16 2019-10-22 中国人民解放军海军航空大学 Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN111462237A (en) * 2020-04-03 2020-07-28 清华大学 Target distance detection method for constructing four-channel virtual image by using multi-source information
CN111882002A (en) * 2020-08-06 2020-11-03 桂林电子科技大学 MSF-AM-based low-illumination target detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘志宏 (LIU Zhihong); 李玉峰 (LI Yufeng): "SAR Image Target Detection Method Based on Feature Fusion Convolutional Neural Network", Microprocessors (微处理机), no. 02, 15 April 2020 (2020-04-15), pages 33-39 *

Also Published As

Publication number Publication date
CN113221957B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
CN111563415B (en) Binocular vision-based three-dimensional target detection system and method
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
CN111582339B (en) Vehicle detection and recognition method based on deep learning
CN113221957A (en) Radar information fusion characteristic enhancement method based on Centernet
CN112801027A (en) Vehicle target detection method based on event camera
CN112825192A (en) Object identification system and method based on machine learning
TWI745204B (en) High-efficiency LiDAR object detection method based on deep learning
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN115187964A (en) Automatic driving decision-making method based on multi-sensor data fusion and SoC chip
CN112861748A (en) Traffic light detection system and method in automatic driving
CN115187737A (en) Semantic map construction method based on laser and vision fusion
CN116030130A (en) Hybrid semantic SLAM method in dynamic environment
CN117058646A (en) Complex road target detection method based on multi-mode fusion aerial view
CN112215073A (en) Traffic marking line rapid identification and tracking method under high-speed motion scene
CN111444916A (en) License plate positioning and identifying method and system under unconstrained condition
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
Chougula et al. Road segmentation for autonomous vehicle: A review
CN112529917A (en) Three-dimensional target segmentation method, device, equipment and storage medium
CN116958927A (en) Method and device for identifying short column based on BEV (binary image) graph
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
CN115359457A (en) 3D target detection method and system based on fisheye image
CN114882469A (en) Traffic sign detection method and system based on DL-SSD model
CN114648549A (en) Traffic scene target detection and positioning method fusing vision and laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant