CN113221957B - Method for enhancing radar information fusion features based on CenterNet

Method for enhancing radar information fusion features based on CenterNet

Info

Publication number
CN113221957B
CN113221957B (application CN202110414757.5A)
Authority
CN
China
Prior art keywords
prediction
picture
target
network
radar information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110414757.5A
Other languages
Chinese (zh)
Other versions
CN113221957A (en)
Inventor
郝岩
周冠
李祥
闫鹏飞
王鑫
梁帅
王琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202110414757.5A
Publication of CN113221957A
Application granted
Publication of CN113221957B
Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a method for enhancing radar information fusion features based on CenterNet, relates to the field of computer vision, and mainly concerns a network model that fuses radar information and camera information to perform target detection. The method can remarkably improve the accuracy of CenterNet target detection, reduce the probability of missed targets, and improve the performance of the target detection network. Step 1: collect a data set of road vehicles and pedestrians; step 2: build a convolutional neural network; step 3: predict on the feature image; step 4: train the neural network; step 5: in actual use, pass camera photos through the neural network trained in step 4, fusing the actually acquired radar information in the process to obtain enhanced target features. The accuracy of center point prediction is improved, and the situation that a small target, or a target with low contrast against the background, fails to be detected is avoided.

Description

Method for enhancing radar information fusion features based on CenterNet
Technical Field
The invention relates to the field of computer vision, in particular to a network model that fuses radar information and camera information for target detection; after processing by the network model, the accuracy of target detection is significantly improved. The method is mainly applied to target recognition, positioning and similar tasks in automatic driving.
Background
In recent years, automatic driving has become an increasingly popular research direction in the automotive field, and researchers in many disciplines are actively exploring various directions to realize it. Among these, applying object detection to automatic driving is an important basis for intelligent vehicle control, and more and more networks are being created for road object detection and classification.
Deep-learning object detection in camera images has progressed rapidly in recent years, but in severe weather the camera sensor is limited, and the captured pictures are hard to recognize reliably in poorly lit areas and at night. Under such conditions the information provided by the camera is not reliable enough, and a more robust sensing modality is needed to provide accurate information for driving.
Compared with a camera, the information provided by radar is more robust in scenes such as rapidly changing light, rain, and fog. The radar acquires the distance and speed of a target; by fusing radar data with camera data through a neural network, the target can be located and the recognition effect of the network enhanced.
CenterNet is a target detection network proposed in recent years that can handle object detection, pose estimation, and monocular 3D detection. It innovatively treats the target as a point: the picture is converted into a heatmap, the center point of the target is predicted by detecting peaks of the heatmap, and the position of the target in the picture is thereby determined, achieving excellent results.
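This peak-detection step can be illustrated with a short sketch (a minimal PyTorch example written for this description, not code disclosed in the patent): a 3×3 max pooling acts as non-maximum suppression on the heatmap, and the points that survive it are taken as candidate center points.

```python
import torch
import torch.nn.functional as F

def decode_heatmap_peaks(heatmap, k=100):
    """Pick the top-k peak points of a class heatmap (CenterNet-style).

    heatmap: tensor of shape (batch, num_classes, H, W), values in [0, 1].
    Returns the scores, class ids and (y, x) positions of the k strongest peaks.
    """
    batch, num_classes, height, width = heatmap.shape
    # 3x3 max pooling keeps only local maxima: a point survives
    # exactly when it equals the maximum of its own neighbourhood.
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap).float()

    # Flatten and take the k highest-scoring peaks across all classes.
    scores, indices = torch.topk(peaks.view(batch, -1), k)
    class_ids = indices // (height * width)
    pixel_ids = indices % (height * width)
    ys, xs = pixel_ids // width, pixel_ids % width
    return scores, class_ids, ys, xs

# Example: ten strongest peaks of a random 2-class heatmap on a 128x128 grid.
scores, cls, ys, xs = decode_heatmap_peaks(torch.rand(1, 2, 128, 128), k=10)
```

Because the pooling step already suppresses non-maxima, no anchor boxes or box-level NMS are needed, which is the main appeal of the point-based formulation.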
Therefore, how to use CenterNet to fuse photos with radar information and effectively improve the accuracy of CenterNet target detection has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
Aiming at these problems, the invention provides a method for enhancing radar information fusion features based on CenterNet, which fuses photos with radar information, remarkably improves the accuracy of CenterNet target detection, reduces the probability of missed targets, and improves the performance of the target detection network.
The technical scheme of the invention is as follows: the fusion is carried out according to the following steps:
step 1: collecting a data set of road vehicles and pedestrians, wherein the data set comprises camera photos, tag information and corresponding millimeter wave radar information, and preprocessing the pictures;
step 2: building a convolutional neural network, wherein the built network comprises two parts, feature extraction and radar data fusion;
step 3: predicting on the feature image, wherein the prediction is divided into three parts: heatmap prediction, center point prediction and width-height prediction;
step 4: training the neural network;
step 5: after training, using the network: camera photos acquired by the camera in actual use are passed through the trained network, fusing the actually acquired radar information in the process to obtain enhanced target features.
In step 1, road pictures and corresponding millimeter wave radar information are collected as the network training data set; data sources include an online open-source database and a self-made data set, and all pictures are preprocessed. The preprocessing in step 1 includes resizing the picture data to 512×512 using the transforms module in PyTorch; to avoid distorting target scales through stretching or compression during resizing, the picture is first padded with gray bars to make it square.
In step 2, the feature extraction part adopts ResNet50 as the backbone network. The input is a 512×512 picture with 3 channels. First, a convolution with stride 2 produces a 256×256 picture with 64 channels;
after a normalization and activation function, the radar data and the picture are fused for the first time, and the fused picture undergoes a max pooling operation with stride 2, yielding a 128×128 picture with 64 channels;
after max pooling, ConvBlock and IdentityBlock are used to expand the channel dimension and deepen the network, wherein ConvBlock expands the dimension of the feature picture and IdentityBlock deepens the network without affecting the dimension. The finally obtained feature layer is 16×16 with 2048 channels. To obtain a high-resolution feature picture, three deconvolution operations with stride 2 are applied to the final feature layer, producing a 128×128 high-resolution feature picture with 64 channels, and the radar data is fused with the high-resolution feature picture a second time to obtain the fused feature picture.
In step 3, the feature map is divided into 128×128 regions, and the center point of an object falling within a region is determined by the feature point of that region. Heatmap prediction uses a convolution with n output channels, yielding a 128×128 map with n channels; each channel indicates whether an object of the corresponding class is present at each heat point, so n depends on the number of target classes. Whether an object exists, together with its class and confidence, is predicted from the heatmap;
center point prediction has 2 channels; its 128×128 convolution result represents the offset of each heat point from the target center point, and the object's center point is obtained and adjusted through this prediction;
width-height prediction has 2 channels and predicts the width and height of each target;
the heatmap prediction, center point prediction and width-height prediction are all performed by convolution: the feature layer is first convolved with a 3×3 kernel, normalized, and passed through an activation function; then a 1×1 convolution adjusts the number of channels, to n for heatmap prediction and to 2 for width-height and center point prediction.
Step 4 is specifically as follows: training pictures of size 512×512 with 3 channels, together with radar data, enter the fusion neural network; radar information is merged into the network twice to make the target features more prominent, and the loss functions of heatmap prediction, center point prediction and width-height prediction are regressed during prediction.
The beneficial effects of the invention are as follows:
1. Radar data and pictures are fused, improving the accuracy of center point prediction on the basis of the CenterNet network.
2. Radar data is fused twice in the backbone network, enhancing target features, raising the probability of detecting targets, and avoiding the failure to detect small targets or targets with low contrast against the background.
3. Exploiting CenterNet's characteristic of locating a target by detecting its center point, the radar data is processed, converted into pixel points, and fused with the picture.
4. The size of the target and the position of its center point are estimated from the radar data; pixel points are generated at the estimated center position, and their size is scaled with the estimated target size, so that the radar information features remain distinct after data fusion.
Drawings
FIG. 1 is a schematic diagram of a model of an information fusion neural network of the present invention;
FIG. 2 is a schematic view of a road image after radar information fusion according to the present invention;
FIG. 3 is a schematic diagram of the structure of a neural network ConvBlock according to the present invention;
FIG. 4 is a schematic diagram of the structure of the IdentityBlock in the neural network of the present invention.
Detailed Description
In order to clearly illustrate the technical features of the present patent, the following detailed description will make reference to the accompanying drawings.
As shown in FIGS. 1-4, the invention performs fusion according to the following steps:
step 1: collecting a data set of road vehicles and pedestrians, wherein the data set comprises camera photos, tag information and corresponding millimeter wave radar information, and preprocessing the pictures;
step 2: building a convolutional neural network, wherein the built network comprises two parts, feature extraction and radar data fusion;
step 3: predicting on the feature image, wherein the prediction is divided into three parts: heatmap prediction, center point prediction and width-height prediction;
step 4: training the neural network;
step 5: after training, using the network: camera photos acquired by the camera in actual use are passed through the trained network, fusing the actually acquired radar information in the process to obtain enhanced target features.
Specifically, the radar information is processed and converted into pixel points that are combined with the image. The size of a pixel point depends on the size of the target, and its position, roughly at the center of the target, is determined through radar data processing, which assists the prediction of the target center point.
According to the invention, radar data and pictures are fused and the CenterNet network is improved, so that the accuracy of center point prediction is remarkably raised. The radar data is fused twice in the backbone network, enhancing target features, raising the probability of detecting targets, and avoiding failures on small targets or targets with low contrast against the background. Ultimately, the accuracy of CenterNet target detection is remarkably improved, the probability of missed targets reduced, and the performance of the target detection network enhanced.
In step 1, road pictures and corresponding millimeter wave radar information are collected as the network training data set; data sources include an online open-source database and a self-made data set, and all pictures are preprocessed. The preprocessing in step 1 includes resizing the picture data to 512×512 using the transforms module in PyTorch; to avoid distorting target scales through stretching or compression during resizing, the picture is first padded with gray bars to make it square.
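A minimal sketch of this letterbox preprocessing, assuming PIL and torchvision (the function name letterbox_resize and the file name are illustrative, not from the patent):

```python
from PIL import Image
import torchvision.transforms as transforms

def letterbox_resize(image, target_size=512, fill=(128, 128, 128)):
    """Pad a picture to a square with gray bars, then resize to 512x512,
    so targets are not stretched or compressed by the resizing."""
    w, h = image.size
    side = max(w, h)
    # Paste the picture onto a gray square canvas, centered.
    canvas = Image.new("RGB", (side, side), fill)
    canvas.paste(image, ((side - w) // 2, (side - h) // 2))
    to_tensor = transforms.Compose([
        transforms.Resize((target_size, target_size)),
        transforms.ToTensor(),            # -> (3, 512, 512), values in [0, 1]
    ])
    return to_tensor(canvas)

tensor = letterbox_resize(Image.open("road_scene.jpg"))  # hypothetical file
```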
In step 2, the feature extraction part adopts ResNet50 as the backbone network. The input is a 512×512 picture with 3 channels. First, a convolution with stride 2 produces a 256×256 picture with 64 channels;
the radar data and the picture are fused for the first time after one-step normalization and activation functions, the radar data are processed into pixel points with proper size and projected onto the picture, particularly, the value of the pixel points is larger than the value projected around the position of the picture, and the characteristic information provided by the radar data can be reserved in the maximum pooling operation. Performing maximum pooling operation with the step length of 2 on the fused pictures to obtain 128×128 pictures, wherein the number of channels is 64;
after max pooling, ConvBlock and IdentityBlock are used to expand the channel dimension and deepen the network, wherein ConvBlock expands the dimension of the feature picture and IdentityBlock deepens the network without affecting the dimension. The finally obtained feature layer is 16×16 with 2048 channels. To obtain a high-resolution feature picture, three deconvolution operations with stride 2 are applied to the final feature layer, producing a high-resolution feature picture with 64 channels, and the radar data is fused with the high-resolution feature picture a second time to obtain the fused feature picture.
ConvBlock and IdentityBlock each consist of a main branch and a residual branch. The main branch of ConvBlock stacks, three times, a convolution layer followed by a normalization layer and an activation function to extract features from the picture; its residual branch consists of a convolution layer and a normalization layer and is used to change the dimension of the feature layer. The main branch of IdentityBlock is identical to that of ConvBlock, but its residual branch performs no convolution, so IdentityBlock does not affect the dimension and can be stacked many times to deepen the network.
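The two blocks amount to standard ResNet bottleneck blocks, and can be sketched as follows (a hedged reconstruction of the description above; the exact layer widths follow common ResNet50 conventions rather than code disclosed in the patent):

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """Main branch: three conv-BN(-ReLU) stages; residual branch: conv + BN
    that changes the feature dimension (and, with stride 2, the resolution)."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, stride, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, 1, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Residual branch with a convolution: changes the dimension.
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.main(x) + self.shortcut(x))

class IdentityBlock(nn.Module):
    """Same main branch, but the residual branch is the identity, so the
    block deepens the network without changing the feature dimension."""
    def __init__(self, ch, mid_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, 1, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, ch, 1, bias=False),
            nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.main(x) + x)
```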
Specifically, after max pooling, a ConvBlock is applied to obtain a 128×128×256 feature layer, followed by two IdentityBlocks to deepen the network; then a ConvBlock with stride 2 yields a 64×64×512 feature layer, followed by three IdentityBlocks; then a ConvBlock with stride 2 yields a 32×32×1024 feature layer, followed by four IdentityBlocks; finally a ConvBlock with stride 2 yields a 16×16×2048 feature layer, followed by two IdentityBlocks.
To obtain a high-resolution feature picture, three deconvolution operations with stride 2 are applied to the final feature layer, producing a 128×128 high-resolution feature picture with 64 channels, and the radar data is fused with it a second time to obtain the fused feature picture.
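Putting the stages together, a hedged sketch of the stage layout and the three stride-2 deconvolutions, reusing ConvBlock and IdentityBlock from the previous sketch (the intermediate deconvolution widths of 256 and 128 are our assumption; only the final 64-channel, 128×128 output is stated in the description):

```python
import torch
import torch.nn as nn

def make_stage(in_ch, mid_ch, out_ch, n_identity, stride):
    layers = [ConvBlock(in_ch, mid_ch, out_ch, stride)]
    layers += [IdentityBlock(out_ch, mid_ch) for _ in range(n_identity)]
    return nn.Sequential(*layers)

backbone = nn.Sequential(
    make_stage(64, 64, 256, 2, stride=1),      # 128x128x256, then 2 IdentityBlocks
    make_stage(256, 128, 512, 3, stride=2),    # 64x64x512,   then 3 IdentityBlocks
    make_stage(512, 256, 1024, 4, stride=2),   # 32x32x1024,  then 4 IdentityBlocks
    make_stage(1024, 512, 2048, 2, stride=2),  # 16x16x2048,  then 2 IdentityBlocks
)

# Three stride-2 deconvolutions: 16x16x2048 -> 128x128x64.
deconv = nn.Sequential(
    nn.ConvTranspose2d(2048, 256, 4, stride=2, padding=1),
    nn.BatchNorm2d(256), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
    nn.BatchNorm2d(128), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
    nn.BatchNorm2d(64), nn.ReLU(inplace=True),
)

# Input here is the 128x128x64 map produced after the first fusion and pooling.
features = deconv(backbone(torch.randn(1, 64, 128, 128)))  # -> (1, 64, 128, 128)
```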
FIG. 2 is a schematic diagram of the fusion effect. Position and size information of a target is obtained from the millimeter wave radar data, the center position of the target is estimated, pixel points are generated at that center position, and after projection transformation they are fused with the corresponding picture taken by the camera.
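A hedged sketch of this radar-to-pixel fusion (illustrative code; the camera projection matrix and the rule scaling blob size with estimated target size and range are assumptions, since the patent does not disclose exact formulas):

```python
import torch

def radar_to_pixel_map(detections, calib, map_size):
    """Render radar detections as bright square blobs on a one-channel map.

    detections: list of dicts with 3-D center 'xyz' (meters, radar frame)
                and estimated target size 'extent' (meters).
    calib:      3x4 camera projection matrix (radar frame -> pixels), assumed known.
    map_size:   spatial size of the feature map being fused
                (256 for the first fusion, 128 for the second).
    """
    radar_map = torch.zeros(1, map_size, map_size)
    for det in detections:
        x, y, z = det["xyz"]
        # Projective transform of the estimated target center into the image.
        u, v, w = calib @ torch.tensor([x, y, z, 1.0])
        px, py = int(u / w), int(v / w)
        if not (0 <= px < map_size and 0 <= py < map_size):
            continue
        # Blob radius grows with estimated target size and shrinks with range
        # (an assumed rule; the patent only states that pixel-point size
        # follows the estimated target size).
        r = max(1, int(det["extent"] * map_size / (2.0 * z)))
        radar_map[0, max(0, py - r):py + r, max(0, px - r):px + r] = 1.0
    return radar_map

def fuse(feature_map, radar_map):
    # Radar pixel values exceed their surroundings, so they survive
    # the subsequent stride-2 max pooling; broadcast over channels.
    return feature_map + radar_map
```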
Step 3, predicting on the feature image, is divided into three parts: heatmap prediction, center point prediction, and width-height prediction. The feature map is divided into 128×128 regions, and the center point of an object falling within a region is determined by the feature point of that region.
Heatmap prediction uses a convolution with n output channels, yielding 128×128 feature maps with n channels; each channel indicates whether an object of the corresponding class is present at each heat point, so n depends on the number of target classes. Target search is performed on the heatmap with a 3×3 convolution kernel, and the number of channels is adjusted to n with a 1×1 convolution. The presence of an object, its class, and its confidence are predicted from the heatmap.
Center point prediction has 2 channels; its 128×128 convolution result represents the offset of each heat point from the target center point, and the object's center is obtained and adjusted through this prediction. Width-height prediction has 2 channels and predicts the width and height of each target. Both are likewise obtained by a 3×3 convolution followed by a 1×1 convolution with 2 channels. The position of the prediction box on the original image is obtained from these predictions.
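The three prediction heads can be sketched as follows (a hedged reconstruction of the 3×3 convolution, normalization, activation, and 1×1 channel-adjusting convolution described above; the hidden width of 64 and the example class count are assumptions):

```python
import torch
import torch.nn as nn

def make_head(out_channels, in_channels=64, hidden=64):
    """3x3 conv -> BN -> ReLU, then a 1x1 conv adjusting the channel count."""
    return nn.Sequential(
        nn.Conv2d(in_channels, hidden, 3, padding=1, bias=False),
        nn.BatchNorm2d(hidden),
        nn.ReLU(inplace=True),
        nn.Conv2d(hidden, out_channels, 1),
    )

n_classes = 3                        # e.g. car / pedestrian / cyclist (illustrative)
heatmap_head = make_head(n_classes)  # per-class presence at each heat point
offset_head = make_head(2)           # offset of each heat point from the center
size_head = make_head(2)             # width and height of each target

features = torch.randn(1, 64, 128, 128)     # fused high-resolution feature map
heatmap = heatmap_head(features).sigmoid()  # (1, n, 128, 128)
offset = offset_head(features)              # (1, 2, 128, 128)
wh = size_head(features)                    # (1, 2, 128, 128)
```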
Step 4 is specifically as follows: the correspondence between ground-truth boxes and feature points is established during training, and each prediction is compared against the ground truth. Training pictures of size 512×512 with 3 channels, together with radar data, enter the fusion neural network; the radar information is merged twice, once after the first activation function and once after the third upsampling, fusing the radar data with the corresponding picture to make target features more prominent. The loss functions of heatmap prediction, center point prediction, and width-height prediction are regressed during prediction.
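The patent does not spell out the loss formulas. A common choice for CenterNet-style training, shown here purely as an assumption, is a penalty-reduced focal loss on the heatmap and L1 losses on the offset and width-height regressions, evaluated only at object centers:

```python
import torch

def focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Penalty-reduced pixel-wise focal loss on the predicted heatmap
    (the variant used by the original CenterNet; assumed here)."""
    pos = gt.eq(1).float()
    neg = 1.0 - pos
    pos_loss = pos * torch.log(pred + eps) * (1 - pred) ** alpha
    neg_loss = neg * (1 - gt) ** beta * torch.log(1 - pred + eps) * pred ** alpha
    num_pos = pos.sum().clamp(min=1)
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos

def reg_l1_loss(pred, gt, mask):
    """L1 loss on offset / width-height, counted only at object centers."""
    mask = mask.unsqueeze(1).float()           # (B, 1, H, W), broadcast over channels
    return (torch.abs(pred - gt) * mask).sum() / mask.sum().clamp(min=1)

# total = focal_loss(heatmap, gt_heatmap) \
#       + reg_l1_loss(offset, gt_offset, center_mask) \
#       + 0.1 * reg_l1_loss(wh, gt_wh, center_mask)   # loss weights assumed
```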
While there have been described what are believed to be the preferred embodiments of the present invention, it will be apparent to those skilled in the art that many more modifications are possible without departing from the principles of the invention.

Claims (4)

1. A method for enhancing radar information fusion features based on CenterNet, characterized by comprising the following steps:
step 1: collecting a data set of road vehicles and pedestrians, wherein the data set comprises camera photos, tag information and corresponding millimeter wave radar information, and preprocessing the pictures;
step 2: building a convolutional neural network, wherein the built network comprises two parts, feature extraction and radar data fusion;
in step 2, the feature extraction part adopts ResNet50 as the backbone network; the input is a 512×512 picture with 3 channels; first, a convolution with stride 2 produces a 256×256 picture with 64 channels;
after a normalization and activation function, the radar data and the picture are fused for the first time, and the fused picture undergoes a max pooling operation with stride 2, yielding a 128×128 picture with 64 channels;
after max pooling, ConvBlock and IdentityBlock are used to expand the channel dimension and deepen the network, wherein ConvBlock expands the dimension of the feature picture and IdentityBlock deepens the network without affecting the dimension; the finally obtained feature layer is 16×16 with 2048 channels; to obtain a high-resolution feature picture, three deconvolution operations with stride 2 are applied to the final feature layer, producing a 128×128 high-resolution feature picture with 64 channels, and the radar data is fused with the high-resolution feature picture a second time to obtain the fused feature picture;
step 3: predicting on the feature image, wherein the prediction is divided into three parts: heatmap prediction, center point prediction and width-height prediction;
step 4: training the neural network;
step 5: after training, using the network, wherein camera photos acquired by the camera in actual use are likewise passed through the network trained in step 4, fusing the actually acquired radar information in the process to obtain images with enhanced target features.
2. The method for enhancing radar information fusion features based on CenterNet according to claim 1, wherein in step 1, road pictures and corresponding millimeter wave radar information are collected as the network training data set; data sources include an online open-source database and a self-made data set, and all pictures are preprocessed; the preprocessing in step 1 includes resizing the picture data to 512×512 using the transforms module in PyTorch, and, to avoid distorting target scales through stretching or compression during resizing, first padding the picture with gray bars to make it square.
3. The method for enhancing radar information fusion features based on CenterNet according to claim 1, wherein in step 3 the feature map is divided into 128×128 regions, and the center point of an object falling within a region is determined by the feature point of that region; heatmap prediction uses a convolution with n output channels, yielding 128×128 maps with n channels, each channel indicating whether an object of the corresponding class is present at each heat point, n depending on the number of target classes; whether an object exists, together with its class and confidence, is predicted from the heatmap;
center point prediction has 2 channels, its 128×128 convolution result representing the offset of each heat point from the target center point, and the object's center point is obtained and adjusted through this prediction;
width-height prediction has 2 channels and predicts the width and height of each target;
the heatmap prediction, center point prediction and width-height prediction are performed by convolution: the feature layer is first convolved with a 3×3 kernel, normalized, and passed through an activation function; then a 1×1 convolution adjusts the number of channels, to n for heatmap prediction and to 2 for width-height and center point prediction.
4. The method for enhancing radar information fusion features based on CenterNet according to claim 1, wherein step 4 specifically comprises: training pictures of size 512×512 with 3 channels, together with radar data, enter the fusion neural network; radar information is merged into the network twice to make target features more prominent, and the loss functions of heatmap prediction, center point prediction and width-height prediction are regressed during prediction.
CN202110414757.5A 2021-04-17 2021-04-17 Method for enhancing radar information fusion features based on CenterNet Active CN113221957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110414757.5A CN113221957B (en) 2021-04-17 2021-04-17 Method for enhancing radar information fusion features based on CenterNet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110414757.5A CN113221957B (en) 2021-04-17 2021-04-17 Method for enhancing radar information fusion features based on CenterNet

Publications (2)

Publication Number Publication Date
CN113221957A (en) 2021-08-06
CN113221957B (en) 2024-04-16

Family

ID=77087953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110414757.5A Active CN113221957B (en) 2021-04-17 2021-04-17 Method for enhancing radar information fusion features based on CenterNet

Country Status (1)

Country Link
CN (1) CN113221957B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419572B (en) * 2022-03-31 2022-06-17 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium
CN117172411A (en) * 2023-09-06 2023-12-05 江苏省气候中心 All-weather cyanobacteria bloom real-time automatic identification early warning method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110363151A (en) * 2019-07-16 2019-10-22 中国人民解放军海军航空大学 Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN111462237A (en) * 2020-04-03 2020-07-28 清华大学 Target distance detection method for constructing four-channel virtual image by using multi-source information
CN111882002A (en) * 2020-08-06 2020-11-03 桂林电子科技大学 MSF-AM-based low-illumination target detection method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110363151A (en) * 2019-07-16 2019-10-22 中国人民解放军海军航空大学 Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN111462237A (en) * 2020-04-03 2020-07-28 清华大学 Target distance detection method for constructing four-channel virtual image by using multi-source information
CN111882002A (en) * 2020-08-06 2020-11-03 桂林电子科技大学 MSF-AM-based low-illumination target detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SAR image target detection method based on a feature fusion convolutional neural network; Liu Zhihong; Li Yufeng; Microprocessors (微处理机); 2020-04-15 (02); pp. 33-39 *

Also Published As

Publication number Publication date
CN113221957A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN108171112B (en) Vehicle identification and tracking method based on convolutional neural network
CN108304808B (en) Monitoring video object detection method based on temporal-spatial information and deep network
Khammari et al. Vehicle detection combining gradient analysis and AdaBoost classification
CN111274976A (en) Lane detection method and system based on multi-level fusion of vision and laser radar
CN109145798B (en) Driving scene target identification and travelable region segmentation integration method
CN113221957B (en) Method for enhancing radar information fusion characteristics based on center
CN111462128B (en) Pixel-level image segmentation system and method based on multi-mode spectrum image
CN111582339B (en) Vehicle detection and recognition method based on deep learning
CN111709416A (en) License plate positioning method, device and system and storage medium
CN112825192A (en) Object identification system and method based on machine learning
CN115187964A (en) Automatic driving decision-making method based on multi-sensor data fusion and SoC chip
CN110837769B (en) Image processing and deep learning embedded far infrared pedestrian detection method
CN111444916A (en) License plate positioning and identifying method and system under unconstrained condition
CN114743126A (en) Lane line sign segmentation method based on graph attention machine mechanism network
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
Chougula et al. Road segmentation for autonomous vehicle: A review
CN113139615A (en) Unmanned environment target detection method based on embedded equipment
CN117036412A (en) Twin network infrared pedestrian target tracking method integrating deformable convolution
CN109191473B (en) Vehicle adhesion segmentation method based on symmetry analysis
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
CN116311154A (en) Vehicle detection and identification method based on YOLOv5 model optimization
CN114882469A (en) Traffic sign detection method and system based on DL-SSD model
CN114898144A (en) Automatic alignment method based on camera and millimeter wave radar data
Chen et al. Real-time road object segmentation using improved light-weight convolutional neural network based on 3D LiDAR point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant