CN116206212A - SAR image target detection method and system based on point characteristics - Google Patents

SAR image target detection method and system based on point characteristics

Info

Publication number
CN116206212A
CN116206212A
Authority
CN
China
Prior art keywords
point, frame, feature, detection, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310146236.5A
Other languages
Chinese (zh)
Inventor
王晗
陈军
郝红星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Forestry University filed Critical Beijing Forestry University
Priority to CN202310146236.5A priority Critical patent/CN116206212A/en
Publication of CN116206212A publication Critical patent/CN116206212A/en
Pending legal-status Critical Current

Classifications

    • G06V20/13 Satellite images
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention relates to a SAR image target detection method and system based on point features. The method comprises the following steps. S1: input the SAR image into a feature extraction module to obtain feature maps of different scales. S2: pass the feature maps through the convolution layers of a point feature detection network, then extract point features through a 1×1 convolution. S3: convert the minimum and maximum x and y values in the point features into a pseudo detection box. S4: perform a deformable convolution operation on the processed feature map and the point features, then output a pseudo-detection-box correction and a target class through a 1×1 convolution; add the correction to the pseudo detection box to obtain the final predicted target bounding box. S5: construct a total loss function L for training the point feature detection network. S6: input the SAR image to be detected into the trained point feature detection network to obtain the target bounding box and target class information as the detection result. The method is designed specifically for the discrete characteristics of SAR targets and achieves high detection speed and accuracy.

Description

SAR image target detection method and system based on point characteristics
Technical Field
The invention relates to the technical field of target detection, in particular to a SAR image target detection method and system based on point characteristics.
Background
Synthetic Aperture Radar (SAR) is an imaging radar that can acquire images day and night in all weather conditions, and it is widely used in both military and civilian applications. With the development of intelligent image processing, convolutional neural networks have become increasingly powerful, and detection methods based on them perform strongly on many kinds of targets. As the SAR imaging field matures, large numbers of SAR images can be obtained, and interpreting them with convolutional neural networks is of great value.
In SAR images, targets are typically highly discretized and vary strongly in attitude. Mainstream target detectors are designed for optical images; faced with the strong scattering characteristics and variability of SAR targets, most existing detectors cannot adapt to SAR target detection and therefore fail to achieve good detection performance.
In addition, most existing algorithms blindly stack network layers and new techniques, leading to huge computation and parameter volumes. In practical applications, however, detection algorithms are typically deployed on edge and mobile devices with limited computing power, so algorithms with large computation and parameter counts cannot adapt to real SAR target detection scenarios. Reducing the parameter and computation counts of SAR target detection algorithms is therefore highly valuable.
Finally, existing SAR image target detection algorithms are mainly anchor-based, but anchor parameter settings depend heavily on expert knowledge, and anchor-based algorithms are limited in detecting small targets in SAR images.
Disclosure of Invention
In order to solve the technical problems, the invention provides a SAR image target detection method and system based on point characteristics.
The technical scheme of the invention is as follows. A SAR image target detection method based on point features comprises the following steps:
step S1: inputting the SAR image into a feature extraction module to obtain feature maps of different scales;
step S2: passing the feature maps through the convolution layers of a point feature detection network to obtain processed feature maps, and outputting point features from the processed feature maps through a 1×1 convolution;
step S3: selecting the minimum and maximum x and y values in the point features as the coordinates of the upper-left and lower-right corners of a pseudo detection box, thereby converting the point features into a pseudo detection box;
step S4: performing a deformable convolution (DCNv2) operation on the processed feature map and the point features, and passing the result through a 1×1 convolution to output a pseudo-detection-box correction and a target class; adding the correction to the pseudo detection box to obtain the final predicted target bounding box;
step S5: constructing a total loss function L from the predicted category loss L_category, the predicted target bounding box loss L_bbox and the pseudo detection box loss L_pseudo for training the point feature detection network; updating network parameters through back propagation; repeating the above steps until the performance of the point feature detection network converges; and saving the network structure and parameters to obtain a trained point feature detection network;
step S6: inputting the SAR image to be detected into the trained point feature detection network to obtain the target bounding box and target class information as the detection result.
Compared with the prior art, the invention has the following advantages:
1. The invention discloses a SAR image target detection method based on point features. Aiming at the strong scattering characteristics of SAR targets, it describes each SAR target with a discrete point set, which improves the detection precision of SAR targets.
2. The invention solves the problem that anchor parameter settings in the prior art depend heavily on expert experience; by adopting an anchor-free algorithm, the method achieves better detection performance for small targets in SAR images.
Drawings
FIG. 1 is a flowchart of a SAR image target detection method based on point features in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a point feature detection network according to an embodiment of the present invention;
FIG. 3 is a schematic view of the visualization of the spatial position of the point feature output by the point feature detection network and the predicted target bounding box according to the embodiment of the present invention;
Fig. 4 is a block diagram of a SAR image target detection system based on point features in an embodiment of the present invention.
Detailed Description
The invention provides a SAR image target detection method based on point features, which is designed specifically for the discrete characteristics of targets and achieves high detection speed and accuracy.
The present invention will be further described in detail below with reference to the accompanying drawings by way of specific embodiments in order to make the objects, technical solutions and advantages of the present invention more apparent.
Example 1
As shown in fig. 1, the method for detecting the target of the SAR image based on the point characteristics provided by the embodiment of the invention comprises the following steps:
step S1: inputting the SAR image into a feature extraction module to obtain feature maps of different scales;
step S2: passing the feature maps through the convolution layers of the point feature detection network to obtain processed feature maps, and outputting point features from the processed feature maps through a 1×1 convolution;
step S3: selecting the minimum and maximum x and y values in the point features as the coordinates of the upper-left and lower-right corners of a pseudo detection box, thereby converting the point features into a pseudo detection box;
step S4: performing a deformable convolution (DCNv2) operation on the processed feature map and the point features, and passing the result through a 1×1 convolution to output a pseudo-detection-box correction and a target class; adding the correction to the pseudo detection box to obtain the final predicted target bounding box;
step S5: constructing a total loss function L from the predicted category loss L_category, the predicted target bounding box loss L_bbox and the pseudo detection box loss L_pseudo for training the point feature detection network; updating network parameters through back propagation; repeating the above steps until the performance of the point feature detection network converges; and saving the network structure and parameters to obtain a trained point feature detection network;
step S6: inputting the SAR image to be detected into the trained point feature detection network to obtain the target bounding box and target class information as the detection result.
In one embodiment, the above step S1 (inputting the SAR image into a feature extraction module to obtain feature maps of different scales) specifically comprises:
The feature extraction module comprises a ResNet50 and an FPN. The SAR image is first input into the ResNet50 and downsampled five times; the feature maps from the last three downsampling stages are then fed into the FPN for feature fusion, and three feature maps of different sizes are output for predicting targets of different scales.
In the embodiment of the invention, each of the three feature maps output by the FPN has 256 channels.
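The backbone's geometry can be illustrated with a minimal sketch. This is not code from the patent: the function name is illustrative, and it only assumes the standard ResNet50 stride pattern (the five downsampling stages give strides 2, 4, 8, 16, 32, with the last three stage outputs fused by the FPN into three 256-channel maps):

```python
# Illustrative sketch (not from the patent): shapes of the three FPN
# output maps for a square input image. The last three ResNet50 stages
# have strides 8, 16 and 32; the FPN keeps those spatial sizes and
# emits 256 channels at each level.

def fpn_output_shapes(image_size: int, channels: int = 256):
    """Return (channels, height, width) for the three FPN levels."""
    return [(channels, image_size // s, image_size // s) for s in (8, 16, 32)]

shapes = fpn_output_shapes(512)
print(shapes)  # [(256, 64, 64), (256, 32, 32), (256, 16, 16)]
```

For a 512×512 SAR image this yields 64×64, 32×32 and 16×16 maps, matching the three prediction scales described above.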
In one embodiment, the above step S2 (passing the feature maps through the convolution layers of the point feature detection network to obtain processed feature maps, and outputting point features through a 1×1 convolution) specifically comprises:
Step S21: processing the feature map through three convolution layers to obtain a processed feature map.
As shown in fig. 2, each of the three feature maps of different sizes output in step S1 is further processed by three 3×3 convolution layers; the width, height and channel number of the feature map remain unchanged through the three convolution operations.
Step S22: passing the processed feature map through a 1×1 convolution layer to output a 3×N-dimensional point feature matrix, where N is the number of discrete points in the point set. The first dimension is the coordinate displacement of each discrete point along the x-axis relative to the center point, the second dimension is the coordinate displacement along the y-axis relative to the center point, and the third dimension is a weight value for each of the N points, representing the importance of each point.
After the 1×1 convolution, the 256-channel feature map becomes a 3×N-dimensional point feature. The point features adaptively learn the key semantic points of the targets and describe their structures and positions in the feature map.
Through iterative training, the N discrete points represented by the point features adaptively learn the position and structure information of the target. The embodiment of the invention outputs a set of N discrete points for each target in the input SAR image and uses this discrete point set to symbolically describe the position and structure of the target.
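Decoding the 3×N matrix at one feature-map location can be sketched as follows. The function name and argument layout are illustrative, not from the patent; the sketch only assumes the layout described above (row 1: x offsets, row 2: y offsets, row 3: per-point weights, all relative to the center location (cx, cy)):

```python
# Illustrative sketch: turn one 3xN point-feature matrix into absolute
# point coordinates with their importance weights.

def decode_point_features(offsets_x, offsets_y, weights, cx, cy):
    """Return a list of (x, y, weight) absolute points for one target."""
    assert len(offsets_x) == len(offsets_y) == len(weights)
    return [(cx + dx, cy + dy, w)
            for dx, dy, w in zip(offsets_x, offsets_y, weights)]

pts = decode_point_features([-1.0, 0.0, 2.0], [0.5, -2.0, 1.0],
                            [0.9, 0.4, 0.7], cx=10.0, cy=20.0)
print(pts)  # [(9.0, 20.5, 0.9), (10.0, 18.0, 0.4), (12.0, 21.0, 0.7)]
```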
In one embodiment, the above step S3: selecting the minimum and maximum x and y values in the point features as the coordinates of the upper-left and lower-right corners of a pseudo detection box, thereby converting the point features into a pseudo detection box.
From the point features obtained in step S2, their coordinate values determine a pseudo detection box that contains all N discrete points.
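The conversion in step S3 is a simple min/max reduction over the point set. A minimal sketch (function name is illustrative):

```python
def points_to_pseudo_box(points):
    """Tightest axis-aligned box (x1, y1, x2, y2) enclosing all N points:
    (min x, min y) is the upper-left corner, (max x, max y) the lower-right."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

box = points_to_pseudo_box([(9.0, 20.5), (10.0, 18.0), (12.0, 21.0)])
print(box)  # (9.0, 18.0, 12.0, 21.0)
```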
In one embodiment, the above step S4 (performing a deformable convolution DCNv2 operation on the processed feature map and the point features, passing the result through a 1×1 convolution to output a pseudo-detection-box correction and a target class, and adding the correction to the pseudo detection box to obtain the final predicted target bounding box) specifically comprises:
A deformable convolution (DCNv2) operation is performed on the point features and the processed feature map output in step S21. The point features provide the sampling position and additive weight of each point when the convolution kernel operates at a location of the feature map; in this process the point features adaptively model the scattering characteristics of the target, providing a more effective feature map for result prediction. In the DCNv2 operation the width and height of the feature map are unchanged and the channel number is 256. A ReLU activation is then applied to the DCNv2 result. Finally, the activated result is passed through a 1×1 convolution to output a (4+C)-dimensional prediction vector: the predicted pseudo-detection-box correction is 4-dimensional, and the target category information is C-dimensional, where C is the number of target categories; each of the C dimensions represents the confidence of a different category, and the category with the largest value is taken as the predicted category.
Finally, the pseudo-detection-box correction obtained in this step is added to the pseudo detection box obtained in step S3 to obtain the final target bounding box.
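The post-processing of the (4+C)-dimensional prediction vector can be sketched as follows. Function names are illustrative, not from the patent; the sketch assumes the correction is an element-wise offset on the (x1, y1, x2, y2) pseudo box, as described above:

```python
# Illustrative sketch: refine the pseudo box with the predicted
# 4-dimensional correction and pick the class with the highest
# confidence among the C class dimensions.

def refine_box(pseudo_box, correction):
    """Element-wise sum of the pseudo box and its predicted correction."""
    return tuple(b + c for b, c in zip(pseudo_box, correction))

def predict_class(confidences):
    """Index of the highest-confidence class."""
    return max(range(len(confidences)), key=lambda i: confidences[i])

final_box = refine_box((9.0, 18.0, 12.0, 21.0), (-0.5, 0.2, 0.8, -0.3))
print(final_box)  # approximately (8.5, 18.2, 12.8, 20.7)
print(predict_class([0.1, 0.7, 0.2]))  # 1
```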
In one embodiment, the above step S5 (constructing a total loss function L from the predicted category loss L_category, the predicted target bounding box loss L_bbox and the pseudo detection box loss L_pseudo for training the point feature detection network, updating network parameters through back propagation, repeating until the network converges, and saving the network structure and parameters) specifically comprises:
Constructing the loss functions: the predicted category loss L_category, the predicted target bounding box loss L_bbox and the predicted pseudo detection box loss L_pseudo are calculated as follows, where predict denotes a predicted output value and GT denotes a ground-truth label value; F_bbox and F_pseudo are SmoothL1 loss functions and F_category is the Focal Loss:
L_bbox = F_bbox(predict_bbox, GT_bbox)
L_pseudo = F_pseudo(predict_pseudo, GT_bbox)
L_category = F_category(predict_category, GT_category)
Constructing the total loss function:
L = μ1·L_category + μ2·L_bbox + μ3·L_pseudo
where μ1, μ2, μ3 are preset weights of the three losses, set to 1.0, 1.0 and 0.5 in the embodiment of the invention.
Because the point features participate in the DCNv2 computation, they are supervised indirectly by the category loss and target bounding box loss and directly by the pseudo detection box loss, so that the discrete points in the point features can learn the information of the key semantic points of the target.
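The loss construction can be sketched numerically. This is an illustrative sketch, not the patent's implementation: the SmoothL1 form below uses the common beta = 1 variant, and the focal loss parameters (alpha = 0.25, gamma = 2.0) are standard defaults that the patent does not specify:

```python
import math

# Illustrative sketch of the three loss terms and the weighted total
# L = mu1*L_category + mu2*L_bbox + mu3*L_pseudo, with the embodiment's
# weights mu = (1.0, 1.0, 0.5).

def smooth_l1(pred, gt):
    """SmoothL1 over box coordinates: quadratic below 1, linear above."""
    return sum(0.5 * d * d if abs(d) < 1.0 else abs(d) - 0.5
               for d in (p - g for p, g in zip(pred, gt)))

def focal_loss(p, is_positive, alpha=0.25, gamma=2.0):
    """Binary focal loss for one predicted probability p (assumed params)."""
    if is_positive:
        return -alpha * (1.0 - p) ** gamma * math.log(p)
    return -(1.0 - alpha) * p ** gamma * math.log(1.0 - p)

def total_loss(l_category, l_bbox, l_pseudo, mu=(1.0, 1.0, 0.5)):
    return mu[0] * l_category + mu[1] * l_bbox + mu[2] * l_pseudo

l_box = smooth_l1((8.5, 18.2, 12.8, 20.7), (8.0, 18.0, 13.0, 21.0))
l_cat = focal_loss(0.9, is_positive=True)
print(total_loss(l_cat, l_box, l_box))  # small positive scalar
```

Note that L_pseudo compares the pseudo box against the same ground-truth bounding box as L_bbox, which is what lets the pseudo box directly supervise the point features.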
In one embodiment, the above step S6: the SAR image to be detected is input into the trained point feature detection network to obtain the target bounding box and target class information as the detection result.
Fig. 3 shows the visualized spatial positions of the point features (sets of 9 discrete points) output by the point feature detection network and the predicted target bounding boxes.
The invention discloses a SAR image target detection method based on point features. Aiming at the strong scattering characteristics of SAR targets, it describes each SAR target with a discrete point set, which improves the detection precision of SAR targets. The invention solves the problem that anchor parameter settings in the prior art depend heavily on expert experience; by adopting an anchor-free algorithm, the method achieves better detection performance for small targets in SAR images.
Example two
As shown in fig. 4, an embodiment of the present invention provides a SAR image target detection system based on point features, comprising the following modules:
a feature extraction module 71, configured to input the SAR image into the feature extraction module to obtain feature maps of different scales;
a point feature extraction module 72, configured to pass the feature maps through the convolution layers of a point feature detection network to obtain processed feature maps, and output point features through a 1×1 convolution;
a pseudo detection box generation module 73, configured to select the minimum and maximum x and y values in the point features as the coordinates of the upper-left and lower-right corners of a pseudo detection box, and convert the point features into a pseudo detection box;
a target bounding box prediction module 74, configured to perform a deformable convolution (DCNv2) operation on the processed feature map and the point features, pass the result through a 1×1 convolution to output a pseudo-detection-box correction and a target class, and add the correction to the pseudo detection box to obtain the final predicted target bounding box;
a loss function construction module 75, configured to construct a total loss function L from the predicted category loss L_category, the predicted target bounding box loss L_bbox and the pseudo detection box loss L_pseudo for training the point feature detection network, update network parameters through back propagation, repeat until the network converges, and save the network structure and parameters to obtain a trained point feature detection network;
a detection module 76, configured to input the SAR image to be detected into the trained point feature detection network to obtain the target bounding box and target class information as the detection result.
The above examples are provided for the purpose of describing the present invention only and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalents and modifications that do not depart from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (6)

1. A SAR image target detection method based on point features, characterized by comprising the following steps:
step S1: inputting the SAR image into a feature extraction module to obtain feature maps of different scales;
step S2: passing the feature maps through the convolution layers of a point feature detection network to obtain processed feature maps, and outputting point features from the processed feature maps through a 1×1 convolution;
step S3: selecting the minimum and maximum x and y values in the point features as the coordinates of the upper-left and lower-right corners of a pseudo detection box, thereby converting the point features into a pseudo detection box;
step S4: performing a deformable convolution (DCNv2) operation on the processed feature map and the point features, and passing the result through a 1×1 convolution to output a pseudo-detection-box correction and a target class; adding the correction to the pseudo detection box to obtain the final predicted target bounding box;
step S5: constructing a total loss function L from the predicted category loss L_category, the predicted target bounding box loss L_bbox and the pseudo detection box loss L_pseudo for training the point feature detection network; updating network parameters through back propagation; repeating the above steps until the performance of the point feature detection network converges; and saving the network structure and parameters to obtain a trained point feature detection network;
step S6: inputting the SAR image to be detected into the trained point feature detection network to obtain the target bounding box and target class information as the detection result.
2. The SAR image target detection method based on point features according to claim 1, characterized in that said step S1 specifically comprises:
the feature extraction module comprises a ResNet50 and an FPN; the SAR image is first input into the ResNet50 and downsampled five times, the feature maps from the last three downsampling stages are then fed into the FPN for feature fusion, and three feature maps of different sizes are output for predicting targets of different scales.
3. The SAR image target detection method based on point features according to claim 2, characterized in that said step S2 specifically comprises:
step S21: processing the feature map through three convolution layers to obtain a processed feature map;
step S22: passing the processed feature map through a 1×1 convolution layer to output a 3×N-dimensional point feature matrix, where N is the number of discrete points in the point set; the first dimension is the coordinate displacement of each discrete point along the x-axis relative to the center point, the second dimension is the coordinate displacement along the y-axis relative to the center point, and the third dimension is a weight value for each of the N points, representing the importance of each point.
4. The SAR image target detection method based on point features according to claim 3, characterized in that said step S4 specifically comprises:
performing a deformable convolution (DCNv2) operation on the processed feature map and the point features to sample scattering features, and then passing the result through a 1×1 convolution to output a (4+C)-dimensional prediction vector; the predicted pseudo-detection-box correction is 4-dimensional, and the target category information is C-dimensional, where C is the number of target categories; each of the C dimensions represents the confidence of a different category, and the category with the largest value is taken as the predicted category; in the DCNv2 operation, the point features provide the sampling position and additive weight of each sampling point when the convolution kernel operates at a location of the feature map.
5. The SAR image target detection method based on point features according to claim 4, characterized in that said step S5 specifically comprises:
constructing the loss functions: the predicted category loss L_category, the predicted target bounding box loss L_bbox and the predicted pseudo detection box loss L_pseudo are calculated as follows, where predict denotes a predicted output value and GT denotes a ground-truth label value; F_bbox and F_pseudo are SmoothL1 loss functions and F_category is the Focal Loss:
L_bbox = F_bbox(predict_bbox, GT_bbox)
L_pseudo = F_pseudo(predict_pseudo, GT_bbox)
L_category = F_category(predict_category, GT_category)
constructing the total loss function:
L = μ1·L_category + μ2·L_bbox + μ3·L_pseudo
where μ1, μ2, μ3 are preset weights of the three losses.
6. A point-feature-based SAR image target detection system, characterized by comprising the following modules:

a feature extraction module, configured to input the SAR image into the feature extraction module to obtain feature maps of different scales;

a point feature extraction module, configured to convolve the feature maps input to the point feature detection network to obtain processed feature maps, and to output the point features of the processed feature maps through a 1×1 convolution;

a pseudo detection box generation module, configured to convert the point features into a pseudo detection box by taking the minimum and maximum x and y values among the point features as the coordinates of the upper-left and lower-right corners of the pseudo detection box;

a target bounding-box prediction module, configured to apply a deformable convolution (DCNv2) to the processed feature maps and the point features, convolve the result with a 1×1 convolution, and output a pseudo detection box correction offset and the target category; the correction offset is added to the pseudo detection box to obtain the final predicted target bounding box;

a loss function construction module, configured to construct a total loss function L from the prediction category loss L_cls, the predicted target bounding-box loss L_bbox, and the pseudo detection box loss L_pseudo, train the point feature detection network, update the network parameters through back propagation, repeat the above steps until the performance of the point feature detection network converges, and save the network structure and parameters to obtain the trained point feature detection network;

a detection module, configured to input the SAR image to be detected into the trained point feature detection network to obtain the target bounding box and target category information as the detection result.
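The pseudo detection box construction described above reduces to taking the axis-aligned extremes of the predicted points. A minimal sketch, with the function name and the (x_min, y_min, x_max, y_max) tuple layout chosen for illustration:

```python
def points_to_pseudo_box(points):
    """Convert a set of point features into a pseudo detection box.

    points: iterable of (x, y) coordinates predicted by the point branch.
    Returns (x_min, y_min, x_max, y_max): the upper-left corner is the
    element-wise minimum of the points and the lower-right corner is the
    element-wise maximum, as described in the pseudo detection box module.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```

The correction offsets predicted by the deformable-convolution branch would then be added to this box to produce the final target bounding box.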
CN202310146236.5A 2023-02-09 2023-02-09 SAR image target detection method and system based on point characteristics Pending CN116206212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310146236.5A CN116206212A (en) 2023-02-09 2023-02-09 SAR image target detection method and system based on point characteristics

Publications (1)

Publication Number Publication Date
CN116206212A true CN116206212A (en) 2023-06-02

Family

ID=86509011
Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination