CN116452818A - Small sample remote sensing image target detection method based on feature enhancement - Google Patents

Small sample remote sensing image target detection method based on feature enhancement

Info

Publication number
CN116452818A
CN116452818A (application CN202310501608.1A)
Authority
CN
China
Prior art keywords
remote sensing
feature
small sample
sensing image
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310501608.1A
Other languages
Chinese (zh)
Inventor
袁正午
周亚涛
占希玲
唐培贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202310501608.1A
Publication of CN116452818A
Legal status: Pending

Classifications

    • G06V 10/40 — Extraction of image or video features
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/048 — Activation functions
    • G06N 3/08 — Learning methods
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/764 — Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/766 — Recognition using pattern recognition or machine learning, using regression, e.g. by projecting features on hyperplanes
    • G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82 — Recognition using pattern recognition or machine learning, using neural networks
    • G06V 2201/07 — Target detection

Abstract

The invention relates to a small sample remote sensing image target detection method based on feature enhancement, and belongs to the technical field of image processing. The method comprises the following steps: first, the remote sensing image data set is divided, and in the training stage a large amount of base-class data is input into a feature extraction network. A contextual feature enhancement module is then added after the feature extraction network; by selectively enhancing class-aware features with image-level context information that indicates the presence or absence of each object class, this module helps the object detector better understand the scene. In the small sample fine-tuning stage, part of the parameters of the trained feature extraction network are frozen, balanced base-class and new-class data are input into the network, corrected proposal boxes are obtained by the subsequent RoI feature extractor and the representation compensation module, and target prediction is carried out on new-class remote sensing images. The invention alleviates, to a certain extent, the small sample problem in remote sensing image target detection.

Description

Small sample remote sensing image target detection method based on feature enhancement
Technical Field
The invention belongs to the technical field of image processing, and relates to a small sample remote sensing image target detection method based on feature enhancement.
Background
Object detection is a hot topic in computer vision; its task is to localize any number of objects of interest in an image with bounding rectangles and to identify their categories. With the rise of deep learning, and deep convolutional neural networks in particular, a series of excellent target detection algorithms have appeared for natural scene images thanks to their strong feature extraction capability. However, unlike natural scenes, remote sensing images typically contain objects with arbitrary orientations, highly dispersed appearance and complex backgrounds, and separating foreground objects from such complex backgrounds remains difficult. Therefore, how to combine small sample learning methods and extend a detection model with only a small number of labeled samples, so that it can detect new-category targets in remote sensing images more accurately and meet practical application demands, is a challenging and meaningful research problem.
The general procedure of existing fine-tuning-based small sample object detection models is to first divide the data into two groups: a set of base classes C_base with many instances per class, and a set of new classes C_novel with only K (K is typically less than 10) instances per class. The training strategy has two stages. In the first stage, the basic training stage, a feature extractor and a proposal box predictor are trained on the large number of base-class samples. In the second stage, the small sample fine-tuning stage, a small balanced training set is created in which each class, base and new alike, has K samples. The proposal box prediction network for the new classes is assigned randomly initialized weights, and only the proposal box classification and regression networks, i.e. the last few layers of the detection model, are fine-tuned.
At present, related research on small sample learning focuses on detection and recognition in natural scene images, while research on small sample target detection in remote sensing scenes is scarce, and several characteristics of remote sensing images make the problem harder: targets of different classes can be highly similar while targets of the same class can differ greatly, i.e. remote sensing images exhibit high inter-class similarity and large intra-class variation; the scales of ground objects vary widely, which a single common detection head may struggle to accommodate; and, compared with natural scenes, remote sensing scenes provide far fewer samples, so the detection model easily confuses foreground objects with the background. If existing small sample learning algorithms designed for natural scenes are applied directly to remote sensing image target detection, detection performance is likely to degrade.
Therefore, a new small sample target detection method for remote sensing images is needed to improve detection accuracy.
Disclosure of Invention
In view of the above, the invention aims to provide a small sample remote sensing image target detection method based on feature enhancement, which solves the problem of detection performance reduction of a detection model caused by a small number of remote sensing scene image samples in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A small sample remote sensing image target detection method based on feature enhancement. More useful feature information is obtained by adding a contextual feature enhancement module; the added module is expected to extract more semantic features related to the detection target, so that target information in the remote sensing image is localized more accurately. After proposal boxes are selected in the fine-tuning stage, a representation compensation module is introduced, which compensates for the forgetting of base-class features during fine-tuning while learning the features of the new classes. The method specifically comprises the following steps:
S1: acquiring small sample remote sensing image data, and dividing the remote sensing image data set into base classes and new classes to obtain a balanced small sample remote sensing image data set containing base classes and new classes;
S2: constructing a small sample target detection model based on feature enhancement;
S3: organizing the small sample remote sensing image data obtained in step S1 into a support set Si and a query set Qi;
S4: performing feature extraction on the support samples in the support set Si and the query samples in the query set Qi to obtain a support image feature map F_si and a query image feature map F_Qi;
S5: inputting the support image feature map F_si into a context feature enhancement module to obtain an enhanced feature map F_si';
S6: matching the enhanced feature map F_si' with the query image feature map F_Qi and inputting the result into a region proposal network module to generate proposal boxes;
S7: extracting candidate targets from the query image feature map F_Qi using the proposal boxes, and classifying and regressing the proposal boxes through a classifier and a box regressor;
S8: small sample fine-tuning stage: freezing the parameters of the backbone network, the context feature enhancement module and the feature pyramid network in the detection model, and inputting the balanced small sample remote sensing image data into the trained backbone network to extract features;
S9: processing the extracted features and outputting the predicted center point positions and categories; then entering a representation compensation module to obtain corrected proposal boxes;
S10: computing the IoU between the ground-truth annotation of the current image and the corrected proposal boxes;
S11: computing Precision and Recall from the IoU, and computing the evaluation index F1.
Further, in step S1, dividing the remote sensing image data set into base classes and new classes specifically comprises: taking several categories in the data set as base classes and the remaining categories as new classes; during the division the data set can be split into several different groups with different new classes in each group, so that as far as possible every category of the data set serves as a new class in some group.
Further, in step S2, constructing the small sample target detection model based on feature enhancement specifically comprises: using Faster R-CNN as the main framework of the detection model, with ResNet-101 as the backbone network for feature extraction.
Further, step S5 specifically comprises: the support image feature map F_si is input to the context feature enhancement module; the features of different levels are processed by 3×3 dilated convolutions with dilation rates of 1, 3 and 5 to obtain context information from different receptive fields, and finally high-level semantic features and low-level detail features are fused through a conv1×1 convolution layer; on top of the fused features, two fully connected layers with a sigmoid activation function are added to predict the confidence of each target category in the remote sensing scene, trained with a binary cross-entropy loss.
Further, in step S7, the feature-enhanced detection model is fine-tuned by a loss function, where the loss function expression is:
L_det = L_cls + L_reg + θ·L_CFE
where L_cls denotes the target category classification loss, L_reg denotes the proposal box regression loss, and θ is a hyper-parameter controlling the context feature enhancement module loss L_CFE.
Further, in step S9, the representation compensation module is built around a representation decoupling mechanism and is used to compensate for the forgetting of base-class features in the fine-tuning stage while simultaneously learning the features of the new classes, so as to obtain corrected proposal boxes.
Further, in step S9, the representation compensation module specifically comprises: a representation decoupling mechanism is introduced into the RoI feature extractor, replacing each single branch with parallel branches so as to decouple the representations of prior knowledge and new knowledge; specifically, for each of the two consecutive fully connected (FC) layers in the RoI feature extractor, a parallel FC layer carrying the pre-trained parameters is added to represent the prior knowledge learned in the basic training stage; in the fine-tuning stage, the newly added branch is frozen to protect the previous knowledge, while the other branch remains trainable to learn the new knowledge; by fusing the outputs of the two parallel FC layers, the RoI feature extractor takes into account both the prior knowledge from the frozen layer and the new knowledge from the trainable layer; the computation of each FC layer equipped with the representation decoupling mechanism can be described as:
F = act(λ·(W_fc′·F_i + b_fc′) + (1−λ)·(W_fc·F_i + b_fc))
where W_fc′ and b_fc′ denote the parameters of the frozen branch; W_fc and b_fc denote the trainable parameters that learn the new knowledge; λ is a weight vector, set to 0.5, that balances the influence of the prior knowledge and the new knowledge; act(·) denotes a nonlinear activation function; F denotes the output of the fully connected layer and F_i its input.
The invention has the beneficial effects that: the method can still detect small targets accurately when only few remote sensing scene image samples are available. Specifically, to cope with the high inter-class similarity and large intra-class variation of remote sensing images, the invention adds a context feature enhancement module after the feature extraction network, which selectively enhances class-aware features using image-level context information indicating the presence or absence of each object class, thereby helping the object detector better understand the scene. In the small sample fine-tuning stage, part of the parameters of the trained feature extraction network are frozen, balanced base-class and new-class data are input into the network, corrected proposal boxes are obtained by the subsequent RoI feature extractor and the representation compensation module, and target prediction is carried out on new-class remote sensing images.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of a small sample remote sensing image detection method based on feature enhancement in the invention;
FIG. 2 is a diagram of a small sample remote sensing image detection model structure based on feature enhancement in the invention;
FIG. 3 is a block diagram of the context feature enhancement (CFE) module of the present invention;
FIG. 4 is a block diagram of the representation compensation module of the present invention.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the following disclosure, which describes the embodiments of the present invention with reference to specific examples. The invention may also be practiced or carried out with other, different embodiments, and the details of this description may be modified or varied without departing from the spirit and scope of the present invention. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention, and the following embodiments and the features in the embodiments may be combined with each other as long as they do not conflict.
Referring to fig. 1 to 4, the invention provides a small sample remote sensing image detection method based on feature enhancement, which specifically comprises the following steps:
step 1: and selecting a plurality of categories of the remote sensing image data set to be respectively divided into a basic category and a new category. During the division, the data set can be divided into a plurality of different groups, the types of the new types set in each group are different, and each type of the data set can be made into a new type as much as possible.
Step 2: construct the small sample target detection model based on feature enhancement, specifically: use Faster R-CNN as the main framework of the detection model, with ResNet-101 as the backbone network.
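A minimal sketch of assembling such a detector, assuming torchvision's detection API is used; the keyword arguments of resnet_fpn_backbone (weights vs. pretrained, trainable_layers) differ across torchvision versions, so they are indicative only, and the class count is an illustrative assumption.

```python
# Sketch of step 2: Faster R-CNN with a ResNet-101 + FPN backbone.
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

def build_detector(num_classes: int) -> FasterRCNN:
    # ResNet-101 backbone with a feature pyramid network.
    backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None,
                                   trainable_layers=5)
    # num_classes includes the background class in torchvision's convention.
    return FasterRCNN(backbone, num_classes=num_classes)

model = build_detector(num_classes=21)  # e.g. 20 base classes + background (assumed)
```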
Step 3: the selected small sample data is organized to obtain a support set Si and a query set Qi.
Step 4: extract features from the support samples in the support set Si and the query samples in the query set Qi to obtain a support image feature map F_si and a query image feature map F_Qi.
Step 5: input the obtained support image feature map F_si into the context feature enhancement module, which produces a feature map F_si' with more accurate target position information.
The context feature enhancement module applies dilated convolutions with different dilation rates to the multi-level features obtained in the previous step to capture context information from different receptive fields; the convolution kernels are 3×3 with dilation rates of 1, 3 and 5, and finally high-level semantic features and low-level detail features are fused through a conv1×1 convolution layer. On top of the fused features, two fully connected layers with a sigmoid activation function are added to predict the confidence of each target category in the remote sensing scene, trained with a binary cross-entropy loss. A sketch of this module is given below.
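The sketch below is one possible PyTorch realization of the CFE module; the channel sizes, the residual addition of the fused features, and the global average pooling used to obtain the image-level context vector are assumptions that the description above does not fix.

```python
# Sketch of the context feature enhancement (CFE) module: 3x3 dilated
# convolutions with dilation rates 1, 3 and 5, a conv1x1 fusion layer, and
# two fully connected layers with a sigmoid predicting class presence,
# trained with binary cross-entropy.
import torch
import torch.nn as nn

class ContextFeatureEnhancement(nn.Module):
    def __init__(self, in_channels: int = 256, num_classes: int = 20):
        super().__init__()
        # Dilated branches with different receptive fields (padding preserves H x W).
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, in_channels, 3, padding=d, dilation=d)
            for d in (1, 3, 5)
        ])
        # conv1x1 fuses high-level semantic and low-level detail features.
        self.fuse = nn.Conv2d(3 * in_channels, in_channels, kernel_size=1)
        # Two fully connected layers with a sigmoid predict per-class presence.
        self.classifier = nn.Sequential(
            nn.Linear(in_channels, in_channels // 2), nn.ReLU(inplace=True),
            nn.Linear(in_channels // 2, num_classes), nn.Sigmoid()
        )
        self.bce = nn.BCELoss()

    def forward(self, feat, class_presence=None):
        fused = self.fuse(torch.cat([b(feat) for b in self.branches], dim=1))
        enhanced = feat + fused                 # residual enhancement (assumed)
        pooled = enhanced.mean(dim=(2, 3))      # image-level context vector
        presence = self.classifier(pooled)      # per-class confidence
        loss = self.bce(presence, class_presence) if class_presence is not None else None
        return enhanced, presence, loss
```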
Step 6: inject the feature map F_si' into the feature pyramid network from top to bottom to enrich the context information; after matching the fused feature map output by the feature pyramid network with the query image feature map F_Qi, feed the result into the region proposal network module to generate proposal boxes.
Step 7: extract candidate targets from the query image feature map F_Qi using the proposal boxes.
Step 8: classify and regress the proposal boxes through a classifier and a box regressor; finally, the feature-enhanced detection model is fine-tuned by a loss function whose expression is:
L_det = L_cls + L_reg + θ·L_CFE
where L_cls denotes the target category classification loss, L_reg denotes the proposal box regression loss, and θ is a hyper-parameter controlling the context feature enhancement module loss L_CFE.
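A minimal sketch of this joint objective; the individual loss tensors are assumed to be computed elsewhere (by the detection heads and the CFE module), and θ = 0.5 is an illustrative value, not one specified by the invention.

```python
# Sketch of the joint objective L_det = L_cls + L_reg + theta * L_CFE.
import torch

def detection_loss(loss_cls: torch.Tensor,
                   loss_reg: torch.Tensor,
                   loss_cfe: torch.Tensor,
                   theta: float = 0.5) -> torch.Tensor:
    # theta weights the context feature enhancement loss against the
    # classification and box-regression losses of the detector.
    return loss_cls + loss_reg + theta * loss_cfe
```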
Step 9: in the second stage, the small sample fine-tuning stage, freeze the parameters of the backbone network, the context feature enhancement module and the feature pyramid network in the model, and input the balanced small sample remote sensing image data, i.e. base classes and new classes with K samples per class, into the trained feature extraction network.
In the usual small sample fine-tuning scheme, the parameters of the region proposal network are frozen, which prevents the region proposal network from learning new-class features; for this reason, the region proposal network is fine-tuned at this stage. The H_i×W_i×k anchor boxes generated by the anchor generator are passed through a 3×3 convolution and 1×1 convolutions for the classification and regression tasks respectively. The RPN then selects the top m positive anchors in each layer that may contain an object. To reserve more foreground boxes for the new categories, the preset m is doubled; see the sketch below.
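The sketch below illustrates such an RPN head and the doubled top-m selection; the channel counts, tensor shapes and function names are assumptions for illustration.

```python
# Sketch of the RPN head (3x3 conv followed by 1x1 convs for classification
# and regression) and of keeping the top-m foreground anchors per level,
# with m doubled during the small sample fine-tuning stage.
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    def __init__(self, in_channels: int = 256, num_anchors: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.cls_logits = nn.Conv2d(in_channels, num_anchors, 1)     # objectness
        self.bbox_pred = nn.Conv2d(in_channels, num_anchors * 4, 1)  # box deltas

    def forward(self, feat):
        t = torch.relu(self.conv(feat))
        return self.cls_logits(t), self.bbox_pred(t)

def select_top_anchors(objectness: torch.Tensor, m: int, fine_tuning: bool):
    # objectness: (N, A*H*W) scores for one pyramid level.
    if fine_tuning:
        m = 2 * m  # doubled to reserve more foreground boxes for new classes
    return objectness.topk(min(m, objectness.shape[1]), dim=1).indices
```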
Step 10: process the extracted features and output the predicted center point positions and categories.
Step 11: the features then enter the representation compensation module. The representation compensation module is built around a representation decoupling mechanism and compensates for the forgetting of base-class features in the fine-tuning stage while simultaneously learning the features of the new classes, thereby producing corrected proposal boxes.
The representation decoupling mechanism is introduced into the RoI feature extractor, replacing the single branch with parallel branches so as to decouple the representations of prior knowledge and new knowledge. Specifically, for each of the two consecutive fully connected (FC) layers in the RoI feature extractor, a parallel FC layer carrying the pre-trained parameters is added to represent the prior knowledge learned in the basic training stage. In the fine-tuning stage, the newly added branch is frozen to protect the previous knowledge, while the other branch remains trainable to learn the new knowledge. By fusing the outputs of the two parallel FC layers, the RoI feature extractor takes into account both the prior knowledge from the frozen layer and the new knowledge from the trainable layer. The computation of each FC layer equipped with the representation decoupling mechanism can be described as:
F = act(λ·(W_fc′·F_i + b_fc′) + (1−λ)·(W_fc·F_i + b_fc))
where W_fc′ and b_fc′ denote the parameters of the frozen branch; W_fc and b_fc denote the trainable parameters that learn the new knowledge; λ is a weight vector, set to 0.5, that balances the influence of the prior knowledge and the new knowledge; act(·) denotes a nonlinear activation function; F denotes the output of the fully connected layer and F_i its input.
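A minimal PyTorch sketch of a fully connected layer equipped with this representation decoupling mechanism; initialising both branches from the same pre-trained layer and using ReLU as the activation are assumptions.

```python
# Sketch of the decoupled FC layer: a frozen branch (W_fc', b_fc') keeps the
# knowledge learned in base training, a trainable branch (W_fc, b_fc) learns
# the new classes, and the two outputs are fused with lambda = 0.5.
import copy
import torch
import torch.nn as nn

class DecoupledFC(nn.Module):
    def __init__(self, fc: nn.Linear, lam: float = 0.5):
        super().__init__()
        self.frozen = copy.deepcopy(fc)       # pre-trained parameters, frozen
        for p in self.frozen.parameters():
            p.requires_grad = False
        self.trainable = copy.deepcopy(fc)    # initialised from base training, trainable
        self.lam = lam
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # F = act(lambda*(W_fc' x + b_fc') + (1 - lambda)*(W_fc x + b_fc))
        return self.act(self.lam * self.frozen(x) + (1.0 - self.lam) * self.trainable(x))
```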
Step 12: compute the IoU between the ground-truth annotation of the current image and the corrected proposal boxes.
Step 13: compute Precision and Recall from the IoU, and compute the evaluation index F1. The calculation is as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
where TP denotes the positive samples predicted by the model as the positive class, FP denotes the negative samples predicted as the positive class, and FN denotes the positive samples predicted as the negative class.
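A minimal sketch of this evaluation, assuming an IoU threshold of 0.5 and greedy one-to-one matching between predictions and ground-truth boxes; both values are common conventions rather than choices stated in the description.

```python
# Sketch of step 13: match corrected proposal boxes to ground-truth boxes by
# IoU, then compute Precision, Recall and F1. Boxes are (x1, y1, x2, y2).
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def precision_recall_f1(pred_boxes, gt_boxes, thr: float = 0.5):
    matched, tp = set(), 0
    for p in pred_boxes:
        best = max(((iou(p, g), j) for j, g in enumerate(gt_boxes) if j not in matched),
                   default=(0.0, -1))
        if best[0] >= thr:
            tp += 1
            matched.add(best[1])
    fp = len(pred_boxes) - tp   # predictions with no matching ground truth
    fn = len(gt_boxes) - tp     # ground-truth boxes that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```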
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the claims of the present invention.

Claims (6)

1. A small sample remote sensing image target detection method based on feature enhancement, characterized by comprising the following steps:
S1: acquiring small sample remote sensing image data, and dividing the remote sensing image data set into base classes and new classes to obtain a balanced small sample remote sensing image data set containing base classes and new classes;
S2: constructing a small sample target detection model based on feature enhancement;
S3: organizing the small sample remote sensing image data obtained in step S1 into a support set Si and a query set Qi;
S4: performing feature extraction on the support samples in the support set Si and the query samples in the query set Qi to obtain a support image feature map F_si and a query image feature map F_Qi;
S5: inputting the support image feature map F_si into a context feature enhancement module to obtain an enhanced feature map F_si';
S6: matching the enhanced feature map F_si' with the query image feature map F_Qi and inputting the result into a region proposal network module to generate proposal boxes;
S7: extracting candidate targets from the query image feature map F_Qi using the proposal boxes, classifying and regressing the proposal boxes through a classifier and a box regressor, and finally fine-tuning the feature-enhanced detection model through a loss function;
S8: small sample fine-tuning stage: freezing the parameters of the backbone network, the context feature enhancement module and the feature pyramid network in the detection model, and inputting the balanced small sample remote sensing image data into the trained backbone network to extract features;
S9: processing the extracted features and outputting the predicted center point positions and categories; then entering a representation compensation module to obtain corrected proposal boxes;
S10: computing the IoU between the ground-truth annotation of the current image and the corrected proposal boxes;
S11: computing Precision and Recall from the IoU, and computing the evaluation index F1.
2. The feature enhancement-based small sample remote sensing image target detection method according to claim 1, wherein in step S1, dividing the remote sensing image data set into base classes and new classes specifically comprises: taking several categories in the data set as base classes and the remaining categories as new classes; during the division the data set can be split into several different groups with different new classes in each group, so that every category of the data set can serve as a new class.
3. The feature-enhancement-based small sample remote sensing image target detection method according to claim 1, wherein in step S2, constructing the feature-enhancement-based small sample target detection model specifically comprises: using Faster R-CNN as the main framework of the detection model, with ResNet-101 as the backbone network for feature extraction.
4. The feature-enhancement-based small sample remote sensing image target detection method according to claim 1, wherein step S5 specifically comprises: the support image feature map F_si is input to the context feature enhancement module; the obtained features of different levels are processed by dilated convolutions with different dilation rates to obtain context information from different receptive fields, and finally high-level semantic features and low-level detail features are fused through a conv1×1 convolution layer; on top of the fused features, two fully connected layers with a sigmoid activation function are added to predict the confidence of each target category in the remote sensing scene, trained with a binary cross-entropy loss.
5. The method according to claim 1, wherein in step S9, the representation compensation module is built around a representation decoupling mechanism and is used to compensate for the forgetting of base-class features in the fine-tuning stage while simultaneously learning the features of the new classes, so as to obtain corrected proposal boxes.
6. The feature enhancement-based small sample remote sensing image target detection method according to claim 5, wherein in step S9, the representation compensation module specifically comprises: a representation decoupling mechanism is introduced into the RoI feature extractor, replacing each single branch with parallel branches so as to decouple the representations of prior knowledge and new knowledge; specifically, for each of the two consecutive fully connected layers in the RoI feature extractor, a parallel fully connected layer carrying the pre-trained parameters is added to represent the prior knowledge learned in the basic training stage; in the fine-tuning stage, the newly added branch is frozen to protect the previous knowledge, while the other branch remains trainable to learn the new knowledge; by fusing the outputs of the two parallel fully connected layers, the RoI feature extractor takes into account both the prior knowledge from the frozen layer and the new knowledge from the trainable layer; the computation of each fully connected layer equipped with the representation decoupling mechanism is described as:
F = act(λ·(W_fc′·F_i + b_fc′) + (1−λ)·(W_fc·F_i + b_fc))
where W_fc′ and b_fc′ denote the parameters of the frozen branch; W_fc and b_fc denote the trainable parameters that learn the new knowledge; λ is a weight vector for balancing the prior knowledge and the new knowledge; act(·) denotes a nonlinear activation function; F denotes the output of the fully connected layer and F_i its input.
CN202310501608.1A 2023-05-06 2023-05-06 Small sample remote sensing image target detection method based on feature enhancement Pending CN116452818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310501608.1A CN116452818A (en) 2023-05-06 2023-05-06 Small sample remote sensing image target detection method based on feature enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310501608.1A CN116452818A (en) 2023-05-06 2023-05-06 Small sample remote sensing image target detection method based on feature enhancement

Publications (1)

Publication Number Publication Date
CN116452818A true CN116452818A (en) 2023-07-18

Family

ID=87122008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310501608.1A Pending CN116452818A (en) 2023-05-06 2023-05-06 Small sample remote sensing image target detection method based on feature enhancement

Country Status (1)

Country Link
CN (1) CN116452818A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977635A (en) * 2023-07-19 2023-10-31 中国科学院自动化研究所 Category increment semantic segmentation learning method and semantic segmentation method
CN116977635B (en) * 2023-07-19 2024-04-16 中国科学院自动化研究所 Category increment semantic segmentation learning method and semantic segmentation method
CN117237697A (en) * 2023-08-01 2023-12-15 北京邮电大学 Small sample image detection method, system, medium and equipment
CN117576404A (en) * 2024-01-15 2024-02-20 之江实验室 Semantic segmentation system, method and device based on image large model fine tuning strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination