CN114818963B - Small sample detection method based on cross-image feature fusion - Google Patents

Small sample detection method based on cross-image feature fusion

Info

Publication number
CN114818963B
CN114818963B
Authority
CN
China
Prior art keywords
feature
image
support set
module
cross
Prior art date
Legal status
Active
Application number
CN202210506243.7A
Other languages
Chinese (zh)
Other versions
CN114818963A (en)
Inventor
贾海涛
田浩琨
胡佳丽
王子彦
吴俊男
任利
刘子骥
许文波
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210506243.7A
Publication of CN114818963A
Application granted
Publication of CN114818963B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/2113Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/285Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a small sample detection method based on cross-image feature fusion. The main structure of the invention is a small sample learning algorithm built on the two-stage target detection algorithm Faster R-CNN. First, a query image and support set images are input to extract feature maps, and the resulting feature maps are sent to a cross-image feature fusion module, which uses feature information from the support set to strengthen the expression of target feature information in the query set. The feature maps are then sent to an improved RPN module to generate ROI feature vectors; an improved feature aggregation module screens the candidate boxes and completes the spatial alignment of the support set vectors and the ROI feature vectors. Finally, the processed ROI and support set vectors are sent to a classifier for classification, and the target category and the accurate positioning of the bounding box are output. Multiple groups of ablation and comparison experiments designed on the PASCAL VOC dataset achieve good detection accuracy and verify the effectiveness of the detection algorithm.

Description

Small sample detection method based on cross-image feature fusion
Technical Field
The invention relates to the field of small sample detection in deep learning, in particular to a target detection technology under the condition of small samples.
Background
In recent years, target detection technology based on deep learning has attracted great attention and has been applied in industry and daily life, for example in intelligent monitoring, automatic driving and face authentication. Compared with conventional image algorithms, current deep learning-based object detection algorithms require massive amounts of data for model training. Typically, annotating one instance takes around 10 seconds, a dataset requires tens of thousands of such instances, and the annotation process is therefore time-consuming. Since real data follows a long-tail distribution, some sample data exists in very low proportion, for example certain medical images, or rare animals and plants in nature. With the development of small sample image classification algorithms, the research emphasis of small sample learning in the image field has gradually shifted to small sample target detection.
Inspired by research on small sample classification algorithms, most current small sample target detection algorithms adopt a two-stage mode: the approximate position of a target is determined first, and then a classifier categorizes the target and refines its position estimate. On the basis of this two-stage mode, changes must be made for small sample application scenarios; the current mainstream changes can be summarized as adopting meta-learning for training and testing, aggregating the features of the support set and the query set, increasing the relevance of feature vectors, and modifying the loss function to improve vector discrimination. Therefore, the invention proposes a small sample detection method based on cross-image feature fusion, in which a cross-image feature fusion module is added and the RPN module and the feature aggregation module are improved.
Disclosure of Invention
In order to solve the problem of target detection under small sample conditions, the invention designs a cross-image feature fusion network for small sample target detection. Aiming at the problem that the RPN easily ignores feature information of new classes, a cross-image feature fusion mechanism is designed: the support set vectors are used to weight the query set vectors and highlight the feature information of new targets in the query image, which significantly reduces the RPN's omission of target regions. Aiming at the problem that classification errors of the binary classifier in the RPN may cause high-IOU proposal regions to be missed, the invention adopts a strategy of connecting several classifiers in parallel to reduce the miss probability. Aiming at the shortcomings of feature aggregation, the invention designs a ranking-based feature aggregation mode that filters out ROI feature vectors with low similarity to the support set, spatially aligns the query set features with the support set features, and then sends the feature vectors to the subsequent classification module.
The technical scheme adopted by the invention is as follows:
step 1: extracting image features of a support set and a query set through a shared feature extraction network, denoted as f s And f q Sending the group of feature images into a cross-image feature fusion module;
step 2: calculating f s And f q Is given f by the degree of similarity of s The feature images in the support set feature images f 'are weighted to generate weighted support set feature images f' s
Step 3: calculate f' s And f q Attention profile f in between a And map the attention characteristic map f a And query set feature map f q Matrix multiplication is performed to obtain a query set feature map f 'with support set attention information' q
Step 4: will f' q Generation in an improved RPN networkCandidate frame, generating ROI feature vector after ROI Pooling, and combining with f' s And sending the target detection result to an improved feature aggregation module to generate a final feature vector, sending the final feature vector to a classifier for classification, sending the final feature vector to a frame regression module for accurate positioning of a prediction frame, and outputting a final target detection result to comprise a target category and a target frame coordinate.
Compared with the prior art, the invention has the beneficial effects that:
(1) Compared with the traditional small sample target detection algorithm, the method has higher detection precision;
(2) The method achieves better detection performance across various classes in small sample detection.
Drawings
Fig. 1 is: structure diagram of the small sample detection algorithm based on cross-image feature fusion.
Fig. 2 is: schematic diagram of the support set weighting module.
Fig. 3 is: schematic diagram of the cross-image spatial attention feature fusion module.
Fig. 4 is: schematic diagram of the improved RPN network structure.
Fig. 5 is: schematic diagram of the improved feature aggregation module.
Fig. 6 is: comparison of mAP values for CIF-FSOD target recognition on the PASCAL VOC dataset.
Detailed Description
The present invention will be described in further detail with reference to the embodiments and the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present invention more apparent.
Based on a two-stage target detection network, the invention designs a small sample target detection algorithm with cross-image feature fusion, called CIF-FSOD; the network structure is shown in fig. 1. The whole network structure mainly comprises three parts: a cross-image feature fusion module, an improved RPN module and an improved feature aggregation module.
First, the input images pass through a shared feature extraction network to extract image features of the support set and the query set, and this group of feature maps is sent to the cross-image feature fusion module. The cross-image feature fusion module designed in the invention consists mainly of a support set weighting module and a cross-image spatial attention fusion module, which are used to strengthen the expression of features in the query set. The feature map strengthened by this module is input to the RPN network in the next part to generate candidate boxes; the purpose of the module is to improve the quality and efficiency of the candidate boxes generated in the RPN.
The support set weighting module is used to weight the feature maps in the support set, as shown in fig. 2. Because the shape, angle and brightness of the same object differ across support set feature maps, the contribution of their feature information to the query set features also differs. The weighting information of the support set can be obtained by calculating the similarity between the support set feature maps and the query set feature map: support set feature maps with higher similarity are assigned higher weights, and those with lower similarity are assigned lower weights.
The similarity between two feature maps is calculated by a relation network, as shown in formula (1):

$$\mathrm{sim}(f_s, f_q) = R_\phi\big[E(f_s), E(f_q)\big] \qquad (1)$$

where $f_s$ represents a support set feature map, $f_q$ represents the query set feature map, $E(\cdot)$ represents an embedding function, and $R_\phi(\cdot)$ represents the relation network. The output similarity results $\mathrm{sim}(f_s, f_q)$ are then sent into a softmax function for normalization to generate the weight information, as shown in formula (2):

$$W_i = \frac{e^{\mathrm{sim}_i}}{\sum_{j=1}^{n} e^{\mathrm{sim}_j}} \qquad (2)$$

where $W_i$ represents the weight of the i-th feature map in the support set and $\mathrm{sim}_i$ represents the similarity of the i-th support set feature map to the query set feature map. Finally, the generated weight information is used to weight the original support set feature maps, as shown in formula (3):

$$f'_s = \sum_{i=1}^{n} W_i \, f_s^i \qquad (3)$$
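The support set weighting of formulas (1)-(3) can be sketched as follows, assuming a global-average-pooling embedding, a two-layer relation network, and that the weighted support maps are summed into a single map; the patent does not specify these architectural details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupportWeighting(nn.Module):
    """Support set weighting per formulas (1)-(3); the embedding and the
    relation network below are assumed stand-ins, not specified in the patent."""
    def __init__(self, channels):
        super().__init__()
        self.embed = nn.Sequential(                 # E(.) in formula (1)
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.relation = nn.Sequential(              # R_phi(.) in formula (1)
            nn.Linear(2 * channels, channels), nn.ReLU(),
            nn.Linear(channels, 1))

    def forward(self, f_s, f_q):
        # f_s: (n, C, H, W) support feature maps; f_q: (1, C, H, W) query map
        e_q = self.embed(f_q).expand(f_s.size(0), -1)
        e_s = self.embed(f_s)
        sim = self.relation(torch.cat([e_s, e_q], dim=1)).squeeze(-1)  # (1)
        w = F.softmax(sim, dim=0)                                      # (2)
        # (3): weight the support maps (summed into one map -- an assumption)
        return (w.view(-1, 1, 1, 1) * f_s).sum(dim=0, keepdim=True)
```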
the cross-image-space attention feature fusion module generates a cross-image-space attention feature map by using a support set feature map and a query set feature map as shown in fig. 3, and the specific processes are as follows in formulas (4) and (5):
A(f s ,f q )=σ((f s ) T ·f q ) (4)
Figure GDA0004147762130000032
in the formula (4) of the present invention,
Figure GDA0004147762130000041
representing a query set feature map, < >>
Figure GDA0004147762130000042
Representing a support set feature map, A (f q ,f s ) For the generated cross-image-space attention profile, wherein +.>
Figure GDA0004147762130000043
Figure GDA0004147762130000044
Wherein, sigma (·) represents a softmax function, and the specific calculation process in the function is shown in formula (5), wherein +.>
Figure GDA0004147762130000045
Representation of
Figure GDA0004147762130000046
S, transposed result ji Representing the generated attention profile, the influence of the ith position on the jth position, namely the influence of the ith spatial position in the support set profile on the jth spatial position in the query set profile. The generated space attention feature image and the query feature image are then utilized to carry out matrix multiplication to generate a query set feature image containing space feature information, and the method comprises the following specific processesAs in formula (6):
Figure GDA0004147762130000047
wherein p is i Feature information representing the ith position of the output cross-image feature map and finally generating a query set feature map containing space attention information
Figure GDA0004147762130000048
Original features of query set->
Figure GDA0004147762130000049
Figure GDA00041477621300000410
Merging and finally outputting a cross-image feature fusion feature map +.>
Figure GDA00041477621300000411
f output In two parts of information, one part of which is the original query set feature map f q Another part is a query set feature map f weighted by support set spatial attention information p
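A sketch of the cross-image spatial attention fusion of formulas (4)-(6) follows; the final merge of $f_p$ with $f_q$ is assumed here to be channel concatenation followed by a 1x1 convolution, which the patent leaves unspecified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossImageAttention(nn.Module):
    """Cross-image spatial attention fusion per formulas (4)-(6); the merge of
    f_p with f_q is assumed to be concatenation plus a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_s, f_q):
        b, c, h, w = f_q.shape
        q = f_q.view(b, c, h * w)                                  # (B, C, N)
        s = f_s.view(b, c, h * w)
        # (4)+(5): attention s_ji, softmax over support positions i
        attn = F.softmax(torch.bmm(q.transpose(1, 2), s), dim=-1)  # (B, N, N)
        # (6): p_j = sum_i s_ji * f_q^i
        f_p = torch.bmm(attn, q.transpose(1, 2)).transpose(1, 2).view(b, c, h, w)
        return self.merge(torch.cat([f_q, f_p], dim=1))            # f_output
```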
Secondly, the output cross-image feature fusion feature map is sent to the improved RPN network to predict candidate regions. The invention improves the classifier in the RPN; the improved RPN network can reduce the probability of missing high-IOU candidate boxes, and the specific network structure is shown in fig. 4. Compared with the traditional RPN network, the improved RPN network adds several classifiers connected in parallel. The RPN network is divided into two parts: one part is frozen during fine-tuning and is mainly used to extract candidate boxes of base classes, and the other part continues to learn new parameters during fine-tuning, in order to avoid missed detection of new classes and reduce the probability of missing high-IOU candidate boxes. The classifier used to detect base classes is called the base class classifier; the other part, more sensitive to new classes, consists of three parallel sub-classifiers and is called the new class classifier. The classification scores produced by the three parallel classifiers for the same anchor box should differ: some classifiers may miss a high-IOU candidate due to misclassification, but the remaining classifiers can correct the error in time. The final output classification result is determined jointly by the outputs of the base class classifier and the new class classifier. First, the operating flow of the new class classifier is analyzed: for each anchor box $x_i$, three scores are output simultaneously to judge whether the current anchor box belongs to the foreground or the background, as in formula (7):
$$s_i^j = f_{cls}^j(x_i), \quad j = 1, 2, 3 \qquad (7)$$

where $f_{cls}^j(\cdot)$ represents a binary classifier and $s_i^j$ represents the score output by the j-th classifier for the i-th anchor box. The scores output by the classifiers are sent into a sigmoid function, which maps the values into the interval $[0, 1]$; finally the classifier with the best score is selected, as in formula (8):

$$j^* = \arg\min_{j} \alpha(s_i^j) \qquad (8)$$

where $j^*$ denotes the classifier with the best current classification effect and $\alpha(\cdot)$ represents the sigmoid function. The invention adopts the calculation method of formula (8): when one classifier deviates too strongly, the result output by the lowest-scoring classifier, as selected by formula (8), better matches the actual situation. The score output by the new class classifier is not the final result; the maximum must also be taken with the classifier mainly used to extract base class candidate boxes, as shown in formula (9):

$$\mathrm{score} = \max(S_{novel}, S_{base}) \qquad (9)$$

where $S_{novel}$ represents the output score of the classifier mainly used to extract new class candidate boxes, $S_{base}$ represents the output score of the classifier mainly used to extract base class candidate boxes, and score is the final output score.
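The parallel-classifier scoring of formulas (7)-(9) can be sketched as follows; the 1x1 convolutional head shapes are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class ParallelRPNScore(nn.Module):
    """Anchor scoring with one base-class head and three parallel new-class
    heads, per formulas (7)-(9); the 1x1 conv head shape is an assumption."""
    def __init__(self, channels, num_anchors):
        super().__init__()
        self.base_head = nn.Conv2d(channels, num_anchors, 1)  # frozen in fine-tuning
        self.novel_heads = nn.ModuleList(
            nn.Conv2d(channels, num_anchors, 1) for _ in range(3))

    def forward(self, feat):
        s_base = torch.sigmoid(self.base_head(feat))
        # (7): three scores per anchor; (8): keep the lowest sigmoid score
        s_novel = torch.stack(
            [torch.sigmoid(h(feat)) for h in self.novel_heads]).min(dim=0).values
        return torch.max(s_novel, s_base)                     # (9)
```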
In the inference process, the improved RPN shares the feature extraction module with the traditional RPN network. Part of the feature information obtained from the feature extraction module is sent to the classifiers and part to the box regression module. The classifiers output the score of the current anchor box being foreground, and the box regression module outputs the relatively accurate positioning information of the anchor box, i.e., the offset relative to the anchor box. All anchor boxes are sorted by foreground score from high to low, the top W anchor boxes are selected as candidate boxes, and K of them are selected as the final candidate boxes through non-maximum suppression (NMS).
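The top-W selection followed by NMS can be sketched with torchvision's nms; the values of W, K and the IOU threshold below are illustrative, as the patent does not state them.

```python
import torch
from torchvision.ops import nms

def select_proposals(boxes, scores, top_w=2000, top_k=300, iou_thresh=0.7):
    """Keep the W highest-scoring anchors, then the K survivors of NMS;
    top_w, top_k and iou_thresh are illustrative values, not from the patent."""
    order = scores.argsort(descending=True)[:top_w]
    boxes, scores = boxes[order], scores[order]
    keep = nms(boxes, scores, iou_thresh)[:top_k]
    return boxes[keep], scores[keep]
```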
Finally, the generated candidate boxes pass through ROI Pooling to generate ROI feature vectors. The ROI feature vectors and the support set feature maps are sent to the improved feature aggregation module to generate the final feature vectors, which are sent to the classifier for classification and to the box regression module for accurate positioning of the predicted box; the final target detection result, containing the target category and target box coordinates, is output. The improved feature aggregation module designed by the invention is shown in fig. 5: the ROI features are ranked using the similarity between the support set image features and the query set image features, low-quality ROI feature vectors are filtered out by a preset threshold, and the retained ROI features are spatially aligned with the support set image features, reducing feature misalignment between the support set image features and the query set image.
To filter low-quality ROI candidate vectors, the invention uses a relation network to calculate the similarity between candidate vectors and support set vectors, as in formula (10):

$$\mathrm{sim}(x_{roi}, x_s) = R_\phi\big[\mathrm{Concat}(E_\varphi(x_{roi}), E_\varphi(x_s))\big] \qquad (10)$$

where $x_{roi}$ represents an ROI feature vector output by the RPN, $x_s$ represents a support set vector, $E_\varphi(\cdot)$ represents the embedding module in the relation network, responsible for embedding feature vectors into a specific space, $\mathrm{Concat}[\cdot]$ represents the merging and superposition of two feature vectors, and $R_\phi(\cdot)$ represents the relation module used to calculate the similarity between two feature vectors. The obtained similarities are sorted, and some low-quality candidate regions are filtered out. The remaining feature vectors need to be spatially aligned: first, the invention calculates a similarity matrix between the ROI feature vectors and the support set feature vectors, as shown in formula (11):

$$M(i, j) = x'_{roi} \odot x'_s \qquad (11)$$

where $x_{roi} \in \mathbb{R}^{H \times W \times C}$ represents an ROI feature vector output by the RPN and $x_s \in \mathbb{R}^{H \times W \times C}$ represents the class prototype generated from the support set vectors. A reshape operation on $x_{roi}$ generates $x'_{roi} \in \mathbb{R}^{N \times C}$ (where $N = H \times W$), and reshape and transpose operations on $x_s$ generate $x'_s \in \mathbb{R}^{C \times N}$ (where $N = H \times W$); $M(i, j)$ is the generated similarity matrix, representing the similarity between the i-th row of $x'_{roi}$ and the j-th column of $x'_s$. The operator $\odot$ denotes a special matrix multiplication in which the ordinary multiply-and-sum is replaced by the cosine similarity between the two vectors, as shown in formula (12):

$$M(i, j) = \frac{x'_{roi}(i) \cdot x'_s(j)}{\lVert x'_{roi}(i) \rVert \, \lVert x'_s(j) \rVert} \qquad (12)$$

After the similarity matrix is obtained, it is sent into a softmax function to calculate the normalized result, as in formula (13):

$$S(i, j) = \frac{e^{M(i,j)}}{\sum_{j=1}^{N} e^{M(i,j)}} \qquad (13)$$

where $S(i, j)$ represents the similarity weight after the softmax calculation. Spatial alignment is completed by assigning the weights to the support set features, as shown in formula (14):

$$f'_s(i) = \sum_{j=1}^{N} S(i, j) \, x'_s(j) \qquad (14)$$

As can be seen, each vector $f'_s(i)$ in the final support set is determined by the similarity $S(i, j)$ between the vector and the query set ROI vector; vectors with higher similarity occupy a larger proportion in $f'_s(i)$, and the higher the similarity, the closer the spatial features, so the design achieves the aim of spatial alignment.
To demonstrate the effectiveness of the invention, the improved algorithm was compared with several advanced algorithms on the PASCAL VOC dataset. The invention trains 20 epochs in total, sets the initial learning rate to $10^{-3}$ with exponential decay (the learning rate is halved every 2 epochs), and uses Adam as the optimizer to increase the convergence rate, with parameters $\beta_1 = 0.5$, $\beta_2 = 0.9999$, $\epsilon = 0.5$. To prevent model overfitting, the random inactivation (dropout) rate is set to 0.1 and the weight decay parameter to $10^{-6}$. As shown in fig. 6, the detection accuracy of CIF-FSOD exceeds that of the Meta YOLO, Meta R-CNN, RepMet and FsDetView algorithms.
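Under the stated settings, the training configuration could be set up as in the following sketch; the model is a stand-in placeholder, and the unusual $\epsilon = 0.5$ is kept exactly as stated in the description.

```python
import torch

model = torch.nn.Linear(1, 1)  # placeholder for the CIF-FSOD network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.5, 0.9999), eps=0.5,  # eps as stated
                             weight_decay=1e-6)
# halve the learning rate every 2 epochs (the stated exponential decay)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)
dropout = torch.nn.Dropout(p=0.1)  # random inactivation rate 0.1
```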
While the invention has been described with respect to specific embodiments thereof, it will be understood by those skilled in the art that, unless otherwise indicated, any feature disclosed in this specification may be replaced by an alternative feature serving an equivalent or similar purpose; all of the features disclosed, or all of the steps in any method or process, may be combined in any manner, except for mutually exclusive features and/or steps.

Claims (1)

1. The small sample detection method based on cross-image feature fusion is characterized by comprising the following steps:
Step 1: extract image features of the support set and the query set through a shared feature extraction network, denoted $f_s$ and $f_q$, and send this group of feature maps to a cross-image feature fusion module, wherein the cross-image feature fusion module comprises a support set weighting module and a cross-image spatial attention fusion module;
Step 2: calculate the similarity between $f_s$ and $f_q$ and use it to weight the feature maps in the support set, generating the weighted support set feature map $f'_s$;
Step 3: calculate the attention feature map $f_a$ between $f'_s$ and $f_q$, and matrix-multiply $f_a$ with the query set feature map $f_q$ to obtain the query set feature map $f'_q$ carrying support set attention information;
Step 4: send $f'_q$ into the improved RPN network to generate candidate boxes, generate ROI feature vectors after region-of-interest ROI Pooling, and send them together with $f'_s$ to the improved feature aggregation module to generate the final feature vectors; send these to the classifier for classification and to the box regression module for accurate positioning of the prediction box, and output the final target detection result containing the target category and the target box coordinates; the improved RPN network comprises a plurality of classifiers connected in parallel, and the improved feature aggregation module screens the candidate boxes and completes the spatial alignment of the support set vectors and the ROI feature vectors.
CN202210506243.7A 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion Active CN114818963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210506243.7A CN114818963B (en) 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion


Publications (2)

Publication Number Publication Date
CN114818963A (en) 2022-07-29
CN114818963B (en) 2023-05-09

Family

ID=82513916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210506243.7A Active CN114818963B (en) 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion

Country Status (1)

Country Link
CN (1) CN114818963B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091787A (en) * 2022-10-08 2023-05-09 中南大学 Small sample target detection method based on feature filtering and feature alignment
CN116403071B (en) * 2023-03-23 2024-03-26 河海大学 Method and device for detecting few-sample concrete defects based on feature reconstruction


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017039967A1 (en) * 2015-09-01 2017-03-09 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Method and device for detecting antigen-specific antibodies in a biological fluid sample by using neodymium magnets
CN112434721A (en) * 2020-10-23 2021-03-02 特斯联科技集团有限公司 Image classification method, system, storage medium and terminal based on small sample learning
CN112818903A (en) * 2020-12-10 2021-05-18 北京航空航天大学 Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN113052185A (en) * 2021-03-12 2021-06-29 电子科技大学 Small sample target detection method based on fast R-CNN
CN113705570A (en) * 2021-08-31 2021-11-26 长沙理工大学 Few-sample target detection method based on deep learning
CN114220063A (en) * 2021-11-17 2022-03-22 浙江大华技术股份有限公司 Target detection method and device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Ziying Song et al. "MSFYOLO: Feature fusion-based detection for small objects." IEEE Latin America Transactions, 2022, vol. 20, no. 5, pp. 823-830. *
Liu Zhihong. "SAR image target detection method based on a feature fusion convolutional neural network." Microprocessors, 2020, vol. 41, no. 2, pp. 31-37. *
Peng Hao et al. "Small sample object detection algorithm with multi-scale selection pyramid network." Journal of Frontiers of Computer Science and Technology, 2021, vol. 16, no. 7, pp. 1649-1660. *
Tian Haokun. "Research on target classification and recognition algorithms under small sample conditions." China Master's Theses Full-text Database, Information Science and Technology, 2023, no. 1, pp. I138-2086. *
Jia Haitao. "Research on moving target detection and recognition algorithms." China Master's and Doctoral Dissertations Full-text Database (Master), Information Science and Technology, 2006, no. 12, pp. I140-529. *
Deng Jiehang et al. "Few-shot diatom detection combining multi-scale multi-head self-attention and online hard example mining." Journal of Computer Applications, 2022, vol. 42, no. 8, pp. 2593-2600. *

Also Published As

Publication number Publication date
CN114818963A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN107273845B (en) Facial expression recognition method based on confidence region and multi-feature weighted fusion
CN114818963B (en) Small sample detection method based on cross-image feature fusion
CN105447473B (en) A kind of any attitude facial expression recognizing method based on PCANet-CNN
US7362892B2 (en) Self-optimizing classifier
CN104850865B (en) A kind of Real Time Compression tracking of multiple features transfer learning
CN103996018B (en) Face identification method based on 4DLBP
CN108846413B (en) Zero sample learning method based on global semantic consensus network
Jing et al. Yarn-dyed fabric defect classification based on convolutional neural network
CN109191455A (en) A kind of field crop pest and disease disasters detection method based on SSD convolutional network
CN113408605B (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN107918772B (en) Target tracking method based on compressed sensing theory and gcForest
CN110827260B (en) Cloth defect classification method based on LBP characteristics and convolutional neural network
CN111582345A (en) Target identification method for complex environment under small sample
CN109583375B (en) Multi-feature fusion face image illumination identification method and system
CN105930792A (en) Human action classification method based on video local feature dictionary
CN114419413A (en) Method for constructing sensing field self-adaptive transformer substation insulator defect detection neural network
CN112749675A (en) Potato disease identification method based on convolutional neural network
CN113963026A (en) Target tracking method and system based on non-local feature fusion and online updating
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN114255371A (en) Small sample image classification method based on component supervision network
CN113033345B (en) V2V video face recognition method based on public feature subspace
CN110414626A (en) A kind of pig variety ecotype method, apparatus and computer readable storage medium
Jeevanantham et al. Deep Learning Based Plant Diseases Monitoring and Detection System
CN111339950B (en) Remote sensing image target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant