CN114549894A - Small sample image increment classification method and device based on embedded enhancement and self-adaptation - Google Patents

Small sample image increment classification method and device based on embedded enhancement and self-adaptation

Info

Publication number
CN114549894A
Authority
CN
China
Prior art keywords
image
training
classification
module
incremental
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210067078.XA
Other languages
Chinese (zh)
Inventor
宋美娜
鄂海红
张如如
何佳雯
王莉菲
袁立飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202210067078.XA priority Critical patent/CN114549894A/en
Priority to PCT/CN2022/087211 priority patent/WO2023137889A1/en
Publication of CN114549894A publication Critical patent/CN114549894A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a small sample image incremental classification method based on embedded enhancement and self-adaptation, which comprises the following steps: acquiring an image incremental classification system, wherein the system is used for performing classification tasks on images to be classified; acquiring an image to be classified and uploading it to the system for identification; when identification fails, acquiring a small number of images of that class as training samples, calculating target prototypes from the training samples through a feature pre-training module, and adaptively adjusting the target prototypes and/or the original prototypes through a mixed relation mapping module so as to update all prototypes in the system and enable classification and identification of images of that class; and when identification succeeds, classifying and identifying the image to be classified through the feature pre-training module, the mixed relation mapping module and the classifier, and outputting the classification result. The method enhances the extensibility of the classifier, introduces a mixed relation mapping mechanism to optimize the prototype representations of samples, and enables the system to gradually adapt to the identification of all visible image classes.

Description

Small sample image increment classification method and device based on embedded enhancement and self-adaptation
Technical Field
The invention relates to the technical field of automatic identification, in particular to a small sample image increment classification method and device based on embedded enhancement and self-adaptation.
Background
Deep learning techniques have achieved remarkable success in many computer vision tasks, thanks to the availability of large labeled data sets. However, manually labeling data is expensive and time-consuming, and images are so diverse that it is nearly impossible to label every possible image type at once; most existing classification algorithms are therefore developed in a closed, static environment for one or a few specific classes. Real scenes, in contrast, are usually dynamic, open and unstable. As samples of new classes continually appear, a classification algorithm needs a large amount of newly labeled data together with the old data to retrain the model, which is costly. Moreover, the data volume of a new class is often small, and this scarcity causes severe overfitting, which hinders incremental learning. The main research directions in the image classification field are as follows:
Scheme 1) Incremental learning: aims to continuously process the sufficient newly added data that keeps appearing in the real world, retaining and even integrating and optimizing old knowledge while learning new knowledge. Most research methods obtain knowledge from old classes by storing a limited number of old-class samples, or rely on losses designed to prevent forgetting of previous tasks; these methods work well when the volume of new sample data is sufficient, but underperform when incremental samples are few, and the data shortage in small sample incremental learning further exacerbates the problems of knowledge forgetting and overfitting.
Scheme 2) Small sample learning: aims to enable a model trained from only a few labeled training examples to classify unseen images. Most research trains the model parameters through meta learning, metric learning, optimization-based learning and the like, so that the model quickly adapts to new small sample data; however, such a model focuses on capturing the features useful for the current task and discards the data features that are discriminative for previous or future tasks, which is not conducive to incremental learning.
Scheme 3) Incremental small sample learning: aims to incrementally provide the ability to classify new classes from a small amount of sample data without forgetting old classes. Current research follows two main approaches: one fine-tunes the embedding model with a small amount of new sample data and unifies the classifier, but fine-tuning the network in a new session causes forgetting of old-class knowledge; the other decouples the embedded representation from the learning of the classifier and only updates the classifier when learning a new task, but the frozen embedded representation network cannot adapt its feature embeddings to subsequent small sample incremental tasks, which is not conducive to the adaptive learning of new samples.
Therefore, most prior research has not provided an effective method to cope with the dynamic development of real scenes, which limits the development of deep learning in the related fields. Aiming at these defects of the prior art, the invention designs a small sample image incremental classification method and device based on embedded enhancement and self-adaptation, built on the decoupling of the embedded representation and the classifier in scheme 3). The differences are twofold: a well-generalized feature embedding is considered crucial for subsequent small sample incremental tasks, so the feature extraction network is further enhanced; and at the same time, the prototype representations and the query data feature embedded representations are adaptively adjusted through a mixed relation mapping mechanism.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the first objective of the present invention is to provide a small sample image incremental classification method based on embedded enhancement and self-adaptation, so as to avoid complete retraining of the model when new-class images appear, greatly reduce the computational resource overhead, and support the long-term operation of the system.
The second purpose of the invention is to provide a small sample image incremental classification device based on embedded enhancement and self-adaptation.
A third object of the invention is to propose a non-transitory computer-readable storage medium.
A fourth object of the invention is to propose a computer program product.
To achieve the above object, an embodiment of a first aspect of the present invention provides a small sample image incremental classification method based on embedded enhancement and self-adaptation, including:
acquiring an image increment classification system, wherein the image increment classification system is used for performing increment classification tasks on images to be classified;
acquiring an image to be classified, and uploading the image to be classified to the image incremental classification system for identification;
when the image incremental classification system fails to identify, acquiring a small number of images of the class from the images to be classified as training samples, calculating the training samples through a feature pre-training module to obtain target prototypes, and performing adaptive adjustment on the target prototypes and/or the original prototypes through a mixed relation mapping module to update all prototypes in the image incremental classification system, so that the image to be classified of the unidentified class is identified by the adjusted image incremental classification system;
and when the image incremental classification system succeeds in identification, classifying and identifying the image to be classified through the feature pre-training module, the mixed relation mapping module and the NCM classifier, and outputting a classification result.
Optionally, in an embodiment of the present invention, the acquiring an image incremental classification system includes:
selecting an embedded representation model according to the data set and the task characteristics, forming a feature pre-training module by combining self-supervised learning and an attention mechanism, and pre-training the feature pre-training module based on the image classification labeling results;
acquiring an output result of the feature pre-training module;
and completing the training of the image incremental classification system based on the output result of the feature pre-training module.
Optionally, in an embodiment of the present invention, the obtaining an output result of the feature pre-training module may include:
if the output result of the feature pre-training module does not reach the expected precision, adjusting the hyper-parameters of the feature pre-training module and continuing the pre-training;
and if the output result of the feature pre-training module reaches the expected precision, freezing the parameters of the feature pre-training module, selecting pseudo-incremental episodes through the pseudo-incremental episode selection module, and training the mixed relation mapping module and the NCM classifier.
Optionally, in an embodiment of the present invention, the pseudo-incremental episode selection module includes:
a pseudo base class, the pseudo base class comprising: in each iteration, a support set $S_b$ in N-way K-shot form and a corresponding query set $Q_b$ generated from the base data set by sampling N categories, each category containing E query samples, wherein N and E are positive integers, E > N, and the query set samples are different from the support set samples; the pseudo base class can be denoted $(S_b, Q_b)$;
a pseudo-incremental class, the pseudo-incremental class comprising: each sample of the pseudo base class rotated by 270 degrees; the pseudo-incremental class can be denoted $(S_i, Q_i)$.
Optionally, in an embodiment of the present invention, the mixed relation mapping module includes:
extracting, according to the pseudo-incremental episode, the feature representations of the pseudo base class and the pseudo-incremental class with the feature pre-training module F, and using the mean vector of the support set feature representations to calculate a prototype vector for each class as the initial weight of the classifier:

$$m_c = \frac{1}{K} \sum_{s_j \in S_b \cup S_i,\ y_j = c} F(s_j)$$

wherein c denotes a class, the pseudo base class and the pseudo-incremental class together contain 2N classes, $s_j$ is a sample of $S_b \cup S_i$, and $F(s_j)$ is its feature embedding; the prototype representations and the query set embedded representations of the pseudo base class and the pseudo-incremental class are merged respectively to obtain the prototype representation set and the query set embedded representation set of all classes, denoted $M_c$ and $X_q$ respectively;
prototype self-mapping (PSP), which adaptively adjusts all prototypes by establishing a global dependency relationship between the original prototype representations and the new prototype representations; the input of the PSP adopts the (Query, Key, Value) triple form, Query, Key and Value share the same input source $M_c$, and the PSP can be expressed as:

$$\mathrm{Query} = M_c W_Q,\quad \mathrm{Key} = M_c W_K,\quad \mathrm{Value} = M_c W_V$$

$$M_c' = M_c + \mathrm{softmax}\!\left(\frac{\mathrm{Query}\,\mathrm{Key}^\top}{\sqrt{d}}\right)\mathrm{Value}$$

wherein d is the dimension of Query, and $W_Q$, $W_K$, $W_V$ are the learnable parameters of three linear projection layers that project the original prototypes into a shared metric space; softmax normalization yields a relation matrix between the prototype representations in the shared space, the relation matrix serves as weight coefficients to aggregate information from all prototype representations in Value, and the result is fused with the original prototypes to obtain the updated prototypes $M_c'$;
query set cross-mapping (QCP), which establishes the correlation between the query set embedded representations and each prototype so as to adapt to the current classification task; since the query set embedded representations are classified by their distance to the prototype representations, a cross-mapping from the query set embedded representation $X_q$ to the prototypes $M_c'$ is introduced to adjust the query set embedded representations so that the query set samples better adapt to the target classification task:

$$X_q' = X_q + \mathrm{softmax}\!\left(\frac{X_q \tilde{W}_Q (M_c' \tilde{W}_K)^\top}{\sqrt{d}}\right) M_c' \tilde{W}_V$$

and optimization training, which calculates the semantic differences between all query set embedded representations and prototypes through a cosine similarity function, classifies with the nearest class mean (NCM) classifier, and optimizes the model parameters with a cross-entropy loss function.
Optionally, in an embodiment of the present invention, the training of the mixed relation mapping module and the NCM classifier includes:
if the training result of the mixed relation mapping module and the NCM classifier does not reach the expected precision, adjusting the hyper-parameters, selecting new pseudo-incremental episodes through the pseudo-incremental episode selection module, and continuing the training;
and if the training result of the mixed relation mapping module and the NCM classifier reaches the expected precision, freezing the parameters of the mixed relation mapping module and ending the process.
In order to achieve the above object, an embodiment of a second aspect of the present invention provides a small sample image incremental classification device based on embedded enhancement and self-adaptation, including:
a first acquisition module, used for acquiring an image incremental classification system, wherein the image incremental classification system is used for performing an incremental classification task on an image to be classified;
the second acquisition module is used for acquiring an image to be classified and uploading the image to be classified to the image increment classification system for identification;
the first classification module is used for acquiring a small number of images of the category from the images to be classified as training samples when the image incremental classification system fails to identify, calculating the training samples through a feature pre-training module to obtain target prototypes, and performing adaptive adjustment on the target prototypes and/or the original prototypes through a mixed relation mapping module to update all prototypes in the image incremental classification system, so that the image incremental classification system after adjustment can realize classification and identification on the images to be classified of the unidentified category;
and the second classification module is used for classifying and identifying the image to be classified through the feature pre-training module, the mixed relation mapping module and the NCM classifier when the image incremental classification system succeeds in identification, and outputting a classification result.
Optionally, in an embodiment of the present invention, the acquiring an image incremental classification system includes:
selecting an embedded representation model according to the data set and the task characteristics, forming a feature pre-training module by combining self-supervised learning and an attention mechanism, and pre-training the feature pre-training module based on the image classification labeling results;
acquiring an output result of the feature pre-training module;
and completing the training of the image incremental classification system based on the output result of the feature pre-training module.
In order to achieve the above object, an embodiment of a third aspect of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the small sample image incremental classification method based on embedded enhancement and self-adaptation according to the embodiment of the first aspect of the present invention is implemented.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a non-transitory computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the small sample image incremental classification method based on embedded enhancement and self-adaptation according to the embodiment of the first aspect of the present invention is implemented.
In summary, the present invention provides a small sample image incremental classification method, device, computer equipment and non-transitory computer-readable storage medium based on embedded enhancement and self-adaptation. They make the model adaptive so that it handles new and old classes better: the system can quickly adapt to the embedded representation of a new class from a small amount of sample data and incrementally gains the ability to identify new samples, thereby avoiding complete retraining of the model, reducing a large amount of computing resource overhead, and supporting the long-term operation of the system.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a small sample image incremental classification method based on embedded enhancement and self-adaptation according to an embodiment of the present invention;
FIG. 2 is a block diagram of an image incremental classification system according to the present invention;
FIG. 3 is a diagram of an incremental classification model according to the present invention;
FIG. 4 is a flowchart illustrating a training process of an incremental image classification system according to the present invention;
FIG. 5 is a flow chart of a small sample image incremental classification method based on embedded enhancement and self-adaptation provided by the present invention;
fig. 6 is a schematic structural diagram of a small sample image incremental classification apparatus based on embedded enhancement and self-adaptation according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A small sample image incremental classification method and apparatus based on embedded enhancement and adaptation according to an embodiment of the present invention is described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a small sample image incremental classification method based on embedded enhancement and self-adaptation according to an embodiment of the present invention.
As shown in fig. 1, the small sample image incremental classification method based on embedded enhancement and self-adaptation includes the following steps:
step S1: and acquiring an image increment classification system, wherein the image increment classification system is used for performing an increment classification task on the image to be classified.
In one embodiment of the present invention, the image incremental classification system is composed of four modules: the feature pre-training module, the pseudo-incremental episode selection module, the mixed relation mapping module and the NCM classifier.
In one embodiment of the invention, the feature pre-training module can select a ResNet-series model combined with self-supervised learning and an attention mechanism as the backbone network of the feature embedding module, so as to obtain an embedded representation network with generalization capability and a certain discrimination capability. The feature pre-training module F takes an image I as input and generates a feature vector $x = F(I)$, and classification training is performed with a cosine classifier. Freezing the parameters of the feature pre-training module during the learning of new tasks avoids, to the greatest extent, the problems of forgetting old knowledge and overfitting new data.
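As an illustration of the pre-training classifier described above, the following PyTorch sketch shows a common cosine classifier formulation; the scale factor and dimensions are illustrative assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as fn

class CosineClassifier(nn.Module):
    """Cosine classifier: logits are scaled cosine similarities between
    L2-normalized feature embeddings and class weight vectors."""
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = fn.normalize(x, dim=-1)              # normalize embeddings x = F(I)
        w = fn.normalize(self.weight, dim=-1)    # normalize class weights
        return self.scale * x @ w.t()            # cosine logits
```

During pre-training the backbone F and this classifier are optimized jointly; afterwards the backbone parameters are frozen, as described above.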
In an embodiment of the present invention, the self-supervised learning can learn a more complete data-space structure, extract more comprehensive information, and learn a feature representation that is more robust and generalizes better between new and old classes. Rotation prediction is used as an auxiliary task: each training sample in the base classes is rotated by 0, 90 and 180 degrees to expand the original training set, expanding the original K-class problem into a new 3K-class problem:

$$I_i' = \mathrm{Rotate}(I_i, \theta),\quad \theta \in \{0°, 90°, 180°\}$$

wherein $I_i$ denotes the i-th input image, $\theta$ denotes the rotation angle, and $I_i'$ denotes the rotated image; each rotated sample is assigned a new label $Y_i'$, which can be generated automatically.
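A minimal sketch of this 3K-class rotation expansion, assuming image batches stored as (N, C, H, W) tensors:

```python
import torch

def expand_with_rotations(images: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Expand a K-class batch into a 3K-class batch: rotate each image by
    0/90/180 degrees and offset the label by k*K for the k-th rotation."""
    rotated, new_labels = [], []
    for k in range(3):  # k quarter-turns: 0, 90, 180 degrees
        rotated.append(torch.rot90(images, k=k, dims=(-2, -1)))
        new_labels.append(labels + k * num_classes)  # Y' generated automatically
    return torch.cat(rotated), torch.cat(new_labels)
```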
Moreover, in an embodiment of the invention, the attention mechanism can better exploit the unique discriminative features of the sparse samples when processing small sample incremental tasks, obtain more representative prototype representations, maintain an effective balance between the generalization and the discriminability of the feature pre-training module, and facilitate decisions between new and old categories.
Specifically, in one embodiment of the invention, the attention mechanism uses a Non-local block to compute the interaction between any two positions of the feature map, capturing long-range dependencies without being limited to adjacent points, so that more information can be fused to obtain the self-attention map $MA(x_i)$:

$$MA(x_i) = \frac{1}{C(x)} \sum_{\forall j} e^{\theta(x_i)^\top \phi(x_j)}\, g(x_j),\qquad C(x) = \sum_{\forall j} e^{\theta(x_i)^\top \phi(x_j)}$$

wherein x denotes the feature map to be classified, $x_i$ denotes the information at the current position of interest, $x_j$ denotes the global information, the convolution operations for $\theta$, $\phi$ and $g$ are all 1 × 1, and $\frac{1}{C(x)}$ denotes the normalization operation. The final attention feature map becomes:

$$z_i = W_z\, MA(x_i) + x_i$$

wherein $W_z$ is a learnable weight matrix realized by a 1 × 1 convolution, and $+\,x_i$ denotes the residual connection, which adds the context information back to the original feature map for enhancement.
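A minimal PyTorch sketch of such a Non-local block, written against the formulas above; the halved intermediate channel width is a common choice and an assumption here:

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian Non-local block with residual connection:
    z = W_z * softmax(theta(x) phi(x)^T) g(x) + x."""
    def __init__(self, channels: int):
        super().__init__()
        inter = max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.w_z = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, inter)
        k = self.phi(x).flatten(2)                    # (b, inter, hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, inter)
        attn = torch.softmax(q @ k, dim=-1)           # relation between all positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return self.w_z(y) + x                        # residual enhancement
```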
In one embodiment of the invention, the pseudo-incremental episode selection module involves two important concepts: the pseudo base class and the pseudo-incremental class, each composed of a support set and a query set. The pseudo-incremental samples serve as new classes that the feature pre-training module cannot identify during training, so as to help train the mixed relation mapping module; since rotating the data by a large angle makes the synthesized image lose part of the original semantic information, making it effectively a new class, the base class images rotated by 270 degrees serve as the pseudo-incremental class.
Specifically, in one embodiment of the invention, a support set $S_b$ in N-way K-shot form and a corresponding query set $Q_b$ are generated in each iteration from the base data set, sampled from N classes with E query samples per class; the query set samples are different from the support set samples, and we denote the pair $(S_b, Q_b)$, called the pseudo base class. Rotating each sample of the pseudo base class by 270 degrees forms the pseudo-incremental class, which we denote $(S_i, Q_i)$.
In one embodiment of the invention, the pseudo-incremental episode selection module alleviates the problem of limited data size, which would otherwise make it difficult for the mixed relation mapping module to be fully trained.
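A minimal sketch of pseudo-episode construction under these definitions; the dataset container and the default episode sizes are illustrative assumptions:

```python
import random
import torch

def sample_pseudo_episode(data_by_class, n_way=5, k_shot=5, e_query=15):
    """Sample an N-way K-shot pseudo base episode (S_b, Q_b) and build the
    pseudo-incremental episode (S_i, Q_i) by rotating every sample 270 degrees.
    data_by_class: dict class_id -> tensor of images (n, c, h, w)."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = [], []
    for c in classes:
        idx = torch.randperm(len(data_by_class[c]))[: k_shot + e_query]
        imgs = data_by_class[c][idx]
        support.append(imgs[:k_shot])   # support and query sets are disjoint
        query.append(imgs[k_shot:])
    s_b, q_b = torch.cat(support), torch.cat(query)
    rot270 = lambda t: torch.rot90(t, k=3, dims=(-2, -1))  # 270-degree rotation
    return (s_b, q_b), (rot270(s_b), rot270(q_b))
```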
In an embodiment of the present invention, the mixed relation mapping module includes Prototype Self-mapping (PSP), Query set Cross-mapping (QCP), and optimization training.
Specifically, in an embodiment of the present invention, according to the pseudo-incremental episode, the mixed relation mapping module uses the feature pre-training module F to extract the feature representations of the pseudo base class and the pseudo-incremental class, and uses the mean vector of the support set feature representations to calculate a prototype vector for each class as the initial weight of the classifier:

$$m_c = \frac{1}{K} \sum_{s_j \in S_b \cup S_i,\ y_j = c} F(s_j)$$

wherein c denotes a class (the pseudo base class and the pseudo-incremental class together contain 2N classes), $s_j$ is a sample of $S_b \cup S_i$, and $F(s_j)$ is its feature embedding. The prototype representations and the query set embedded representations of the pseudo base class and the pseudo-incremental class are merged respectively to obtain the prototype representation set and the query set embedded representation set of all classes, denoted $M_c$ and $X_q$ respectively.
To model the complex interactions among prototypes and between prototypes and the query set embeddings, we build the PSP and QCP modules based on the Transformer and train them on $M_c$ and $X_q$, guiding the updates of the prototype vectors and the query set embedded representations so as to adapt to the global classification task.
In one embodiment of the present invention, prototype self-mapping (PSP) adaptively adjusts all prototypes by establishing a global dependency relationship between the original prototype representations and the target prototype representations. The input of the PSP takes the (Query, Key, Value) triple form, where Query, Key and Value share the same input source $M_c$; the PSP can be expressed as:

$$\mathrm{Query} = M_c W_Q,\quad \mathrm{Key} = M_c W_K,\quad \mathrm{Value} = M_c W_V$$

$$M_c' = M_c + \mathrm{softmax}\!\left(\frac{\mathrm{Query}\,\mathrm{Key}^\top}{\sqrt{d}}\right)\mathrm{Value}$$

where d is the dimension of Query, and $W_Q$, $W_K$, $W_V$ are learnable parameters of three linear projection layers that project the original prototype representations into a shared metric space. Softmax normalization yields the similarity scores between the prototype representations in the shared space, i.e., a relation matrix. We aggregate the information from all prototype representations in Value using the relation matrix as weight coefficients, and fuse the result with the original prototype representations to obtain the updated prototype representations $M_c'$.
Moreover, in one embodiment of the invention, the query set cross-mapping (QCP) is applied after the prototype update: by introducing a cross-mapping process from the query set embedded representation $X_q$ to the prototypes $M_c'$, it establishes the correlation between the query set embedded representations and each prototype and adjusts the embedded representations of the query data, so that the query set samples better adapt to the target classification task:

$$X_q' = X_q + \mathrm{softmax}\!\left(\frac{X_q \tilde{W}_Q (M_c' \tilde{W}_K)^\top}{\sqrt{d}}\right) M_c' \tilde{W}_V$$

The query set samples thus enrich the context information of the global task.
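Reading PSP and QCP as single-head attention layers with residual fusion gives the following sketch; this is one interpretation of the formulas above, not the patent's exact implementation:

```python
import torch
import torch.nn as nn

class MixedRelationMapping(nn.Module):
    """PSP: self-attention over the prototype set. QCP: cross-attention from
    the query embeddings to the updated prototypes. Both keep a residual path."""
    def __init__(self, dim: int):
        super().__init__()
        self.psp = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.qcp = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, prototypes: torch.Tensor, queries: torch.Tensor):
        # prototypes: (1, 2N, dim); queries: (1, Q, dim)
        m_new, _ = self.psp(prototypes, prototypes, prototypes)
        m_c = prototypes + m_new                 # fused, updated prototypes M_c'
        x_new, _ = self.qcp(queries, m_c, m_c)   # queries attend to prototypes
        x_q = queries + x_new                    # adapted query embeddings X_q'
        return m_c, x_q
```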
Further, in one embodiment of the present invention, the optimization training employs a prototype-based target loss function.
Specifically, in one embodiment of the present invention, the semantic differences between all query set embedded representations $x_q' \in X_q'$ and all prototypes $m_c' \in M_c'$ are calculated through a cosine similarity function $d(\cdot,\cdot)$, and classification is performed with the nearest class mean (NCM) classifier:

$$\hat{y}_q = \arg\max_{c \in \{1,\dots,2N\}} d\big(x_q',\ m_c'\big)$$

Using the query set class labels $y_c$, the cross-entropy loss over the 2N classes of the current task can be derived as the target loss function for training:

$$\mathcal{L} = -\sum_{x_q' \in X_q'} \log \frac{\exp\big(d(x_q',\ m_{y_c}')\big)}{\sum_{c=1}^{2N} \exp\big(d(x_q',\ m_c')\big)}$$
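A minimal sketch of the NCM classification and cross-entropy objective above; the similarity scale factor is an assumption:

```python
import torch
import torch.nn.functional as fn

def ncm_logits(x_q: torch.Tensor, m_c: torch.Tensor, scale: float = 10.0):
    """Cosine similarities between adapted query embeddings (Q, dim) and
    updated prototypes (2N, dim); each row holds per-class logits."""
    return scale * fn.normalize(x_q, dim=-1) @ fn.normalize(m_c, dim=-1).t()

def episode_loss(x_q, m_c, labels):
    """Cross-entropy over the 2N classes of the current pseudo-episode."""
    return fn.cross_entropy(ncm_logits(x_q, m_c), labels)

def ncm_predict(x_q, m_c):
    """Nearest class mean prediction: argmax of cosine similarity."""
    return ncm_logits(x_q, m_c).argmax(dim=-1)
```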
After learning is finished, the parameters in the mixed relation mapping module are frozen and the module is deployed in actual incremental sessions. In deployment, the pseudo base class prototypes in the computation are replaced by the NCM classifier weights (the original prototypes of all previously visible classes), the pseudo-incremental class prototypes are replaced by the class prototypes of the actual incremental task, and $M_c$ then denotes the set of prototype vectors of all currently visible classes. After a round of tasks is finished, the NCM classifier weights are replaced by the updated prototypes $M_c'$ and saved, serving as the original prototypes participating in the computation of the next incremental task.
Based on the above description, for convenience of understanding, fig. 2 is a schematic diagram of an image incremental classification system framework provided by the present invention, fig. 3 is a structural diagram of an image incremental classification model provided by the present invention, and fig. 4 is a training flowchart of an image incremental classification system provided by the present invention.
Step S2: and acquiring an image to be classified, and uploading the image to be classified to an image increment classification system for identification.
Step S3: when the image incremental classification system fails to recognize, a small number of images of the class are obtained from the images to be classified as training samples, the training samples are calculated through a feature pre-training module to obtain target prototypes, the target prototypes and/or the original prototypes are adaptively adjusted through a mixed relation mapping module to update all the prototypes in the image incremental classification system, and the classified recognition of the images to be classified of the classes which are not recognized is realized through the adjusted image incremental classification system.
Step S4: when the image incremental classification system succeeds in identification, the image to be classified is classified and identified through the feature pre-training module, the mixed relation mapping module and the NCM classifier, and the classification result is output.
Based on the above description, for convenience of understanding, fig. 5 is a flowchart structure diagram of a small sample image incremental classification method based on embedded enhancement and adaptation provided by the present invention.
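Read as pseudocode, the overall flow of steps S1 to S4 might look like the following sketch; the `system` object and its attributes are hypothetical stand-ins for the modules described above:

```python
import torch

def classify_or_learn(system, image, get_few_shot_samples):
    """Steps S1-S4: try to identify the image; on failure, compute a prototype
    for the new class from a few samples, adapt all prototypes through the
    mixed relation mapping, then classify again."""
    result = system.identify(image)        # S2: feature module + QCP + NCM
    if result is not None:                 # S4: known class, output result
        return result
    support = get_few_shot_samples()       # S3: a few images of the new class
    new_proto = system.feature_module(support).mean(dim=0, keepdim=True)
    all_protos = torch.cat([system.prototypes, new_proto])
    system.prototypes = system.mixed_relation_mapping(all_protos)  # PSP update
    return system.identify(image)          # identify with updated prototypes
```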
In summary, the present invention provides a small sample image incremental classification method based on embedded enhancement and self-adaptation that makes the model adaptive so that it handles new and old classes better: the system can quickly adapt to the embedded representation of a new class from a small amount of sample data and incrementally gains the ability to identify new samples, avoiding complete retraining of the model, reducing a large amount of computing resource overhead, and supporting the long-term operation of the system.
Fig. 6 is a schematic structural diagram of a small sample image incremental classification apparatus based on embedded enhancement and self-adaptation according to an embodiment of the present invention.
As shown in fig. 6, the small sample image incremental classification device based on embedded enhancement and adaptation includes the following modules:
the first acquisition module is used for acquiring an image incremental classification system, wherein the image incremental classification system is used for performing an incremental classification task on an image to be classified;
the second acquisition module is used for acquiring the image to be classified and uploading the image to be classified to the image increment classification system for identification;
the first classification module is used for acquiring a small number of images of the category from the images to be classified as training samples when the image incremental classification system fails to identify, calculating the training samples through the feature pre-training module to obtain target prototypes, and performing adaptive adjustment on the target prototypes and/or the original prototypes through the mixed relation mapping module to update all prototypes in the image incremental classification system, so that the image incremental classification system after adjustment can realize the classification and identification of the images to be classified of the unidentified category;
and the second classification module is used for classifying and identifying the image to be classified through the feature pre-training module, the mixed relation mapping module and the NCM classifier when the image incremental classification system succeeds in identification, and outputting a classification result.
In one embodiment of the present disclosure, acquiring an image incremental classification system includes:
selecting an embedded representation model according to the data set and the task characteristics, forming a feature pre-training module by combining self-supervised learning and an attention mechanism, and pre-training the feature pre-training module based on the image classification labeling results;
acquiring an output result of the feature pre-training module;
and completing the training of the image incremental classification system based on the output result of the feature pre-training module.
In summary, the present invention provides a small sample image incremental classification device based on embedded enhancement and self-adaptation that makes the model adaptive so that it handles new and old classes better: the system can quickly adapt to the embedded representation of a new class from a small amount of sample data and incrementally gains the ability to identify new samples, thereby avoiding complete retraining of the model, reducing a large amount of computing resource overhead, and supporting the long-term operation of the system.
To achieve the above object, an embodiment of a third aspect of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the small sample image incremental classification method based on embedded enhancement and self-adaptation according to the embodiment of the first aspect of the present application is implemented.
To achieve the above object, a non-transitory computer-readable storage medium is provided in a fourth embodiment of the present application, and a computer program is stored thereon, and when executed by a processor, the computer program implements a method for small sample image incremental classification based on embedded enhancement and adaptation, which is described in the first embodiment of the present application.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A small sample image incremental classification method based on embedded enhancement and self-adaptation is characterized by comprising the following steps:
acquiring an image increment classification system, wherein the image increment classification system is used for performing increment classification tasks on images to be classified;
acquiring an image to be classified, and uploading the image to be classified to the image incremental classification system for identification;
when the image incremental classification system fails to identify, acquiring a small number of images of the class from the images to be classified as training samples, calculating the training samples through a feature pre-training module to obtain target prototypes, and performing adaptive adjustment on the target prototypes and/or the original prototypes through a mixed relation mapping module to update all prototypes in the image incremental classification system, so that the image to be classified of the unidentified class is identified by the adjusted image incremental classification system;
and when the image incremental classification system succeeds in identification, classifying and identifying the image to be classified through the feature pre-training module, the mixed relation mapping module and the NCM classifier, and outputting a classification result.
2. The method of claim 1, wherein the acquiring an image incremental classification system comprises:
selecting an embedded representation model according to the data set and the task characteristics, forming a feature pre-training module by combining self-supervised learning and an attention mechanism, and pre-training the feature pre-training module based on the image classification labeling result;
acquiring an output result of the feature pre-training module;
and completing the training of the image incremental classification system based on the output result of the feature pre-training module.
3. The method of claim 2, wherein the obtaining the output result of the feature pre-training module comprises:
if the output result of the feature pre-training module does not reach the expected precision, adjusting the hyper-parameters of the feature pre-training module and continuing the pre-training;
and if the output result of the feature pre-training module reaches the expected precision, freezing the parameters of the feature pre-training module, selecting pseudo-incremental episodes through the pseudo-incremental episode selection module, and training the mixed relation mapping module and the NCM classifier.
4. The method of claim 3, wherein the pseudo-incremental episode selection module comprises:
a pseudo base class, the pseudo base class comprising: in each iteration, a support set $S_b$ in N-way K-shot form and a corresponding query set $Q_b$ generated from the base data set by sampling N categories, each category containing E query samples, wherein N and E are positive integers, E > N, and the query set samples are different from the support set samples; the pseudo base class can be denoted $(S_b, Q_b)$;
a pseudo-incremental class, the pseudo-incremental class comprising: each sample of the pseudo base class rotated by 270 degrees; the pseudo-incremental class can be denoted $(S_i, Q_i)$.
5. The method of claim 3, wherein the mixed relation mapping module comprises:
extracting, according to the pseudo-incremental episode, the feature representations of the pseudo base class and the pseudo-incremental class with the feature pre-training module F, and using the mean vector of the support set feature representations to calculate a prototype vector for each class as the initial weight of the classifier:

$$m_c = \frac{1}{K} \sum_{s_j \in S_b \cup S_i,\ y_j = c} F(s_j)$$

wherein c denotes a class, the pseudo base class and the pseudo-incremental class together contain 2N classes, $s_j$ is a sample of $S_b \cup S_i$, and $F(s_j)$ is its feature embedding; the prototype representations and the query set embedded representations of the pseudo base class and the pseudo-incremental class are merged respectively to obtain the prototype representation sets and the query set embedded representation sets of all classes, denoted $M_c$ and $X_q$ respectively;
prototype self-mapping (PSP), wherein the PSP adaptively adjusts all prototypes by establishing a global dependency relationship between the original prototype representations and the new prototype representations; the input of the PSP adopts the (Query, Key, Value) triple form, Query, Key and Value share the same input source $M_c$, and the PSP can be expressed as:

$$\mathrm{Query} = M_c W_Q,\quad \mathrm{Key} = M_c W_K,\quad \mathrm{Value} = M_c W_V$$

$$M_c' = M_c + \mathrm{softmax}\!\left(\frac{\mathrm{Query}\,\mathrm{Key}^\top}{\sqrt{d}}\right)\mathrm{Value}$$

wherein d is the dimension of Query, and $W_Q$, $W_K$, $W_V$ are the learnable parameters of three linear projection layers that project the original prototypes into a shared metric space; softmax normalization yields a relation matrix between the prototype representations in the shared space, the relation matrix serves as weight coefficients to aggregate information from all prototype representations in Value, and the result is fused with the original prototypes to obtain the updated prototypes $M_c'$;
query set cross-mapping (QCP), wherein the QCP establishes the correlation between the query set embedded representations and each prototype so as to adapt to the current classification task; since the query set embedded representations are classified by their distance to the prototype representations, a cross-mapping from the query set embedded representation $X_q$ to the prototypes $M_c'$ is introduced to adjust the query set embedded representations so that the query set samples better adapt to the target classification task:

$$X_q' = X_q + \mathrm{softmax}\!\left(\frac{X_q \tilde{W}_Q (M_c' \tilde{W}_K)^\top}{\sqrt{d}}\right) M_c' \tilde{W}_V$$

and optimization training, wherein the optimization training calculates the semantic differences between all query set embedded representations and prototypes through a cosine similarity function, classifies with the nearest class mean (NCM) classifier, and optimizes the model parameters with a cross-entropy loss function.
6. The method of claim 3, wherein the training the mixed relation mapping module and the NCM classifier comprises:
if the training result of the mixed relation mapping module and the NCM classifier does not reach the expected precision, adjusting the hyper-parameters, selecting new pseudo-incremental episodes through the pseudo-incremental episode selection module, and continuing the training;
and if the training result of the mixed relation mapping module and the NCM classifier reaches the expected precision, freezing the parameters of the mixed relation mapping module and ending the process.
7. A small sample image incremental classification device based on embedded enhancement and self-adaptation, characterized by comprising the following modules:
a first acquisition module, used for acquiring an image incremental classification system, wherein the image incremental classification system is used for performing an incremental classification task on an image to be classified;
the second acquisition module is used for acquiring an image to be classified and uploading the image to be classified to the image increment classification system for identification;
the first classification module is used for acquiring a small number of images of the category from the images to be classified as training samples when the image incremental classification system fails to identify, calculating the training samples through a feature pre-training module to obtain target prototypes, and performing adaptive adjustment on the target prototypes and/or the original prototypes through a mixed relation mapping module to update all prototypes in the image incremental classification system, so that the image incremental classification system after adjustment can realize classification and identification on the images to be classified of the unidentified category;
and the second classification module is used for classifying and identifying the image to be classified through the feature pre-training module, the mixed relation mapping module and the NCM classifier when the image incremental classification system succeeds in identification, and outputting a classification result.
8. The apparatus of claim 7, wherein the acquiring an image incremental classification system comprises:
selecting an embedded representation model according to the data set and the task characteristics, forming a feature pre-training module by combining self-supervised learning and an attention mechanism, and pre-training the feature pre-training module based on the image classification labeling result;
acquiring an output result of the feature pre-training module;
and completing the training of the image incremental classification system based on the output result of the feature pre-training module.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-6 when executing the computer program.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6.
CN202210067078.XA 2022-01-20 2022-01-20 Small sample image increment classification method and device based on embedded enhancement and self-adaptation Pending CN114549894A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210067078.XA CN114549894A (en) 2022-01-20 2022-01-20 Small sample image increment classification method and device based on embedded enhancement and self-adaptation
PCT/CN2022/087211 WO2023137889A1 (en) 2022-01-20 2022-04-15 Few-shot image incremental classification method and apparatus based on embedding enhancement and adaption

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210067078.XA CN114549894A (en) 2022-01-20 2022-01-20 Small sample image increment classification method and device based on embedded enhancement and self-adaptation

Publications (1)

Publication Number Publication Date
CN114549894A true CN114549894A (en) 2022-05-27

Family

ID=81671841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210067078.XA Pending CN114549894A (en) 2022-01-20 2022-01-20 Small sample image increment classification method and device based on embedded enhancement and self-adaptation

Country Status (2)

Country Link
CN (1) CN114549894A (en)
WO (1) WO2023137889A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115203420A (en) * 2022-07-25 2022-10-18 腾讯科技(深圳)有限公司 Entity relationship classification model training method, entity relationship classification method and device
CN116778264A (en) * 2023-08-24 2023-09-19 鹏城实验室 Object classification method, image classification method and related equipment based on class reinforcement learning
CN117095243A (en) * 2023-10-18 2023-11-21 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Small sample network intrusion detection incremental learning classification method based on branch strategy

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958904B (en) * 2023-08-07 2024-05-07 济宁安泰矿山设备制造有限公司 Underground foreign matter identification method based on small sample detection
CN116910571B (en) * 2023-09-13 2023-12-08 南京大数据集团有限公司 Open-domain adaptation method and system based on prototype comparison learning
CN117113198B (en) * 2023-09-24 2024-06-28 元始智能科技(南通)有限公司 Rotary equipment small sample fault diagnosis method based on semi-supervised contrast learning
CN116994076B (en) * 2023-09-28 2024-01-19 中国海洋大学 Small sample image recognition method based on double-branch mutual learning feature generation
CN117574981B (en) * 2024-01-16 2024-04-26 城云科技(中国)有限公司 Training method of information analysis model and information analysis method
CN118172617B (en) * 2024-05-16 2024-07-23 江西师范大学 Cross-domain small sample fine granularity classification method based on cross-view semantic consistency

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019193462A1 (en) * 2018-04-02 2019-10-10 King Abdullah University Of Science And Technology Incremental learning method through deep learning and support data
CN112434721B (en) * 2020-10-23 2023-09-01 特斯联科技集团有限公司 Image classification method, system, storage medium and terminal based on small sample learning
CN112559784B (en) * 2020-11-02 2023-07-04 浙江智慧视频安防创新中心有限公司 Image classification method and system based on incremental learning
CN113222011B (en) * 2021-05-10 2022-12-02 西北工业大学 Small sample remote sensing image classification method based on prototype correction
CN113837156A (en) * 2021-11-26 2021-12-24 北京中超伟业信息安全技术股份有限公司 Intelligent warehousing sorting method and system based on incremental learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115203420A (en) * 2022-07-25 2022-10-18 腾讯科技(深圳)有限公司 Entity relationship classification model training method, entity relationship classification method and device
CN115203420B (en) * 2022-07-25 2024-04-26 腾讯科技(深圳)有限公司 Entity relationship classification model training method, entity relationship classification method and device
CN116778264A (en) * 2023-08-24 2023-09-19 鹏城实验室 Object classification method, image classification method and related equipment based on class reinforcement learning
CN116778264B (en) * 2023-08-24 2023-12-12 鹏城实验室 Object classification method, image classification method and related equipment based on class reinforcement learning
CN117095243A (en) * 2023-10-18 2023-11-21 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Small sample network intrusion detection incremental learning classification method based on branch strategy
CN117095243B (en) * 2023-10-18 2024-05-07 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Small sample network intrusion detection incremental learning classification method based on branch strategy

Also Published As

Publication number Publication date
WO2023137889A1 (en) 2023-07-27

Similar Documents

Publication Publication Date Title
CN114549894A (en) Small sample image increment classification method and device based on embedded enhancement and self-adaptation
JP5373536B2 (en) Modeling an image as a mixture of multiple image models
CN105912611B (en) A kind of fast image retrieval method based on CNN
WO2020228525A1 (en) Place recognition method and apparatus, model training method and apparatus for place recognition, and electronic device
CN108132968A (en) Network text is associated with the Weakly supervised learning method of Semantic unit with image
CN114241273B (en) Multi-modal image processing method and system based on Transformer network and hypersphere space learning
WO2009030124A1 (en) Method, device, and system for searching multimedia model
CN113469186B (en) Cross-domain migration image segmentation method based on small number of point labels
CN114049515A (en) Image classification method, system, electronic device and storage medium
CN114357221A (en) Self-supervision active learning method based on image classification
CN117437395A (en) Target detection model training method, target detection method and target detection device
CN115104131A (en) Hybrid learning agent for small sample classification
CN114997287A (en) Model training and data processing method, device, equipment and storage medium
WO2024012179A1 (en) Model training method, target detection method and apparatuses
Zerrouk et al. Evolutionary algorithm for optimized CNN architecture search applied to real-time boat detection in aerial images
CN114549969B (en) Saliency detection method and system based on image information fusion
CN114595741B (en) High-dimensional data rapid dimension reduction method and system based on neighborhood relation
CN112200224B (en) Medical image feature processing method and device
CN113313210A (en) Method and apparatus for data processing
Zhu et al. Boosted cross-domain dictionary learning for visual categorization
WO2019116494A1 (en) Learning device, learning method, sorting method, and storage medium
WO2024012217A1 (en) Model training method and device, and target detection method and device
CN118506101B (en) Class increment image classification method based on virtual feature generation and replay
JP7221892B2 (en) LEARNING APPARATUS, LEARNING METHOD, AND LEARNING PROGRAM
CN114187510B (en) Small sample remote sensing scene classification method based on metanuclear network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination