CN112633419B - Small sample learning method and device, electronic equipment and storage medium - Google Patents

Publication number
CN112633419B
Authority
CN
China
Prior art keywords: image, label, matrix, model, training sample
Legal status: Active
Application number: CN202110252710.3A
Other languages: Chinese (zh)
Other versions: CN112633419A (en)
Inventor
周迪
曹广
徐爱华
王勋
何斌
汪鹏君
王建新
章坚武
骆建军
樊凌雁
肖海林
鲍虎军
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN202110252710.3A
Publication of CN112633419A
Application granted
Publication of CN112633419B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a small sample learning method and device, an electronic device, and a storage medium. The small sample learning method comprises the following steps: encoding an image training sample set according to an image representation model to obtain an image matrix formed by the image vector representations of the image training samples; encoding the labels of the image training sample set according to a label preprocessing model to obtain a label matrix formed by the label vector representations of the labels of the image training samples; and performing back propagation according to the loss value of the image matrix and the label matrix, so as to optimize the parameters of the image representation model and the label preprocessing model and obtain the trained image representation model and the trained label preprocessing model. Knowledge from natural language tasks is thereby introduced into the image feature recognition task, realizing the fusion of knowledge across different tasks, accelerating the learning of image features on small sample data sets, and improving the efficiency and accuracy of image feature learning under small sample conditions.

Description

Small sample learning method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of machine learning, in particular to a small sample learning method and device, electronic equipment and a storage medium.
Background
Conventional deep learning algorithms need a large number of training samples to achieve good results, which limits their application in many fields.
At present, common technical means for feature learning on small sample training sets include data enhancement, transfer learning, meta learning, and the like, but these methods still suffer from defects such as poor cross-domain performance and catastrophic forgetting, and the overall effect of deep learning on small sample training sets remains unsatisfactory.
Disclosure of Invention
The embodiment of the invention provides a small sample learning method and device, electronic equipment and a storage medium, and aims to improve the learning effect under the condition of small samples.
In a first aspect, an embodiment of the present invention provides a small sample learning method, including:
coding an image training sample set according to an image representation model to obtain an image matrix formed by image vector representation of each image training sample;
encoding the labels of the image training sample set according to a label preprocessing model to obtain a label matrix formed by label vector representations of the labels of each image training sample;
and performing back propagation according to the loss values of the image matrix and the label matrix to perform parameter optimization on the image representation model and the label preprocessing model to obtain the trained image representation model and label preprocessing model.
In a second aspect, an embodiment of the present invention further provides a small sample learning device, including:
the image coding module is used for coding the image training sample set according to the image representation model to obtain an image matrix formed by image vector representation of each image training sample;
the label coding module is used for coding the labels of the image training sample set according to the label preprocessing model to obtain a label matrix formed by label vector representation of each image training sample label;
and the model optimization module is used for performing back propagation according to the loss values of the image matrix and the label matrix so as to perform parameter optimization on the image representation model and the label preprocessing model to obtain the trained image representation model and the trained label preprocessing model.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the small sample learning method according to any embodiment of the invention.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the small sample learning method according to any embodiment of the present invention.
The embodiment of the invention encodes the image and the image label to obtain the image matrix and the label matrix, determines the loss value jointly through the image matrix and the label matrix to carry out back propagation, and optimizes the model parameters. The knowledge in the natural language task is introduced into the feature recognition task of the image, the fusion of different task knowledge is realized, the learning of the image features under the condition of a small sample data set is accelerated, and the efficiency and the accuracy of the learning of the image features under the small sample data set are improved.
Drawings
FIG. 1 is a flow chart of a small sample learning method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a process for encoding labels of a training sample set of images according to a label pre-processing model;
FIG. 3 is a flowchart of a small sample learning method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a small sample learning apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a small sample learning method according to a first embodiment of the present invention, which is applicable to a case where an image classification training is performed on a small sample data set. The method may be performed by a small sample learning apparatus, which may be implemented in software and/or hardware, and may be configured in an electronic device, for example, the electronic device may be a device with communication and computing capabilities, such as a background server. As shown in fig. 1, the method specifically includes:
step 101, coding an image training sample set according to an image representation model to obtain an image matrix formed by image vector representation of each image training sample.
The image representation model is used for learning and extracting features of images and may be, for example, an encoder composed of a convolutional neural network. The image training sample set is composed of part of the images in the complete data set of the image feature learning task. Illustratively, the complete data set is divided into a training set and a testing set whose classes do not overlap; for each training episode, n classes are randomly selected from all the classes of the training set, m samples are randomly selected from each of the n classes, and the n × m samples form the image training sample set. Optionally, the number n of randomly selected classes is smaller than the total number of classes in the complete data set, so that images of the remaining classes can serve as the test sample set in subsequent tests. The number of samples selected from each class is not limited; it may be the same or different across classes and is determined by the total number of images in each class. An image vector representation describes the features of a single image. The image matrix describes the features of all images in the image training sample set; illustratively, the image matrix is composed of the image vector representations stacked as rows.
Specifically, each image in the image training sample set is input in turn into the image representation model formed by a convolutional neural network, which outputs the image vector representation of each image, i.e., each image corresponds to one image vector representation. Finally, the image vector representations of all images are stacked row-wise to form the image matrix, denoted embeddings_x. Illustratively, each image vector representation is a row vector, so each row of the image matrix is the image vector representation of one image. The specific structure of the convolutional neural network is not limited in the embodiments of the present invention.
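A minimal sketch of this sampling-and-encoding step follows. The n-way/m-shot values, the toy data, and the random-projection encoder standing in for the convolutional neural network are all illustrative assumptions, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(dataset, n_way=5, m_shot=3):
    """Randomly pick n_way classes and m_shot images per class,
    forming the image training sample set of one episode."""
    classes = rng.choice(list(dataset.keys()), size=n_way, replace=False)
    images, labels = [], []
    for c in classes:
        idx = rng.choice(len(dataset[c]), size=m_shot, replace=False)
        images.extend(dataset[c][i] for i in idx)
        labels.extend([c] * m_shot)
    return images, labels

def encode_images(images, encoder):
    """Encode each image and stack the vectors row-wise into embeddings_x."""
    return np.stack([encoder(img) for img in images], axis=0)

# Toy dataset: 10 classes with 20 "images" (8x8 arrays) each.
dataset = {f"class_{k}": [rng.random((8, 8)) for _ in range(20)] for k in range(10)}
# Stand-in encoder: flatten the image and apply a fixed random projection.
W = rng.random((64, 16))
encoder = lambda img: img.reshape(-1) @ W

images, labels = sample_episode(dataset, n_way=5, m_shot=3)
embeddings_x = encode_images(images, encoder)
print(embeddings_x.shape)  # (15, 16): n_way * m_shot rows, one per image
```

In a real system the lambda would be replaced by a trained convolutional encoder; only the row-stacking convention for embeddings_x matters here.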
And 102, coding the labels of the image training sample set according to the label preprocessing model to obtain a label matrix formed by label vector representations of the labels of each image training sample.
The image samples in the complete data set in the embodiment of the invention are provided with respective labels, and the labels are used for describing the result of the learning task. Illustratively, the small sample learning task is an image classification task, and then each image sample label is an image classification result, and the label is composed of a plurality of words.
In the related art, class labels in image classification tasks are usually represented as numerical indices or one-hot vectors, i.e., the element at the word's index position is set to 1 and all other positions are set to 0. Obviously, since all such word vectors are orthogonal to each other, the similarity between words cannot be recovered from one-hot vectors. In the embodiment of the invention, the labels are encoded based on the label preprocessing model, the semantic similarity characteristics of the labels are obtained, and the label vector representations are determined according to these characteristics. Finally, the label vectors of all image labels are stacked row-wise to form the label matrix, denoted embeddings_y. Illustratively, each label vector representation is a row vector, so each row of the label matrix is the label vector representation of one image label.
The label preprocessing model is built on a natural language semantic feature extraction model: the semantic feature information of the label words is learned by that model, the relationships among words are determined from the extracted semantic information, and the similarity among labels is then determined, so that the semantic information of the image labels is introduced into the image classification task. Illustratively, the label preprocessing model is constructed based on the GloVe model, a word vector representation model that reduces the dimension of the vectors describing words to 50-300 dimensions while extracting the semantic feature information of words well, so that relationships among words can be inferred from relationships among vectors, facilitating natural language processing by computers.
A label vector representation describes the label semantic information of a single image; a single image has at least one label word, and the label(s) of each image form one label vector representation. The label matrix describes the label features of all images in the image training sample set; illustratively, the label matrix is composed of the label vector representations stacked as rows.
In one possible embodiment, the image training sample set is a subset of the complete data set;
accordingly, step 102 includes:
determining complete label information for the complete data set;
determining complete word vector representation according to the complete label information and the original word vector model;
determining label information of a training sample according to labels and complete label information of an image training sample set;
and determining a label matrix formed by label vector representations of the labels of each image training sample according to the label information of the training sample and the complete word vector representations.
The complete data set is all the data, i.e., the small sample set, acquired for this learning task. The image training sample set is a part of the image samples randomly selected from the complete data set, so as to ensure that the image testing sample set, when determined subsequently, is different from the image training sample set, thereby ensuring the accuracy of the test result.
The complete tag information includes tag information of all images in the complete data set, and exemplarily, tag words appearing in the complete data set are combined into a non-repetitive list Word-list, which is the complete tag information. The original word vector model is a semantic feature extraction model based on natural language, such as a GloVe model.
Specifically, after the complete label information is determined, it is input into the original word vector model, which extracts the semantic information of each label word, determines the word vector representation of each label word from that semantic information, and finally forms the complete word vector representation of the complete label information. Illustratively, the word-vector pairs of interest are looked up in the GloVe word vector model through the Word-list and assembled into a matrix K-V: each word is treated as a key and its word vector as the value, and the K-V matrix is the complete word vector representation.
Since the image training sample set is a subset of the complete data set, there is label word information in the complete label information that does not appear in the image training sample set. The labels of all images in the image training sample set are obtained, the training sample label information is obtained according to the representation of these labels within the complete label information, and the label matrix is determined from the result of matrix multiplication of the training sample label information and the complete word vector representation; the vector in each row of the label matrix is the label vector representation of one image training sample label.
For example, for the label words Y of the image training sample set of the current training batch, each label word in Y is first converted into a vector of 0s and 1s according to its position of appearance in the Word-list. For example, if a label word in Y is "cat" and this word is the second entry of the Word-list, the corresponding vector has a 1 in its second position and 0s elsewhere, and the length of the vector equals the total number of label words in the Word-list. When a sample has at least two label words in Y, the position of each word in the Word-list is determined separately, the corresponding positions in the resulting vector are set to 1, and all other positions are set to 0.
After the conversion is finished, all labels in the image training sample set form a matrix $Y_{\text{word-hot}}$, in which each row is one label and the number of rows equals the number of samples in the image training sample set. The $Y_{\text{word-hot}}$ matrix is then normalized in the row direction to form a matrix $Y_{\text{norm}}$: each element of $Y_{\text{norm}}$ is the element at the corresponding position of $Y_{\text{word-hot}}$ divided by the sum of that element's row in $Y_{\text{word-hot}}$, so that the elements of each row of $Y_{\text{norm}}$ sum to 1. Finally, $Y_{\text{norm}}$ is matrix-multiplied with the K-V matrix to obtain the label matrix embeddings_y, in which each row is the label vector representation of one image training sample label. Fig. 2 is a schematic diagram of the process of encoding the labels Y of the image training sample set according to the label preprocessing model to obtain the label matrix embeddings_y.
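The label-encoding pipeline above can be sketched as follows; the tiny Word-list and the random 4-dimensional word vectors are assumptions standing in for a real GloVe vocabulary:

```python
import numpy as np

# Complete label information: non-repetitive list of all label words.
word_list = ["dog", "cat", "bird", "fish"]

# K-V matrix: one word vector (value) per word (key); random vectors
# stand in for vectors looked up in a real GloVe model.
rng = np.random.default_rng(1)
kv = rng.random((len(word_list), 4))

def encode_labels(batch_labels, word_list, kv):
    """Build Y_word-hot, row-normalize it into Y_norm, then multiply by K-V."""
    y_word_hot = np.zeros((len(batch_labels), len(word_list)))
    for row, words in enumerate(batch_labels):
        for w in words:                       # one or more label words per sample
            y_word_hot[row, word_list.index(w)] = 1.0
    y_norm = y_word_hot / y_word_hot.sum(axis=1, keepdims=True)  # rows sum to 1
    return y_norm @ kv                        # embeddings_y

# Two samples: one single-word label and one two-word label.
embeddings_y = encode_labels([["cat"], ["dog", "fish"]], word_list, kv)
print(embeddings_y.shape)  # (2, 4): one label vector representation per sample
```

Note that a multi-word label ends up as the average of its word vectors, which is exactly what the row normalization followed by the K-V multiplication produces.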
And 103, performing back propagation according to the loss values of the image matrix and the label matrix to perform parameter optimization on the image representation model and the label preprocessing model to obtain the trained image representation model and the trained label preprocessing model.
The loss value is determined according to a preset loss function and represents the difference between the information learned from the image features and the information learned from the label semantic features. The difference reflected by the loss value is back-propagated, and the parameters of the image representation model and the label preprocessing model are optimized so that the loss value gradually decreases, until a preset number of iterations or a preset loss value is reached, yielding the trained image representation model and the trained label preprocessing model. Illustratively, gradient values are calculated from the loss value of the image matrix and the label matrix, and the model parameters are gradually optimized by gradient descent to drive the loss toward a minimum, thereby realizing the parameter optimization of the image representation model and the label preprocessing model.
In one possible embodiment, the back propagation is performed according to the loss values of the image matrix and the label matrix to perform parameter optimization on the label preprocessing model, and the method comprises the following steps:
and performing back propagation according to the loss values of the image matrix and the label matrix so as to perform parameter optimization on the complete word vector representation.
The complete word vector representation is obtained from the complete label information and the original word vector model and is taken as an initialized matrix parameter of the label preprocessing model; this matrix parameter is fine-tuned through back propagation of the loss value, thereby adjusting the label vectors, i.e., performing parameter optimization on the complete word vector representation. In this embodiment, the loss value is obtained jointly from the image matrix and the label matrix, so it comprises both natural language knowledge and image information knowledge; optimizing and updating the complete word vector representation obtained from the original word vector model through this loss value realizes a further fusion of natural language knowledge and image information knowledge and helps improve the effect of small sample learning.
In a possible embodiment, after obtaining the trained label preprocessing model, the method further includes:
and performing optimization updating on the original word vector model according to the complete word vector representation in the trained label preprocessing model.
Because the original word vector model is trained in advance according to the corpus and only trained according to the semantic information of the pure natural language, the original word vector model lacks the adaptability to more tasks. For example, the GloVe word vector model is trained using only the corpus. In the embodiment of the invention, the complete word vector in the trained label preprocessing model represents and fuses the image information, so that the semantic information of the label word is better expressed. Therefore, the original word vector model is optimized and updated according to the optimized complete word vector representation, and the adaptability of the original word vector model to more tasks can be further improved. For example, the K-V matrix is updated by fusion with the original GloVe model.
Illustratively, after training is finished, for each word of the Word-list in order, the average of its vector in the K-V matrix and its original word vector in the GloVe model is computed, and the original word vector is replaced by this average, thereby updating the original word vector model. In addition, some label words cannot be found in the GloVe model at all, but their word vector representations exist in the trained K-V matrix; these are written into the GloVe model as new rows, further realizing both the updating of existing knowledge in the GloVe model and the learning of brand-new knowledge.
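A sketch of this fusion-update step, assuming the word vector model is held as a plain dict from word to vector (a stand-in for a real GloVe model):

```python
import numpy as np

def update_word_vector_model(glove, word_list, kv):
    """Average each trained K-V row with the original GloVe vector;
    words missing from GloVe are written in as brand-new entries."""
    for i, word in enumerate(word_list):
        if word in glove:
            glove[word] = (glove[word] + kv[i]) / 2.0   # fuse old and new knowledge
        else:
            glove[word] = kv[i].copy()                   # learn a brand-new word
    return glove

glove = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
kv = np.array([[3.0, 0.0],    # trained vector for "cat"
               [0.5, 0.5]])   # trained vector for "zebra" (absent from GloVe)
glove = update_word_vector_model(glove, ["cat", "zebra"], kv)
print(glove["cat"])    # averaged: [2.0, 0.0]
print(glove["zebra"])  # newly written: [0.5, 0.5]
```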
The embodiment of the invention not only realizes the image classification learning effect under the condition of enhancing the small sample by using the knowledge learned by the natural language task, but also further improves the learning accuracy of the semantic features in the natural language task through the image features.
In a possible embodiment, before performing back propagation according to the loss values of the image matrix and the tag matrix, the method further includes:
carrying out dimension consistency processing and normalization processing on the image matrix and the label matrix to obtain a standard image matrix and a standard label matrix;
correspondingly, the back propagation is carried out according to the loss values of the image matrix and the label matrix, and the back propagation comprises the following steps:
and performing back propagation according to the loss values of the standard image matrix and the standard label matrix.
Although each row in the image matrix and the label matrix describes one image, the image matrix and the label matrix have the problem of inconsistent dimensionality due to inconsistent extraction dimensionality of the semantic features of the words and the image features, and the image matrix and the label matrix need to be subjected to dimensionality consistency processing to realize consistent dimensionality of the image matrix and the label matrix, so that subsequent calculation processing is facilitated. Also, since the values of the elements of the two matrices are likely not on the same order of magnitude, a normalization process operation is required to increase the speed of training learning.
Specifically, borrowing the idea of graph networks, each row of the image matrix embeddings_x can be regarded as a node of a graph, so embeddings_x can further be denoted as a node matrix $V_x$. A predetermined distance function (e.g., the Euclidean distance) then yields an edge matrix $E_x$, whose element in the $i$-th row and $j$-th column is the distance between the $i$-th and $j$-th rows of $V_x$. Consequently, both the number of rows and the number of columns of $E_x$ equal the number of rows of embeddings_x, i.e., the number of image samples, and the edge matrix $E_x$ is the result of the dimension consistency processing of the image matrix. Similarly, the label matrix embeddings_y is regarded as a node matrix $V_y$ of the graph, an edge matrix $E_y$ is calculated, and it is recorded as the result of the dimension consistency processing of the label matrix.
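A sketch of this dimension consistency processing, with the Euclidean distance as the predetermined distance function (toy embeddings; the dimensions are illustrative):

```python
import numpy as np

def edge_matrix(nodes):
    """Treat each row of an embedding matrix as a graph node and return the
    pairwise Euclidean distance matrix (shape: n_samples x n_samples)."""
    diff = nodes[:, None, :] - nodes[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

embeddings_x = np.array([[0.0, 0.0], [3.0, 4.0]])             # 2 samples, 2-d vectors
embeddings_y = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # 2 samples, 3-d vectors
E_x, E_y = edge_matrix(embeddings_x), edge_matrix(embeddings_y)
# Both edge matrices are now 2 x 2 regardless of the embedding dimensions.
print(E_x)  # [[0. 5.], [5. 0.]]
```

The point of the construction is visible in the shapes: the two embedding matrices had different column counts, but both edge matrices are square with one row per sample, so they can be compared element by element.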
Illustratively, the normalization process may be determined by the following function:

$$\hat{E}_k = D_k^{-1/2} E_k D_k^{-1/2}$$

where $k = x$ or $y$; $\hat{E}_x$ denotes the result of normalizing $E_x$, and $\hat{E}_y$ denotes the result of normalizing $E_y$; $D_k$ is the degree matrix of $E_k$, a diagonal matrix whose $i$-th diagonal element is the sum of all elements of the $i$-th row of $E_k$.
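This degree-matrix normalization can be sketched as follows; the symmetric form used here is an assumption consistent with the description of the degree matrix above:

```python
import numpy as np

def degree_normalize(E):
    """Symmetrically normalize an edge matrix with its degree matrix:
    D is diagonal, each diagonal element being the sum of the matching row of E."""
    d = E.sum(axis=1)                     # row sums = degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return d_inv_sqrt @ E @ d_inv_sqrt

E = np.array([[1.0, 3.0],
              [3.0, 1.0]])               # both row sums (degrees) are 4
E_hat = degree_normalize(E)
print(E_hat)  # [[0.25 0.75], [0.75 0.25]]
```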
Alternatively, the normalization process may be determined by the following function:

$$\hat{E}_k = \frac{E_k - \mathrm{mean}(E_k)}{\mathrm{std}(E_k)}$$

where $k = x$ or $y$; $\hat{E}_x$ denotes the result of normalizing $E_x$, and $\hat{E}_y$ denotes the result of normalizing $E_y$; the functions $\mathrm{mean}(E_k)$ and $\mathrm{std}(E_k)$ return the mean and the standard deviation of all elements of the $E_k$ matrix, and the subtraction of a scalar from a matrix in the above formula is performed element by element.
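The alternative mean/std normalization is a plain element-wise standardization; a short sketch:

```python
import numpy as np

def standardize(E):
    """Subtract the mean of all elements of E and divide by their standard
    deviation, element by element, so both matrices land on the same scale."""
    return (E - E.mean()) / E.std()

E = np.array([[1.0, 3.0],
              [3.0, 1.0]])
E_hat = standardize(E)
print(E_hat)  # mean 0, std 1: [[-1. 1.], [1. -1.]]
```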
In one possible embodiment, the back propagation based on the loss values of the image matrix and the tag matrix comprises:
calculating loss values of the image matrix and the label matrix based on a preset loss function;
carrying out backward propagation according to the loss value;
wherein the preset loss function is expressed by the following formula:

$$\mathrm{Loss} = \sum_{i=1}^{N}\sum_{j=1}^{N}\left((E_x)_{ij} - (E_y)_{ij}\right)^2$$

where $E_x$ is determined from the image matrix and $E_y$ is determined from the label matrix; $(E_x)_{ij}$ denotes the element in the $i$-th row and $j$-th column of the $E_x$ matrix, $(E_y)_{ij}$ denotes the element in the $i$-th row and $j$-th column of the $E_y$ matrix, and $N$ denotes the total number of samples in the image training sample set.
The difference between the image feature information represented by the image matrix and the word semantic feature information represented by the label matrix can be obtained through the formula of the loss function, so the loss function integrates the knowledge obtained by training in the natural language processing field. After the loss value is calculated, the parameters of the model are trained through back propagation using optimization methods such as stochastic gradient descent.
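A sketch of the loss computation, assuming the loss is the sum of element-wise squared differences between the two N × N matrices (an assumed form; the patent's exact preset loss function may differ):

```python
import numpy as np

def loss(E_x, E_y):
    """Sum over all i, j of the squared difference between the matrix derived
    from the image matrix and the matrix derived from the label matrix."""
    return ((E_x - E_y) ** 2).sum()

E_x = np.array([[0.0, 1.0], [1.0, 0.0]])   # derived from the image matrix
E_y = np.array([[0.0, 0.5], [0.5, 0.0]])   # derived from the label matrix
print(loss(E_x, E_y))  # 0.25 + 0.25 = 0.5
```

In a framework with automatic differentiation, this scalar would be back-propagated to update both the encoder and the K-V matrix parameters.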
As for the $E_x$ matrix and the $E_y$ matrix in the loss function, they could be represented directly by the image matrix and the label matrix, but the image matrix and the label matrix have inconsistent dimensions because the extraction dimensions of word semantic features and image features are inconsistent. In one possible embodiment, $E_x$ is the edge matrix calculated from the image matrix according to the graph network, and $E_y$ is the edge matrix calculated from the label matrix according to the graph network.

Specifically, the operation of obtaining the edge matrix according to the graph network may adopt the dimension consistency processing operation of the above embodiment, and details are not repeated here.

Optionally, since the element values of the two matrices are likely not of the same order of magnitude, a normalization operation is required to increase the speed of training and learning. In one possible embodiment, $\hat{E}_x$ is the standard image matrix obtained by normalizing the edge matrix calculated from the image matrix according to the graph network, and $\hat{E}_y$ is the standard label matrix obtained by normalizing the edge matrix calculated from the label matrix according to the graph network. The specific normalization operation may adopt the examples in the above embodiment and is not repeated here.
Exemplarily, after the dimension consistency processing and the normalization processing are performed on the image matrix and the label matrix to obtain the standard image matrix and the standard label matrix, the loss function formula is as follows:

$$\mathrm{Loss} = \sum_{i=1}^{N}\sum_{j=1}^{N}\left((\hat{E}_x)_{ij} - (\hat{E}_y)_{ij}\right)^2$$

where $\hat{E}_x$ denotes the standard image matrix obtained by normalizing the edge matrix of the image matrix, and $\hat{E}_y$ denotes the standard label matrix obtained by normalizing the edge matrix of the label matrix.
The embodiment of the invention encodes the image and the image label to obtain the image matrix and the label matrix, jointly determines the loss value through the image matrix and the label matrix for back propagation, and optimizes the model parameters. Knowledge from natural language tasks is introduced into the image feature recognition task, realizing the fusion of knowledge from different tasks, accelerating the learning of image features under a small sample data set, and improving the efficiency and accuracy of image feature learning under a small sample data set.
Example two
Fig. 3 is a flowchart of a small sample learning method in the second embodiment of the present invention, which is further optimized based on the first embodiment of the present invention, and after a trained image representation model and a label preprocessing model are obtained, an image test sample set in a complete data set is tested, where the image test sample set is a subset of the complete data set. As shown in fig. 3, the test method includes:
step 301, dividing an image test sample set into a support set and a query set; wherein the support set includes a label for each support image test specimen.
The query set refers to the image sample set whose image classification information needs to be predicted; the support set refers to the image sample set provided with label information, so as to provide a prediction reference for the image samples to be classified in the query set.
In order to ensure the accuracy of the test result, the image test sample set is a subset of the complete data set. Illustratively, the complete data set is divided into a training set and a test set whose categories do not overlap. For each training scenario, M categories are randomly selected from all categories in the test set, K labelled samples are randomly selected from each of the M categories, and the K×M samples form the support set S; T samples are then randomly selected from the remaining labelled samples of each of the M categories, and the T×M samples form the query set Q. The query image test samples in the query set carry labels, which facilitates subsequent determination of the accuracy of the training result.
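The support/query split described above can be sketched as follows; the dictionary-of-samples layout and function name are assumptions for illustration:

```python
import random

def sample_episode(class_to_samples, M, K, T, seed=None):
    """Build one M-way episode: K labelled support samples and T query
    samples per selected class, drawn without overlap, as described above."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(class_to_samples), M)
    support, query = [], []
    for c in classes:
        picks = rng.sample(class_to_samples[c], K + T)
        support += [(x, c) for x in picks[:K]]   # K*M samples form the support set S
        query += [(x, c) for x in picks[K:]]     # T*M samples form the query set Q
    return support, query

# Toy test set: 4 categories with 5 samples each.
data = {c: [f"{c}_{i}" for i in range(5)] for c in "abcd"}
S, Q = sample_episode(data, M=3, K=2, T=2, seed=0)
```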
Optionally, in order to ensure the accuracy of the test result, the M categories are randomly selected from the categories of the complete data set that do not appear in the training set, that is, it is ensured that the categories in the image training sample set do not overlap with the categories in the image test sample set.
Step 302, determining the support image vector representation of each support image test sample in the support set and the query image vector representation of each query image test sample in the query set according to the trained image representation model.
Because the trained image representation model is obtained through the interaction of natural language knowledge and image knowledge, it incorporates knowledge obtained by training in the natural language processing field. All images in the support set S and the query set Q are encoded by the trained image representation model to obtain the support image vector representation Z_S of each support image test sample and the query image vector representation Z_Q of each query image test sample in the query set. Both the support image vector representations and the query image vector representations carry the interactively fused information of the image's own feature information and the semantic information of the label words.
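For illustration, the trained image representation model is stood in for below by a fixed linear map; Z_S and Z_Q are simply the encodings of the support and query images. The encoder and its weights are placeholders, not the model produced by the patent's training procedure:

```python
import numpy as np

def encode(model_weights, samples):
    """Hypothetical trained image-representation model: a fixed linear map
    standing in for the real encoder."""
    return np.asarray(samples) @ model_weights

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))             # "trained" weights (illustrative only)
support_imgs = rng.normal(size=(6, 4))  # 6 support samples, 4 raw features each
query_imgs = rng.normal(size=(2, 4))    # 2 query samples
Z_S, Z_Q = encode(W, support_imgs), encode(W, query_imgs)
```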
Step 303, determining the distance between the target query image vector representation of the target query image test sample in the query set and the support image vector representation of each support image test sample in the support set.
The target query image test sample is any image sample in the query set, and exemplarily, each image sample in the query set is sequentially used as the target query image test sample to determine a label prediction result of each sample in the query set.
Specifically, for each query image vector in the query set Z_Q, its distance to each support image vector in the support set Z_S can be determined with reference to the preset distance function in the first embodiment.
And step 304, determining a label prediction result of the target query image test sample according to the distance.
After the distances between the target query image test sample and each support image test sample are obtained, the label of the support image test sample closest to the target query image test sample is selected as the label prediction result of the target query image test sample.
Illustratively, the categories corresponding to a preset number of samples in the support set that are closest to the target query image test sample are obtained, votes for these categories are counted, and the category with the most votes is taken as the final label prediction result; if multiple categories tie for the most votes, one of them is randomly selected as the final label prediction result.
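The nearest-neighbour vote with random tie-breaking can be sketched as follows; the squared-Euclidean distance and all names are assumptions standing in for the preset distance function of the first embodiment:

```python
import random
from collections import Counter

def predict_label(query_vec, support_vecs, support_labels, k=3, seed=0):
    """k-nearest-neighbour vote over the support set; ties among the top
    vote counts are broken at random, as described above."""
    dists = [sum((q - s) ** 2 for q, s in zip(query_vec, v)) for v in support_vecs]
    order = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    votes = Counter(support_labels[i] for i in order)
    best = max(votes.values())
    winners = [lbl for lbl, n in votes.items() if n == best]
    return random.Random(seed).choice(winners)

support_vecs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
support_labels = ["cat", "cat", "dog"]
pred = predict_label((0.05, 0.02), support_vecs, support_labels, k=3)
```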
The embodiment of the invention encodes the image test sample through the trained image representation model which integrates the image knowledge and the natural language semantic knowledge, and finally tests according to the distance represented by the encoded image vector. The method and the device have the advantages that in the image classification learning task under the condition of the small sample data set, the learning knowledge of other tasks is introduced to improve the effect of small sample learning, and the accuracy of the image classification result under the condition of small sample data set learning is improved.
EXAMPLE III
Fig. 4 is a schematic structural diagram of a small sample learning apparatus in a third embodiment of the present invention, which is applicable to a case where image classification training is performed on a small sample data set. As shown in fig. 4, the apparatus includes:
an image encoding module 410, configured to encode an image training sample set according to an image representation model, to obtain an image matrix formed by image vector representations of each image training sample;
a label coding module 420, configured to code labels of the image training sample set according to a label preprocessing model to obtain a label matrix formed by label vector representations of labels of each image training sample;
and the model optimization module 430 is configured to perform back propagation according to the loss values of the image matrix and the label matrix, so as to perform parameter optimization on the image representation model and the label preprocessing model, and obtain a trained image representation model and a trained label preprocessing model.
The embodiment of the invention encodes the image and the image label to obtain the image matrix and the label matrix, determines the loss value jointly through the image matrix and the label matrix to carry out back propagation, and optimizes the model parameters. The knowledge in the natural language task is introduced into the feature recognition task of the image, the fusion of different task knowledge is realized, the learning of the image features under the condition of a small sample data set is accelerated, and the efficiency and the accuracy of the learning of the image features under the small sample data set are improved.
Optionally, the image training sample set is a subset of a complete data set;
correspondingly, the tag encoding module is specifically configured to:
determining complete tag information for the complete data set;
determining complete word vector representation according to the complete label information and an original word vector model;
determining label information of a training sample according to the label of the image training sample set and the complete label information;
and determining a label matrix formed by label vector representations of the labels of each image training sample according to the label information of the training sample and the complete word vector representations.
Optionally, the model optimization module is specifically configured to:
and performing back propagation according to the loss values of the image matrix and the label matrix so as to perform parameter optimization on the complete word vector representation.
Optionally, the apparatus further includes an original word vector model optimization module, configured to:
and after the trained label preprocessing model is obtained, optimizing and updating the original word vector model according to the complete word vector representation in the trained label preprocessing model.
Optionally, the apparatus further includes a matrix processing module, configured to:
before back propagation is carried out according to the loss values of the image matrix and the label matrix, carrying out dimension consistency processing and normalization processing on the image matrix and the label matrix to obtain a standard image matrix and a standard label matrix;
correspondingly, the model optimization module is specifically configured to:
and performing back propagation according to the loss values of the standard image matrix and the standard label matrix.
Optionally, the model optimization module is specifically configured to:
calculating loss values of the image matrix and the label matrix based on a preset loss function;
performing back propagation according to the loss value;
wherein the preset loss function is the formula shown in the drawings, in which A is determined according to the image matrix, B is determined according to the label matrix, A_ij represents the element value in the ith row and jth column of the matrix A, B_ij represents the element value in the ith row and jth column of the matrix B, and N represents the total number of samples in the image training sample set.
Alternatively, A is an edge matrix calculated for the image matrix according to the graph network, and B is an edge matrix calculated for the label matrix according to the graph network.
Optionally, the image test sample set is a subset of the complete data set;
correspondingly, the device further comprises a test module, which is specifically configured to:
after a trained image representation model and a label preprocessing model are obtained, dividing an image test sample set into a support set and a query set; wherein the support set includes a label for each support image test specimen;
determining a support image vector representation of each support image test sample in the support set and a query image vector representation of each query image test sample in the query set according to the trained image representation model;
determining a distance between a target query image vector representation of a target query image test sample in the query set and a support image vector representation of each support image test sample in the support set;
and determining a label prediction result of the target query image test sample according to the distance.
The small sample learning device provided by the embodiment of the invention can execute the small sample learning method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects for executing the small sample learning method.
Example four
Fig. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 5 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory device 28, and a bus 18 that couples various system components including the system memory device 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system storage 28 may include computer system readable media in the form of volatile storage, such as Random Access Memory (RAM) 30 and/or cache storage 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Storage 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in storage 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with device 12, and/or with any devices (e.g., network card, modem, etc.) that enable device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 5, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system storage device 28, for example, implementing a small sample learning method provided by an embodiment of the present invention, including:
coding an image training sample set according to an image representation model to obtain an image matrix formed by image vector representation of each image training sample;
encoding the labels of the image training sample set according to a label preprocessing model to obtain a label matrix formed by label vector representations of the labels of each image training sample;
and performing back propagation according to the loss values of the image matrix and the label matrix to perform parameter optimization on the image representation model and the label preprocessing model to obtain the trained image representation model and label preprocessing model.
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a small sample learning method provided in an embodiment of the present invention, where the computer program includes:
coding an image training sample set according to an image representation model to obtain an image matrix formed by image vector representation of each image training sample;
encoding the labels of the image training sample set according to a label preprocessing model to obtain a label matrix formed by label vector representations of the labels of each image training sample;
and performing back propagation according to the loss values of the image matrix and the label matrix to perform parameter optimization on the image representation model and the label preprocessing model to obtain the trained image representation model and label preprocessing model.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A small sample learning method, comprising:
coding an image training sample set according to an image representation model to obtain an image matrix formed by image vector representation of each image training sample; the image training sample set is determined according to a small sample data set for image classification training;
encoding the labels of the image training sample set according to a label preprocessing model to obtain a label matrix formed by label vector representations of the labels of each image training sample; the label preprocessing model is constructed based on a natural language semantic feature extraction model, and the labels comprise natural semantic feature information;
and performing back propagation according to the loss values of the image matrix and the label matrix to perform parameter optimization on the image representation model and the label preprocessing model to obtain a trained image representation model and a trained label preprocessing model, and obtaining a classification result of the image test sample according to the trained image representation model and the trained label preprocessing model.
2. The method of claim 1, wherein the set of image training samples is a subset of a complete data set;
correspondingly, encoding the labels of the image training sample set according to the label preprocessing model to obtain a label matrix formed by label vector representations of the labels of each image training sample, including:
determining complete tag information for the complete data set;
determining complete word vector representation according to the complete label information and an original word vector model;
determining label information of a training sample according to the label of the image training sample set and the complete label information;
and determining a label matrix formed by label vector representations of the labels of each image training sample according to the label information of the training sample and the complete word vector representations.
3. The method of claim 2, after obtaining the trained label preprocessing model, further comprising:
and optimizing and updating the original word vector model according to the complete word vector representation in the trained label preprocessing model.
4. The method of claim 1, further comprising, prior to back-propagating based on the loss values of the image matrix and the label matrix:
carrying out dimension consistency processing and normalization processing on the image matrix and the label matrix to obtain a standard image matrix and a standard label matrix;
correspondingly, the back propagation is performed according to the loss values of the image matrix and the label matrix, and the back propagation includes:
and performing back propagation according to the loss values of the standard image matrix and the standard label matrix.
5. The method of claim 1, wherein back-propagating based on the loss values of the image matrix and the label matrix comprises:
calculating loss values of the image matrix and the label matrix based on a preset loss function;
performing back propagation according to the loss value;
wherein the preset loss function is the formula shown in the drawings, in which A is determined according to the image matrix, B is determined according to the label matrix, A_ij represents the element value in the ith row and jth column of the matrix A, B_ij represents the element value in the ith row and jth column of the matrix B, and N represents the total number of samples in the image training sample set.
6. The method of claim 5, wherein A is an edge matrix calculated for the image matrix according to the graph network, and B is an edge matrix calculated for the label matrix according to the graph network.
7. The method of claim 1, wherein the image test sample set is a subset of a complete data set;
correspondingly, after the trained image representation model and label preprocessing model are obtained, the method further comprises the following steps:
dividing the image test sample set into a support set and a query set; wherein the support set includes a label for each support image test specimen;
determining a support image vector representation of each support image test sample in the support set and a query image vector representation of each query image test sample in the query set according to the trained image representation model;
determining a distance between a target query image vector representation of a target query image test sample in the query set and a support image vector representation of each support image test sample in the support set;
and determining a label prediction result of the target query image test sample according to the distance.
8. A small sample learning device, comprising:
the image coding module is used for coding the image training sample set according to the image representation model to obtain an image matrix formed by image vector representation of each image training sample; the image training sample set is determined according to a small sample data set for image classification training;
the label coding module is used for coding the labels of the image training sample set according to the label preprocessing model to obtain a label matrix formed by label vector representation of each image training sample label; the label preprocessing model is constructed based on a natural language semantic feature extraction model, and the labels comprise natural semantic feature information;
and the model optimization module is used for performing back propagation according to the loss values of the image matrix and the label matrix so as to perform parameter optimization on the image representation model and the label preprocessing model to obtain a trained image representation model and a trained label preprocessing model, and obtaining a classification result of the image test sample according to the trained image representation model and the trained label preprocessing model.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the small sample learning method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a small sample learning method according to any one of claims 1 to 7.
CN202110252710.3A 2021-03-09 2021-03-09 Small sample learning method and device, electronic equipment and storage medium Active CN112633419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110252710.3A CN112633419B (en) 2021-03-09 2021-03-09 Small sample learning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112633419A CN112633419A (en) 2021-04-09
CN112633419B true CN112633419B (en) 2021-07-06

Family

ID=75297801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110252710.3A Active CN112633419B (en) 2021-03-09 2021-03-09 Small sample learning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112633419B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111971A (en) * 2021-05-07 2021-07-13 浙江宇视科技有限公司 Intelligent processing method and device for classification model, electronic equipment and medium
CN113128619B (en) * 2021-05-10 2022-05-31 北京瑞莱智慧科技有限公司 Method for training detection model of counterfeit sample, method for identifying counterfeit sample, apparatus, medium, and device
CN113326851B (en) * 2021-05-21 2023-10-27 中国科学院深圳先进技术研究院 Image feature extraction method and device, electronic equipment and storage medium
CN113408606B (en) * 2021-06-16 2022-07-22 中国石油大学(华东) Semi-supervised small sample image classification method based on graph collaborative training
CN113449821B (en) * 2021-08-31 2021-12-31 浙江宇视科技有限公司 Intelligent training method, device, equipment and medium fusing semantics and image characteristics
CN113837394A (en) * 2021-09-03 2021-12-24 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Multi-feature view data label prediction method, system and readable storage medium
CN113920497B (en) * 2021-12-07 2022-04-08 广东电网有限责任公司东莞供电局 Nameplate recognition model training method, nameplate recognition method and related devices
CN114090780B (en) * 2022-01-20 2022-05-31 宏龙科技(杭州)有限公司 Prompt learning-based rapid picture classification method
CN114494818B (en) * 2022-01-26 2023-07-25 北京百度网讯科技有限公司 Image processing method, model training method, related device and electronic equipment
CN114461629A (en) * 2022-02-10 2022-05-10 电子科技大学 Temperature calibration method and device for aircraft engine and storage medium
CN116229175B (en) * 2022-03-18 2023-12-26 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN115049878B (en) * 2022-06-17 2024-05-03 平安科技(深圳)有限公司 Target detection optimization method, device, equipment and medium based on artificial intelligence
CN116383724B (en) * 2023-02-16 2023-12-05 北京数美时代科技有限公司 Single-domain label vector extraction method and device, electronic equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109840530A (en) * 2017-11-24 2019-06-04 Huawei Technologies Co., Ltd. Method and apparatus for training a multi-label classification model
CN111177569A (en) * 2020-01-07 2020-05-19 Tencent Technology (Shenzhen) Co., Ltd. Recommendation processing method, device and equipment based on artificial intelligence

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN109919183B (en) * 2019-01-24 2020-12-18 Peking University Image identification method, device and equipment based on small samples and storage medium
CN109961089B (en) * 2019-02-26 2023-04-07 Sun Yat-sen University Small sample and zero sample image classification method based on metric learning and meta learning
CN110188358B (en) * 2019-05-31 2023-10-24 Dingfu Intelligent Technology Co., Ltd. Training method and device for natural language processing model
CN110555475A (en) * 2019-08-29 2019-12-10 South China University of Technology Few-sample target detection method based on semantic information fusion

Also Published As

Publication number Publication date
CN112633419A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN112633419B (en) Small sample learning method and device, electronic equipment and storage medium
CN112015859A (en) Text knowledge hierarchy extraction method and device, computer equipment and readable medium
CN109034203B (en) Method, device, equipment and medium for training expression recommendation model and recommending expression
CN111738001B (en) Training method of synonym recognition model, synonym determination method and equipment
CN113064964A (en) Text classification method, model training method, device, equipment and storage medium
CN114911958B (en) Semantic preference-based rapid image retrieval method
CN113177700B (en) Risk assessment method, system, electronic equipment and storage medium
CN111950279A (en) Entity relationship processing method, device, equipment and computer readable storage medium
CN113239702A (en) Intention recognition method and device and electronic equipment
CN112214595A (en) Category determination method, device, equipment and medium
CN111401309A (en) CNN training and remote sensing image target identification method based on wavelet transformation
US11494431B2 (en) Generating accurate and natural captions for figures
CN116402166B (en) Training method and device of prediction model, electronic equipment and storage medium
CN109902162B (en) Text similarity identification method based on digital fingerprints, storage medium and device
CN111444335B (en) Method and device for extracting central word
CN115795038A (en) Intention identification method and device based on localization deep learning framework
CN116030295A (en) Article identification method, apparatus, electronic device and storage medium
US20220164705A1 (en) Method and apparatus for providing information based on machine learning
CN112287144B (en) Picture retrieval method, equipment and storage medium
CN113010687B (en) Exercise label prediction method and device, storage medium and computer equipment
CN114936564A (en) Multi-language semantic matching method and system based on alignment variational self-coding
CN114297022A (en) Cloud environment anomaly detection method and device, electronic equipment and storage medium
CN111199170B (en) Formula file identification method and device, electronic equipment and storage medium
CN112926314A (en) Document repeatability identification method and device, electronic equipment and storage medium
CN113849592B (en) Text emotion classification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210409

Assignee: Zhejiang Yushi System Technology Co., Ltd.

Assignor: ZHEJIANG UNIVIEW TECHNOLOGIES Co.,Ltd.

Contract record no.: X2021330000197

Denomination of invention: Small sample learning method, device, electronic device and storage medium

Granted publication date: 20210706

License type: Common License

Record date: 20210831