CN115759205A - Negative sample sampling method based on multi-model cooperation contrast learning - Google Patents

Negative sample sampling method based on multi-model cooperation contrast learning

Info

Publication number
CN115759205A
Authority
CN
China
Prior art keywords
sample
model
similarity
negative
samples
Prior art date
Legal status
Pending
Application number
CN202211515939.2A
Other languages
Chinese (zh)
Inventor
许林漪 (Xu Linyi)
陈百基 (Chen Baiji)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN202211515939.2A
Publication of CN115759205A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a negative sample sampling method based on multi-model collaborative contrastive learning, which comprises the following steps: 1) construct a plurality of contrastive learning models and constrain them to learn different feature subspaces of the data; 2) each contrastive learning model selects a potential positive sample set in its own feature space using a potential positive sample identification algorithm; 3) potential positive samples are removed from the candidate negative sample set by combining the identification results of the different models, yielding a preliminary negative sample set; 4) hard negative samples are selected from the preliminary negative sample set using a hard negative mining algorithm and serve as the negative sample set that finally participates in contrastive learning training. By introducing cooperative sampling across multiple models, the method resolves the sampling bias caused by the feature-space bias of a single model, removes potential positive samples from the negative sample set more thoroughly, and improves the quality of hard negative samples, thereby improving the generalization ability of the contrastive learning model and making it better suited to downstream tasks.

Description

Negative sample sampling method based on multi-model collaborative contrastive learning
Technical Field
The invention relates to the technical field of contrastive learning and negative sample sampling, and in particular to a negative sample sampling method based on multi-model collaborative contrastive learning.
Background
Contrastive learning is a form of self-supervised learning. Its advantage is that, in scenarios without data labels, the model can learn the feature information of the data through a contrastive loss that pulls positive samples closer together in the feature embedding space while pushing negative samples apart. In common contrastive learning frameworks, each sample instance is treated as its own class, i.e., negative samples are randomly chosen from all remaining samples in the dataset other than the anchor sample. The problem with this approach is that the negative sample set easily contains samples that are very similar to the anchor sample, which affects the convergence speed and final performance of the model.
To address the problems caused by randomly selecting negative samples, existing negative sample sampling techniques for contrastive learning fall into two categories: 1. detecting and rejecting potential positive samples in the negative sample set via clustering results or similarity-based methods; 2. inspired by hard negative mining in metric learning, improving training efficiency and performance by selecting hard negative samples, either by choosing high-quality hard negatives from existing samples based on similarity, or by generating hard negatives through data mixing or adversarial generative networks. However, existing methods rely on inter-sample similarity for selection or synthesis, and they compute this similarity only in the feature space of a single model, so the reliability of the similarity depends on the model's current feature representation capability; moreover, due to the randomness of deep learning training, a single model learns only a partial feature subspace of the data, so the selected negative sample set carries the model's own bias, i.e., there is a sampling bias problem.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and provides a negative sample sampling method based on multi-model collaborative contrastive learning, which eliminates the sampling bias introduced by the feature-space bias of a single model, removes potential positive samples from the negative sample set more thoroughly, and improves the quality of hard negative samples, thereby improving the generalization ability of the contrastive learning model and making it better suited to downstream tasks.
To achieve this purpose, the technical solution provided by the invention is as follows: a negative sample sampling method based on multi-model collaborative contrastive learning, comprising the following steps:
1) Construct two or more contrastive learning models and use a diversity constraint method to ensure that different models learn different feature subspaces of the data set;
2) Each contrastive learning model computes, in its own feature space, the similarity between the anchor sample and the candidate negative sample set, and then selects a potential positive sample set from the candidate negative sample set using a potential positive sample identification algorithm;
3) Combine the potential positive sample sets selected by the different models to obtain a final positive sample set, and remove it from the candidate negative sample set to obtain a preliminary negative sample set;
4) Select hard negative samples from the preliminary negative sample set using a hard negative mining algorithm as the negative sample set that finally participates in contrastive learning training.
Further, in step 1), m contrastive learning models are constructed, each contrastive learning model consists of a feature encoder and a mapper, and input data x is fed into the i-th contrastive learning model after data augmentation to obtain the corresponding metric embedding z_i(x), namely:
z_i(x) = h_i(f_i(t(x)))
where x is the input data, z_i(x) is the metric embedding corresponding to the input data x, h_i(·) is the representation-space function of the mapper of the i-th model, f_i(·) is the representation-space function of the feature encoder of the i-th model, t(·) denotes the data augmentation function, m is the total number of contrastive learning models, and m ≥ 2;
Meanwhile, in order to ensure that the different contrastive learning models eventually learn different feature subspaces of the data set, diversity among the different contrastive learning models is ensured in the following respects:
Different contrastive learning models use different data augmentation transformations; the data augmentation methods include random resized cropping, random horizontal flipping, random changes to image attributes, and random conversion to grayscale, and because these augmentation methods are stochastic, applying them once to the same input data x before it enters different contrastive learning models yields different augmentation results;
The network layer parameters and initialization methods used when constructing the different contrastive learning models are not exactly the same; the per-layer parameters of a contrastive learning model include the size and number of convolution kernels, the stride, and the zero-padding mode, and the network-layer weight initialization schemes include uniform distribution, normal distribution, Xavier initialization, and Kaiming initialization;
In order to further ensure that the features extracted by the different contrastive learning models differ, a feature diversity constraint between the models is applied during training, i.e., the similarity between the features extracted by the feature encoders of the different contrastive learning models is computed and minimized:
min Loss_similarity = λ · Σ_{1 ≤ i < j ≤ m} cos(f_i(t(x)), f_j(t(x)))
where min denotes minimizing the value of the right-hand side, Loss_similarity is the sum of the pairwise feature similarities between the models, cos(·,·) is the cosine similarity between two feature vectors, λ controls the magnitude of the whole term, and f_j(·) is the representation-space function of the feature encoder of the j-th model.
Further, step 2) comprises the following steps:
2.1) Given an anchor sample and a candidate negative sample set, each model computes the similarity between the anchor sample and each candidate negative sample in its own feature space:
similarity(x_a, x_nc; θ_i) = cos(h_i(f_i(t(x_a))), h_i(f_i(t(x_nc)))), x_nc ∈ NC
where x_a denotes the anchor sample, NC denotes the candidate negative sample set, x_nc denotes a candidate negative sample, similarity(·,·; θ_i) denotes the similarity measure between two samples, θ_i is the entire representation space of the i-th model, comprising its feature encoder and mapper, cos(·,·) is the cosine similarity between two vectors, h_i(·) is the representation-space function of the mapper of the i-th model, f_i(·) is the representation-space function of the feature encoder of the i-th model, and t(·) denotes the data augmentation function;
2.2) Based on the obtained similarities, each model selects a potential positive sample set from the candidate negative sample set using the potential positive sample identification algorithm, which states that a sample is defined as a potential positive sample when its similarity to the anchor sample is higher than a specified threshold; the potential positive sample set selected by each model is recorded as:
Pos_i = {x_p | x_p ∈ NC ∧ similarity(x_a, x_p; θ_i) ≥ α}
where Pos_i is the potential positive sample set selected by model i, x_p denotes a potential positive sample, and α is the specified threshold, which is determined as follows:
α = top_k(S_i)
where top_k(·) returns the k-th largest value of the input sequence and S_i denotes the similarity sequence between the anchor sample and all candidate negative samples obtained by the i-th model:
S_i = {s_nc = similarity(x_a, x_nc; θ_i) | x_nc ∈ NC}
where s_nc denotes the similarity between the anchor sample and a candidate negative sample.
Further, in step 3), in order to remove all potential positive samples from the candidate negative sample set as completely as possible, a sample is judged to be a positive sample if any one of the models considers it a potential positive sample, so the final positive sample set is:
Pos_final = ∪_{i=1}^{m} Pos_i
where Pos_final is the final positive sample set, Pos_i is the potential positive sample set selected by model i, and m is the total number of models; the final positive sample set is removed from the candidate negative sample set to obtain the preliminary negative sample set:
Neg = {x_n | x_n ∈ NC ∧ x_n ∉ Pos_final}
where Neg denotes the preliminary negative sample set, x_n denotes a preliminary negative sample, and NC is the candidate negative sample set.
Further, in step 4), the hard negative mining algorithm selects samples that are highly similar to the anchor sample from the preliminary negative sample set, which effectively improves the convergence speed and final performance of model training; step 4) comprises the following steps:
4.1) To avoid the bias introduced by any single model, the similarity between the anchor sample and each preliminary negative sample is averaged over all models and used as the final similarity score between the anchor sample and that negative sample:
score(x_a, x_n) = (1/m) · Σ_{i=1}^{m} similarity(x_a, x_n; θ_i)
where score(x_a, x_n) denotes the final similarity score between the anchor sample and a negative sample, similarity(x_a, x_n; θ_i) denotes the cosine similarity between the anchor sample and the negative sample computed by model i, m is the total number of models, x_a denotes the anchor sample, x_n denotes a negative sample, and θ_i is the entire representation space of the i-th model, comprising its feature encoder and mapper;
4.2) The final similarity scores of all negative samples are sorted in descending order, and the top β fraction of negative samples are regarded as hard negative samples and used as the negative sample set that finally participates in contrastive learning training, where β is a percentage.
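To make the cooperation of steps 2) to 4) concrete, the following is a minimal PyTorch-style sketch that operates on a precomputed matrix of per-model anchor-to-candidate similarities; the tensor layout, function name, and defaults are illustrative assumptions rather than a prescribed implementation.

```python
import torch

def cooperative_negative_sampling(sims, k, beta):
    # sims: (m, N) similarities between the anchor and N candidate negatives,
    #       one row per contrastive learning model (step 2).
    # k:    rank of the per-model similarity used as the threshold alpha.
    # beta: fraction of preliminary negatives kept as hard negatives (step 4).
    m, num_candidates = sims.shape
    # Step 2: alpha_i is the k-th largest similarity of model i; candidates
    # whose similarity reaches alpha_i are potential positives for model i.
    alphas = sims.topk(k, dim=1).values[:, -1].unsqueeze(1)   # (m, 1)
    potential_pos = sims >= alphas                            # (m, N) bool
    # Step 3: a candidate flagged by any model is treated as a positive.
    final_pos = potential_pos.any(dim=0)                      # (N,) bool
    prelim_neg_idx = (~final_pos).nonzero(as_tuple=True)[0]
    # Step 4: score each preliminary negative by its mean similarity over
    # all models and keep the top beta fraction as hard negatives.
    scores = sims[:, prelim_neg_idx].mean(dim=0)
    num_hard = max(1, int(beta * prelim_neg_idx.numel()))
    hard = scores.argsort(descending=True)[:num_hard]
    return prelim_neg_idx[hard]
```

For example, with 4096 candidate negatives, k = 40 (1% of the candidates) and beta = 0.5 correspond to the empirical settings used in the detailed embodiment below.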
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. Through multi-model cooperative sampling, the characteristics of different feature subspaces are considered comprehensively, which reduces the sampling bias and error accumulation caused by the bias of any single model and improves the quality and accuracy of negative sample sampling.
2. The method identifies the final potential positive sample set through multi-model cooperation; compared with existing methods, it identifies more potential positive samples, so the negative sample set that is finally used is cleaner.
3. Through cooperation with the other models during sampling, each model learns information about the feature subspaces of the other models, which improves the generalization ability of the final contrastive learning.
4. Multi-model training yields a relatively stable feature space and reduces the generalization error.
5. Multi-model cooperative sampling reduces the possibility that a single model falls into a local optimum.
Drawings
FIG. 1 is a logic flow diagram of the present invention.
FIG. 2 is a schematic diagram of an example of the plurality of contrastive learning models constructed according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
As shown in FIG. 1 and FIG. 2, the present embodiment discloses a negative sample sampling method based on multi-model collaborative contrastive learning, which specifically comprises the following steps:
1) m contrastive learning models are constructed, each contrastive learning model consists of a feature encoder and a mapper, and input data x is fed into the i-th contrastive learning model after data augmentation to obtain the corresponding metric embedding z_i(x), namely:
z_i(x) = h_i(f_i(t(x)))
where x is the input data, z_i(x) is the metric embedding corresponding to the input data x, h_i(·) is the representation-space function of the mapper of the i-th model, f_i(·) is the representation-space function of the feature encoder of the i-th model, t(·) denotes the data augmentation function, m is the total number of contrastive learning models, and m ≥ 2. In the present embodiment, m = 3 is used as an example.
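As an illustration, the three models of this embodiment could be constructed as sketched below, each computing z_i(x) = h_i(f_i(t(x))); the ResNet-18 backbone and the mapper widths are assumptions made for the sketch, since FIG. 2 is described only qualitatively.

```python
import torch.nn as nn
import torchvision

class ContrastiveModel(nn.Module):
    # One contrastive learning model: feature encoder f_i plus mapper h_i.
    def __init__(self, mapper_dims=(512, 128)):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep only the encoder features
        self.encoder = backbone              # f_i
        layers, in_dim = [], feat_dim
        for out_dim in mapper_dims:          # h_i: small MLP projection head
            layers += [nn.Linear(in_dim, out_dim), nn.ReLU(inplace=True)]
            in_dim = out_dim
        self.mapper = nn.Sequential(*layers[:-1])   # no activation on output

    def forward(self, x):
        feats = self.encoder(x)              # f_i(t(x))
        return self.mapper(feats)            # z_i(x) = h_i(f_i(t(x)))

# Three models whose mappers differ in depth and width (encoder settings shared).
models = [ContrastiveModel(dims)
          for dims in [(512, 128), (256, 128), (512, 256, 128)]]
```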
Meanwhile, in order to ensure that the different contrastive learning models eventually learn different feature subspaces of the data set, diversity among the different contrastive learning models is ensured in the following respects:
Different contrastive learning models use different data augmentation transformations. The data augmentation methods include random resized cropping, random horizontal flipping, random changes to image attributes, and random conversion to grayscale. Because these augmentation methods are stochastic, applying them once to the same input data x before it enters different contrastive learning models yields different augmentation results.
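A minimal sketch of such per-model augmentation pipelines, assuming torchvision transforms (the concrete parameter values are illustrative), could look as follows:

```python
from torchvision import transforms

def make_augmentation(crop_size=224, jitter=0.4, gray_p=0.2):
    # t(.): stochastic augmentation, re-sampled on every call, so the same
    # input x yields different views for different models.
    return transforms.Compose([
        transforms.RandomResizedCrop(crop_size),               # random crop size
        transforms.RandomHorizontalFlip(),                     # random horizontal flip
        transforms.ColorJitter(jitter, jitter, jitter, 0.1),   # image attributes
        transforms.RandomGrayscale(p=gray_p),                  # random grayscale
        transforms.ToTensor(),
    ])

# One independent pipeline per model; the parameters may also differ per model.
augmentations = [make_augmentation() for _ in range(3)]
```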
The network layer parameters and initialization methods used when constructing the different models are not exactly the same; the per-layer parameters of a contrastive learning model include the size and number of convolution kernels, the stride, and the zero-padding mode, and the network-layer weight initialization schemes include uniform distribution, normal distribution, Xavier initialization, and Kaiming initialization. As shown in FIG. 2, the parameters of the feature encoders of the three contrastive learning models are kept consistent, while the number of network layers in the mappers and the output channels of each layer differ. In addition, the first model initializes its parameters with a normal distribution, the second model with Xavier initialization, and the third model with Kaiming initialization.
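Assigning the three initialization schemes of this embodiment to the three models sketched above could be done as follows; this is a sketch assuming PyTorch modules, and the handling of bias terms is omitted:

```python
import torch.nn as nn

def init_normal(m):
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)

def init_xavier(m):
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        nn.init.xavier_uniform_(m.weight)

def init_kaiming(m):
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')

# Model 1: normal init, model 2: Xavier init, model 3: Kaiming init.
for model, init_fn in zip(models, [init_normal, init_xavier, init_kaiming]):
    model.apply(init_fn)
```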
In order to further ensure that the features extracted by the different models differ, a feature diversity constraint between the models is applied during training, i.e., the similarity between the features extracted by the feature encoders of the different models is computed and minimized:
min Loss_similarity = λ · Σ_{1 ≤ i < j ≤ m} cos(f_i(t(x)), f_j(t(x)))
where min denotes minimizing the value of the right-hand side, Loss_similarity is the sum of the pairwise feature similarities between all models, cos(·,·) is the cosine similarity between two feature vectors, f_j(·) is the representation-space function of the feature encoder of the j-th model, and λ controls the magnitude of the whole term. Here, as a rule of thumb, λ is set to 0.05.
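A minimal sketch of this pairwise feature diversity term with λ = 0.05, assuming the encoders sketched above and one augmented view of the same batch per model, is given below; it would be minimized jointly with the usual contrastive losses:

```python
import torch.nn.functional as F

def feature_diversity_loss(models, views, lam=0.05):
    # views[i]: batch of inputs already augmented with model i's own t_i(x).
    feats = [m.encoder(v) for m, v in zip(models, views)]
    loss = 0.0
    # Sum of cosine similarities between the features of every pair of models.
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            loss = loss + F.cosine_similarity(feats[i], feats[j], dim=1).mean()
    return lam * loss   # added to the contrastive losses during training
```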
2) Given an anchor sample and a candidate negative sample set, each contrastive learning model computes the similarities between samples in its own feature space and then selects a potential positive sample set from the candidate negative sample set using the potential positive sample identification algorithm, as follows:
2.1) Given an anchor sample and a candidate negative sample set, each model computes the similarity between the anchor sample and each candidate negative sample in its own feature space:
similarity(x_a, x_nc; θ_i) = cos(h_i(f_i(t(x_a))), h_i(f_i(t(x_nc)))), x_nc ∈ NC
where x_a denotes the anchor sample, NC denotes the candidate negative sample set, x_nc denotes a candidate negative sample, similarity(·,·; θ_i) denotes the similarity measure between two samples, θ_i is the entire representation space of the i-th model, comprising its feature encoder and mapper, cos(·,·) is the cosine similarity between two vectors, h_i(·) is the representation-space function of the mapper of the i-th model, f_i(·) is the representation-space function of the feature encoder of the i-th model, and t(·) denotes the data augmentation function;
2.2) Based on the obtained similarities, each model selects a potential positive sample set from the candidate negative sample set using the potential positive sample identification algorithm, which states that a sample is defined as a potential positive sample when its similarity to the anchor sample is higher than a specified threshold; the potential positive sample set selected by each model can be recorded as:
Pos_i = {x_p | x_p ∈ NC ∧ similarity(x_a, x_p; θ_i) ≥ α}
where Pos_i is the potential positive sample set selected by model i, x_p denotes a potential positive sample, and α is the specified threshold, which is determined as follows:
α = top_k(S_i)
where top_k(·) returns the k-th largest value of the input sequence and S_i denotes the similarity sequence between the anchor sample and all candidate negative samples obtained by the i-th model:
S_i = {s_nc = similarity(x_a, x_nc; θ_i) | x_nc ∈ NC}
where s_nc denotes the similarity between the anchor sample and a candidate negative sample. Here, empirically, k is set to 1% of the size of the candidate negative sample set, rounded down; for example, the candidate negative sample set here has 4096 samples, so k = ⌊4096 × 1%⌋ = ⌊40.96⌋ = 40.
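Steps 2.1) and 2.2) for a single model can be sketched as follows, assuming the models sketched above and L2-normalized embeddings so that the inner product equals the cosine similarity; with 4096 candidates and a 1% ratio this reproduces k = 40:

```python
import math
import torch
import torch.nn.functional as F

def potential_positives(model, anchor_view, candidate_views, ratio=0.01):
    # anchor_view: (1, 3, H, W) anchor already augmented with this model's t(.)
    # candidate_views: (N, 3, H, W) candidate negatives, likewise augmented.
    # Step 2.1: cosine similarity between the anchor embedding and every
    # candidate negative embedding in this model's own feature space.
    with torch.no_grad():
        z_a = F.normalize(model(anchor_view), dim=1)          # (1, d)
        z_nc = F.normalize(model(candidate_views), dim=1)     # (N, d)
    sims = (z_nc @ z_a.t()).squeeze(1)                        # (N,) cosine sims
    # Step 2.2: threshold alpha = k-th largest similarity, k = floor(ratio * N).
    k = max(1, math.floor(ratio * sims.numel()))
    alpha = sims.topk(k).values[-1]
    pos_mask = sims >= alpha              # membership mask of Pos_i
    return sims, pos_mask
```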
3) To remove all potential positive samples from the candidate negative sample set as completely as possible, a sample is judged to be a positive sample if any one of the models considers it a potential positive sample. The final positive sample set is therefore:
Pos_final = ∪_{i=1}^{m} Pos_i
where Pos_final is the final positive sample set, Pos_i is the potential positive sample set selected by model i, and m is the total number of models. In this embodiment the final positive sample set is the union of the three potential positive sample sets selected by the three models. The final positive sample set is removed from the candidate negative sample set to obtain the preliminary negative sample set:
Neg = {x_n | x_n ∈ NC ∧ x_n ∉ Pos_final}
where Neg denotes the preliminary negative sample set, x_n denotes a preliminary negative sample, and NC is the candidate negative sample set.
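Combining the per-model results into the final positive set and the preliminary negative set can then be sketched as below, reusing the hypothetical potential_positives helper from the previous sketch:

```python
import torch

def build_preliminary_negatives(models, anchor_view, candidate_views):
    # Step 2 for every model, then step 3: union of the potential positives.
    per_model = [potential_positives(m, anchor_view, candidate_views)
                 for m in models]
    sims_all = torch.stack([sims for sims, _ in per_model])    # (m, N)
    pos_masks = torch.stack([mask for _, mask in per_model])   # (m, N)
    final_pos_mask = pos_masks.any(dim=0)        # flagged by any model
    prelim_neg_idx = (~final_pos_mask).nonzero(as_tuple=True)[0]
    return sims_all, prelim_neg_idx
```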
4) Hard negative samples are selected from the preliminary negative sample set using the hard negative mining algorithm and used as the negative sample set that finally participates in contrastive learning training. The hard negative mining algorithm selects samples that are highly similar to the anchor sample from the preliminary negative sample set, which effectively improves the convergence speed and final performance of model training. It comprises the following steps:
4.1) To avoid the bias introduced by any single model, the similarity between the anchor sample and each preliminary negative sample is averaged over all models and used as the final similarity score between the anchor sample and that negative sample:
score(x_a, x_n) = (1/m) · Σ_{i=1}^{m} similarity(x_a, x_n; θ_i)
where score(x_a, x_n) denotes the final similarity score between the anchor sample and a negative sample, similarity(x_a, x_n; θ_i) denotes the cosine similarity between the anchor sample and the negative sample computed by model i, m is the total number of models, x_a denotes the anchor sample, and x_n denotes a negative sample. That is, for each negative sample, the three models each compute its similarity to the anchor, and the average of the three similarities is taken as the final similarity between that negative sample and the anchor.
4.2) The final similarity scores of all negative samples are sorted in descending order, and the top β fraction of negative samples are regarded as hard negative samples and used as the negative sample set that finally participates in contrastive learning training, where β is a percentage. As a rule of thumb, β is set here to 50%.
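Finally, steps 4.1) and 4.2) can be sketched as follows, operating on the stacked per-model similarities and the preliminary negative indices produced by the previous sketch, with β = 50% as in this embodiment:

```python
def mine_hard_negatives(sims_all, prelim_neg_idx, beta=0.5):
    # Step 4.1: final score = mean similarity over the m models.
    scores = sims_all[:, prelim_neg_idx].mean(dim=0)
    # Step 4.2: descending sort, keep the top beta fraction as hard negatives.
    num_hard = max(1, int(beta * prelim_neg_idx.numel()))
    order = scores.argsort(descending=True)[:num_hard]
    return prelim_neg_idx[order]
```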
The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be regarded as equivalent replacements and are included within the scope of protection of the present invention.

Claims (5)

1. A negative sample sampling method based on multi-model collaborative contrastive learning, characterized by comprising the following steps:
1) Construct two or more contrastive learning models and use a diversity constraint method to ensure that different models learn different feature subspaces of the data set;
2) Each contrastive learning model computes, in its own feature space, the similarity between the anchor sample and the candidate negative sample set, and then selects a potential positive sample set from the candidate negative sample set using a potential positive sample identification algorithm;
3) Combine the potential positive sample sets selected by the different models to obtain a final positive sample set, and remove it from the candidate negative sample set to obtain a preliminary negative sample set;
4) Select hard negative samples from the preliminary negative sample set using a hard negative mining algorithm as the negative sample set that finally participates in contrastive learning training.
2. The negative sample sampling method based on multi-model collaborative contrastive learning according to claim 1, characterized in that in step 1), m contrastive learning models are constructed, each contrastive learning model consists of a feature encoder and a mapper, and input data x is fed into the i-th contrastive learning model after data augmentation to obtain the corresponding metric embedding z_i(x), namely:
z_i(x) = h_i(f_i(t(x)))
where x is the input data, z_i(x) is the metric embedding corresponding to the input data x, h_i(·) is the representation-space function of the mapper of the i-th model, f_i(·) is the representation-space function of the feature encoder of the i-th model, t(·) denotes the data augmentation function, m is the total number of contrastive learning models, and m ≥ 2;
in order to ensure that the different contrastive learning models eventually learn different feature subspaces of the data set, diversity among the different contrastive learning models is ensured in the following respects:
different contrastive learning models use different data augmentation transformations; the data augmentation methods include random resized cropping, random horizontal flipping, random changes to image attributes, and random conversion to grayscale, and because these augmentation methods are stochastic, applying them once to the same input data x before it enters different contrastive learning models yields different augmentation results;
the network layer parameters and initialization methods used when constructing the different contrastive learning models are not exactly the same; the per-layer parameters of a contrastive learning model include the size and number of convolution kernels, the stride, and the zero-padding mode, and the network-layer weight initialization schemes include uniform distribution, normal distribution, Xavier initialization, and Kaiming initialization;
in order to further ensure that the features extracted by the different contrastive learning models differ, a feature diversity constraint between the models is applied during training, i.e., the similarity between the features extracted by the feature encoders of the different contrastive learning models is computed and minimized:
min Loss_similarity = λ · Σ_{1 ≤ i < j ≤ m} cos(f_i(t(x)), f_j(t(x)))
where min denotes minimizing the value of the right-hand side, Loss_similarity is the sum of the pairwise feature similarities between the models, cos(·,·) is the cosine similarity between two feature vectors, λ controls the magnitude of the whole term, and f_j(·) is the representation-space function of the feature encoder of the j-th model.
3. The negative sample sampling method based on multi-model collaborative contrastive learning according to claim 1, characterized in that step 2) comprises the following steps:
2.1) Given an anchor sample and a candidate negative sample set, each model computes the similarity between the anchor sample and each candidate negative sample in its own feature space:
similarity(x_a, x_nc; θ_i) = cos(h_i(f_i(t(x_a))), h_i(f_i(t(x_nc)))), x_nc ∈ NC
where x_a denotes the anchor sample, NC denotes the candidate negative sample set, x_nc denotes a candidate negative sample, similarity(·,·; θ_i) denotes the similarity measure between two samples, θ_i is the entire representation space of the i-th model, comprising its feature encoder and mapper, cos(·,·) is the cosine similarity between two vectors, h_i(·) is the representation-space function of the mapper of the i-th model, f_i(·) is the representation-space function of the feature encoder of the i-th model, and t(·) denotes the data augmentation function;
2.2) Based on the obtained similarities, each model selects a potential positive sample set from the candidate negative sample set using the potential positive sample identification algorithm, which states that a sample is defined as a potential positive sample when its similarity to the anchor sample is higher than a specified threshold; the potential positive sample set selected by each model is recorded as:
Pos_i = {x_p | x_p ∈ NC ∧ similarity(x_a, x_p; θ_i) ≥ α}
where Pos_i is the potential positive sample set selected by model i, x_p denotes a potential positive sample, and α is the specified threshold, which is determined as follows:
α = top_k(S_i)
where top_k(·) returns the k-th largest value of the input sequence and S_i denotes the similarity sequence between the anchor sample and all candidate negative samples obtained by the i-th model:
S_i = {s_nc = similarity(x_a, x_nc; θ_i) | x_nc ∈ NC}
where s_nc denotes the similarity between the anchor sample and a candidate negative sample.
4. The negative sample sampling method based on multi-model collaborative contrastive learning according to claim 1, characterized in that in step 3), in order to remove all potential positive samples from the candidate negative sample set as completely as possible, a sample is judged to be a positive sample if any one of the models considers it a potential positive sample, so the final positive sample set is:
Pos_final = ∪_{i=1}^{m} Pos_i
where Pos_final is the final positive sample set, Pos_i is the potential positive sample set selected by model i, and m is the total number of models; the final positive sample set is removed from the candidate negative sample set to obtain the preliminary negative sample set:
Neg = {x_n | x_n ∈ NC ∧ x_n ∉ Pos_final}
where Neg denotes the preliminary negative sample set, x_n denotes a preliminary negative sample, and NC is the candidate negative sample set.
5. The negative sample sampling method based on multi-model collaborative contrastive learning according to claim 1, characterized in that in step 4), the hard negative mining algorithm selects samples that are highly similar to the anchor sample from the preliminary negative sample set, which effectively improves the convergence speed and final performance of model training; step 4) comprises the following steps:
4.1) To avoid the bias introduced by any single model, the similarity between the anchor sample and each preliminary negative sample is averaged over all models and used as the final similarity score between the anchor sample and that negative sample:
score(x_a, x_n) = (1/m) · Σ_{i=1}^{m} similarity(x_a, x_n; θ_i)
where score(x_a, x_n) denotes the final similarity score between the anchor sample and a negative sample, similarity(x_a, x_n; θ_i) denotes the cosine similarity between the anchor sample and the negative sample computed by model i, m is the total number of models, x_a denotes the anchor sample, x_n denotes a negative sample, and θ_i is the entire representation space of the i-th model, comprising its feature encoder and mapper;
4.2) The final similarity scores of all negative samples are sorted in descending order, and the top β fraction of negative samples are regarded as hard negative samples and used as the negative sample set that finally participates in contrastive learning training, where β is a percentage.
CN202211515939.2A | Priority date: 2022-11-30 | Filing date: 2022-11-30 | Negative sample sampling method based on multi-model cooperation contrast learning | Status: Pending | Publication: CN115759205A

Priority Applications (1)

Application Number: CN202211515939.2A | Priority Date: 2022-11-30 | Filing Date: 2022-11-30 | Title: Negative sample sampling method based on multi-model cooperation contrast learning

Applications Claiming Priority (1)

Application Number: CN202211515939.2A | Priority Date: 2022-11-30 | Filing Date: 2022-11-30 | Title: Negative sample sampling method based on multi-model cooperation contrast learning

Publications (1)

Publication Number: CN115759205A | Publication Date: 2023-03-07

Family

ID=85341147

Family Applications (1)

Application Number: CN202211515939.2A | Title: Negative sample sampling method based on multi-model cooperation contrast learning | Publication: CN115759205A (Pending)

Country Status (1)

Country Link
CN (1) CN115759205A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication Number: CN116776160A * | Priority Date: 2023-08-23 | Publication Date: 2023-09-19 | Assignee: Tencent Technology (Shenzhen) Co., Ltd. | Title: Data processing method and related device
Publication Number: CN116776160B * | Priority Date: 2023-08-23 | Publication Date: 2023-11-10 | Assignee: Tencent Technology (Shenzhen) Co., Ltd. | Title: Data processing method and related device

Similar Documents

Publication Publication Date Title
Chen et al. Adversarial-learned loss for domain adaptation
CN110458216B (en) Image style migration method for generating countermeasure network based on conditions
Gu et al. Clustering-driven unsupervised deep hashing for image retrieval
CN113326731A (en) Cross-domain pedestrian re-identification algorithm based on momentum network guidance
Zheng et al. Prompt vision transformer for domain generalization
Duong et al. Shrinkteanet: Million-scale lightweight face recognition via shrinking teacher-student networks
CN111105160A (en) Steel quality prediction method based on tendency heterogeneous bagging algorithm
CN110287770B (en) Water individual target matching identification method based on convolutional neural network
CN109348229B (en) JPEG image mismatch steganalysis method based on heterogeneous feature subspace migration
CN113076927A (en) Finger vein identification method and system based on multi-source domain migration
CN112164033A (en) Abnormal feature editing-based method for detecting surface defects of counternetwork texture
CN114419413A (en) Method for constructing sensing field self-adaptive transformer substation insulator defect detection neural network
CN115759205A (en) Negative sample sampling method based on multi-model cooperation contrast learning
CN114998602A (en) Domain adaptive learning method and system based on low confidence sample contrast loss
CN114006870A (en) Network flow identification method based on self-supervision convolution subspace clustering network
CN110751191A (en) Image classification method and system
CN115147632A (en) Image category automatic labeling method and device based on density peak value clustering algorithm
CN116452862A (en) Image classification method based on domain generalization learning
CN115578248A (en) Generalized enhanced image classification algorithm based on style guidance
Wei et al. Edge devices clustering for federated visual classification: A feature norm based framework
Kim et al. Semi-supervised domain adaptation via selective pseudo labeling and progressive self-training
CN113919440A (en) Social network rumor detection system integrating dual attention mechanism and graph convolution
CN110992320A (en) Medical image segmentation network based on double interleaving
WO2023201772A1 (en) Cross-domain remote sensing image semantic segmentation method based on adaptation and self-training in iteration domain
CN110717068A (en) Video retrieval method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination