CN112085194B - Distributionally robust adversarial learning method - Google Patents

Distributionally robust adversarial learning method

Info

Publication number
CN112085194B
CN112085194B (application CN202010891222.2A)
Authority
CN
China
Prior art keywords
covariate
model
covariates
current
environment
Prior art date
Legal status
Active
Application number
CN202010891222.2A
Other languages
Chinese (zh)
Other versions
CN112085194A (en)
Inventor
Cui Peng (崔鹏)
Liu Jiashuo (刘家硕)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010891222.2A
Publication of application CN112085194A
Application granted
Publication of grant CN112085194B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model


Abstract

The invention provides a distributionally robust adversarial learning method, belonging to the technical fields of robust learning and adversarial learning. The method first acquires multi-environment training data and builds a covariate set and a target-variable set from it. While optimizing the model on the multi-environment data, it infers the relative robustness of each covariate and, under a distributionally robust learning framework, weights different covariates according to the inferred robustness, constructing an adversarial distribution set that better matches reality; the output is the trained model parameters and a covariate weight vector. At prediction time, the covariate weights distinguish the relative robustness of different covariates, and feeding the weighted covariates into the model yields more accurate classification results. The method builds on the observation that, in practice, different covariates are robust to different degrees: by treating covariates differently it constructs a more realistic adversarial distribution set and performs more effective distributionally robust optimization, with high application value in fields such as image classification.

Description

Distributionally robust adversarial learning method
Technical Field
The invention belongs to the technical fields of robust learning and adversarial learning, and in particular provides a distributionally robust adversarial learning method.
Background
Traditional machine learning methods are based on empirical risk minimization. When the training data contain latent heterogeneity, confounding factors, or distribution shift, such methods often generalize poorly, so their predictive performance in real environments is unstable. Distributionally robust learning addresses this by constructing an adversarial (uncertainty) set of distributions and optimizing the worst case over that set, aiming at better generalization under distribution shift. Existing methods, however, construct adversarial sets that are too large: much of the set need not be considered in any realistic scenario, so in practice, especially under strong distribution shift, the actual generalization benefit is poor.
In real application scenarios, covariates differ in robustness. In an image classification task, for example, covariates representing color, texture, and background vary far more across environments than covariates representing the target object. If a model is applied to a scenario rarely seen in the training data, its classification performance suffers greatly. Existing distributionally robust learning methods for image classification mainly apply tiny perturbations to images and train on the perturbed images, achieving robustness only under tiny perturbations: because these methods do not treat different covariates differently, the perturbation must be kept small enough that the image label can be assumed unchanged, so robustness can be guaranteed only under slight disturbance.
Disclosure of the Invention
The invention aims to overcome the defects of the prior art by providing a distributionally robust adversarial learning method. Based on the observation that different covariates exhibit different degrees of robustness in practice, the method treats different covariates differently, constructs an adversarial distribution set that better matches reality, and performs more effective distributionally robust optimization.
The invention provides a distributionally robust adversarial learning method comprising the following steps:
1) Acquiring multi-environment training data;
Select training data D_e = {X_e, Y_e} from different environments e ∈ ℰ to form the multi-environment training data, where ℰ is the set of environments; X_e is the covariate array of environment e, consisting of the covariates of all training samples from environment e, and Y_e is the target-variable array of environment e, consisting of the target variables of all training samples from environment e. The covariate arrays of all environments form the covariate set, and the target-variable arrays of all environments form the target-variable set;
2) Establishing a model: the model is a mapping from the space of covariates X to the target variable Y;
3) Initialize the weight vector formed by the per-dimension weights of the covariates in the covariate set from step 1) to an all-ones vector, i.e., every covariate dimension starts with the same relative robustness, and take it as the current covariate weight vector w;
4) Using the current covariate weight vector w, construct the current adversarial distribution set under the Wasserstein distance:

$$\mathcal{P}(w) = \{\, Q : W_{c_w}(Q, P_0) \le \rho \,\}, \qquad c_w(z_1, z_2) = \lVert w \odot (z_1 - z_2) \rVert^2,$$

where P_0 is the original training-data distribution, ρ is the radius of the adversarial distribution set, and W_{c_w}(Q, P_0) is the Wasserstein distance between a distribution Q and the initial distribution P_0 under cost c_w; z_i = (x_i, y_i) denotes training sample i, with covariates x_i and target variable y_i, i = 1, 2, …;
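In code, the weighted transport cost above is a one-liner; a minimal numpy sketch (the function name `c_w` mirrors the notation above, and the example weights are illustrative only):

```python
import numpy as np

def c_w(w, z1, z2):
    """Weighted transport cost c_w(z1, z2) = ||w ⊙ (z1 − z2)||²:
    moving mass along a heavily weighted covariate is expensive,
    along a down-weighted (unstable) covariate it is cheap."""
    w, z1, z2 = np.asarray(w, float), np.asarray(z1, float), np.asarray(z2, float)
    return float(np.sum((w * (z1 - z2)) ** 2))

# the same displacement of 3.0, but along differently weighted covariates:
expensive = c_w([1.0, 0.1], [3.0, 0.0], [0.0, 0.0])  # (1.0 * 3)^2 = 9.0
cheap = c_w([1.0, 0.1], [0.0, 3.0], [0.0, 0.0])      # (0.1 * 3)^2 ≈ 0.09
```

With all-ones weights this reduces to the squared Euclidean cost, which is why step 3) starts from the all-ones vector.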
5) Over the current adversarial distribution set 𝒫(w), learn the current parameters θ of the model established in step 2) by distributionally robust optimization:

$$\min_{\theta} \; \sup_{Q \in \mathcal{P}(w)} \; \mathbb{E}_{(X,Y) \sim Q}\left[\ell(\theta; X, Y)\right],$$

where ℓ(θ; X, Y) is the error at sample point (X, Y) and the expectation is the expected error over data drawn from Q for the current parameters θ; the initial value of the current model parameters θ is the result of random initialization;
6) Using the current θ and the current w, compute from the multi-environment training data of step 1)

$$R(\theta(w)) = \frac{1}{|\mathcal{E}|} \sum_{e \in \mathcal{E}} L_e(\theta(w)) + \alpha \cdot \operatorname{Var}_{e \in \mathcal{E}}\left(L_e(\theta(w))\right),$$

where L_e(θ(w)) is the average error in environment e; then update the current covariate weight vector w using R(θ(w)) and return to step 4). The parameter α > 0 is a hyperparameter;
7) Repeat steps 4) to 6), training the model by gradient descent until it converges; the current model parameters θ at that point are the final model parameters, and the current covariate weight vector w is the final covariate weight vector of the trained model;
8) Take an arbitrary test sample, apply the final covariate weight vector from step 7) to its covariates to obtain the corrected covariates of the test sample; input the corrected covariates into the model trained in step 7), and the model output is the target-variable prediction result for the test sample.
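Steps 1) to 7) can be sketched end to end. Everything concrete below is a stand-in, not the patent's implementation: synthetic two-environment data with one stable and one spurious covariate, a linear model with squared loss, a few gradient-ascent steps with penalty λ·c_w as a Lagrangian surrogate for the inner sup, and a finite-difference update of w on an assumed objective R = mean environment error + α times its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) multi-environment data: x0 is stable (y = 2*x0 + noise); x1 is
#    spuriously correlated with y, and the correlation flips across envs.
def make_env(n, sign):
    x0 = rng.normal(size=n)
    y = 2.0 * x0 + 0.1 * rng.normal(size=n)
    x1 = sign * y + 0.5 * rng.normal(size=n)
    return np.stack([x0, x1], axis=1), y

envs = [make_env(200, +1.0), make_env(50, -1.0)]

# 2) model: linear map theta; per-sample squared error
def loss(theta, X, y):
    r = X @ theta - y
    return r * r

# 5) inner sup via gradient ascent on loss(x+delta) - lam*||w ⊙ delta||^2
def perturb(theta, X, y, w, lam=5.0, steps=5, lr=0.05):
    delta = np.zeros_like(X)
    for _ in range(steps):
        r = (X + delta) @ theta - y
        delta += lr * (2 * r[:, None] * theta[None, :] - 2 * lam * (w**2) * delta)
    return X + delta

def train_theta(theta0, w, inner=20, lr=0.05):
    th = theta0.copy()
    for _ in range(inner):
        for Xe, ye in envs:
            Xa = perturb(th, Xe, ye, w)
            th -= lr * 2 * ((Xa @ th - ye)[:, None] * Xa).mean(axis=0)
    return th

# 6) weight objective: mean env error + alpha * variance (assumed form)
def R_of(w, theta0, alpha=1.0):
    th = train_theta(theta0, w)
    errs = np.array([loss(th, Xe, ye).mean() for Xe, ye in envs])
    return errs.mean() + alpha * errs.var(), th

# 3) all-ones initial weights; 7) alternate theta-updates and w-updates
theta, w = 0.1 * rng.normal(size=2), np.ones(2)
for _ in range(5):
    R, theta = R_of(w, theta)
    g = np.zeros_like(w)                    # finite-difference dR/dw
    for j in range(len(w)):
        wp = w.copy(); wp[j] += 1e-2
        g[j] = (R_of(wp, theta)[0] - R) / 1e-2
    w = np.clip(w - 0.5 * g, 0.1, 1.5)      # keep w positive and in a
                                            # range safe for this sketch
```

A production version would replace the finite differences with automatic differentiation through R(θ(w)) and the linear model with whatever network step 2) actually instantiates.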
Features and beneficial effects of the invention:
1. The invention constructs a sparser adversarial distribution set for distributionally robust learning, guaranteeing sparsity of the set at the same level of robustness.
2. Through multi-environment training data, the method infers the relative robustness of the covariates while optimizing the model, and weights different covariates under a distributionally robust learning framework according to the inferred relative robustness, constructing an adversarial distribution set that better matches reality.
3. For an optimized linear regression model, the invention significantly reduces the estimation error of the model parameters and maintains robust predictive performance in complex and changeable test environments.
4. The invention provides a strong algorithmic robustness guarantee. In practical applications, the associations in the data are likely to change with time, region, and user type; a model optimized by existing empirical risk minimization methods can suffer a large performance loss under such changes. The method copes with these changes effectively, so the model generalizes uniformly well across differing data distributions.
5. The directional distributionally robust learning method of the invention uses multi-environment data to distinguish the relative robustness of different covariates and tends to apply large perturbations to unstable covariates. In image classification, whereas previous methods can only perturb all covariates uniformly and slightly because they do not distinguish them, this method can strongly perturb "unstable" regions of an image, such as its background and overall color. This destroys spurious associations between background or color and the image label, so the model predicts from more stable and interpretable features and achieves stronger robustness.
Detailed Description
The invention provides a distributionally robust adversarial learning method, described in further detail below with reference to specific embodiments.
The method comprises the following steps:
1) Acquire multi-environment training data: select training data D_e = {X_e, Y_e} from different environments e ∈ ℰ to form the multi-environment training data, where ℰ is the set of environments; X_e is the covariate array of environment e, consisting of the covariates of all training samples from environment e (each environment may contain more than one training sample; the covariates of each training sample are multidimensional, and all training samples have covariates of the same dimension); Y_e is the target-variable array of environment e, consisting of the target variables of all training samples from environment e (the target variable of each training sample is one-dimensional; the number of training samples selected per environment need not be the same). The covariate arrays of all environments form the covariate set (a matrix of size: covariate dimension times the total number of training samples), and the target-variable arrays of all environments form the target-variable set.
2) Establish a model: the model is abstracted as a mapping from the covariate space X to the target variable Y; the invention is not designed around a specific model and is suitable for optimizing various models.
3) Initialize the weight vector formed by the per-dimension weights of the covariates in the covariate set from step 1) to an all-ones vector, i.e., every covariate dimension starts with the same relative robustness, and take it as the current covariate weight vector w.
4) Using the current covariate weight vector w, construct the current adversarial distribution set under the Wasserstein distance:

$$\mathcal{P}(w) = \{\, Q : W_{c_w}(Q, P_0) \le \rho \,\}, \qquad c_w(z_1, z_2) = \lVert w \odot (z_1 - z_2) \rVert^2,$$

where P_0 is the original training-data distribution and ρ is the radius of the adversarial distribution set: the set contains every distribution Q whose Wasserstein distance W_{c_w}(Q, P_0) to the initial distribution P_0 does not exceed ρ; z_i = (x_i, y_i) denotes training sample i, with covariates x_i and target variable y_i, i = 1, 2, …;
The Wasserstein distance is defined as follows. Suppose 𝒵 = 𝒳 × 𝒴, where 𝒳 is the space of input covariates and 𝒴 is the target-variable space. Given a transport cost function c : 𝒵 × 𝒵 → ℝ₊ that is nonnegative, lower semicontinuous, and satisfies c(z, z) = 0, the Wasserstein distance between two distributions P and Q supported on 𝒵 is defined as

$$W_c(P, Q) = \inf_{M \in \Pi(P, Q)} \mathbb{E}_{(z, z') \sim M}\left[c(z, z')\right],$$

where Π(P, Q) is the set of measures M on 𝒵 × 𝒵 whose two marginals are P and Q, respectively.
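For two uniform empirical distributions of equal size, the infimum over couplings is attained at a permutation (a consequence of the Birkhoff–von Neumann theorem), so the definition above can be checked by brute force on tiny examples. A numpy sketch using the weighted cost c_w from step 4) (the function names are mine, not the patent's):

```python
import numpy as np
from itertools import permutations

def cost_matrix(w, Z1, Z2):
    """C[i, j] = c_w(z1_i, z2_j) = ||w ⊙ (z1_i − z2_j)||²."""
    diff = Z1[:, None, :] - Z2[None, :, :]
    return np.sum((np.asarray(w) * diff) ** 2, axis=-1)

def wasserstein_uniform(C):
    """W_c(P, Q) for uniform empirical P, Q of equal size n: the optimal
    coupling is 1/n times a permutation matrix, so minimize over all n!."""
    n = C.shape[0]
    return min(sum(C[i, p[i]] for i in range(n))
               for p in permutations(range(n))) / n

# two 2-point distributions shifted by 2.0 along the down-weighted covariate
Z1 = np.array([[0.0, 0.0], [1.0, 0.0]])
Z2 = np.array([[0.0, 2.0], [1.0, 2.0]])
w = np.array([1.0, 0.5])
d = wasserstein_uniform(cost_matrix(w, Z1, Z2))   # (0.5 * 2)^2 = 1.0
```

Down-weighting the second covariate (w₂ = 0.5) makes this shift four times cheaper than under the unweighted squared Euclidean cost, which is exactly how the method makes perturbations of unstable covariates inexpensive for the adversary.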
5) Over the current adversarial distribution set 𝒫(w) constructed by the algorithm, learn the current parameters θ of the model established in step 2) by distributionally robust optimization:

$$\min_{\theta} \; \sup_{Q \in \mathcal{P}(w)} \; \mathbb{E}_{(X,Y) \sim Q}\left[\ell(\theta; X, Y)\right],$$

where ℓ(θ; X, Y) is the error at sample point (X, Y) and the expectation is the expected error over data drawn from Q for the current parameters θ.
If this step is executed for the first time, the model parameters θ are the result of random initialization and the covariate weights w are the initial all-ones vector; otherwise, both θ and w are the results of the current round of optimization.
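The inner sup in this step is typically made tractable through its Lagrangian dual. For Wasserstein ambiguity sets this is the standard duality of Blanchet–Murthy and Sinha et al., stated here (not taken from the patent) with the weighted cost c_w:

```latex
\sup_{Q : W_{c_w}(Q, P_0) \le \rho} \mathbb{E}_{Q}\big[\ell(\theta; Z)\big]
  \;=\; \inf_{\lambda \ge 0} \Big\{ \lambda \rho
  + \mathbb{E}_{Z \sim P_0}\big[ \sup_{z'} \; \ell(\theta; z')
  - \lambda \, c_w(z', Z) \big] \Big\}
```

For a fixed λ this reduces the worst-case expectation to a per-sample perturbation problem, which is why the method can, for example, generate per-image adversarial perturbations whose cost along each covariate is scaled by w.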
6) Using the current θ and the current weight w, compute from the multi-environment training data obtained in step 1)

$$R(\theta(w)) = \frac{1}{|\mathcal{E}|} \sum_{e \in \mathcal{E}} L_e(\theta(w)) + \alpha \cdot \operatorname{Var}_{e \in \mathcal{E}}\left(L_e(\theta(w))\right),$$

where L_e(θ(w)) is the average error in environment e; then update the current covariate weight vector w using R(θ(w)) and return to step 4). The parameter α > 0 is a set hyperparameter.
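Given the average error of each environment, the quantity R(θ(w)) is a one-liner; a minimal sketch assuming the mean-plus-α-times-variance form described above:

```python
import numpy as np

def R(env_errors, alpha=1.0):
    """Mean environment error plus alpha times its variance across
    environments (assumed form of R(theta(w)); alpha > 0 trades average
    accuracy against stability across environments)."""
    e = np.asarray(env_errors, dtype=float)
    return e.mean() + alpha * e.var()

r_even = R([0.5, 0.5])        # equal errors: mean 0.5, no variance penalty
r_uneven = R([0.2, 0.8])      # same mean 0.5, but var 0.09 -> 0.59
```

Minimizing R over w therefore prefers weight vectors under which the trained model performs uniformly across environments, not merely well on average; the update of w in this step can use automatic differentiation or finite differences.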
7) Repeat steps 4) to 6), training the model by gradient descent until it converges; the current model parameters θ at that point are the final model parameters, and the current covariate weight vector w is the final covariate weight vector of the trained model.
8) Take an arbitrary test sample, apply the final covariate weight vector from step 7) to its covariates to obtain the corrected covariates of the test sample; input the corrected covariates into the model trained in step 7), and the model output is the target-variable prediction result for the test sample.
After the final model parameters θ and covariate weights w are obtained, the model parameters θ can be used directly to predict on test or unknown data. The covariate weights describe, to some extent, how robust the different covariates are across environments: the higher the weight, the more stable the relationship between that covariate and the target variable Y, and the more suitable the covariate is for prediction; conversely, a low weight indicates a covariate that tends to be spuriously correlated and unsuitable for prediction.
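Step 8)'s use of the weights at prediction time can be sketched as an elementwise product (my reading of "corrected covariates"; the stand-in "model" below just sums its inputs):

```python
import numpy as np

def predict(model, w, x_test):
    """Reweight the test covariates by the learned weight vector w,
    then feed the corrected covariates to the trained model."""
    return model(w * np.asarray(x_test, dtype=float))

# toy stand-in model; a real one would be the trained network from step 7)
model = lambda x: float(np.sum(x))
w = np.array([1.0, 0.2])                  # unstable covariate down-weighted
y_hat = predict(model, w, [3.0, 5.0])     # 1.0*3 + 0.2*5 = 4.0
```

The down-weighted covariate contributes proportionally less to the prediction, matching the interpretation of the weights given above.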
The present invention is further described in detail below with reference to a specific example.
The method can be applied to image classification tasks; here it classifies camels and horses in pictures, where most camel pictures have a desert background and most horse pictures have a grassland background, while a minority are the opposite.
The embodiment provides a distributed robustness counterstudy method, which comprises the following steps:
1) Acquire multi-environment training data: select training data D_e = {X_e, Y_e} from different environments e ∈ ℰ to form the multi-environment training data, where ℰ is the set of environments; X_e is the covariate array of environment e, consisting of the covariates of all training samples from environment e (each environment may contain more than one training sample; the covariates of each training sample are multidimensional, and all training samples have covariates of the same dimension); Y_e is the target-variable array of environment e, consisting of the target variables of all training samples from environment e (the target variable of each training sample is one-dimensional; the number of training samples selected per environment need not be the same). The covariate arrays of all environments form the covariate set (a matrix of size: covariate dimension times the total number of training samples), and the target-variable arrays of all environments form the target-variable set.
The inputs of this example are pictures from two environments with corresponding category labels: one environment is the "vast majority" above (camels pictured in the desert, horses on grassland); the other is the "minority" (camels on grassland, horses in the desert). In this embodiment the picture is the covariate and its category label is the target variable;
2) Establishing a model: a general image classification neural network is sufficient.
3) Initialize the weight vector formed by the per-dimension weights of the covariates in the covariate set from step 1) to an all-ones vector, i.e., every covariate dimension starts with the same relative robustness, and take it as the current covariate weight vector w.
4) Using the current covariate weight vector w, construct the current adversarial distribution set under the Wasserstein distance:

$$\mathcal{P}(w) = \{\, Q : W_{c_w}(Q, P_0) \le \rho \,\}, \qquad c_w(z_1, z_2) = \lVert w \odot (z_1 - z_2) \rVert^2,$$

where P_0 is the original training-data distribution, ρ is the radius of the adversarial distribution set, and W_{c_w}(Q, P_0) is the Wasserstein distance between a distribution Q and the initial distribution P_0 under cost c_w; z_i = (x_i, y_i) denotes training sample i, with covariates x_i and target variable y_i, i = 1, 2, …;
5) Over the current adversarial distribution set 𝒫(w) constructed by the algorithm, learn the current parameters θ of the model established in step 2) by distributionally robust optimization:

$$\min_{\theta} \; \sup_{Q \in \mathcal{P}(w)} \; \mathbb{E}_{(X,Y) \sim Q}\left[\ell(\theta; X, Y)\right],$$

where ℓ(θ; X, Y) is the error at sample point (X, Y) and the expectation is the expected error over data drawn from Q for the current parameters θ.
If this step is executed for the first time, the model parameters θ are the result of random initialization and the covariate weights w are the initial all-ones vector; otherwise, both θ and w are the results of the current round of optimization.
6) Using the current θ and the current weight w, compute from the multi-environment training data obtained in step 1)

$$R(\theta(w)) = \frac{1}{|\mathcal{E}|} \sum_{e \in \mathcal{E}} L_e(\theta(w)) + \alpha \cdot \operatorname{Var}_{e \in \mathcal{E}}\left(L_e(\theta(w))\right),$$

where L_e(θ(w)) is the average error in environment e; then update the current covariate weight vector w using R(θ(w)) and return to step 4). The parameter α > 0 is a set hyperparameter.
7) Repeat steps 4) to 6), training the model by gradient descent until it converges; the current model parameters θ at that point are the final model parameters, and the current covariate weight vector w is the final covariate weight vector of the trained model.
8) Take an arbitrary test sample, apply the final covariate weight vector from step 7) to its covariates to obtain the corrected covariates of the test sample; input the corrected covariates into the model trained in step 7), and the model output is the target-variable prediction result for the test sample.
After the final model parameters θ and covariate weights w are obtained, the model parameters θ can be used directly to predict on test or unknown data. The covariate weights describe, to some extent, how robust the different covariates are across environments: the higher the weight, the more stable the relationship between that covariate and the target variable Y, and the more suitable the covariate is for prediction; conversely, a low weight indicates a covariate that tends to be spuriously correlated and unsuitable for prediction.
In this embodiment, the covariate weight vector w distinguishes the animal from the background in the input picture. The method tends to perturb the background strongly, so the model cannot use the background to predict the label: it cannot predict "camel" from desert or "horse" from grassland.

Claims (1)

1. A distributionally robust adversarial learning method, comprising the following steps:
1) Acquiring multi-environment training data;
Select training data D_e = {X_e, Y_e} from different environments e ∈ ℰ to form the multi-environment training data, where ℰ is the set of environments; X_e is the covariate array of environment e, consisting of the covariates of all training samples from environment e, and Y_e is the target-variable array of environment e, consisting of the target variables of all training samples from environment e. The covariate arrays of all environments form the covariate set, and the target-variable arrays of all environments form the target-variable set;
2) Establishing a model: the model is a mapping from the space of covariates X to the target variable Y;
3) Initialize the weight vector formed by the per-dimension weights of the covariates in the covariate set from step 1) to an all-ones vector, i.e., every covariate dimension starts with the same relative robustness, and take it as the current covariate weight vector w;
4) Using the current covariate weight vector w, construct the current adversarial distribution set under the Wasserstein distance:

$$\mathcal{P}(w) = \{\, Q : W_{c_w}(Q, P_0) \le \rho \,\}, \qquad c_w(z_1, z_2) = \lVert w \odot (z_1 - z_2) \rVert^2,$$

where P_0 is the original training-data distribution, ρ is the radius of the adversarial distribution set, and W_{c_w}(Q, P_0) is the Wasserstein distance between a distribution Q and the initial distribution P_0 under cost c_w; z_i = (x_i, y_i) denotes training sample i, with covariates x_i and target variable y_i, i = 1, 2, …;
5) Over the current adversarial distribution set 𝒫(w), learn the current parameters θ of the model established in step 2) by distributionally robust optimization:

$$\min_{\theta} \; \sup_{Q \in \mathcal{P}(w)} \; \mathbb{E}_{(X,Y) \sim Q}\left[\ell(\theta; X, Y)\right],$$

where ℓ(θ; X, Y) is the error at sample point (X, Y) and the expectation is the expected error over data drawn from Q for the current parameters θ; the initial value of the current model parameters θ is the result of random initialization;
6) Using the current θ and the current w, compute from the multi-environment training data obtained in step 1)

$$R(\theta(w)) = \frac{1}{|\mathcal{E}|} \sum_{e \in \mathcal{E}} L_e(\theta(w)) + \alpha \cdot \operatorname{Var}_{e \in \mathcal{E}}\left(L_e(\theta(w))\right),$$

where L_e(θ(w)) is the average error in environment e; then update the current covariate weight vector w using R(θ(w)) and return to step 4). The parameter α > 0 is a hyperparameter;
7) Repeat steps 4) to 6), training the model by gradient descent until it converges; the current model parameters θ at that point are the final model parameters, and the current covariate weight vector w is the final covariate weight vector of the trained model;
8) Take an arbitrary test sample, apply the final covariate weight vector from step 7) to its covariates to obtain the corrected covariates of the test sample; input the corrected covariates into the model trained in step 7), and the model output is the target-variable prediction result for the test sample.
CN202010891222.2A 2020-08-30 2020-08-30 Distributionally robust adversarial learning method Active CN112085194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010891222.2A CN112085194B (en) 2020-08-30 2020-08-30 Distributionally robust adversarial learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010891222.2A CN112085194B (en) 2020-08-30 2020-08-30 Distributionally robust adversarial learning method

Publications (2)

Publication Number Publication Date
CN112085194A (en) 2020-12-15
CN112085194B (en) 2022-12-13

Family

ID=73728944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010891222.2A Active CN112085194B (en) 2020-08-30 2020-08-30 Distributed robustness confrontation learning method

Country Status (1)

Country Link
CN (1) CN112085194B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205184B (en) * 2021-04-28 2023-01-31 清华大学 Invariant learning method and device based on heterogeneous hybrid data
CN113345525B (en) * 2021-06-03 2022-08-09 谱天(天津)生物科技有限公司 Analysis method for reducing influence of covariates on detection result in high-throughput detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353548A (en) * 2020-03-11 2020-06-30 中国人民解放军军事科学院国防科技创新研究院 Robust feature deep learning method based on confrontation space transformation network
CN111414937A (en) * 2020-03-04 2020-07-14 华东师范大学 Training method for improving robustness of multi-branch prediction single model in scene of Internet of things

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414937A (en) * 2020-03-04 2020-07-14 华东师范大学 Training method for improving robustness of multi-branch prediction single model in scene of Internet of things
CN111353548A (en) * 2020-03-11 2020-06-30 中国人民解放军军事科学院国防科技创新研究院 Robust feature deep learning method based on confrontation space transformation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bidirectional learning and inference based on the Wasserstein distance; Hua Qiang et al.; Journal of Hebei University (Natural Science Edition); 2020-05-25 (No. 03); full text *

Also Published As

Publication number Publication date
CN112085194A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
Kumar et al. Videoflow: A flow-based generative model for video
Bakurov et al. Structural similarity index (SSIM) revisited: A data-driven approach
Oliva et al. Cross entropy based thresholding for magnetic resonance brain images using Crow Search Algorithm
CN106910192B (en) Image fusion effect evaluation method based on convolutional neural network
CN110210486B (en) Sketch annotation information-based generation countermeasure transfer learning method
Fowlkes et al. Spectral grouping using the nystrom method
Titsias et al. Spike and slab variational inference for multi-task and multiple kernel learning
Oliva et al. Image segmentation by minimum cross entropy using evolutionary methods
CN108073876B (en) Face analysis device and face analysis method
Zhong et al. An unsupervised artificial immune classifier for multi/hyperspectral remote sensing imagery
CN112085194B (en) Distributionally robust adversarial learning method
CN108052881A (en) The method and apparatus of multiclass entity object in a kind of real-time detection construction site image
JP2020537204A (en) Deep Neural Network Normalization Methods and Devices, Instruments, and Storage Media
CN112001903A (en) Defect detection network construction method, abnormality detection method and system, and storage medium
CN112257603B (en) Hyperspectral image classification method and related equipment
CN108038503B (en) Woven fabric texture characterization method based on K-SVD learning dictionary
CN109598220A (en) A kind of demographic method based on the polynary multiple dimensioned convolution of input
CN109544603A (en) Method for tracking target based on depth migration study
CN108121962B (en) Face recognition method, device and equipment based on nonnegative adaptive feature extraction
CN110363218A (en) A kind of embryo's noninvasively estimating method and device
CN111915603A (en) Artificial intelligence prediction method for noise-free phase diagram in noise-containing EBSD data
CN110263808B (en) Image emotion classification method based on LSTM network and attention mechanism
CN111340106A (en) Unsupervised multi-view feature selection method based on graph learning and view weight learning
CN109345497A (en) Image fusion processing method and system, computer program based on fuzzy operator
CN111814804B (en) Human body three-dimensional size information prediction method and device based on GA-BP-MC neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant