CN112232378A - Zero-shot learning method for fMRI visual classification - Google Patents

Zero-shot learning method for fMRI visual classification

Info

Publication number
CN112232378A
Authority
CN
China
Prior art keywords
training
images
fmri
image
test set
Prior art date
Legal status
Pending
Application number
CN202011006608.7A
Other languages
Chinese (zh)
Inventor
陈健
谢鹏飞
乔凯
梁宁宁
王林元
张子飞
罗旭
魏月纳
闫镔
Current Assignee
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force
Priority to CN202011006608.7A
Publication of CN112232378A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a zero-shot learning method for fMRI visual classification. The method comprises the following steps. Step 1: construct a dataset for zero-shot learning of fMRI visual classification, the dataset comprising a training set and a test set; the training set comprises training-set images and the training-set fMRI brain signals recorded after subjects viewed those images, and the test set comprises test-set images and the test-set fMRI brain signals recorded after subjects viewed those image stimuli; the image semantic categories of the training-set images and the test-set images are different. Step 2: train, on the training set, an automatic fMRI brain-signal generation network conditioned on image features and based on adversarial learning. Step 3: train a semantic-category visual classification network for the test set. Step 4: input the test-set fMRI brain signals into the semantic-category visual classification network trained in step 3 to obtain prediction results, thereby realizing visual classification of the test-set fMRI brain signals. The semantic visual classification network of the present invention can thus be extended to image semantic categories never seen during training.

Description

Zero-shot learning method for fMRI visual classification
Technical Field
The invention relates to the technical field of fMRI-based visual classification, and in particular to a zero-shot learning method for fMRI visual classification.
Background
fMRI-based visual classification is a key technique for fMRI-based visual information decoding, aiming at predicting the category of external image stimuli from cerebral cortical neural information.
Conventional visual classification models require training a mapping from visual-cortex voxels to image semantic classes on a training set of fMRI data; the trained model can then predict, from new visual-cortex voxel responses in the test set, the class of the image viewed by the subject. In general the number of visual-cortex voxels is large (high dimensionality) while fMRI datasets are small, which is unfavorable for model training. Therefore, before the voxels are fed into the model, a subset of important voxels is usually selected from the full set, i.e., the dimensionality is reduced; these important voxels are used to build the visual classification model and improve classification accuracy. Voxel dimensionality-reduction methods can roughly be divided into region-of-interest-based methods, voxel-activation-based methods, accuracy-based methods, and principal-feature-analysis-based voxel selection methods. After voxel dimensionality reduction, a classification model is designed, trained, and tested to complete the visual classification task.
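For illustration only (not part of the original patent text), a voxel-screening step in the spirit of the activation/accuracy-based selection methods named above could look like the following sketch; the data are synthetic and the univariate F-test criterion is an assumed stand-in for whichever selection criterion is actually used.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif

    # Synthetic stand-in for an fMRI training set:
    # 120 trials x 5000 visual-cortex voxels, 6 image semantic classes.
    rng = np.random.default_rng(0)
    voxels = rng.normal(size=(120, 5000))
    labels = rng.integers(0, 6, size=120)

    # Keep the 500 voxels whose responses best discriminate the classes
    # according to a univariate F-test (one possible "important voxel" screen).
    selector = SelectKBest(score_func=f_classif, k=500)
    voxels_reduced = selector.fit_transform(voxels, labels)

    print(voxels_reduced.shape)  # (120, 500): lower-dimensional input for the classifier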
Existing visual classification models generally fall into three categories: classifier-based approaches, voxel-matching-based approaches, and feature-matching-based approaches.
(1) Classifier-based approach
A classifier model is first trained on the training set and is then used to predict the corresponding class from the voxel responses in the validation set. In 2003, Cox and Savoy (Cox, D.D. and R.L. Savoy, Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage, 2003. 19(2): p. 261-270) proposed the use of SVM classifiers to predict classes; in addition, various other classifiers, including the Fisher linear discriminant classifier and the K-nearest neighbor classifier, have been used for visual classification. In 2017, Wen et al. (Wen, H., et al., Neural encoding and decoding with deep learning for dynamic natural vision. Cerebral Cortex, 2017: p. 1-25) used a pre-trained deep network model to map voxels to the input features of the network's final classification layer and then completed visual classification with that classifier.
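To make the classifier-based route concrete, a minimal sketch with a linear SVM (one of the classifier types mentioned above) on synthetic voxel data might look as follows; the dimensions, train/validation split, and kernel choice are assumptions for illustration, not the settings of the cited works.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    voxels = rng.normal(size=(200, 500))   # selected voxel responses
    labels = rng.integers(0, 6, size=200)  # image semantic classes

    X_train, X_val, y_train, y_val = train_test_split(
        voxels, labels, test_size=0.25, random_state=1)

    # Linear SVM mapping voxel responses to image semantic classes.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_train, y_train)
    print("validation accuracy:", clf.score(X_val, y_val))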
(2) Voxel matching based approach
First, a voxel template is constructed from the training set data, one voxel template per class; then the correlation between the voxel pattern to be predicted in the validation set and each class's voxel template is computed, and the class with the maximum correlation is taken as the prediction for that cortical voxel pattern. The key to this type of method is therefore the construction of the voxel templates. In 2001, Haxby et al. (Haxby, J.V., et al., Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 2001. 293(5539): p. 2425-2430) directly computed the average of the training-set voxel patterns belonging to the same class and used it as that class's voxel template. In 2008, Kay et al. (Kay, K.N., et al., Identifying natural images from human brain activity. Nature, 2008. 452(7185): p. 352) first trained a visual encoding model on the training set, then fed images of the same category into it to obtain the corresponding predicted voxel patterns, and used the average of the predicted patterns as the voxel template of that category.
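A minimal sketch of the voxel-matching idea, assuming Haxby-style class-mean templates and Pearson correlation as the similarity measure, both on synthetic data:
    import numpy as np

    def build_voxel_templates(train_voxels, train_labels):
        """One template per class: the mean voxel pattern of that class."""
        classes = np.unique(train_labels)
        return {c: train_voxels[train_labels == c].mean(axis=0) for c in classes}

    def predict_by_template(templates, voxel_pattern):
        """Return the class whose template correlates best with the pattern."""
        best_class, best_corr = None, -np.inf
        for c, template in templates.items():
            corr = np.corrcoef(template, voxel_pattern)[0, 1]
            if corr > best_corr:
                best_class, best_corr = c, corr
        return best_class

    rng = np.random.default_rng(2)
    train_voxels = rng.normal(size=(120, 500))
    train_labels = rng.integers(0, 6, size=120)
    templates = build_voxel_templates(train_voxels, train_labels)
    print(predict_by_template(templates, rng.normal(size=500)))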
(3) Feature matching based approach
First, a feature template is constructed from the training set data, one feature template per category; then the voxels are mapped into a feature space, the correlation between the mapped features and each template is computed, and the category with the maximum correlation is taken as the prediction for the cortical voxel pattern. This type of method therefore needs some feature space as an intermediate bridge, into which the voxels are mapped for matching. In 2017, Horikawa and Kamitani (Horikawa, T. and Y. Kamitani, Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications, 2017. 8: p. 15037) and, in 2018, Wen et al. (Wen, H., et al., Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization. Scientific Reports, 2018. 8(1): p. 3752) constructed feature templates using deep network features as the intermediate bridge.
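The feature-matching route can be sketched the same way, except that voxels are first mapped into a feature space before correlation with per-class feature templates; the ridge-regression voxel-to-feature decoder below is an illustrative assumption, not the method of the cited papers.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(3)
    train_voxels = rng.normal(size=(120, 500))
    train_feats = rng.normal(size=(120, 256))   # stand-in for deep-network features
    train_labels = rng.integers(0, 6, size=120)

    # Intermediate bridge: linear mapping from voxel space to feature space.
    decoder = Ridge(alpha=10.0).fit(train_voxels, train_feats)

    # One feature template per class: mean deep feature of that class's images.
    templates = {c: train_feats[train_labels == c].mean(axis=0)
                 for c in np.unique(train_labels)}

    test_voxel = rng.normal(size=(1, 500))
    decoded = decoder.predict(test_voxel)[0]
    pred = max(templates, key=lambda c: np.corrcoef(decoded, templates[c])[0, 1])
    print("predicted class:", pred)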
Current visual classification models are mainly designed for the case in which the training set and the test set share the same class set, i.e., the image semantic categories in the training set are exactly the same as those in the test set. This limits such models in practical application scenarios: a trained model is difficult to extend to image semantic categories that were never seen during training.
Disclosure of Invention
Aiming at the problem that existing visual classification models are difficult to extend to image semantic categories never seen during training, the invention provides a zero-shot learning method for fMRI visual classification.
The zero-shot learning method for fMRI visual classification provided by the invention comprises the following steps:
step 1: constructing a data set of zero learning facing fMRI visual classification, wherein the data set comprises a training set and a test set, and the training set comprises training set images and training set fMRI brain signals after the training set images are tested to see; the test set comprises a test set image and test set fMRI brain signals after the test set image is tested to be stimulated; the semantic categories of the images between the training set images and the testing set images are different;
step 2: training according to a training set, and automatically generating a network based on fMRI brain signals under the image characteristic condition of counterstudy;
and step 3: training a semantic category visual classification network according to the test set;
and 4, step 4: and (3) inputting the fMRI brain signals of the test set according to the semantic category visual classification network trained in the step (3) to obtain a prediction result, and realizing visual classification of the fMRI brain signals of the test set.
Further, step 2 comprises:
Step 2.1: extract high-level semantic image features of the training-set images with a pre-trained deep network classification model, and form positive sample pairs from the high-level semantic image features of the training-set images and the training-set fMRI brain signals;
Step 2.2: according to the image semantic categories of the training-set images, collect additional images of the same semantic categories that have no corresponding fMRI brain signals, and denote them training-set-category images; extract their high-level semantic image features with the same pre-trained deep network classification model as in step 2.1; with these image features as the condition, use the generator of a generative adversarial network to generate pseudo fMRI brain signals for the training-set-category images from noise vectors sampled from a Gaussian distribution, and form negative sample pairs from the high-level semantic image features of the training-set-category images and their pseudo fMRI brain signals; the pseudo fMRI brain signals of the training-set-category images contain image semantic-category information;
Step 2.3: feed the positive and negative sample pairs into the discriminator of the generative adversarial network, introduce a pseudo-fMRI-brain-signal visual classification network, and perform adversarial training of the generator and the discriminator together with auxiliary training of this visual classification network until equilibrium is reached, so that the discriminator can no longer distinguish positive from negative sample pairs; the generative adversarial network obtained at this point is the automatic fMRI brain-signal generation network.
Further, the generator and the discriminator are each formed by stacking fully connected layers, activation function layers, and normalization layers.
Further, step 3 comprises:
Step 3.1: according to the image semantic categories of the test-set images, collect a number of images of the same semantic categories and denote them test-set-category images; extract their high-level semantic image features with the same pre-trained deep network classification model as in step 2.1; use the generator trained in step 2.3 to generate pseudo fMRI brain signals for the test-set-category images from Gaussian-distributed noise vectors, and form pseudo sample pairs from the high-level semantic image features of the test-set-category images and their pseudo fMRI brain signals;
Step 3.2: train the semantic-category visual classification network on the pseudo sample pairs of step 3.1.
Further, the semantic-category visual classification network is formed by stacking fully connected layers, activation function layers, and normalization layers.
Further, a cross-entropy loss function is adopted as the loss function when training the semantic-category visual classification network.
The invention has the following beneficial effects:
The zero-shot learning method for fMRI visual classification provided by the invention is based on the idea of adversarial learning with generative adversarial networks: a generative adversarial network is used to generate fake fMRI brain signals, which are introduced into visual classification model training to realize zero-shot learning of the model;
In the zero-shot learning model for visual classification of the invention, the image semantic categories of the training set and the test set are completely different, i.e., have no intersection, and the test-set images and their fMRI brain signals are not needed during model training; the trained zero-shot model can predict, within the specified set of test-set categories, semantic categories that never participated in training, so that the semantic visual classification network can be extended to image semantic categories never seen during training.
Drawings
Fig. 1 is a schematic flowchart of the zero-shot learning method for fMRI visual classification according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the training of the automatic fMRI brain-signal generation network conditioned on image features and based on adversarial learning according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the semantic-category visual classification network training according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of performing visual classification of test-set fMRI brain signals using the semantic-category visual classification network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
fMRI visual classification means directly predicting the semantic category of the image a subject is viewing from the fMRI signals of the subject's visual brain areas during image stimulation. The invention provides a zero-shot learning method for fMRI visual classification, which must be able to train a model that predicts image semantic categories from fMRI brain signals even when the semantic categories of the training set and the test set do not overlap; here, zero-shot learning means that the model can predict semantic categories that never appear in the training set.
As shown in fig. 1, an embodiment of the present invention provides a zero-shot learning method for fMRI visual classification, which includes the following steps:
S101: construct a dataset for zero-shot learning of fMRI visual classification, the dataset comprising a training set and a test set; the training set comprises training-set images and the training-set fMRI brain signals recorded after subjects viewed those images; the test set comprises test-set images and the test-set fMRI brain signals recorded after subjects viewed those image stimuli; the image semantic categories of the training-set images and the test-set images are different;
S102: train, on the training set, an automatic fMRI brain-signal generation network conditioned on image features and based on adversarial learning;
specifically, as shown in fig. 2, this step includes the following substeps:
S1021: extract high-level semantic image features of the training-set images with a pre-trained deep network classification model, and form positive sample pairs from the high-level semantic image features of the training-set images and the training-set fMRI brain signals;
the pre-training deep network classification model in the step can adopt classical models such as AlexNet, VGGNet, ResNet and the like. The high-level semantic features of the image extracted in the step are features of the last but one level of the pre-training depth network model, namely features before being sent into the classifier. The class characteristics contain more high-level semantic information and are more helpful to classification.
S1022: according to the image semantic categories of the training-set images, collect additional images of the same semantic categories that have no corresponding fMRI brain signals, and denote them training-set-category images; extract their high-level semantic image features with the same pre-trained deep network classification model as in step S1021; with these image features as the condition, use the generator of a generative adversarial network to generate pseudo fMRI brain signals for the training-set-category images from noise vectors sampled from a Gaussian distribution, and form negative sample pairs from the high-level semantic image features of the training-set-category images and their pseudo fMRI brain signals;
In this step, images from an existing public database or images downloaded from the Internet may be used when collecting the training-set-category images. Since the subjects never viewed the training-set-category images, these images have no corresponding real fMRI brain signals. The generated pseudo fMRI brain signals of the training-set-category images contain image semantic-category information.
S1023: feed the positive and negative sample pairs into the discriminator of the generative adversarial network, introduce a pseudo-fMRI-brain-signal visual classification network (i.e., the training-set classifier in fig. 2), and perform adversarial training of the generator and the discriminator together with auxiliary training of this visual classification network until equilibrium is reached, so that the discriminator can no longer distinguish positive from negative sample pairs; the generative adversarial network obtained at this point is the automatic fMRI brain-signal generation network.
Specifically, in the prior art a generative adversarial network (GAN) is mainly used for natural image generation. A generative adversarial network is a type of deep neural network comprising a generator network and a discriminator network: in image generation, the generator produces forged images from noise vectors sampled from a fixed distribution, while the discriminator distinguishes real images from forged ones; through adversarial training of the generator and the discriminator an equilibrium is finally reached in which the discriminator can no longer tell real images from forged ones. Differently from this traditional use, the embodiment of the invention applies the idea of adversarial learning to generate fake fMRI brain signals and introduces them into visual classification model training, thereby realizing zero-shot learning of the model.
In this step, the generator and the discriminator are each formed by stacking fully connected layers, activation function layers, and normalization layers; the generator realizes a nonlinear transformation, and the discriminator outputs a one-dimensional real/fake probability through a sigmoid layer. The pseudo-fMRI-brain-signal visual classification network assists the training of the generative adversarial network. The fMRI brain signals produced by the trained automatic fMRI brain-signal generation network imitate the distribution of real brain signals.
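A minimal PyTorch sketch of a generator and discriminator of this kind, built from fully connected, activation, and normalization layers, with the generator conditioned on image features and the discriminator ending in a sigmoid; all layer widths and the voxel/feature/noise dimensions are assumptions for illustration, not values from the patent.
    import torch
    import torch.nn as nn

    FEAT_DIM, NOISE_DIM, VOXEL_DIM = 512, 100, 4000   # assumed dimensions

    class Generator(nn.Module):
        """Maps (Gaussian noise, image feature) to a pseudo fMRI voxel pattern."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM + FEAT_DIM, 1024), nn.BatchNorm1d(1024), nn.ReLU(),
                nn.Linear(1024, 2048), nn.BatchNorm1d(2048), nn.ReLU(),
                nn.Linear(2048, VOXEL_DIM),
            )
        def forward(self, noise, feat):
            return self.net(torch.cat([noise, feat], dim=1))

    class Discriminator(nn.Module):
        """Scores whether an (fMRI signal, image feature) pair is real or fake."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(VOXEL_DIM + FEAT_DIM, 1024), nn.LayerNorm(1024), nn.LeakyReLU(0.2),
                nn.Linear(1024, 256), nn.LayerNorm(256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1), nn.Sigmoid(),    # one-dimensional real/fake probability
            )
        def forward(self, fmri, feat):
            return self.net(torch.cat([fmri, feat], dim=1))

    G, D = Generator(), Discriminator()
    noise = torch.randn(16, NOISE_DIM)
    feat = torch.randn(16, FEAT_DIM)
    fake_fmri = G(noise, feat)
    print(fake_fmri.shape, D(fake_fmri, feat).shape)   # (16, 4000), (16, 1)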
The adversarial training of the generator and the discriminator, together with the auxiliary training of the fMRI brain-signal visual classification network, works as follows: the training-set classifier constrains the pseudo brain signals generated by the generator from image features to contain image semantic-category information, while the discriminator constrains the generated pseudo brain signals to be as similar as possible to real brain signals; the two jointly supervise the training of the generator.
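The joint adversarial and auxiliary training described here can be sketched as a single training step; the equal loss weighting, the optimizers, and the module interfaces (a generator G, a discriminator D, and a training-set classifier train_clf, e.g. as in the previous sketch) are assumptions for illustration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    bce = nn.BCELoss()

    def train_step(G, D, train_clf, g_opt, d_opt,
                   real_fmri, real_feat, fake_feat, fake_labels, noise_dim=100):
        """One step: D separates positive from negative pairs; G is pushed to fool D
        while the auxiliary training-set classifier constrains the pseudo signals
        to carry semantic-category information."""
        batch = real_fmri.size(0)
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        # Discriminator: positive pairs are real, generated (negative) pairs are fake.
        noise = torch.randn(batch, noise_dim)
        fake_fmri = G(noise, fake_feat).detach()
        d_loss = bce(D(real_fmri, real_feat), ones) + bce(D(fake_fmri, fake_feat), zeros)
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator: fool the discriminator AND satisfy the auxiliary classifier.
        noise = torch.randn(batch, noise_dim)
        fake_fmri = G(noise, fake_feat)
        adv_loss = bce(D(fake_fmri, fake_feat), ones)
        cls_loss = F.cross_entropy(train_clf(fake_fmri), fake_labels)
        g_loss = adv_loss + cls_loss    # equal weighting is an assumption
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        return d_loss.item(), g_loss.item()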
S103: train a semantic-category visual classification network for the test set;
specifically, as shown in fig. 3, this step includes the following two substeps:
S1031: according to the image semantic categories of the test-set images, collect a number of images of the same semantic categories and denote them test-set-category images; extract their high-level semantic image features with the same pre-trained deep network classification model as in step S1021; use the generator trained in step S1023 to generate pseudo fMRI brain signals for the test-set-category images from Gaussian-distributed noise vectors, and form pseudo sample pairs from the high-level semantic image features of the test-set-category images and their pseudo fMRI brain signals;
S1032: train the semantic-category visual classification network (i.e., the test-set classifier in fig. 3) on the pseudo sample pairs of step S1031.
In this step, the semantic-category visual classification network is formed by stacking fully connected layers, activation function layers, and normalization layers; a cross-entropy loss function is adopted as the loss function when training the semantic-category visual classification network.
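An illustrative sketch of this step: the semantic-category (test-set) classifier is trained only on pseudo pairs produced by the frozen generator from step S102, with a cross-entropy loss. The layer sizes, the number of unseen categories, and the optimizer settings below are assumptions.
    import torch
    import torch.nn as nn

    VOXEL_DIM, NOISE_DIM, NUM_TEST_CLASSES = 4000, 100, 10   # assumed dimensions

    # Semantic-category visual classification network: FC + normalization + activation stack.
    test_clf = nn.Sequential(
        nn.Linear(VOXEL_DIM, 1024), nn.BatchNorm1d(1024), nn.ReLU(),
        nn.Linear(1024, NUM_TEST_CLASSES),
    )
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(test_clf.parameters(), lr=1e-4)

    def train_on_pseudo_pairs(G, class_feats, class_labels, epochs=10):
        """class_feats: high-level features of the collected test-set-category images;
        class_labels: their semantic-category indices; G: the trained, frozen generator."""
        G.eval()
        for _ in range(epochs):
            noise = torch.randn(class_feats.size(0), NOISE_DIM)
            with torch.no_grad():
                pseudo_fmri = G(noise, class_feats)   # pseudo fMRI signals for unseen classes
            loss = criterion(test_clf(pseudo_fmri), class_labels)
            optimizer.zero_grad(); loss.backward(); optimizer.step()
        return test_clf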
S104: as shown in fig. 4, input the test-set fMRI brain signals into the trained semantic-category visual classification network to obtain prediction results, thereby realizing visual classification of the test-set fMRI brain signals.
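Prediction then reduces to a forward pass of the real test-set fMRI signals through the trained test-set classifier; a minimal helper, assuming a classifier like the one sketched above:
    import torch

    def classify_test_fmri(test_clf, test_fmri):
        """Predict semantic categories for real test-set fMRI brain signals."""
        test_clf.eval()
        with torch.no_grad():
            return test_clf(test_fmri).argmax(dim=1)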
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A zero-shot learning method for fMRI visual classification, characterized by comprising the following steps:
step 1: constructing a dataset for zero-shot learning of fMRI visual classification, the dataset comprising a training set and a test set, wherein the training set comprises training-set images and the training-set fMRI brain signals recorded after subjects viewed those images, the test set comprises test-set images and the test-set fMRI brain signals recorded after subjects viewed those image stimuli, and the image semantic categories of the training-set images and the test-set images are different;
step 2: training, on the training set, an automatic fMRI brain-signal generation network conditioned on image features and based on adversarial learning;
step 3: training a semantic-category visual classification network for the test set;
step 4: inputting the test-set fMRI brain signals into the semantic-category visual classification network trained in step 3 to obtain prediction results, thereby realizing visual classification of the test-set fMRI brain signals.
2. The method of claim 1, wherein step 2 comprises:
step 2.1: extracting high-level semantic image features of the training-set images with a pre-trained deep network classification model, and forming positive sample pairs from the high-level semantic image features of the training-set images and the training-set fMRI brain signals;
step 2.2: according to the image semantic categories of the training-set images, collecting additional images of the same semantic categories that have no corresponding fMRI brain signals and denoting them training-set-category images; extracting their high-level semantic image features with the same pre-trained deep network classification model as in step 2.1; with these image features as the condition, generating pseudo fMRI brain signals for the training-set-category images from Gaussian-sampled noise vectors using the generator of a generative adversarial network, and forming negative sample pairs from the high-level semantic image features of the training-set-category images and the pseudo fMRI brain signals, the pseudo fMRI brain signals of the training-set-category images containing image semantic-category information;
step 2.3: feeding the positive and negative sample pairs into the discriminator of the generative adversarial network, introducing a pseudo-fMRI-brain-signal visual classification network, and performing adversarial training of the generator and the discriminator together with auxiliary training of the visual classification network until equilibrium is reached, so that the discriminator cannot distinguish the positive sample pairs from the negative sample pairs, the generative adversarial network obtained at this point being the automatic fMRI brain-signal generation network.
3. The method of claim 2, wherein the generator and the discriminator are each formed by stacking fully connected layers, activation function layers, and normalization layers.
4. The method of claim 2, wherein step 3 comprises:
step 3.1: according to the image semantic categories of the test-set images, collecting a number of images of the same semantic categories and denoting the collected images test-set-category images; extracting their high-level semantic image features with the same pre-trained deep network classification model as in step 2.1; generating pseudo fMRI brain signals for the test-set-category images from Gaussian-distributed noise vectors using the generator trained in step 2.3, and forming pseudo sample pairs from the high-level semantic image features of the test-set-category images and the pseudo fMRI brain signals of the test-set-category images;
step 3.2: training the semantic-category visual classification network on the pseudo sample pairs of step 3.1.
5. The method of claim 4, wherein the semantic-category visual classification network is formed by stacking fully connected layers, activation function layers, and normalization layers.
6. The method of claim 4, wherein a cross-entropy loss function is adopted as the loss function when training the semantic-category visual classification network.
CN202011006608.7A 2020-09-23 2020-09-23 Zero-shot learning method for fMRI visual classification Pending CN112232378A (en)

Priority Applications (1)

Application number: CN202011006608.7A (publication CN112232378A); priority date: 2020-09-23; filing date: 2020-09-23; title: Zero-shot learning method for fMRI visual classification

Applications Claiming Priority (1)

Application number: CN202011006608.7A (publication CN112232378A); priority date: 2020-09-23; filing date: 2020-09-23; title: Zero-shot learning method for fMRI visual classification

Publications (1)

Publication number: CN112232378A; publication date: 2021-01-15

Family

ID=74107421

Family Applications (1)

Application number: CN202011006608.7A; title: Zero-shot learning method for fMRI visual classification; status: Pending (publication CN112232378A); priority date: 2020-09-23; filing date: 2020-09-23

Country Status (1)

Country Link
CN (1) CN112232378A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270493A1 (en) * 2013-03-13 2014-09-18 National Taiwan University Adaptable classification method
CN110691550A (en) * 2017-02-01 2020-01-14 塞雷比安公司 System and method for measuring perception experience
CN109816630A (en) * 2018-12-21 2019-05-28 中国人民解放军战略支援部队信息工程大学 FMRI visual coding model building method based on transfer learning
CN110580501A (en) * 2019-08-20 2019-12-17 天津大学 Zero sample image classification method based on variational self-coding countermeasure network
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sunhee Hwang et al., "EZSL-GAN: EEG-based Zero-Shot Learning approach using a Generative Adversarial Network", 2019 7th International Winter Conference on Brain-Computer Interface (BCI) *
Qin Yiqing et al., "Semantic segmentation method for high-resolution remote sensing images combined with scene classification data", Computer Applications and Software *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313232A (*) 2021-05-19 2021-08-27 South China University of Technology Functional brain network classification method based on pre-training and graph neural network

Legal Events

PB01: Publication (application publication date: 2021-01-15)
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication