CN110992334B - Quality evaluation method for DCGAN network generated image - Google Patents


Info

Publication number: CN110992334B
Application number: CN201911200153.XA
Authority: CN (China)
Prior art keywords: pictures, DCGAN, DCGAN network, classifier, network
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110992334A
Inventor: 李潇
Current assignee: Homwee Technology Co., Ltd.
Original assignee: Homwee Technology Co., Ltd.
Application filed by Homwee Technology Co., Ltd.; application published as CN110992334A, grant published as CN110992334B

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/217 — Pattern recognition; validation; performance evaluation; active pattern learning techniques
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/048 — Neural networks; activation functions
    • G06T 2207/30168 — Image quality inspection


Abstract

The invention relates to the field of image processing and discloses a quality evaluation method for images generated by a DCGAN network, which improves the accuracy of that evaluation. In the method, pictures generated by a DCGAN network are first fed back as input to the same network and iterated repeatedly; during the iteration the pictures are saved at regular intervals, and a portion of higher-quality pictures is selected from each save. The selected pictures are then labeled, a portion of the original pictures is likewise taken out and labeled, and all are mixed together in equal proportion. A qualified classifier is then trained on the mixed picture set. Next, the mixed picture set is input into the DCGAN network to generate a certain number of pictures x, which are put into the classifier for classification, yielding a multi-dimensional vector y and its probability p(y). Finally, a quality evaluation result for the images generated by the DCGAN network is obtained from the probability p(y). The method is suitable for evaluating the quality of images generated by a DCGAN network.

Description

Quality evaluation method for DCGAN network generated image
Technical Field
The invention relates to the field of image processing, and in particular to a quality evaluation method for images generated by a DCGAN network.
Background
GAN stands for Generative Adversarial Network. The original GAN is an unsupervised learning method that ingeniously uses the idea of "adversarial" training to learn a generative model; once training is complete, it can generate brand-new data samples. DCGAN extends the GAN concept to convolutional neural networks and can generate picture samples of higher quality.
Generative adversarial networks are currently the most popular image generation method, GAN variants keep emerging, and the quality of the generated pictures keeps rising; yet there are still few methods for judging the quality of the pictures an adversarial network generates. People usually judge an adversarial network by the quality of its final pictures, but mostly this is done qualitatively and subjectively, by visually comparing generated pictures with real ones. The most popular quantitative evaluation methods at present are the IS (Inception Score) and FID methods. IS can only measure the diversity of the generated images and never compares generated samples against real samples, which is a clear shortcoming. The FID value, in turn, depends on an Inception Net trained on ImageNet: it compares Inception activation values between real and generated images under the assumption that those activations are approximately Gaussian, and so it cannot clearly account for improvements in fine detail.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a quality evaluation method for images generated by a DCGAN network, so as to improve the accuracy of that evaluation.
To solve this problem, the invention adopts the following technical scheme. The quality evaluation method for images generated by the DCGAN network comprises the following steps:
Step 1: repeatedly iterate, using the pictures generated by the DCGAN network as input to the same DCGAN network, until the number of iterations reaches a threshold M (M > 1); the initial input to the DCGAN network is a set of original pictures prepared by the user. During the iteration, output and save the pictures once every N iterations, where M is an integral multiple of N. After the iteration finishes, select a portion of higher-quality pictures from each saved batch for the subsequent picture mixing; when selecting, the pictures in each saved batch can be sorted from high to low quality and a top-ranked fraction taken, e.g. the top 1/4 or top 1/3.
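The feedback loop of step 1 can be sketched in plain Python. Here `generate` and `score` are hypothetical stand-ins for the real DCGAN generation step and the quality-ranking criterion, both of which the patent leaves unspecified:

```python
def iterate_and_save(generate, score, originals, M, N, keep_ratio=0.25):
    """Feed the DCGAN's output back in as its input M times, saving the
    pictures every N iterations, then keep the top-ranked fraction of each
    saved batch. `generate` and `score` stand in for the real DCGAN
    generation step and quality criterion, which the patent leaves open."""
    assert M % N == 0, "M must be an integral multiple of N"
    batch = list(originals)              # initial input: the user's originals
    saved = []
    for i in range(1, M + 1):
        batch = generate(batch)          # generated pictures become the input
        if i % N == 0:                   # intermittent save point
            saved.append(batch)
    def keep(b):                         # sort quality high -> low, take top
        return sorted(b, key=score, reverse=True)[
            : max(1, int(len(b) * keep_ratio))]
    return [keep(b) for b in saved]      # e.g. top 1/4 of each saved batch
```

With M = 25 and N = 5, as in the embodiment below, this yields five saved batches.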
Step 2: attach a different label to each group of pictures selected in step 1, likewise take out a portion of the original pictures and label them, and then mix the labeled pictures together in equal proportion to obtain a mixed picture set.
Step 3: input one part of the mixed picture set into a classifier as a training set to train it, test the classification accuracy of the classifier with the remaining part of the mixed picture set, and proceed to step 4 once the classification accuracy meets the requirement.
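The partition in step 3 can be sketched as follows; `split_train_test` is an illustrative helper (not named in the patent), using the 90%/10% split suggested later:

```python
import random

def split_train_test(mixed_pictures, train_frac=0.9, seed=0):
    """Shuffle the mixed picture set and split it into a training part for
    the classifier and a held-out part for measuring classification
    accuracy. train_frac=0.9 follows the preferred 90%/10% allocation."""
    pics = list(mixed_pictures)
    random.Random(seed).shuffle(pics)    # fixed seed for reproducibility
    cut = int(len(pics) * train_frac)
    return pics[:cut], pics[cut:]
```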
Step 4: input the mixed picture set into the DCGAN network so that it generates a certain number of pictures, denoted x; then put x into the classifier obtained in step 3 for classification, obtaining a multidimensional vector y and its probability p(y), where the value of each dimension of y corresponds to the probability p(y|x) that x belongs to each class of pictures; the quality evaluation result for the images generated by the DCGAN network is obtained from the probabilities p(y|x) and p(y).
Further, after the probabilities p(y) and p(y|x) are obtained in step 4, the quality evaluation result can be obtained by computing the divergence between p(y) and p(y|x): the smaller the divergence, the better the quality of the images generated by the DCGAN network.
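A minimal sketch of this divergence computation, assuming the KL divergence mentioned later in the description is used and p(y) is taken as the mean of p(y|x) over the generated pictures (as in the IS method):

```python
import math

def kl_divergence(p_y_given_x, p_y, eps=1e-12):
    """KL(p(y|x) || p(y)) for one generated picture x; both arguments are
    probability vectors over the classes (original pictures plus each
    saved iteration stage). In this method, smaller is better."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(p_y_given_x, p_y))

def mean_divergence(per_picture_probs):
    """Average per-picture divergence against the marginal p(y), where
    p(y) is the mean of p(y|x) over all generated pictures."""
    n = len(per_picture_probs)
    dims = len(per_picture_probs[0])
    p_y = [sum(row[d] for row in per_picture_probs) / n for d in range(dims)]
    return sum(kl_divergence(row, p_y) for row in per_picture_probs) / n
```

If the classifier assigns every generated picture the same class distribution, p(y|x) equals p(y) and the divergence is zero.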
Further, a preferred split for step 3 is: input 90% of the mixed picture set into the classifier as the training set, and test the classification accuracy of the classifier with the remaining 10%.
Further, the last layer of the generator of the DCGAN network preferably uses the tanh activation function. The reason is that the last layer outputs an image, and image pixel values lie in a fixed range such as 0 to 255. The output of a ReLU can be arbitrarily large, whereas the tanh output lies between -1 and 1, so pixel values in 0 to 255 are obtained simply by adding 1 to the tanh output and multiplying by 127.5.
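The tanh-to-pixel mapping described here is a one-liner:

```python
import math

def tanh_to_pixel(activation):
    """Map a generator pre-activation through tanh (range -1..1) to a
    pixel value in 0..255, via (tanh(a) + 1) * 127.5 as described."""
    return (math.tanh(activation) + 1.0) * 127.5
```

An activation of 0 maps to the mid-gray value 127.5, and large positive or negative activations saturate toward 255 and 0 respectively.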
The beneficial effects of the invention are: the invention combines the idea of iteration with the idea behind the IS method, and can reflect, on another level, how features are lost as a GAN network is iterated generation after generation, thereby indirectly judging the quality of a DCGAN from another angle. In addition, the method can identify the overfitting problem well through iteration.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
Existing evaluation means analyze a network model too crudely and analyze its results too weakly. Since a generative adversarial network is in use and ever better results can be obtained from it, those results themselves can serve as the starting point for judging the quality of the adversarial network. When evaluating a DCGAN, note that DCGAN generation is based on convolutional feature extraction; the principle is similar to heredity, so a coefficient relating the original data to the generated data can serve precisely as a judgment criterion. Both the IS and FID values are based on the Inception method, which cannot separate image quality from image diversity; in other words, those two methods only tell whether the quality of the final pictures is good or bad, not whether the cause lies with the original data. When analyzing the factors that influence a DCGAN network, one may tune parameters at length and still not find the cause, only to discover in the end that the original training data were poor. Our approach avoids this problem well: it solves it with a new method based on image-classification accuracy, improves on the IS method, and demonstrates the significant difference between real and generated images. It can also distinguish the overfitting problem of DCGAN training well. The inspiration for the invention comes from the fact that one can identify a son from the appearance of the couple, and likewise identify the parents from the appearance of the son, because the pixel-level features of the pictures are reflected in this respect.
The scheme of the invention is explained in detail below with reference to the accompanying drawing; the specific scheme is as follows:
(1) The method mainly involves selecting a suitable generator, discriminator and data set for the DCGAN, and finding the best classification effect through an optimized loss function, training method, and parameter tuning of the classifier. The DCGAN is implemented on PyTorch.
(2) The structure of the generative model in the DCGAN is as follows. The generator first generates 100-dimensional noise, which can be regarded as a 100×1×1 picture; since the training data set consists of 3×96×96 pictures, the resolution of the generator's final output should also be 3×96×96. After five convolution layers, pictures with resolutions 1024×4×4 → 256×8×8 → 64×16×16 → 64×32×32 → 3×96×96 are output in turn. The first four layers are normalized by BatchNorm2d(ndf) within each mini-batch, computing the mean and standard deviation over the input dimensions; gamma and beta are learnable parameter vectors of size C (C being the input size). During training this layer computes the mean and variance of each input and keeps a moving average, with a default momentum of 0.1. Normalizing the inputs stabilizes the learning process, improves training efficiency, and reduces the influence of poor initialization. The first four layers use the ReLU activation, and the output layer uses the tanh function. The reason for tanh is that the last layer outputs an image, whose pixel values lie in a fixed range such as 0 to 255; the output of a ReLU can be arbitrarily large, whereas the tanh output lies between -1 and 1, so pixel values in 0 to 255 are obtained by adding 1 to the tanh output and multiplying by 127.5.
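The spatial progression above can be checked with the transposed-convolution output-size formula. The (kernel, stride, padding) values below are assumptions chosen to reproduce the stated resolutions, since the patent does not list the layer hyperparameters:

```python
def deconv_out(size, kernel, stride, padding):
    """Spatial output size of a transposed convolution (output_padding=0)."""
    return (size - 1) * stride - 2 * padding + kernel

# Assumed (kernel, stride, padding) per layer -- illustrative choices that
# reproduce the 1 -> 4 -> 8 -> 16 -> 32 -> 96 progression in the text.
layers = [(4, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1), (5, 3, 1)]

size = 1  # the 100-dim noise viewed as a 100 x 1 x 1 "picture"
sizes = [size]
for k, s, p in layers:
    size = deconv_out(size, k, s, p)
    sizes.append(size)
print(sizes)  # -> [1, 4, 8, 16, 32, 96]
```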
(3) The network structure of the discriminator is very similar to that of the generator; the discriminator is basically the symmetric process. The network has five layers; the first four reduce the size through Conv2d two-dimensional convolutions, and its scale progression is the reverse of the generator's, taking pictures from a resolution of 3×96×96 down step by step. Normalization is then applied; the first four layers are all activated by the LeakyReLU activation function, while the last output layer has no activation function and is instead normalized by a Sigmoid() to a number between 0 and 1, which represents the probability that the picture is real.
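The discriminator's reverse progression can be checked the same way with the ordinary convolution output-size formula; again, the layer hyperparameters are illustrative assumptions that mirror the generator sizes:

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a Conv2d layer (dilation = 1)."""
    return (size + 2 * padding - kernel) // stride + 1

# Assumed layer hyperparameters; the patent only says the discriminator
# reverses the generator's progression, ending in a single Sigmoid output.
layers = [(5, 3, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 1, 0)]

size = 96
sizes = [size]
for k, s, p in layers:
    size = conv_out(size, k, s, p)
    sizes.append(size)
print(sizes)  # -> [96, 32, 16, 8, 4, 1]
```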
(4) The method can use a cartoon avatar data set as experimental data, applying a deep-learning convolutional neural network (CNN) to extract features of the cartoon avatars, and then predicting and classifying the CNN-extracted feature samples with machine-learning algorithms. The main machine-learning work is a parameter-tuning experiment on an SVM classifier, comparing and optimizing 20 combinations of the three parameters kernel, C and gamma. Other machine-learning classification methods serve as comparison experiments, including: K-nearest-neighbor classification (KNN), Gaussian naive Bayes classification (GNB), extremely randomized trees classification (ET), random forest classification (RF), multi-layer perceptron classification (MLP), linear discriminant analysis classification (LDA), a self-trained incremental net, and so on. The experiment comprehensively compares the efficiency and accuracy of each classifier using evaluation criteria such as t-SNE feature maps, the confusion matrix, precision, recall, and the F1 value. The final classification accuracies are averaged to obtain a value p(w).
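The precision/recall/F1 criteria used to compare the classifiers can be computed directly from a confusion matrix; this is a generic sketch of those metrics, not code from the patent:

```python
def per_class_metrics(confusion):
    """Precision, recall and F1 per class from a square confusion matrix
    (rows = true class, columns = predicted class), as used here to
    compare the SVM, KNN, GNB, ET, RF, MLP and LDA classifiers."""
    n = len(confusion)
    metrics = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # predicted c, wrong
        fn = sum(confusion[c]) - tp                       # true c, missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics.append((precision, recall, f1))
    return metrics
```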
(5) As shown in fig. 1, the trained DCGAN network is debugged: 30,000 original cartoon pictures are used to train it and generate 30,000 pictures (the number can be adjusted to the user's needs); the 30,000 generated pictures are then fed back into the DCGAN network as training data to generate another 30,000 pictures, and the iteration continues in this way. Every 5 iterations (adjustable to the user's needs) the pictures are saved and output once; after 25 iterations there are 5 outputs of 30,000 pictures each. From each output, 10,000 good-quality pictures are picked out manually: the saved pictures are sorted from high to low quality and the top-ranked 10,000 are taken.
(6) The pictures of each output are then labeled, and 10,000 pictures are also taken from the original data and labeled, giving six classes in total. From the mixed 60,000 pictures of these six classes, 90% are taken proportionally as a training set and input into the classifier, and the remaining 10% are used as test data to measure the classifier's classification accuracy p(z); when the accuracy meets the requirement, proceed to step (7).
(7) The mixed 60,000 pictures are then input into the DCGAN network again to generate 6,000 pictures, denoted x. These 6,000 pictures are put into the classifier for classification, yielding a 6-dimensional vector y and its probability: the value of each dimension of y corresponds to the probability p(y|x) that the picture belongs to the original pictures or to the pictures generated at the 5th, 10th, 15th, 20th or 25th iteration. Then, as in IS, the divergence between p(y) and p(y|x) is computed; here, however, smaller divergence is better, not larger. In fact the final result does not have to use KL divergence: the two values p(y) and p(y|x) by themselves already expose the problems, and they are even somewhat mutually exclusive, i.e. when one is good the other is necessarily bad.
(8) The method can also detect the overfitting state well: once overfitting occurs, no matter how many iterations are run, the value for the original data in p(y) always stays very close to 1, so the overfitting problem is well reflected.

Claims (4)

1. A quality evaluation method for images generated by a DCGAN network, characterized by comprising the following steps:
Step 1: repeatedly iterate, using the pictures generated by the DCGAN network as input to the same DCGAN network, until the number of iterations reaches a threshold M (M > 1), the initial input to the DCGAN network being original pictures prepared by a user; during the iteration, output and save the pictures once every N iterations, where M is an integral multiple of N; after the iteration finishes, select a portion of higher-quality pictures from each saved batch by sorting the saved pictures from high to low quality and taking a top-ranked portion as the pictures for subsequent mixing;
Step 2: attach a different label to each group of pictures selected in step 1, likewise take out a portion of the original pictures and label them, and then mix the labeled pictures together in equal proportion to obtain a mixed picture set;
Step 3: input one part of the mixed picture set into a classifier as a training set to train it, test the classification accuracy of the classifier with the remaining part of the mixed picture set, and proceed to step 4 once the classification accuracy meets the requirement;
Step 4: input the mixed picture set into the DCGAN network so that it generates a certain number of pictures, denoted x; then put x into the classifier obtained in step 3 for classification, obtaining a multidimensional vector y and its probability p(y), where the value of each dimension of y corresponds to the probability p(y|x) that x belongs to each class of pictures; the quality evaluation result for the images generated by the DCGAN network is obtained from the probabilities p(y|x) and p(y).
2. The quality evaluation method for images generated by a DCGAN network according to claim 1, characterized in that the quality evaluation result is obtained by computing the divergence between p(y) and p(y|x).
3. The method of claim 1, wherein step 3 inputs 90% of the mixed picture set into the classifier as a training set to train it, and then tests the classifier's classification accuracy with the remaining 10%.
4. The quality evaluation method for images generated by a DCGAN network according to claim 1, characterized in that the last layer of the generator of the DCGAN network uses a tanh activation function.
CN201911200153.XA 2019-11-29 2019-11-29 Quality evaluation method for DCGAN network generated image Active CN110992334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911200153.XA CN110992334B (en) 2019-11-29 2019-11-29 Quality evaluation method for DCGAN network generated image


Publications (2)

Publication Number Publication Date
CN110992334A (en) 2020-04-10
CN110992334B (en) 2023-04-07

Family

ID=70088353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911200153.XA Active CN110992334B (en) 2019-11-29 2019-11-29 Quality evaluation method for DCGAN network generated image

Country Status (1)

Country Link
CN (1) CN110992334B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111969B (en) * 2021-05-03 2022-05-06 齐齐哈尔大学 Hyperspectral image classification method based on mixed measurement

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011039831A (en) * 2009-08-12 2011-02-24 Kddi Corp Re-learning method for support vector machine
JP2014203134A (en) * 2013-04-01 2014-10-27 キヤノン株式会社 Image processor and method thereof
CN106503672A (en) * 2016-11-03 2017-03-15 河北工业大学 A kind of recognition methods of the elderly's abnormal behaviour
CN107392312A (en) * 2017-06-01 2017-11-24 华南理工大学 A kind of dynamic adjustment algorithm based on DCGAN performances
CN108230339A (en) * 2018-01-31 2018-06-29 浙江大学 A kind of gastric cancer pathological section based on pseudo label iteration mark marks complementing method
CN108399406A (en) * 2018-01-15 2018-08-14 中山大学 The method and system of Weakly supervised conspicuousness object detection based on deep learning
CN108665005A (en) * 2018-05-16 2018-10-16 南京信息工程大学 A method of it is improved based on CNN image recognition performances using DCGAN
CN109063723A (en) * 2018-06-11 2018-12-21 清华大学 The Weakly supervised image, semantic dividing method of object common trait is excavated based on iteration
CN109389138A (en) * 2017-08-09 2019-02-26 武汉安天信息技术有限责任公司 A kind of user's portrait method and device
CN109445895A (en) * 2018-10-26 2019-03-08 深圳易嘉恩科技有限公司 The method and device of the non-distorted load large scale picture of Android platform
CN109614921A (en) * 2018-12-07 2019-04-12 安徽大学 A kind of cell segmentation method for the semi-supervised learning generating network based on confrontation
CN110288013A (en) * 2019-06-20 2019-09-27 杭州电子科技大学 A kind of defective labels recognition methods based on block segmentation and the multiple twin convolutional neural networks of input


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Co-Labeling for Multi-view Weakly Labeled; Xu, Xinxing; 《IEEE Transactions on Pattern Analysis and Machine Intelligence》; 20160601; pp. 1113-1125 *
Multi-Phase Offline Signature Verification System Using Deep Convolutional; Zhang, Zehua; 《Proceedings of 2016 9th International Symposium on Computational Intelligence and Design (ISCID), Vol 2》; 20170616; pp. 103-107 *
Iterative label propagation recognition algorithm for low-resolution face images; Xue, Shan; 《Pattern Recognition and Artificial Intelligence》; 20180731; Vol. 31, No. 7, pp. 602-611 *
X-ray image classification algorithm based on semi-supervised generative adversarial networks; Liu, Kun; 《Acta Optica Sinica》; 20190831; Vol. 39, No. 8, pp. 1-9 *
Thick-cloud removal for aerial images based on deep convolutional generative adversarial networks; Li, Congli; 《Acta Armamentarii》; 20190731; Vol. 40, No. 7, pp. 1434-1442 *
Implementation of a label-assignment correction algorithm for image samples based on deep learning; Shu, Zhong; 《Digital Printing》; 20191010; pp. 38-46 *

Also Published As

Publication number Publication date
CN110992334A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN109063724B (en) Enhanced generation type countermeasure network and target sample identification method
Bendale et al. Towards open set deep networks
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
Moallem et al. Optimal threshold computing in automatic image thresholding using adaptive particle swarm optimization
CN111832650B (en) Image classification method based on generation of antagonism network local aggregation coding semi-supervision
CN109993057A (en) Method for recognizing semantics, device, equipment and computer readable storage medium
CN109919055B (en) Dynamic human face emotion recognition method based on AdaBoost-KNN
CN111311702B (en) Image generation and identification module and method based on BlockGAN
CN110110845B (en) Learning method based on parallel multi-level width neural network
CN107358172B (en) Human face feature point initialization method based on human face orientation classification
CN112699717A (en) SAR image generation method and generation device based on GAN network
CN108805061A (en) Hyperspectral image classification method based on local auto-adaptive discriminant analysis
CN110992334B (en) Quality evaluation method for DCGAN network generated image
CN110837818A (en) Chinese white sea rag dorsal fin identification method based on convolutional neural network
CN107766792A (en) A kind of remote sensing images ship seakeeping method
CN117197591B (en) Data classification method based on machine learning
CN113221758B (en) GRU-NIN model-based underwater sound target identification method
CN114037001A (en) Mechanical pump small sample fault diagnosis method based on WGAN-GP-C and metric learning
CN112818774A (en) Living body detection method and device
CN109376619A (en) A kind of cell detection method
Storrs et al. Unsupervised learning predicts human perception and misperception of specular surface reflectance
Dai et al. Foliar disease classification
CN112733963B (en) General image target classification method and system
CN106803080B (en) Complementary pedestrian detection method based on shape Boltzmann machine

Legal Events

Date Code Title Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
TA01 — Transfer of patent application right; effective date of registration: 20221104
  Address after: Floor 29, Building 1, No. 199, Tianfu 4th Street, Chengdu Hi-tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu 610000, Sichuan; Applicant after: Homwee Technology Co., Ltd.
  Address before: 518057, Unit 01, 23rd Floor, Changhong Science and Technology Building, Keji South 12th Road, High-tech Zone, Yuehai Street, Nanshan District, Shenzhen, Guangdong; Applicant before: SHENZHEN YIJIAEN TECHNOLOGY Co., Ltd.
GR01 — Patent grant