CN111243045A - Image generation method based on Gaussian mixture model prior variation self-encoder - Google Patents

Image generation method based on Gaussian mixture model prior variation self-encoder Download PDF

Info

Publication number
CN111243045A
CN111243045A CN202010024870.8A CN202010024870A CN111243045A CN 111243045 A CN111243045 A CN 111243045A CN 202010024870 A CN202010024870 A CN 202010024870A CN 111243045 A CN111243045 A CN 111243045A
Authority
CN
China
Prior art keywords
encoder
gaussian mixture
variational self
mixture model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010024870.8A
Other languages
Chinese (zh)
Other versions
CN111243045B (en
Inventor
郭春生
周家洛
应娜
陈华华
杨萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010024870.8A priority Critical patent/CN111243045B/en
Publication of CN111243045A publication Critical patent/CN111243045A/en
Application granted granted Critical
Publication of CN111243045B publication Critical patent/CN111243045B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/002Image coding using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses an image generation method based on a Gaussian mixture model prior variational self-encoder, which comprises the following steps: s11, presetting a generated image training data set; wherein the training data set consists of a plurality of batches of training data; s12, building a prior variational self-encoder network of a Gaussian mixture model; s13, uploading a plurality of preset batches of training data to a variational self-encoder network, and determining posterior distribution and prior distribution of the variational self-encoder network; s14, determining the relation between Gaussian components in the Gaussian mixture model to obtain a mapping function; s15, obtaining a reconstruction loss function and a KL divergence function by using the variational self-encoder network and the obtained mapping function, calculating the loss functions of posterior distribution and prior distribution of the variational self-encoder network, and updating parameters of the variational self-encoder network to generate an image; and S16, when the image is generated, uploading the pseudo input serving as an input image to a variational self-coder network to obtain a finally generated image.

Description

Image generation method based on Gaussian mixture model prior variation self-encoder
Technical Field
The invention relates to the technical field of deep learning, in particular to an image generation method based on a Gaussian mixture model prior variational self-encoder.
Background
In the internet era, machine learning develops rapidly, and great achievements are achieved, wherein an image generation technology is taken as a branch of machine learning and plays an important role in understanding images. The image generation model is a probability model used for carrying out probability modeling on the image, and the deep neural network can be regarded as a very complex nonlinear function with very strong fitting capability and can be used for building the generation model to estimate parameters of a probability density function. The image generation model can be used for generating more different picture samples, recovering image information, converting pictures of different modalities or between the pictures and characters, voice and the like, and predicting the future, for example, predicting the future frame according to the past frame and the current frame in the video.
The variational self-encoder is a well-known image generation model based on deep learning, is a natural development of variational inference, combines the advantages of ELBO and neural network, solves the inference problem in a general scene, and simultaneously solves the generation problem of continuous data. It has many advantages including fast training, stability, etc., and thus has wide application in both theoretical models and industry. However, standard variational auto-coders generate blurred pictures a priori due to the under-fitting problem.
Disclosure of Invention
The invention aims to provide an image generation method based on a Gaussian mixture model prior variational self-encoder aiming at the defects of the prior art, which can be used for modeling a complex image and generating a high-quality picture, thereby greatly improving the generation capability of the model.
In order to achieve the purpose, the invention adopts the following technical scheme:
an image generation method based on a Gaussian mixture model prior variation self-encoder comprises the following steps:
s1, presetting a generated image training data set; wherein the training data set consists of several batches of training data;
s2, building a variational self-encoder network based on Gaussian mixture model prior;
s3, uploading the preset training data of a plurality of batches to a constructed variational self-encoder network, and determining posterior distribution and prior distribution of the variational self-encoder network;
s4, determining the relation between Gaussian components in the Gaussian mixture model to obtain a mapping function;
s5, obtaining a reconstruction loss function and a KL divergence function by using the variational self-encoder network and the obtained mapping function, calculating the loss functions of posterior distribution and prior distribution of the variational self-encoder network according to the obtained reconstruction loss function and the KL divergence function, and updating the parameters of the variational self-encoder network to generate an image;
and S6, when the image is generated, uploading a pseudo input serving as an input image to the variational self-encoder network to obtain a finally generated image.
Further, the step S2 further includes constructing a posterior distribution of hidden variables in the variational autoencoder network.
Further, the parameters in the variational self-encoder network constructed in step S2 include a network input image size C × H × W, a batch size B, a hidden variable dimension D, a hidden variable z, a gaussian mixture number M, a pseudo input α, and a pseudo input number K.
Further, in step S3, the preset batches of training data are uploaded to a constructed variational self-encoder network, where the uploaded training data includes image sample X ═ { X ═ X1,x2,…,xNIn which xiFor the ith sample in the current batch, i is 1, 2, … B, and the pseudo input α is α12,...,αKTherein αjRepresenting the jth dummy input, j ═ 1, 2, … K.
Further, the step S3 determines the posterior distribution of the hidden variables and the prior distribution of the hidden variables in the form of the aggregate posterior;
the posterior distribution of the hidden variables is as follows:
Figure BDA0002362095050000021
the hidden variable prior is:
Figure BDA0002362095050000022
wherein the content of the first and second substances,
Figure BDA0002362095050000023
m denotes the number of Gaussian mixtures, K denotes the number of pseudo inputs, αkA pseudo-input is represented by a number of input,
Figure BDA0002362095050000031
πmrepresenting the coefficients of a gaussian mixture model.
Further, the relationship between the gaussian components in the gaussian mixture model determined in step S4 is determined by a greedy algorithm.
Further, the step S4 is specifically to sequentially construct a mapping function according to the following functions:
Figure BDA0002362095050000032
where a ═ β (t) | t ═ 1., m-1}, β (·) denotes a mapping function.
Further, the reconstruction loss function obtained in step S5 is:
Figure BDA0002362095050000033
wherein n represents the dimension of the input picture; x is the number ofiA value representing the ith dimension of the input sample picture;
Figure BDA0002362095050000038
a value representing the ith dimension of the output picture; l isRERepresenting the reconstruction loss for each sample.
Further, the KL divergence function obtained in step S5 is:
Figure BDA0002362095050000034
wherein L isKLThe KL distance for each sample is indicated.
Further, the calculated loss function is:
Figure BDA0002362095050000035
wherein the content of the first and second substances,
Figure BDA0002362095050000036
representing the reconstruction error of the ith sample;
Figure BDA0002362095050000037
indicating the KL divergence for the ith sample.
Compared with the prior art, the method has the advantages that the variational self-coder network based on the optimized Gaussian mixture model prior is built, the training efficiency is high, the convergence is strong, the network can be used for modeling a complex image to generate a high-quality picture, and the generation capability of the model is greatly improved.
Drawings
FIG. 1 is a flowchart of an image generation method based on a Gaussian mixture model prior variation auto-encoder according to an embodiment;
fig. 2 is a schematic diagram of a variational auto-encoder network based on an optimized gaussian mixture model prior according to an embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The invention aims to provide an image generation method based on a Gaussian mixture model prior variation self-encoder aiming at the defects of the prior art.
Example one
The embodiment provides an image generation method based on a gaussian mixture model prior variation self-encoder, as shown in fig. 1-2, including the steps of:
s11, presetting a generated image training data set; wherein the training data set consists of several batches of training data;
s12, building a variational self-encoder network based on Gaussian mixture model prior;
s13, uploading the preset training data of a plurality of batches to a constructed variational self-encoder network, and determining posterior distribution and prior distribution of the variational self-encoder network;
s14, determining the relation between Gaussian components in the Gaussian mixture model to obtain a mapping function;
s15, obtaining a reconstruction loss function and a KL divergence function by using the variational self-encoder network and the obtained mapping function, calculating the loss functions of posterior distribution and prior distribution of the variational self-encoder network according to the obtained reconstruction loss function and the KL divergence function, and updating parameters of the variational self-encoder network to generate an image;
and S16, when the image is generated, uploading a pseudo input serving as an input image to the variational self-encoder network to obtain a finally generated image.
In step S11, generating an image training data set is preset; wherein the training data set consists of several batches of training data.
Preparing a qualified training data set of generated images, defining the training data with the size of B in each batch, and forming a training batch { x ] by B image samples1,x2,…,xN}。
In step S12, a variational self-coder network based on a gaussian mixture model prior is built.
The parameters in the constructed variational autoencoder network comprise network input image size C multiplied by H multiplied by W, batch size B, hidden variable dimension D, hidden variable z, Gaussian mixture number M, pseudo input α and pseudo input number K, wherein in the embodiment, D is 40, M is 3, and K is 500.
It should be noted that, in the present embodiment, a gaussian mixture model is established based on the variational self-encoder. Unlike the variational autocoder, the posterior distribution of the hidden variables is constructed by a gaussian mixture model rather than a single gaussian model, which is diagonal in order to simplify the computation of the covariance matrix for each component.
In step S13, the preset batches of training data are uploaded to a constructed variational self-encoder network, and a posterior distribution and a prior distribution of the variational self-encoder network are determined.
Uploading a plurality of preset batches of training data to a constructed variational self-encoder network, wherein each batch of training data sent to the variational self-encoder network comprises image samples expressed as X ═ X1,x2,…,xNIn which xiFor the ith sample in the batch, i ═ 1, 2, … B, and learnable pseudo-input { α }12,...,αKTherein αjRepresents the jth dummy input, j ═ 1, 2, … K; the posterior distribution q (z | x) of the hidden variables of the network and the prior distribution p (z) of the hidden variables in aggregated posterior form are determined.
The posterior distribution of hidden variables is:
Figure BDA0002362095050000051
hidden variable prior is:
Figure BDA0002362095050000052
wherein the content of the first and second substances,
Figure BDA0002362095050000053
m denotes the number of Gaussian mixtures, K denotes the number of pseudo inputs, αkA pseudo-input is represented by a number of input,
Figure BDA0002362095050000061
πmrepresenting the coefficients of a gaussian mixture model.
In step S14, the relationship between gaussian components in the gaussian mixture model is determined, and a mapping function is obtained.
A greedy algorithm is used to determine the correspondence between the gaussian components in the two gaussian models, resulting in mapping function β (·).
In this embodiment, according to the Gaussian weight pimAll Gaussian components in the Gaussian mixture model are referenced in a large-to-small order such that c1≥c2≥…≥cMWhen i is 1, the ratio of the total of the two,
Figure BDA0002362095050000064
according to the formula
Figure BDA0002362095050000062
A mapping function is constructed in turn, where a ═ { β (t) | t ═ 1.
Wherein, a is a ∪ { β (M) }, if M < M, M is M +1, returning to the step of constructing the mapping function in sequence, otherwise ending, and continuing to execute step S15.
In step S15, a reconstruction loss function and a KL divergence function are obtained by using the variational self-encoder network and the obtained mapping function, a loss function of posterior distribution and prior distribution of the variational self-encoder network is calculated according to the obtained reconstruction loss function and KL divergence function, and parameters of the variational self-encoder network are updated to generate an image.
In the present embodiment, forThe variational self-encoder network respectively reconstructs a loss function and a KL divergence function to the input X and the output of the variational self-encoder network
Figure BDA0002362095050000065
And calculating a loss function by the posterior distribution q (z | x) and the prior distribution p (z), and updating parameters of the variational self-encoder network by a back propagation algorithm until the network converges.
The reconstruction loss function for each sample is calculated as:
Figure BDA0002362095050000063
wherein n represents the dimension of the input picture; x is the number ofiA value representing the ith dimension of the input sample picture;
Figure BDA0002362095050000066
a value representing the ith dimension of the output picture; l isRERepresenting the reconstruction loss for each sample.
Using mapping function β (), the KL divergence function for each sample is calculated as:
Figure BDA0002362095050000071
wherein L isKLThe KL distance for each sample is indicated.
Calculating input X and output of the variational autoencoder network through the calculated reconstruction loss function of each sample and the KL divergence function of each sample
Figure BDA0002362095050000072
And the loss function of the posterior distribution q (z | x) and the prior distribution p (z) is:
Figure BDA0002362095050000073
wherein the content of the first and second substances,
Figure BDA0002362095050000074
representing the reconstruction error of the ith sample;
Figure BDA0002362095050000075
indicating the KL divergence for the ith sample.
In step S16, when an image is generated, a dummy input is uploaded as an input image to the variational self-encoder network, resulting in a finally generated picture.
When an image is generated, a pseudo input is input into the network as an input image, and a high-quality generated picture can be output.
In the present embodiment, the terms are explained as follows:
the gaussian mixture model is a model that accurately quantifies objects by using a gaussian probability density function (normal distribution curve), and is formed by decomposing objects into a plurality of objects based on the gaussian probability density function (normal distribution curve).
Greedy algorithm (also called greedy algorithm) means that when solving a problem, always the choice that seems best at the present time is made. That is, rather than being considered globally optimal, he makes a locally optimal solution in some sense. A greedy algorithm is an improved hierarchical approach. The core of the method is to select a measurement standard according to the theme. The multiple inputs are then arranged in the order required by the metrology standards, in which order the quantities are input one at a time. If the addition of this input to the part of the best solution that is currently already formed in this quantitative sense does not result in a feasible solution, then this input is not added to this part of the solution. This hierarchical approach to obtaining an optimal solution in some metric sense is known as a greedy algorithm.
Compared with the prior art, the method has the advantages that the variational self-encoder network based on the optimization Gaussian mixture model prior is built, the training efficiency is high, the convergence is strong, the network can be used for modeling a complex image to generate a high-quality picture, and the generation capability of the model is greatly improved.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image generation method based on a Gaussian mixture model prior variation self-encoder is characterized by comprising the following steps:
s1, presetting a generated image training data set; wherein the training data set consists of several batches of training data;
s2, building a variational self-encoder network based on Gaussian mixture model prior;
s3, uploading the preset training data of a plurality of batches to a constructed variational self-encoder network, and determining posterior distribution and prior distribution of the variational self-encoder network;
s4, determining the relation between Gaussian components in the Gaussian mixture model to obtain a mapping function;
s5, obtaining a reconstruction loss function and a KL divergence function by using the variational self-encoder network and the obtained mapping function, calculating the loss functions of posterior distribution and prior distribution of the variational self-encoder network according to the obtained reconstruction loss function and the KL divergence function, and updating the parameters of the variational self-encoder network to generate an image;
and S6, when the image is generated, uploading a pseudo input serving as an input image to the variational self-encoder network to obtain a finally generated image.
2. The method of claim 1, wherein the step S2 further includes constructing a posterior distribution of hidden variables in the variational autoencoder network.
3. The image generation method based on the Gaussian mixture model prior variational self-encoder as claimed in claim 2, wherein the parameters in the variational self-encoder network constructed in the step S2 include network input image size C x H x W, batch size B, hidden variable dimension D, hidden variable z, Gaussian mixture number M, pseudo input α, and pseudo input number K.
4. The image generation method based on the Gaussian mixture model prior variational self-encoder as claimed in claim 3, wherein in step S3, the pre-set batches of training data are uploaded to the constructed variational self-encoder network, wherein the uploaded training data comprise image samples X ═ { X ═ X1,x2,…,xNIn which xiFor the ith sample in the current batch, i is 1, 2, … B, and the pseudo input α is α12,...,αKTherein αjRepresenting the jth dummy input, j ═ 1, 2, … K.
5. The image generation method based on the Gaussian mixture model prior variation auto-encoder as claimed in claim 4, wherein the step S3 determines the posterior distribution of the hidden variables and the prior distribution of the hidden variables in the form of the aggregation posterior;
the posterior distribution of the hidden variables is as follows:
Figure FDA0002362095040000021
the hidden variable prior is:
Figure FDA0002362095040000022
wherein the content of the first and second substances,
Figure FDA0002362095040000023
m represents the number of Gaussian mixturesQuantity, K denotes the number of false inputs, αkA pseudo-input is represented by a number of input,
Figure FDA0002362095040000024
πmrepresenting the coefficients of a gaussian mixture model.
6. The image generation method based on the Gaussian mixture model prior variational self-encoder as claimed in claim 5, wherein the relation between Gaussian components in the Gaussian mixture model determined in step S4 is determined by a greedy algorithm.
7. The image generation method based on the gaussian mixture model prior variational self-encoder as claimed in claim 6, wherein said step S4 is specifically to construct the mapping function according to the following functions in turn:
Figure FDA0002362095040000025
where a ═ β (t) | t ═ 1., m-1}, β (·) denotes a mapping function.
8. The image generation method based on the gaussian mixture model prior variation self-encoder according to claim 7, wherein the reconstruction loss function obtained in step S5 is:
Figure FDA0002362095040000026
wherein n represents the dimension of the input picture; x is the number ofiA value representing the ith dimension of the input sample picture;
Figure FDA0002362095040000027
a value representing the ith dimension of the output picture; l isRERepresenting the reconstruction loss for each sample.
9. The image generation method based on the gaussian mixture model prior variation auto-encoder according to claim 8, wherein the KL divergence function obtained in step S5 is:
Figure FDA0002362095040000031
wherein L isKLThe KL distance for each sample is indicated.
10. The image generation method of claim 9, wherein the calculated loss function is:
Figure FDA0002362095040000032
wherein the content of the first and second substances,
Figure FDA0002362095040000033
representing the reconstruction error of the ith sample;
Figure FDA0002362095040000034
indicating the KL divergence for the ith sample.
CN202010024870.8A 2020-01-10 2020-01-10 Image generation method based on Gaussian mixture model prior variation self-encoder Active CN111243045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010024870.8A CN111243045B (en) 2020-01-10 2020-01-10 Image generation method based on Gaussian mixture model prior variation self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010024870.8A CN111243045B (en) 2020-01-10 2020-01-10 Image generation method based on Gaussian mixture model prior variation self-encoder

Publications (2)

Publication Number Publication Date
CN111243045A true CN111243045A (en) 2020-06-05
CN111243045B CN111243045B (en) 2023-04-07

Family

ID=70874471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010024870.8A Active CN111243045B (en) 2020-01-10 2020-01-10 Image generation method based on Gaussian mixture model prior variation self-encoder

Country Status (1)

Country Link
CN (1) CN111243045B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860660A (en) * 2020-07-24 2020-10-30 辽宁工程技术大学 Small sample learning garbage classification method based on improved Gaussian network
CN113255830A (en) * 2021-06-21 2021-08-13 上海交通大学 Unsupervised target detection method and system based on variational self-encoder and Gaussian mixture model
CN113822437A (en) * 2020-06-18 2021-12-21 辉达公司 Deep layered variational automatic encoder
CN114501034A (en) * 2021-12-11 2022-05-13 同济大学 Image compression method and medium based on discrete Gaussian mixture super-prior and Mask
CN114638905A (en) * 2022-01-30 2022-06-17 中国科学院自动化研究所 Image generation method, device, equipment, storage medium and computer program product
CN115131347A (en) * 2022-08-29 2022-09-30 江苏茂融智能科技有限公司 Intelligent control method for processing zinc alloy parts
CN115797216A (en) * 2022-12-14 2023-03-14 齐鲁工业大学 Inscription character restoration model and restoration method based on self-coding network
CN116958712A (en) * 2023-09-20 2023-10-27 山东建筑大学 Image generation method, system, medium and device based on prior probability distribution
CN117036862A (en) * 2023-08-21 2023-11-10 武汉纺织大学 Image generation method based on Gaussian mixture variation self-encoder

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013061732A (en) * 2011-09-12 2013-04-04 Fuji Xerox Co Ltd Image identification information provision program and image identification information provision device
CN106952240A (en) * 2017-03-29 2017-07-14 成都信息工程大学 A kind of image goes motion blur method
CN108171324A (en) * 2017-12-26 2018-06-15 天津科技大学 A kind of variation own coding mixed model
CN108491925A (en) * 2018-01-25 2018-09-04 杭州电子科技大学 The extensive method of deep learning feature based on latent variable model
CN108875818A (en) * 2018-06-06 2018-11-23 西安交通大学 Based on variation from code machine and confrontation network integration zero sample image classification method
CN109447098A (en) * 2018-08-27 2019-03-08 西北大学 A kind of image clustering algorithm based on deep semantic insertion
CN110309853A (en) * 2019-05-20 2019-10-08 湖南大学 Medical image clustering method based on variation self-encoding encoder

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013061732A (en) * 2011-09-12 2013-04-04 Fuji Xerox Co Ltd Image identification information provision program and image identification information provision device
CN106952240A (en) * 2017-03-29 2017-07-14 成都信息工程大学 A kind of image goes motion blur method
CN108171324A (en) * 2017-12-26 2018-06-15 天津科技大学 A kind of variation own coding mixed model
CN108491925A (en) * 2018-01-25 2018-09-04 杭州电子科技大学 The extensive method of deep learning feature based on latent variable model
CN108875818A (en) * 2018-06-06 2018-11-23 西安交通大学 Based on variation from code machine and confrontation network integration zero sample image classification method
CN109447098A (en) * 2018-08-27 2019-03-08 西北大学 A kind of image clustering algorithm based on deep semantic insertion
CN110309853A (en) * 2019-05-20 2019-10-08 湖南大学 Medical image clustering method based on variation self-encoding encoder

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张芳;郭春生: "基于非局部贝叶斯的泊松图像去噪算法" *
鲍宗袍;陈华华;: "基于稀疏先验和边缘约束的图像盲去模糊算法" *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822437A (en) * 2020-06-18 2021-12-21 辉达公司 Deep layered variational automatic encoder
CN111860660A (en) * 2020-07-24 2020-10-30 辽宁工程技术大学 Small sample learning garbage classification method based on improved Gaussian network
CN113255830A (en) * 2021-06-21 2021-08-13 上海交通大学 Unsupervised target detection method and system based on variational self-encoder and Gaussian mixture model
CN114501034A (en) * 2021-12-11 2022-05-13 同济大学 Image compression method and medium based on discrete Gaussian mixture super-prior and Mask
CN114501034B (en) * 2021-12-11 2023-08-04 同济大学 Image compression method and medium based on discrete Gaussian mixture super prior and Mask
CN114638905B (en) * 2022-01-30 2023-02-21 中国科学院自动化研究所 Image generation method, device, equipment and storage medium
CN114638905A (en) * 2022-01-30 2022-06-17 中国科学院自动化研究所 Image generation method, device, equipment, storage medium and computer program product
CN115131347A (en) * 2022-08-29 2022-09-30 江苏茂融智能科技有限公司 Intelligent control method for processing zinc alloy parts
CN115797216A (en) * 2022-12-14 2023-03-14 齐鲁工业大学 Inscription character restoration model and restoration method based on self-coding network
CN117036862A (en) * 2023-08-21 2023-11-10 武汉纺织大学 Image generation method based on Gaussian mixture variation self-encoder
CN117036862B (en) * 2023-08-21 2024-03-22 武汉纺织大学 Image generation method based on Gaussian mixture variation self-encoder
CN116958712A (en) * 2023-09-20 2023-10-27 山东建筑大学 Image generation method, system, medium and device based on prior probability distribution
CN116958712B (en) * 2023-09-20 2023-12-15 山东建筑大学 Image generation method, system, medium and device based on prior probability distribution

Also Published As

Publication number Publication date
CN111243045B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111243045B (en) Image generation method based on Gaussian mixture model prior variation self-encoder
CN111242282B (en) Deep learning model training acceleration method based on end edge cloud cooperation
CN110062934A (en) The structure and movement in image are determined using neural network
CN113065974A (en) Link prediction method based on dynamic network representation learning
CN111986105A (en) Video time sequence consistency enhancing method based on time domain denoising mask
CN112884236B (en) Short-term load prediction method and system based on VDM decomposition and LSTM improvement
CN116523079A (en) Reinforced learning-based federal learning optimization method and system
CN112308961A (en) Robot rapid robust three-dimensional reconstruction method based on layered Gaussian mixture model
CN113947133A (en) Task importance perception element learning method for small sample image recognition
CN110570034A (en) Bus load prediction method based on multi-XGboost model fusion
CN113743474A (en) Digital picture classification method and system based on cooperative semi-supervised convolutional neural network
CN109540089B (en) Bridge deck elevation fitting method based on Bayes-Kriging model
CN114723037A (en) Heterogeneous graph neural network computing method for aggregating high-order neighbor nodes
CN113194493B (en) Wireless network data missing attribute recovery method and device based on graph neural network
CN114880527B (en) Multi-modal knowledge graph representation method based on multi-prediction task
CN116258923A (en) Image recognition model training method, device, computer equipment and storage medium
CN113537613B (en) Temporal network prediction method for die body perception
CN115941871A (en) Video frame insertion method and device, computer equipment and storage medium
CN115759291A (en) Space nonlinear regression method and system based on ensemble learning
CN115908600A (en) Massive image reconstruction method based on prior regularization
CN115358485A (en) Traffic flow prediction method based on graph self-attention mechanism and Hox process
CN114595890A (en) Ship spare part demand prediction method and system based on BP-SVR combined model
CN113065321B (en) User behavior prediction method and system based on LSTM model and hypergraph
Zhao et al. Combining influence and sensitivity to factorize matrix for multi-context recommendation
Davis et al. Residual Multi-Fidelity Neural Network Computing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant