CN114399119A - MMP prediction method and device based on a conditional convolutional generative adversarial network

MMP prediction method and device based on a conditional convolutional generative adversarial network

Info

Publication number
CN114399119A
Authority
CN
China
Prior art keywords
convolution
mmp
data
generator
training
Prior art date
Legal status
Pending
Application number
CN202210055932.0A
Other languages
Chinese (zh)
Inventor
黄灿
田冷
黄文奎
王恒力
周毓韬
吴涛
张翔宇
Current Assignee
China University of Petroleum Beijing
Original Assignee
China University of Petroleum Beijing
Priority date
Filing date
Publication date
Application filed by China University of Petroleum Beijing filed Critical China University of Petroleum Beijing
Priority to CN202210055932.0A priority Critical patent/CN114399119A/en
Publication of CN114399119A publication Critical patent/CN114399119A/en

Classifications

    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06Q 50/02: Agriculture; Fishing; Forestry; Mining


Abstract

The embodiment of the invention discloses an MMP (minimum miscibility pressure) prediction method and device based on a conditional convolutional generative adversarial network. The method comprises: acquiring MMP influencing factor data of a target oil reservoir; and inputting the MMP influencing factor data into a pre-trained convolution generator to obtain an MMP predicted value of the target oil reservoir output by the pre-trained convolution generator. The convolution generator is built from a convolutional neural network and contains no random noise input; the pre-trained convolution generator is obtained by performing multiple rounds of iterative training on the convolution generator using a training sample set, and each training sample in the set comprises the MMP value of a reservoir and the MMP influencing factor data of that reservoir. The method has the beneficial effect of accurately and efficiently predicting the MMP of an oil reservoir.

Description

MMP prediction method and device based on a conditional convolutional generative adversarial network
Technical Field
The invention relates to the technical field of oil reservoir development, and in particular to an MMP prediction method and device based on a conditional convolutional generative adversarial network.
Background
CO2 miscible flooding is the most widely applied and highest-recovery CO2-EOR (enhanced oil recovery) method for low-permeability reservoirs. When CO2 is injected into an oil reservoir for displacement, gas, oil and water interact in the rock formation, causing inter-phase composition transfer, phase changes and other complex phase behavior. The basic mechanism of miscible flooding is that the displacing agent (the injected CO2 gas) and the displaced agent (crude oil) form a stable miscible front under reservoir conditions; this front is single-phase, and its movement effectively pushes the crude oil forward and ultimately to the production well. Because the phases are miscible, the oil-gas interface disappears and the interfacial tension in the porous medium drops to zero, so the microscopic displacement efficiency can theoretically reach 100%.
The minimum miscibility pressure (MMP) between CO2 and reservoir crude oil is one of the key parameters in the CO2 displacement process and is the criterion distinguishing CO2 miscible flooding from immiscible flooding. Accurately determining the minimum miscibility pressure between CO2 and crude oil is of great importance for improving CO2 miscible displacement efficiency, reducing operating costs, and raising social and economic benefits.
In the prior art, MMP is usually determined by experimental measurement, which guarantees accuracy but is complex, time-consuming and costly to operate. The prior art therefore lacks a more efficient way to determine the minimum miscibility pressure (MMP) between CO2 and reservoir crude oil.
Disclosure of Invention
In order to solve at least one technical problem in the background art, the present invention provides an MMP prediction method and device based on a conditional convolutional generative adversarial network.
In order to achieve the above object, according to one aspect of the present invention, there is provided an MMP prediction method based on a conditional convolutional generative adversarial network, the method comprising:
acquiring MMP influence factor data of a target oil reservoir;
inputting the MMP influencing factor data into a pre-trained convolution generator to obtain an MMP predicted value of the target oil reservoir output by the pre-trained convolution generator, wherein the convolution generator is built from a convolutional neural network and contains no random noise input, the pre-trained convolution generator is obtained by performing multiple rounds of iterative training on the convolution generator using a training sample set, and each training sample in the training sample set comprises: the MMP value of a reservoir and the MMP influencing factor data of that reservoir.
Optionally, the MMP prediction method based on a conditional convolutional generative adversarial network further comprises:
acquiring the training sample set;
performing H1 rounds of iterative training using the training sample set to obtain the pre-trained convolution generator, wherein each round of iterative training is divided into multiple batches; for each batch, H2 training samples are first selected from the training sample set, the network weights of a convolution discriminator are then trained on the selected training samples, and finally the network weights of the convolution generator are trained, on the selected training samples, within a combined model composed of the convolution discriminator and the convolution generator; the convolution discriminator is built from a combination of a convolutional neural network and a fully connected neural network, and H1 and H2 are both positive integers.
Optionally, each training sample consists of first data and second data, the first data being the MMP influencing factor data of the oil reservoir and the second data being the MMP value of the oil reservoir;
the training of the network weights of the convolution discriminator on the selected training samples specifically comprises:
for each selected training sample, combining the MMP predicted value output by the convolution generator for the first data of the training sample with that first data to obtain combined data, setting the label of the combined data to 0, and smoothing the label of the combined data;
setting the label of each selected training sample to 1, and smoothing the label of the training sample;
and inputting the label-smoothed combined data and the label-smoothed training samples into the convolution discriminator, and training the network weights of the convolution discriminator.
Optionally, training the network weights of the convolution generator, on the selected training samples, within a combined model composed of the convolution discriminator and the convolution generator specifically comprises:
for each selected training sample, inputting the first data of the training sample into the convolution generator to obtain the MMP predicted value output by the convolution generator for that training sample;
for each selected training sample, combining the first data of the training sample with the MMP predicted value corresponding to the training sample to obtain combined data, setting the label of the combined data to 1, and smoothing the label of the combined data;
and inputting the label-smoothed combined data into the convolution discriminator to obtain the probability, output by the convolution discriminator, that the combined data is real data.
Optionally, performing H1 rounds of iterative training using the training sample set to obtain the pre-trained convolution generator comprises:
optimizing the number of iterative training rounds H1, the number of training samples per batch H2, the hyper-parameters of the convolution generator and the hyper-parameters of the convolution discriminator with a hyper-parameter optimization method to obtain an optimal parameter combination, and then performing iterative training with the optimal parameter combination to obtain the pre-trained convolution generator.
Optionally, the inputs of the convolution discriminator are MMP influencing factor data and an MMP value (the MMP value including the MMP predicted value output by the convolution generator), and the output of the convolution discriminator is the probability that the input data is real data. The network structure of the convolution discriminator specifically comprises a convolutional neural network layer, a concatenation layer and a fully connected neural network layer: the convolutional neural network layer preprocesses the MMP influencing factor data, the concatenation layer splices the preprocessed data output by the convolutional neural network layer with the MMP value, and the fully connected neural network layer processes the concatenated data output by the concatenation layer and outputs the probability that the data is real data.
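The discriminator structure just described (convolutional preprocessing of the influencing factors, a concatenation with the MMP value, then fully connected layers ending in a probability) can be sketched in NumPy as follows; all layer sizes, weights and the ten-factor input length are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator(factors, mmp, p):
    # 1) convolutional preprocessing of the influencing factors
    x = factors.reshape(-1, 1)                    # (10, 1): length 10, 1 channel
    K, _, C_out = p["kernels"].shape
    L_out = x.shape[0] - K + 1
    conv = np.zeros((L_out, C_out))
    for i in range(L_out):
        # contract the window over kernel length and input channels
        conv[i] = np.tensordot(x[i:i + K], p["kernels"], axes=([0, 1], [0, 1]))
    conv = np.maximum(conv, 0.0)                  # ReLU
    # 2) concatenation layer: splice flattened features with the MMP value
    h = np.concatenate([conv.ravel(), [mmp]])
    # 3) fully connected layers -> probability that (factors, mmp) is real data
    h = np.maximum(p["w1"] @ h + p["b1"], 0.0)
    return float(sigmoid(p["w2"] @ h + p["b2"]))

rng = np.random.default_rng(1)
p = {
    "kernels": rng.normal(0, 0.1, (3, 1, 4)),     # 4 kernels of size 3
    "w1": rng.normal(0, 0.1, (16, 8 * 4 + 1)), "b1": np.zeros(16),
    "w2": rng.normal(0, 0.1, 16), "b2": 0.0,
}
prob = discriminator(rng.random(10), 0.5, p)
print(0.0 < prob < 1.0)  # a valid probability
```

The sigmoid output keeps the result in (0, 1), matching the patent's requirement that the discriminator output a probability of the data being real.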
Optionally, the hyper-parameters of the convolution generator specifically include: the number of convolutional layers, the number of convolution kernels in each convolutional layer, the size of the convolution kernels, and the initial learning rate of the optimizer in the convolution generator;
the hyper-parameters of the convolution discriminator specifically include: the number of convolutional layers, the number of convolution kernels in each convolutional layer, the size of the convolution kernels, the number of fully connected layers, the number of neurons in each fully connected layer, the dropout rate of each fully connected layer, and the initial learning rate of the optimizer in the convolution discriminator.
In order to achieve the above object, according to another aspect of the present invention, there is provided an MMP prediction apparatus based on a conditional convolutional generative adversarial network, the apparatus comprising:
a data acquisition unit, configured to acquire MMP (minimum miscibility pressure) influencing factor data of a target oil reservoir;
a prediction unit, configured to input the MMP influencing factor data into a pre-trained convolution generator to obtain an MMP predicted value of the target oil reservoir output by the pre-trained convolution generator, where the convolution generator is built from a convolutional neural network and contains no random noise input, the pre-trained convolution generator is obtained by performing multiple rounds of iterative training on the convolution generator using a training sample set, and each training sample in the training sample set comprises: the MMP value of a reservoir and the MMP influencing factor data of that reservoir.
To achieve the above object, according to another aspect of the present invention, there is also provided a computer device including a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above MMP prediction method based on a conditional convolutional generative adversarial network when executing the computer program.
To achieve the above object, according to another aspect of the present invention, there is also provided a computer program product comprising a computer program/instructions which, when executed by a processor, implement the steps of the above MMP prediction method based on a conditional convolutional generative adversarial network.
The invention has the beneficial effects that:
the invention generates a condition type countermeasure network and CO2The method is combined with the prediction of Minimum Miscible Pressure (MMP) among crude oil in the oil reservoir, a generator of the condition generating type confrontation network is constructed based on the convolutional neural network, and then the convolutional generator of the condition generating type confrontation network is trained to serve as an MMP prediction model, so that the beneficial effect of accurately and efficiently predicting the MMP in the oil reservoir is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts. In the drawings:
FIG. 1 is a first flowchart of an MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
FIG. 2 is a second flowchart of an MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
FIG. 3 is a flowchart of the training of a convolution discriminator according to an embodiment of the present invention;
FIG. 4 is a flowchart of the training of a convolution generator according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a training sample set according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the network structure of a convolution generator according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the network structure of a convolution discriminator according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a combined model according to an embodiment of the present invention;
FIG. 9 is a graph of the dependence of MMP on temperature for the conditional fully connected generative adversarial network model;
FIG. 10 is a graph of the dependence of MMP on temperature for the conditional convolutional generative adversarial network model;
FIG. 11 is a graph of MMP versus the mole fraction of N2 in the injected CO2 for the model based on the conditional convolutional generative adversarial network;
FIG. 12 is a graph of MMP versus the mole fraction of H2S in the injected CO2 for the model based on the conditional convolutional generative adversarial network;
FIG. 13 is a first block diagram of an MMP prediction apparatus based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
FIG. 14 is a second block diagram of an MMP prediction apparatus based on a conditional convolutional generative adversarial network according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In the present invention, MMP means the minimum miscibility pressure between CO2 and crude oil.
Generative Adversarial Networks (GANs) are a deep learning model and one of the most promising methods in recent years for unsupervised learning on complex distributions. The model produces remarkably good output through the mutual game learning of (at least) two modules in its framework: the generative model (also called the generator) and the discriminative model (also called the discriminator).
The generative adversarial network is an unsupervised machine learning method, so it is generally applied to data augmentation; that is, when few training samples are available for machine learning, a generative adversarial network can be adopted to generate additional data samples for the machine to learn from. Because generative adversarial networks emerged relatively recently, only a few petroleum researchers have used them for data augmentation in recent years, and researchers disagree on the practical quality of the data generated in this way.
The original generative adversarial network takes a random vector as input and outputs a generated object, but one cannot control what object is generated. Researchers therefore proposed the conditional generative adversarial network, which adds constraints to the original GAN by introducing a conditional variable y into both the generative model and the discriminative model, so that additional information guides the data generation. In principle, y can carry meaningful information, such as class labels, turning GAN, an unsupervised learning method, into a supervised one.
The appearance of the conditional generative adversarial network changed the generative adversarial network from unsupervised learning to supervised learning, which means the method can be used for parameter prediction in petroleum engineering and has great application prospects there. When a conditional generative adversarial network is used to predict MMP, the prediction accuracy of an MMP prediction model based on a conditional fully connected generative adversarial network is high; however, because the generator contains a random noise input, when the relation between MMP and its influencing factors is explored, the trend of MMP with factors such as the N2 or H2S content can differ from the phenomena observed in physical experiments, which hinders the practical application of the MMP prediction model.
Fig. 9 shows that, owing to the random noise in the conditional fully connected network, MMP exhibits an unstable, fluctuating rise with increasing temperature, which does not accord with the actual physical law, is unfavorable for the practical application of the MMP prediction model, and needs to be improved.
To overcome this defect of predicting MMP with a conditional fully connected neural network in the prior art, the embodiment of the invention provides an improved scheme for predicting the minimum miscibility pressure (MMP) between CO2 and crude oil based on a conditional convolutional generative adversarial network.
Fig. 1 is a first flowchart of the MMP prediction method based on a conditional convolutional generative adversarial network according to an embodiment of the present invention. As shown in Fig. 1, the method includes steps S101 and S102.
Step S101: acquiring MMP influencing factor data of the target oil reservoir.
In one embodiment of the invention, the MMP influencing factor data specifically includes: the reservoir temperature (TR), the mole fraction of volatile components in the crude oil (Xvol), the mole fraction of C2-C4 components in the crude oil (XC2-4), the mole fraction of C5-C6 components in the crude oil (XC5-6), the molecular weight of the C7+ components in the crude oil (MWC7+), and the mole fractions of CO2 and of four impurities in the injected gas (i.e., yCO2, yC1, yN2, yH2S and yHC).
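For illustration, the ten influencing factors listed above can be assembled into a fixed-order input vector for the generator; the dictionary keys and sample values below are hypothetical, not data from the patent:

```python
# hypothetical factor names, in a fixed order matching the list above
FACTOR_ORDER = ["TR", "Xvol", "XC2-4", "XC5-6", "MWC7+",
                "yCO2", "yC1", "yN2", "yH2S", "yHC"]

def to_feature_vector(sample: dict) -> list:
    # fail loudly if any influencing factor is missing from the sample
    missing = [k for k in FACTOR_ORDER if k not in sample]
    if missing:
        raise KeyError(f"missing influencing factors: {missing}")
    return [float(sample[k]) for k in FACTOR_ORDER]

sample = {"TR": 71.1, "Xvol": 0.05, "XC2-4": 0.12, "XC5-6": 0.08,
          "MWC7+": 228.0, "yCO2": 0.95, "yC1": 0.02, "yN2": 0.01,
          "yH2S": 0.01, "yHC": 0.01}
vec = to_feature_vector(sample)
print(len(vec))  # 10 influencing factors
```

Fixing the order once keeps the conditional input consistent between training and prediction.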
Step S102: inputting the MMP influencing factor data into a pre-trained convolution generator to obtain an MMP predicted value of the target oil reservoir output by the pre-trained convolution generator, wherein the convolution generator is built from a convolutional neural network and contains no random noise input, the pre-trained convolution generator is obtained by performing multiple rounds of iterative training on the convolution generator using a training sample set, and each training sample in the training sample set comprises: the MMP value of a reservoir and the MMP influencing factor data of that reservoir.
According to the invention, the generator model is improved: the prior-art scheme of building the generator from a fully connected neural network is abandoned, and a convolutional neural network is instead used to construct the generator, forming a convolution generator. The input of the convolution generator of the present invention contains no random noise, which makes the MMP prediction more accurate and prevents MMP from exhibiting, when the relation between MMP and its influencing factors is explored, trends with N2, H2S and similar factors that differ from the physical experiment phenomena; this is helpful for the practical application of the MMP prediction model.
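As a concrete illustration of the noise-free design, the following is a minimal NumPy sketch of a deterministic 1-D convolution generator; the layer count, kernel sizes, weights and ten-factor input length are hypothetical, not taken from the patent. Because no random noise enters the network, the same influencing-factor vector always yields the same MMP prediction:

```python
import numpy as np

def conv1d(x, kernels, bias):
    # valid 1-D convolution: x (L, C_in), kernels (K, C_in, C_out)
    K, C_in, C_out = kernels.shape
    L_out = x.shape[0] - K + 1
    out = np.zeros((L_out, C_out))
    for i in range(L_out):
        window = x[i:i + K]                      # (K, C_in)
        out[i] = np.tensordot(window, kernels, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)                   # ReLU

def generator_predict(factors, params):
    # factors: (10,) vector of normalized MMP influencing factors;
    # there is no random-noise input, so the output is deterministic
    x = factors.reshape(-1, 1)                    # (10, 1): length 10, 1 channel
    for kernels, bias in params["conv_layers"]:
        x = conv1d(x, kernels, bias)
    flat = x.ravel()
    return float(flat @ params["w_out"] + params["b_out"])  # scalar (normalized MMP)

rng = np.random.default_rng(0)
params = {
    "conv_layers": [
        (rng.normal(0, 0.1, (3, 1, 8)), np.zeros(8)),   # 8 kernels of size 3
        (rng.normal(0, 0.1, (3, 8, 4)), np.zeros(4)),   # 4 kernels of size 3
    ],
    "w_out": rng.normal(0, 0.1, 6 * 4),
    "b_out": 0.0,
}
factors = rng.random(10)
mmp_a = generator_predict(factors, params)
mmp_b = generator_predict(factors, params)
print(mmp_a == mmp_b)  # True: same input -> same prediction
```

A fully connected GAN generator with a noise input would return a different value on each call; here repeated calls agree, which is what makes the predicted MMP-versus-factor curves smooth.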
Fig. 2 is a second flowchart of the MMP prediction method based on a conditional convolutional generative adversarial network according to the embodiment of the present invention. As shown in Fig. 2, the pre-trained convolution generator of step S102 is specifically produced by the training of steps S201 and S202.
Step S201, obtaining the training sample set.
In one embodiment of the invention, the MMP values of a certain number of existing oil reservoirs and the corresponding MMP influencing factor data are collected and divided, in a certain proportion, into a training sample set, a validation sample set and a test sample set.
FIG. 5 shows the MMP values collected for 105 reservoirs and the corresponding MMP influencing factor data in one embodiment of the present invention. Specifically, all data are divided into a training sample set, a validation sample set and a test sample set in the ratio 6:2:2, so that the training sample set contains 63 groups of data, the validation sample set 21 groups, and the test sample set 21 groups.
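A sketch of such a 6:2:2 split; the shuffle seed and helper name are arbitrary choices for illustration:

```python
import random

def split_622(samples, seed=0):
    # shuffle indices, then split 6:2:2 into train / validation / test
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_train = round(len(samples) * 0.6)
    n_val = round(len(samples) * 0.2)
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_622(list(range(105)))
print(len(train), len(val), len(test))  # 63 21 21
```

With 105 samples this reproduces the 63/21/21 partition described above, and every sample lands in exactly one subset.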
In an embodiment of the present invention, after the training, validation and test sample sets are obtained, min-max normalization is performed on the training sample set, and the data in the validation and test sample sets are then processed in the same way using the maximum and minimum values of the data in the training sample set.
In one embodiment of the invention, the min-max normalization formula may be as follows:
x* = (x - x_min) / (x_max - x_min)
where x is an original value, x_min and x_max are the minimum and maximum of that variable in the training sample set, and x* is the normalized value.
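The min-max scheme described above, fitting the extremes on the training set only and reusing them for the validation and test sets, can be sketched as follows (the toy data rows are illustrative):

```python
def fit_min_max(train_rows):
    # per-column minimum and maximum, computed on the training set only
    cols = list(zip(*train_rows))
    return [min(c) for c in cols], [max(c) for c in cols]

def apply_min_max(rows, mins, maxs):
    # x* = (x - x_min) / (x_max - x_min), using the training-set extremes
    return [[(x - lo) / (hi - lo) if hi > lo else 0.0
             for x, lo, hi in zip(row, mins, maxs)] for row in rows]

train = [[10.0, 0.2], [20.0, 0.8], [15.0, 0.5]]
val = [[12.0, 0.4]]
mins, maxs = fit_min_max(train)          # fitted on training data only
norm_train = apply_min_max(train, mins, maxs)
norm_val = apply_min_max(val, mins, maxs)  # same extremes reused
print(norm_train[0])  # [0.0, 0.0]
print(norm_val[0])    # roughly [0.2, 0.333]
```

Reusing the training-set extremes for validation and test data avoids information leaking from those sets into the preprocessing, which is why the patent applies "the same processing" with the training set's min and max.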
Step S202: performing H1 rounds of iterative training using the training sample set to obtain the pre-trained convolution generator. Each round of iterative training is divided into multiple batches: for each batch, H2 training samples are first selected from the training sample set, the network weights of a convolution discriminator are then trained on the selected samples, and finally the network weights of the convolution generator are trained, on the selected samples, within a combined model composed of the convolution discriminator and the convolution generator. The convolution discriminator is built from a combination of a convolutional neural network and a fully connected neural network, and H1 and H2 are both positive integers.
In the invention, each round of iterative training is divided into multiple batches. For each batch, H2 training samples are selected from the training sample set; the samples selected for different batches within the same round are different, and if fewer than H2 samples remain in the training sample set when a batch is formed, all the remaining samples are selected for that batch. After all samples in the training sample set have been used once, one round of iterative training of the conditional generative adversarial network (also called one training epoch) has been completed, and the performance of the convolution generator and the convolution discriminator gradually improves as the number of iterative training rounds (H1) grows.
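The batch-selection rule, including the smaller final batch when fewer than H2 samples remain, can be sketched as:

```python
def batches(indices, h2):
    # consecutive batches of H2 samples; the last batch may be smaller
    for start in range(0, len(indices), h2):
        yield indices[start:start + h2]

# 63 training samples (as in the embodiment above) with a hypothetical H2 = 16
sizes = [len(b) for b in batches(list(range(63)), 16)]
print(sizes)  # [16, 16, 16, 15]
```

Iterating the generator once over all batches corresponds to one epoch; shuffling `indices` between epochs is a common refinement not shown here.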
In the invention, H1 rounds of iterative training are performed according to the above process. When the iterative training has run for a certain number of rounds, the MMP predicted value generated by the convolution generator under the corresponding conditions is very close to the real data, realizing the MMP prediction function. In one embodiment of the invention, the prediction error of the convolution generator on the validation set is monitored after each round of iterative training, the convolution generator after each round is stored separately, and the convolution generator with the smallest validation-set error over the whole training process is finally selected as the MMP prediction model.
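Selecting the stored generator snapshot with the smallest validation error can be sketched as follows; the per-epoch error values are hypothetical:

```python
def select_best_epoch(val_errors):
    # return the epoch index with the lowest validation error and that error
    best_epoch = min(range(len(val_errors)), key=val_errors.__getitem__)
    return best_epoch, val_errors[best_epoch]

errors = [0.31, 0.18, 0.22, 0.15, 0.19]   # hypothetical validation errors
epoch, err = select_best_epoch(errors)
print(epoch, err)  # 3 0.15
```

In practice each epoch's generator weights would be saved to disk and the snapshot from the winning epoch reloaded as the MMP prediction model.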
In the invention, the network weights of the convolution discriminator are first trained on the training samples, and the network weights of the convolution generator are then trained, on the training samples, within a combined model composed of the convolution discriminator and the convolution generator. Fig. 8 is a schematic diagram of the combined model according to an embodiment of the present invention; as shown in Fig. 8, when the network weights of the convolution generator are trained within the combined model, the network weights of the convolution discriminator do not change, while the network weights of the convolution generator are updated as the data are trained on.
In one embodiment of the invention, the training sample is composed of first data and second data, the first data being MMP influence factor data of the oil reservoir and the second data being MMP values of the oil reservoir.
Fig. 3 is a flowchart of training a convolution discriminator according to an embodiment of the present invention, and as shown in fig. 3, in an embodiment of the present invention, the training of the network weights of the convolution discriminator based on the selected training samples in step S202 specifically includes steps S301 to S303.
Step S301, for each selected training sample, combining the MMP prediction value output by the convolution generator according to the first data of the training sample with the first data of the training sample to obtain combined data, setting the label of the combined data to 0, and smoothing the label of the combined data.
Step S302, setting the label of each selected training sample as 1, and smoothing the label of the training sample.
In the present invention, there are various ways to smooth a label. In one embodiment of the present invention, the label smoothing method is as follows: label 1 is replaced with a random number within a first predetermined range of values (preferably 0.8 to 1.0), and label 0 is replaced with a random number within a second predetermined range of values (preferably 0 to 0.2).
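A minimal sketch of this label smoothing, using the preferred ranges given above. The function name is hypothetical, and uniform sampling is an assumption; the text only specifies the ranges.

```python
import numpy as np

rng = np.random.default_rng(42)

def smooth_labels(labels):
    """Replace hard labels with random values in the preferred ranges:
    1 -> [0.8, 1.0), 0 -> [0.0, 0.2)."""
    labels = np.asarray(labels, dtype=float)
    return np.where(
        labels == 1.0,
        rng.uniform(0.8, 1.0, size=labels.shape),
        rng.uniform(0.0, 0.2, size=labels.shape),
    )

real = smooth_labels(np.ones(5))    # labels for real (condition, MMP) pairs
fake = smooth_labels(np.zeros(5))   # labels for generated combined data
```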
Step S303, inputting the combined data after the label smoothing processing and the training sample after the label smoothing processing into the convolution discriminator, and training the network weight of the convolution discriminator.
In an embodiment of the invention, when an iterative training begins, a first batch of H2 training samples is selected from the training sample set. The constructed convolution generator takes the first data of these H2 training samples as its condition input and outputs the MMP predicted values generated under the corresponding conditions. The first data are then combined with the MMP predicted values output by the convolution generator to obtain combined data, whose label is set to 0, smoothed, and fed into the convolution discriminator; at the same time, the combination of the first data and the corresponding real MMP values, i.e. the training data, has its label set to 1, is smoothed, and is also fed into the convolution discriminator, which thereby performs its first round of learning to distinguish real from generated data. Repeating this process over multiple batches for H1 iterative trainings, the trained convolution discriminator can accurately recognize real data.
Fig. 4 is a training flowchart of a convolution generator according to an embodiment of the present invention, and as shown in fig. 4, in an embodiment of the present invention, the training of the network weights of the convolution generator in the combined model composed of the convolution discriminator and the convolution generator based on the selected training sample in step S202 specifically includes steps S401 to S403.
Step S401, respectively inputting the first data of the training sample into the convolution generator for each selected training sample, to obtain the MMP prediction value corresponding to the training sample output by the convolution generator.
Step S402, aiming at each selected training sample, combining the first data of the training sample with the MMP predicted value corresponding to the training sample to obtain combined data, setting the label of the combined data to be 1, and smoothing the label of the combined data.
Step S403, inputting the combined data after the label smoothing process into the convolution discriminator to obtain the probability that the combined data output by the convolution discriminator is real data.
In the invention, after the convolution discriminator is trained, its weights are kept unchanged, and the network weights of the convolution generator are trained in a combined model consisting of the convolution discriminator and the convolution generator. Specifically, the condition input of the convolution generator (namely, the first data in the training data) is combined with the MMP predicted value output by the convolution generator, the label is set to 1 and smoothed, and the result is input into the combined model. This trains the convolution generator on its own without affecting the convolution discriminator, improving the accuracy of the data generated by the convolution generator.
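The key property of the combined-model step, namely that only the generator's weights move while the frozen discriminator still supplies the training signal, can be illustrated with a toy numpy sketch. The one-layer linear generator, logistic discriminator and single gradient step below are illustrative stand-ins, not the patent's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cond = 4                            # number of MMP influence factors (assumed)
w_d = rng.normal(size=n_cond + 1)     # frozen discriminator weights
w_g = rng.normal(size=n_cond)         # trainable generator weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator(c):
    return w_g @ c                    # MMP "prediction" for condition c

def discriminator(c, y):
    return sigmoid(w_d @ np.concatenate([c, [y]]))

def generator_step(c, lr=0.1):
    """One generator update inside the combined model. The label of the
    (condition, generated MMP) pair is set to 1, so the frozen
    discriminator's gradient pushes the generator toward outputs the
    discriminator rates as real."""
    global w_g
    y = generator(c)
    p = discriminator(c, y)           # probability "real"
    grad_y = (p - 1.0) * w_d[-1]      # dLoss/dy for BCE with target 1
    w_g = w_g - lr * grad_y * c       # chain rule through y = w_g @ c

c = rng.normal(size=n_cond)
w_d_before, w_g_before = w_d.copy(), w_g.copy()
generator_step(c)
```

After the step, `w_d` is untouched while `w_g` has moved, mirroring the frozen-discriminator training described above.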
Fig. 6 is a schematic diagram of a network structure of a convolution generator according to an embodiment of the present invention. As shown in fig. 6, the present invention builds the convolution generator of the conditional convolutional generative adversarial network from a convolutional neural network (CNN) and removes the random noise input from the convolution generator, obtaining an improved convolution generator model. The convolution generator accepts only one input, namely the MMP influence factor data, and its output is the MMP predicted value.
As shown in fig. 6, in one embodiment of the present invention, the convolution generator is specifically composed of multiple convolutional neural network layers.
In one embodiment of the present invention, before the MMP influence factor data are input to the convolutional neural network layer of the convolution generator, the one-dimensional MMP influence factor data are further converted into a two-dimensional matrix form that the convolutional neural network layer can receive.
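A minimal sketch of this conversion. The factor count (7) and the zero-padded 3x3 target shape are assumptions for illustration; the text only requires that the one-dimensional vector be reshaped into a two-dimensional matrix the convolutional layer can accept.

```python
import numpy as np

# Hypothetical normalized influence factors for one reservoir sample.
factors = np.array([1.2, 0.8, 0.5, 0.9, 0.3, 0.7, 0.6])

# Smallest square matrix that holds all factors; zero-pad the remainder.
side = int(np.ceil(np.sqrt(factors.size)))
padded = np.zeros(side * side)
padded[:factors.size] = factors
matrix = padded.reshape(side, side)   # 2-D form fed to the CNN layer
```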
In an embodiment of the present invention, the hyper-parameters of the convolution generator specifically include: the number of layers of the convolutional neural network layer A1, the number of convolutional kernels per layer of the convolutional neural network layer B1, the size of the convolutional kernels C1, and the initial learning rate of the optimizer in the convolutional generator G1.
In one embodiment of the invention, the initial learning rate of the optimizer in the convolution generator is specifically the initial learning rate of the Adam optimizer in the convolution generator.
FIG. 7 is a schematic diagram of a network structure of a convolution discriminator according to an embodiment of the present invention. As shown in FIG. 7, the inputs of the convolution discriminator are MMP influence factor data and MMP values, where the MMP values include the MMP predicted values output by the convolution generator, and the output of the convolution discriminator is the probability that the data are real data. The network structure of the convolution discriminator specifically includes: a convolutional neural network layer, a splicing layer and a fully-connected neural network layer. The convolutional neural network layer preprocesses the MMP influence factor data, the splicing layer splices the preprocessed data output by the convolutional neural network layer with the MMP value, and the fully-connected neural network layer processes the spliced data output by the splicing layer and outputs the probability that the data are real data.
As shown in fig. 7, the convolution discriminator accepts two inputs, the first input being a condition X, i.e., normalized MMP influential factor data, for preprocessing, and the second being a true MMP value Y corresponding to the condition X or an MMP predicted value Y' generated by the convolution generator under the condition X.
As shown in fig. 7, in an embodiment of the present invention, the convolution discriminator is set up as follows: first, a convolutional neural network layer is set to preprocess the condition X, i.e. the normalized MMP influence factor data; next, the data preprocessed by the convolutional neural network layer are spliced with the input MMP value (the real MMP value Y corresponding to condition X, or the MMP predicted value Y' generated by the convolution generator under condition X); finally, a fully-connected neural network layer is set to process the spliced data. The last layer of the fully-connected neural network has one neuron with a sigmoid activation function and outputs the probability that the currently input data are real. If the output probability is greater than 0.5, the data are judged to be real data; otherwise they are judged to be fake data, i.e. data generated by the convolution generator.
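The data flow just described (convolutional preprocessing of condition X, splicing with the MMP value, a dense pass, and the sigmoid probability with its 0.5 threshold) can be sketched in numpy. All layer sizes and weights below are illustrative, and a single valid 1-D convolution stands in for the convolutional neural network layer.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d_valid(x, kernel):
    """Stand-in for the convolutional preprocessing layer."""
    k = kernel.size
    return np.array([x[i:i + k] @ kernel for i in range(x.size - k + 1)])

def discriminate(x, mmp_value, kernel, w_dense):
    features = conv1d_valid(x, kernel)                # preprocess condition X
    joined = np.concatenate([features, [mmp_value]])  # splicing layer
    return sigmoid(w_dense @ joined)                  # final 1-neuron sigmoid layer

x = rng.normal(size=6)            # normalized influence factors (assumed size)
kernel = rng.normal(size=2)
w_dense = rng.normal(size=6)      # 5 conv features + 1 MMP value
p = discriminate(x, 25.0, kernel, w_dense)
is_real = p > 0.5                 # decision rule from the text
```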
In an embodiment of the present invention, the hyper-parameters of the convolution discriminator specifically include: the number of layers of the convolutional neural network layer A2, the number of convolutional cores per convolutional neural network layer B2, the size of convolutional cores C2, the number of layers of the fully-connected neural network layer D2, the number of neurons per layer of the fully-connected neural network layer E2, the discarding rate per layer of the fully-connected neural network layer F2, and the initial learning rate of the optimizer in the convolutional discriminator G2.
In one embodiment of the invention, the initial learning rate of the optimizer in the convolution discriminator is specifically the initial learning rate of the Adam optimizer in the convolution discriminator.
In an embodiment of the present invention, when performing iterative training in step S202, the present invention further optimizes the iterative training times H1, the number of training samples H2, the hyper-parameters of the convolution generator, and the hyper-parameters of the convolution discriminator by using a hyper-parameter optimization method, so as to obtain an optimal parameter combination. And then carrying out iterative training according to the optimal parameter combination to obtain the pre-trained convolution generator, namely the MMP prediction model.
In a specific embodiment of the invention, a Bayesian hyper-parameter optimization method is used to optimize the hyper-parameters of the convolution generator, the hyper-parameters of the convolution discriminator, the number of training samples per batch (H2) and the number of iterative trainings (H1), searching for the parameter combination that gives the model the best prediction performance on the validation set; the parameter combination with the best validation-set performance is taken as the optimal parameter combination.
During Bayesian optimization, several groups of parameter combinations are first evaluated by random trial calculation; the number of trial calculations can be set manually and is set to 10 here. Formal Bayesian optimization is then performed: each step after the trial phase refers to the results of the previous evaluations, that is, the model's performance on the validation set, to select the hyper-parameters and the number of iterative trainings to use in the next evaluation. The number of formal Bayesian optimization steps is set to 40.
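The two-phase search (10 random trial calculations, then 40 informed steps guided by earlier results) can be sketched as follows. Note this is a dependency-free simplification: true Bayesian optimization fits a probabilistic surrogate model to past evaluations, whereas the informed phase here merely perturbs the best point found so far. `objective` stands in for the validation-set error of a model trained with the candidate parameters.

```python
import numpy as np

def optimize(objective, bounds, n_random=10, n_informed=40, seed=0):
    """Two-phase search: random trials, then steps informed by past results."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best_x, best_y = None, float("inf")
    for _ in range(n_random):                # trial-calculation phase
        x = rng.uniform(lo, hi)
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    for _ in range(n_informed):              # "informed" phase (simplified)
        x = np.clip(best_x + rng.normal(scale=0.1 * (hi - lo)), lo, hi)
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Toy objective with its minimum at x = [0.3, 0.7].
best_x, best_y = optimize(lambda x: ((x - [0.3, 0.7]) ** 2).sum(),
                          bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In practice the search space would hold the generator and discriminator hyper-parameters together with H1 and H2, and the objective would train a candidate model and return its validation error.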
In one embodiment of the invention, the invention builds MMP prediction models (i.e., convolution generators) using the best parameter combinations obtained by bayesian hyperparametric optimization. The model is a model which is used for predicting the MMP of a new oil reservoir, and the corresponding MMP value can be predicted by inputting MMP influence factor data of the new oil reservoir.
In one embodiment of the present invention, the optimal parameter combination obtained by using bayesian hyperparametric optimization may be as follows:
in the convolutional generator network, the number of layers a1 of the convolutional neural network layer is set to 1 layer, the number of convolutional kernels B1 of each layer of the convolutional neural network layer is set to 26, the size C1 of the convolutional kernels is set to 2, and the activation functions are all relu. The initial learning rate G1 of the optimizer in the convolution generator was set to 0.0002190.
In the convolution discriminator network, the number of layers A2 of the convolutional neural network layer is set to 2, the number of convolution kernels B2 per convolutional neural network layer is set to 88, the size C2 of the convolution kernels is set to 4, and the activation functions are all relu; the number of layers D2 of the fully-connected neural network layer is set to 3, the number of neurons in each of the first two layers is set to 91, the dropout rate of each of the first two layers is 0.2021, the last layer has 1 neuron, the activation functions of the first two layers are relu, and the activation function of the last layer is sigmoid. The initial learning rate G2 of the optimizer in the convolution discriminator is set to 0.0009755.
The iterative training times (H1) are set to 482 through Bayesian optimization, and the training sample number (H2) in each batch in each iterative training process is set to 45 through Bayesian optimization.
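Collected into one configuration object, the reported optimal combination (keyed by the text's A1/B1/... labels) looks like this; the dictionary layout itself is an illustrative choice, not part of the patent.

```python
# Optimal parameter combination from Bayesian hyper-parameter optimization,
# as reported in the embodiment above.
best_params = {
    "generator": {"A1": 1, "B1": 26, "C1": 2, "G1": 0.0002190},
    "discriminator": {"A2": 2, "B2": 88, "C2": 4,
                      "D2": 3, "E2": 91, "F2": 0.2021, "G2": 0.0009755},
    "H1": 482,   # number of iterative trainings (epochs)
    "H2": 45,    # training samples per batch
}
```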
Furthermore, using the same training set data, the invention also establishes MMP prediction models with three other machine learning methods, namely a fully-connected neural network, a support vector machine and a conditional fully-connected neural network, and optimizes the structure of each model by combining the Bayesian algorithm with the validation data set. Finally, the prediction accuracy of each optimized model is evaluated on the test set data, which were not used in training.
Table 1 mean absolute percentage error in the test sample set for each MMP prediction model
[Table 1 is provided as an image in the original patent document.]
As can be seen from table 1, compared with the prediction results of the MMP models established by the fully-connected neural network, the support vector machine and the conditional fully-connected neural network, the test-set error of the MMP prediction model based on the improved conditional convolutional generative adversarial network is lower by 3, 10 and 4 percentage points respectively, giving the highest accuracy among the four machine learning methods. The improved conditional convolutional generative adversarial network has strong fitting capability, higher prediction accuracy than the fully-connected neural network, the support vector machine and the conditional fully-connected neural network, and strong generalization capability.
FIG. 10, FIG. 11 and FIG. 12 show the curves, obtained from the MMP prediction model established with the improved conditional convolutional generative adversarial network, of MMP as a function of temperature, of the mole fraction of N2 in CO2, and of the mole fraction of H2S in CO2. It can be seen that the predicted MMP increases with increasing temperature and with increasing mole fraction of N2 in CO2, and decreases as the mole fraction of H2S in CO2 increases. This behaviour is consistent with the actual physical laws, demonstrating the reliability and effectiveness of the model of the invention, which can thus be used for MMP prediction and influence factor analysis.
As can be seen from the above embodiments, the MMP prediction method based on conditional convolution generation type countermeasure network of the present invention achieves at least the following beneficial effects:
1. the invention combines the conditional convolutional generative adversarial network, a machine learning method, with oil reservoir MMP prediction. This is a new MMP prediction idea and method that pioneers the application of conditional convolutional generative adversarial networks to MMP prediction, and it is of important significance for oil reservoir MMP prediction and oil reservoir development scheme design;
2. the invention adapts the conditional convolutional generative adversarial network to the MMP prediction scenario: it first improves the generator by deleting its random noise input, and then smooths the labels of the real and generated data; together these measures improve the model prediction accuracy and yield the improved conditional convolutional generative adversarial network.
In general, the method has the advantages of simple and convenient model establishing process, high calculation efficiency, high prediction precision, strong comprehensiveness and applicability and wide application prospect.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Based on the same inventive concept, the embodiment of the present invention further provides an MMP prediction apparatus based on a conditional convolution generation countermeasure network, which can be used to implement the MMP prediction method based on a conditional convolution generation countermeasure network described in the foregoing embodiment, as described in the following embodiment. Since the principle of solving the problem of the MMP prediction apparatus based on the conditional convolution generation countermeasure network is similar to that of the MMP prediction method based on the conditional convolution generation countermeasure network, the embodiments of the MMP prediction apparatus based on the conditional convolution generation countermeasure network can be referred to the embodiments of the MMP prediction method based on the conditional convolution generation countermeasure network, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 13 is a first block diagram of an MMP prediction apparatus based on a conditional convolution generating countermeasure network according to an embodiment of the present invention, and as shown in fig. 13, the MMP prediction apparatus based on a conditional convolution generating countermeasure network according to an embodiment of the present invention includes:
the data acquisition unit 1 is used for acquiring MMP influence factor data of a target oil reservoir;
the prediction unit 2 is configured to input the MMP influence factor data into a pre-trained convolution generator, and obtain an MMP prediction value of the target oil reservoir output by the pre-trained convolution generator, where the convolution generator is built according to a convolutional neural network, the convolution generator does not include a random noise input, the pre-trained convolution generator is obtained by performing multiple iterative training on the convolution generator according to a training sample set, and each training sample in the training sample set includes: MMP values of the reservoir and MMP influential factor data of the reservoir.
Fig. 14 is a second block diagram of the MMP predicting apparatus of the embodiment of the present invention based on the conditional convolution generating countermeasure network, and as shown in fig. 14, in an embodiment of the present invention, the MMP predicting apparatus of the embodiment of the present invention based on the conditional convolution generating countermeasure network further includes:
a training sample set obtaining unit 3, configured to obtain the training sample set;
and the model training unit 4 is configured to perform H1 times of iterative training according to the training sample set to obtain the pre-trained convolution generator, where each time of iterative training is divided into multiple batches of training, when performing training of each batch, first select H2 training samples from the training sample set, then train the network weight of the convolution discriminator based on the selected training samples, and finally train the network weight of the convolution generator in a combined model composed of the convolution discriminator and the convolution generator based on the selected training samples, where the convolution discriminator is built according to a combination of a convolution neural network and a fully-connected neural network, and H1 and H2 are both positive integers.
In one embodiment of the invention, the training sample is composed of first data and second data, the first data being MMP influence factor data of the oil reservoir and the second data being MMP values of the oil reservoir. In an embodiment of the present invention, the model training unit specifically includes:
the first label setting module is used for combining the MMP predicted value output by the convolution generator according to the first data of the training sample with the first data of the training sample respectively aiming at each selected training sample to obtain combined data, setting the label of the combined data to be 0 and smoothing the label of the combined data;
the second label setting module is used for setting the label of each selected training sample to be 1 and smoothing the label of the training sample;
and the convolution discriminator training module is used for inputting the combined data after the label smoothing processing and the training sample after the label smoothing processing into the convolution discriminator and training the network weight of the convolution discriminator.
In an embodiment of the present invention, the model training unit specifically includes:
the predicted value acquisition module is used for inputting first data of the training samples into the convolution generator respectively aiming at each selected training sample to obtain an MMP predicted value which is output by the convolution generator and corresponds to the training sample;
the combined data acquisition module is used for combining the first data of the training samples with the MMP predicted values corresponding to the training samples respectively aiming at each selected training sample to obtain combined data, setting the label of the combined data to be 1 and smoothing the label of the combined data;
and the convolution generator training module is used for inputting the combined data subjected to the label smoothing processing into the convolution discriminator to obtain the probability that the combined data output by the convolution discriminator is real data.
Optionally, the model training unit further includes:
and the hyper-parameter optimization module is used for optimizing the iterative training times H1, the training sample number H2, the hyper-parameters of the convolution generator and the hyper-parameters of the convolution discriminator by adopting a hyper-parameter optimization method to obtain an optimal parameter combination.
To achieve the above object, according to another aspect of the present application, there is also provided a computer apparatus. As shown in fig. 15, the computer device comprises a memory, a processor, a communication interface and a communication bus, wherein a computer program that can be run on the processor is stored in the memory, and the steps of the method of the embodiment are realized when the processor executes the computer program.
The processor may be a Central Processing Unit (CPU). The processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or a combination thereof.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and units, such as the corresponding program units in the above-described method embodiments of the present invention. The processor executes various functional applications of the processor and the processing of the work data by executing the non-transitory software programs, instructions and modules stored in the memory, that is, the method in the above method embodiment is realized.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more units are stored in the memory and when executed by the processor perform the method of the above embodiments.
The specific details of the computer device may be understood by referring to the corresponding related descriptions and effects in the above embodiments, and are not described herein again.
In order to achieve the above object, according to another aspect of the present application, there is also provided a computer-readable storage medium storing a computer program which, when executed in a computer processor, implements the steps in the above MMP prediction method based on a conditional convolution-generated countermeasure network. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
To achieve the above object, according to another aspect of the present application, there is also provided a computer program product comprising computer program/instructions which, when executed by a processor, implement the steps of the above MMP prediction method based on conditional convolution generation-based countermeasure network.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An MMP prediction method for a countermeasure network based on a conditional convolution generation mode is characterized by comprising the following steps:
acquiring MMP influence factor data of a target oil reservoir;
inputting the MMP influencing factor data into a pre-trained convolution generator to obtain an MMP predicted value of the target oil reservoir output by the pre-trained convolution generator, wherein the convolution generator is built according to a convolution neural network, the convolution generator does not contain random noise input, the pre-trained convolution generator is obtained by carrying out multiple iterative training on the convolution generator according to a training sample set, and each training sample in the training sample set comprises: MMP values of the reservoir and MMP influential factor data of the reservoir.
2. The MMP prediction method of the conditional convolution-based generative countermeasure network of claim 1, further comprising:
acquiring the training sample set;
performing H1 times of iterative training according to the training sample set to obtain the pre-trained convolution generator, wherein each time of iterative training is divided into multiple batches of training, when each batch of training is performed, firstly selecting H2 training samples from the training sample set, then training the network weight of a convolution discriminator based on the selected training samples, and finally training the network weight of the convolution generator in a combined model composed of the convolution discriminator and the convolution generator based on the selected training samples, wherein the convolution discriminator is built according to the combination of a convolution neural network and a fully-connected neural network, and H1 and H2 are both positive integers.
3. The MMP prediction method based on conditional convolution generation countermeasure network of claim 2, wherein the training sample is composed of a first data and a second data, the first data is MMP influence factor data of the oil reservoir, the second data is MMP value of the oil reservoir;
the training of the network weight of the convolution discriminator based on the selected training sample specifically comprises:
respectively combining the MMP predicted value output by the convolution generator according to the first data of the training sample with the first data of the training sample aiming at each selected training sample to obtain combined data, setting the label of the combined data to be 0, and smoothing the label of the combined data;
setting the label of each selected training sample as 1, and smoothing the label of the training sample;
and inputting the combined data subjected to the label smoothing processing and the training sample subjected to the label smoothing processing into the convolution discriminator, and training the network weight of the convolution discriminator.
4. The MMP prediction method based on a conditional convolutional generative adversarial network according to claim 3, wherein the training of the network weights of the convolution generator in the combined model composed of the convolution discriminator and the convolution generator based on the selected training samples specifically comprises:
for each selected training sample, inputting the first data of the training sample into the convolution generator to obtain the MMP predicted value output by the convolution generator for that training sample;
for each selected training sample, combining the first data of the training sample with the corresponding MMP predicted value to obtain combined data, setting the label of the combined data to 1, and applying label smoothing to the label of the combined data;
and inputting the label-smoothed combined data into the convolution discriminator to obtain the probability, output by the convolution discriminator, that the combined data is real data.
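This generator step can be read as: feed the generated (factors, predicted-MMP) pairs through the fixed discriminator with a smoothed "real" target, so the generator is penalized whenever the discriminator rates its output as fake. A toy sketch with hypothetical `generator` and `discriminator` callables and an assumed binary cross-entropy objective (the claim states the probability output but not the loss form):

```python
import numpy as np

def generator_loss(first_data, generator, discriminator, eps=0.1):
    """Toy generator-side objective of the combined model in claim 4.

    generator(first_data) -> (n,) predicted MMP values.
    discriminator(pairs)  -> (n,) probabilities that each pair is real.
    The discriminator is held fixed; generated pairs get the smoothed
    "real" label so minimizing this loss pushes G toward outputs that
    the discriminator scores as real.
    """
    mmp_pred = generator(first_data)                  # G's MMP predictions
    pairs = np.column_stack([first_data, mmp_pred])   # factors + prediction
    p_real = discriminator(pairs)                     # D's probability output
    target = 1.0 * (1 - eps) + 0.5 * eps              # smoothed label 1 -> 0.95
    p = np.clip(p_real, 1e-7, 1 - 1e-7)
    # binary cross-entropy against the smoothed target
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
```

The loss shrinks as the discriminator's probability approaches the smoothed target, which is exactly the adversarial pressure the combined model applies to the generator.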
5. The MMP prediction method based on a conditional convolutional generative adversarial network according to claim 2, wherein the performing of H1 iterations of training on the training sample set to obtain the pre-trained convolution generator comprises:
optimizing the number of iterations H1, the number of training samples H2, the hyper-parameters of the convolution generator and the hyper-parameters of the convolution discriminator by a hyper-parameter optimization method to obtain an optimal parameter combination, and then performing the iterative training with the optimal parameter combination to obtain the pre-trained convolution generator.
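The patent leaves the hyper-parameter optimization method unspecified; one minimal reading is a random search over candidate values of H1, H2 and the network hyper-parameters. The sketch below is purely illustrative — the search space keys and the `evaluate` callable (a validation score obtained by training the GAN with a given combination) are assumptions:

```python
import random

def random_search(space, evaluate, n_trials=20, seed=0):
    """One hedged reading of the claimed hyper-parameter optimization:
    plain random search (the patent does not name a specific method).

    space:    dict mapping a parameter name to candidate values,
              e.g. {"H1": [...], "H2": [...], "kernel_size": [3, 5]}.
    evaluate: callable mapping a parameter combination to a validation
              score (higher is better).
    """
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        combo = {k: rng.choice(v) for k, v in space.items()}  # sample a combo
        score = evaluate(combo)                               # train + score
        if score > best_score:                                # keep the best
            best, best_score = combo, score
    return best
```

Grid search or Bayesian optimization would slot into the same interface; only the sampling line changes.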
6. The MMP prediction method based on a conditional convolutional generative adversarial network according to claim 2, wherein the input of the convolution discriminator is MMP influence factor data together with an MMP value, the MMP value including MMP predicted values output by the convolution generator, and the output of the convolution discriminator is the probability that the input data is real data; the network structure of the convolution discriminator specifically comprises a convolutional neural network layer, a concatenation layer and a fully connected neural network layer, wherein the convolutional neural network layer preprocesses the MMP influence factor data, the concatenation layer concatenates the preprocessed data output by the convolutional neural network layer with the MMP value, and the fully connected neural network layer processes the concatenated data output by the concatenation layer and outputs the probability that the data is real data.
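The three-stage discriminator structure (convolution over the influence factors, concatenation with the MMP value, fully connected head) could look like the following toy numpy forward pass; the weights in `params` are illustrative stand-ins, not trained values, and the layer sizes are assumptions:

```python
import numpy as np

def discriminator_forward(factors, mmp_value, params):
    """Structural sketch of the claimed discriminator (toy forward pass).

    factors:   (k,) vector of MMP influence factors.
    mmp_value: scalar MMP value (real, or predicted by the generator).
    params:    {"kernels": list of 1-D kernels, "w": dense weights, "b": bias}.
    """
    # convolutional neural network layer: preprocess the influence factors
    feat = np.array([np.convolve(factors, kern, mode="valid")
                     for kern in params["kernels"]])
    feat = np.maximum(feat, 0.0)                 # ReLU activation
    # concatenation layer: preprocessed features + the MMP value
    z = np.concatenate([feat.ravel(), [mmp_value]])
    # fully connected neural network layer -> probability the pair is real
    logit = params["w"] @ z + params["b"]
    return 1.0 / (1.0 + np.exp(-logit))
```

The point of the concatenation placement is that the MMP value bypasses the convolutional preprocessing and is judged jointly with the convolved factor features.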
7. The MMP prediction method based on a conditional convolutional generative adversarial network according to claim 5, wherein the hyper-parameters of the convolution generator specifically include: the number of convolutional neural network layers, the number and size of the convolution kernels in each convolutional neural network layer, and the initial learning rate of the optimizer in the convolution generator;
the hyper-parameters of the convolution discriminator specifically include: the number of convolutional neural network layers, the number and size of the convolution kernels in each convolutional neural network layer, the number of fully connected neural network layers, the number of neurons in each fully connected neural network layer, the dropout rate of each fully connected neural network layer, and the initial learning rate of the optimizer in the convolution discriminator.
8. An MMP prediction apparatus based on a conditional convolutional generative adversarial network, comprising:
a data acquisition unit, configured to acquire MMP (minimum miscibility pressure) influence factor data of a target oil reservoir;
a prediction unit, configured to input the MMP influence factor data into a pre-trained convolution generator to obtain an MMP predicted value of the target oil reservoir output by the pre-trained convolution generator, wherein the convolution generator is built from a convolutional neural network, the convolution generator has no random-noise input, the pre-trained convolution generator is obtained by performing multiple iterations of training on the convolution generator with a training sample set, and each training sample in the training sample set includes the MMP value of an oil reservoir and the MMP influence factor data of that oil reservoir.
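The noise-free generator described here is an ordinary deterministic mapping from influence factors to an MMP value. A toy numpy forward pass consistent with that description (weights are illustrative stand-ins; layer shapes are assumptions):

```python
import numpy as np

def generator_forward(factors, params):
    """Toy forward pass of the claimed generator: a convolutional network
    mapping MMP influence factors directly to an MMP value, with no
    random-noise input.

    factors: (k,) vector of MMP influence factors.
    params:  {"kernels": list of 1-D kernels, "w": dense weights, "b": bias}.
    """
    # convolutional layer over the influence factors
    feat = np.array([np.convolve(factors, kern, mode="valid")
                     for kern in params["kernels"]])
    feat = np.maximum(feat, 0.0)                              # ReLU
    # linear readout producing the scalar MMP prediction
    return float(params["w"] @ feat.ravel() + params["b"])
```

Because there is no noise input, the same influence factor data always yields the same MMP prediction, which is what makes the generator usable directly as a regression model at inference time.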
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN202210055932.0A 2022-01-18 2022-01-18 MMP prediction method and device based on conditional convolution generation type countermeasure network Pending CN114399119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210055932.0A CN114399119A (en) 2022-01-18 2022-01-18 MMP prediction method and device based on conditional convolution generation type countermeasure network


Publications (1)

Publication Number Publication Date
CN114399119A true CN114399119A (en) 2022-04-26

Family

ID=81230868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210055932.0A Pending CN114399119A (en) 2022-01-18 2022-01-18 MMP prediction method and device based on conditional convolution generation type countermeasure network

Country Status (1)

Country Link
CN (1) CN114399119A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034368A (en) * 2022-06-10 2022-09-09 小米汽车科技有限公司 Vehicle-mounted model training method and device, electronic equipment, storage medium and chip
CN115034368B (en) * 2022-06-10 2023-09-29 小米汽车科技有限公司 Vehicle model training method and device, electronic equipment, storage medium and chip

Similar Documents

Publication Publication Date Title
CN111696345A (en) Intelligent coupled large-scale data flow width learning rapid prediction algorithm based on network community detection and GCN
CN111406264A (en) Neural architecture search
CN104462190A (en) On-line position prediction method based on mass of space trajectory excavation
CN112765415A (en) Link prediction method based on relational content joint embedding convolution neural network
CN111563161B (en) Statement identification method, statement identification device and intelligent equipment
Wu et al. Optimized deep learning framework for water distribution data-driven modeling
CN107392311A (en) The method and apparatus of sequence cutting
CN114399119A (en) MMP prediction method and device based on conditional convolution generation type countermeasure network
CN113435128A (en) Oil and gas reservoir yield prediction method and device based on condition generation type countermeasure network
CN114169240A MMP prediction method and device based on condition generation type countermeasure network
EP4261749A1 (en) Automated creation of tiny deep learning models based on multi-objective reward function
CN110728359B (en) Method, device, equipment and storage medium for searching model structure
Xie et al. Scalenet: Searching for the model to scale
Zhou et al. Effective vision transformer training: A data-centric perspective
Wei et al. Diff-RNTraj: A Structure-aware Diffusion Model for Road Network-constrained Trajectory Generation
CN116502779A (en) Traveling merchant problem generation type solving method based on local attention mechanism
Li et al. ANN: a heuristic search algorithm based on artificial neural networks
CN114595641A (en) Method and system for solving combined optimization problem
Liu et al. SuperPruner: automatic neural network pruning via super network
CN112487191A (en) Text classification method and device based on CNN-BilSTM/BiGRU hybrid combination model
Hu et al. Big data analytics-based traffic flow forecasting using inductive spatial-temporal network
CN109670598A (en) A kind of data processing method based on deep learning
Zafar et al. An Optimization Approach for Convolutional Neural Network Using Non-Dominated Sorted Genetic Algorithm-II.
Nguyen et al. InfoCNF: Efficient conditional continuous normalizing flow using adaptive solvers
CN116562299B (en) Argument extraction method, device and equipment of text information and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination