CN112199637A - Regression modeling method based on regression-attention generative adversarial network data augmentation - Google Patents

Regression modeling method based on regression-attention generative adversarial network data augmentation Download PDF

Info

Publication number
CN112199637A
Authority
CN
China
Prior art keywords
data
regression
attention
discriminator
attention module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010994598.6A
Other languages
Chinese (zh)
Other versions
CN112199637B (en)
Inventor
Ge Zhiqiang
Jiang Xiaoyu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202010994598.6A
Publication of CN112199637A
Application granted
Publication of CN112199637B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Probability & Statistics with Applications (AREA)
  • Operations Research (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a regression modeling method based on data augmentation with a regression-attention generative adversarial network (RA-GAN), in which an attention mechanism is added to both the generator and the discriminator. Attention module 1 in the generator constructs a regression loss from the independent and dependent variables of the data produced by the generator; the real data are also used to fine-tune attention module 1. Attention module 2 in the discriminator builds a new loss from the difference between the regression losses of the real data and the generated data; by minimizing this loss it extracts the feature that carries the largest regression-difference information between real and generated data, which helps the discriminator take the regression information into account. The method augments the raw data with samples generated by the regression-attention GAN and then performs data-driven regression modeling, effectively improving the performance and prediction accuracy of the regression model.

Description

Regression modeling method based on regression-attention generative adversarial network data augmentation
Technical Field
The invention belongs to the field of industrial process soft sensing, and particularly relates to a regression modeling method based on regression-attention generative adversarial network data augmentation.
Background
In the current big data era, data-driven models play an important role. The regression model is a practical tool widely applied in many scenarios, such as stock trend prediction in the financial industry and soft sensors in the process industry. The quality of the data is crucial for data-driven models. In scenarios with limited data accumulation, difficult data acquisition, or data privacy constraints, the lack of data degrades the prediction accuracy of the regression model. How to improve the performance of the regression model under limited data is therefore an important problem.
The generative adversarial network (GAN) is a generative model proposed by Goodfellow in 2014. Adding the generated data of a trained GAN to the real data set and modeling on the combined set is a data augmentation idea: it supplements the original data with extra information and thereby improves the data-driven model. However, no existing GAN model performs data-augmented modeling for the regression problem. When existing GAN models generate data, the independent and dependent variables are simply spliced and reconstructed, and the regression relationship between them is ignored. In addition, because the dependent variable is the quantity to be predicted, it is usually acquired under stricter conditions and with higher accuracy, yet GAN training pays no special attention to it, so errors in the independent variables propagate into the dependent variable. These factors limit the effectiveness of data-augmented regression modeling.
Disclosure of Invention
The invention aims to provide a regression modeling method based on regression-attention generative adversarial network data augmentation, addressing the defects of the prior art.
The purpose of the invention is achieved by the following technical scheme: a regression modeling method based on regression-attention generative adversarial network (RA-GAN) data augmentation, comprising the following steps:
(1) Collect raw data; after removing outliers and normalizing, obtain independent variables x_i and dependent variables y_i, i = 1~M.
(2) Splice x_i and y_i to obtain a new data set D = [X, Y] = {[x_1, y_1], [x_2, y_2], …, [x_M, y_M]}, M ∈ Z.
(3) Train one RA-GAN unsupervised on D until it converges.
(4) Use the trained RA-GAN to generate data D' = [X', Y'] = {[x'_1, y'_1], [x'_2, y'_2], …, [x'_N, y'_N]}, N ∈ Z.
(5) Add the generated data D' to the real data set D to obtain the augmented data set.
(6) Split the augmented data set into independent variables {X, X'} and dependent variables {Y, Y'} and use them to train the regression model.
(7) Input the independent variables of the test data into the trained regression model to obtain the predicted dependent variables.
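Steps (1) and (2) can be sketched as follows. The 3-sigma outlier rule and z-score normalization are illustrative assumptions, since the patent does not name the specific outlier test or scaling method it uses:

```python
import numpy as np

def preprocess_and_splice(X, y, n_sigma=3.0):
    """Steps (1)-(2): remove outliers, normalise, then splice [x_i, y_i] into D.

    The 3-sigma rule and z-score scaling are illustrative choices, not taken
    from the patent text.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)

    # Keep samples whose variables all lie within n_sigma standard deviations.
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    mask = (z < n_sigma).all(axis=1)
    X, y = X[mask], y[mask]

    # Z-score normalisation of the independent and dependent variables.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = (y - y.mean()) / y.std()

    # Splice into D = [X, Y] = {[x_1, y_1], ..., [x_M, y_M]}.
    return np.hstack([X, y.reshape(-1, 1)])
```

The spliced matrix D then feeds the unsupervised RA-GAN training of step (3).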
Further, the regression-attention generative adversarial network introduces two attention modules: attention module 1 in the generator and attention module 2 in the discriminator.
The generator is a multi-layer perceptron: random noise z is input to the input layer and the generated data D' are output by the output layer. The generated data are split into independent variables x'_j and dependent variables y'_j, j ∈ [1, 2, …, N]. Attention module 1 is a multi-layer perceptron connected to the generator; its input is the independent variable x'_j of the generated data and its output layer gives the predicted value ŷ'_j. To make ŷ'_j as close as possible to y'_j, the loss function of the generator adds a new loss term L_A1 and is updated as

L_G* = −E_{x∼P_g}[D(x)] + β·L_A1,    L_A1 = (1/N)∑_{j=1}^N ‖ŷ'_j − y'_j‖₂²

where D(·) is the output of the discriminator, x ∼ P_g means that sample x comes from the generator, and β is the regression coefficient of attention module 1.
The discriminator is also a multi-layer perceptron that judges whether the input is real data or generated data. Attention module 2, likewise a multi-layer perceptron, is arranged at the front end of the discriminator network. The input data D or D' are split into independent and dependent variables; the independent variables are fed into attention module 2 and mapped by the network to the outputs ŷ_i (for real data) or ŷ'_j (for generated data). The loss function of attention module 2 is

L_A2 = γ[(1/M)∑_{i=1}^M ‖ŷ_i − y_i‖₂² − (1/N)∑_{j=1}^N ‖ŷ'_j − y'_j‖₂²]

where γ is the regression coefficient of attention module 2. Then ŷ_i or ŷ'_j is spliced with the corresponding input argument x_i or x'_j and fed into the subsequent discriminator. The loss function of the discriminator with attention module 2 is

L_D = −E_{x∼P_r}[D(x)] + E_{x∼P_g}[D(x)] + λ·E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖₂ − 1)²]

where D(·) is the output of the discriminator, x ∼ P_r means that sample x comes from the real data, λ is the gradient penalty factor, x̂ ∼ P_x̂ means that the sample comes from the space between the real data and the generated data, and ‖∇_x̂ D(x̂)‖₂ is the two-norm of the discriminator gradient.
Further, the network parameters of attention module 1 can also be fine-tuned with the real data D; the fine-tuning loss is

L_A1' = α·(1/M)∑_{i=1}^M ‖ŷ_i − y_i‖₂²

where M is the number of real samples and α is the fine-tuning coefficient. The loss function of the generator may then be L_G*' = L_G* + L_A1'.
The invention has the following beneficial effects: the invention designs a regression-attention generative adversarial network (RA-GAN), introduces an attention mechanism into the generator and the discriminator, and takes the relationships inside the generated data variables into account during parameter training. Attention module 1 in the generator constructs a regression loss from the independent and dependent variables of the data produced by the generator; the real data are also used to fine-tune attention module 1. Attention module 2 in the discriminator builds a new loss from the difference between the regression losses of the real and generated data; by minimizing this loss it extracts the feature carrying the largest regression-difference information between real and generated data, which helps the discriminator take the regression information into account. The method augments the raw data with samples generated by the regression-attention GAN and then performs data-driven regression modeling, effectively improving the performance and prediction accuracy of the regression model.
Drawings
FIG. 1 is a flow chart of the regression-attention generative adversarial network (RA-GAN);
FIG. 2 is a flow chart of attention module 1 in the generator of RA-GAN;
FIG. 3 is a flow chart of attention module 2 in the discriminator of RA-GAN;
FIG. 4 is a flow chart of data-augmented regression modeling based on RA-GAN;
FIG. 5 compares the effect of WGAN-GP-generated and RA-GAN-generated data on data augmentation of a support vector regression model.
Detailed Description
The regression modeling method based on regression-attention generative adversarial network data augmentation of the present invention is described in further detail below with reference to specific embodiments.
The regression-attention generative adversarial network (RA-GAN) proposed by the invention borrows the basic structure of the Wasserstein GAN with a gradient penalty term and introduces two attention modules: attention module 1 in the generator and attention module 2 in the discriminator. In this invention the independent variables are process variables of an industrial process, and the dependent variable is the corresponding quality variable.
As shown in FIG. 1, the generator of the RA-GAN is a multi-layer perceptron: the input layer receives random noise z, the hidden layers are set to [32, 32], and the output layer emits the generated data D' = [X', Y'] = {[x'_1, y'_1], [x'_2, y'_2], …, [x'_N, y'_N]}, N ∈ Z, where N denotes the number of samples in D' and Z the integers. The generated data D' are split into independent variables x'_j and dependent variables y'_j, j ∈ [1, 2, …, N].
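A minimal sketch of such a generator forward pass, using the [32, 32] hidden layout from the text; the noise dimension, ReLU activations, and He-style initialization are illustrative assumptions not stated in the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

def make_mlp(sizes, rng):
    """Randomly initialised weights for a fully connected net (assumed He init)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def generator_forward(z, params):
    """Map noise z through the hidden layers to generated samples [x'_j, y'_j]."""
    h = z
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b  # linear output layer

noise_dim, data_dim = 10, 12              # 11 process variables + 1 quality variable
params = make_mlp([noise_dim, 32, 32, data_dim], rng)
z = rng.standard_normal((5, noise_dim))   # a batch of 5 noise vectors
d_fake = generator_forward(z, params)     # 5 generated samples [x'_j, y'_j]
x_fake, y_fake = d_fake[:, :-1], d_fake[:, -1]
```

The last column of each generated row plays the role of the dependent variable y'_j that attention module 1 supervises.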
As shown in FIG. 2, attention module 1 is a multi-layer perceptron connected to the generator, with hidden layer setting [32]. Its input is the independent variable x'_j of the generated data, and its output layer gives the predicted value ŷ'_j of the dependent variable. To make ŷ'_j as close as possible to y'_j, the loss function L_G* of the generator adds a new loss term L_A1 associated with attention module 1 and is updated as

L_G* = −E_{x∼P_g}[D(x)] + β·L_A1,    L_A1 = (1/N)∑_{j=1}^N ‖ŷ'_j − y'_j‖₂²

where D(·) is the output of the discriminator, E denotes expectation, x ∼ P_g means that sample x comes from the generated data, β is the regression coefficient of attention module 1, used to adjust the weight of the regression loss L_A1 relative to the generator loss L_G, and ‖·‖₂ denotes the two-norm. Meanwhile, the network parameters of attention module 1 can also be fine-tuned with the real data D = [X, Y] = {[x_1, y_1], [x_2, y_2], …, [x_M, y_M]}, M ∈ Z, i = 1, 2, …, M; in this case the independent variable x_i of the real data is used as input so as to constrain the regression relationship among the generated data variables, and the fine-tuning loss is

L_A1' = α·(1/M)∑_{i=1}^M ‖ŷ_i − y_i‖₂²

where M is the number of real samples and α is the fine-tuning coefficient. The loss function of the generator may then be L_G*' = L_G* + L_A1'.
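The updated generator objective can be evaluated numerically as below. The function names and the optional fine-tuning arguments are illustrative; in a real implementation the scores and predictions would come from the discriminator and attention module 1, and the losses would be minimized by backpropagation:

```python
import numpy as np

def attention1_loss(y_pred, y_true):
    """L_A1-style regression loss: mean squared error between the attention
    module's prediction and the dependent variable."""
    return float(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

def generator_loss(d_fake_scores, y_pred_fake, y_fake, beta,
                   y_pred_real=None, y_real=None, alpha=0.0):
    """L_G* = -E_{x~Pg}[D(x)] + beta * L_A1; when predictions on real data are
    supplied, the fine-tuning term L_A1' = alpha * MSE is added, giving L_G*'."""
    loss = -float(np.mean(d_fake_scores)) + beta * attention1_loss(y_pred_fake, y_fake)
    if y_pred_real is not None:
        loss += alpha * attention1_loss(y_pred_real, y_real)
    return loss
```

For example, discriminator scores [1, 3] on generated samples with predictions [1, 2] against targets [1, 0] and β = 0.5 give L_G* = −2 + 0.5·2 = −1.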
As shown in FIG. 3, attention module 2 is arranged at the front end of the discriminator network; the discriminator of RA-GAN is also a multi-layer perceptron, which judges whether the input data are real or generated, with hidden layer setting [32, 64, 32].
Attention module 2 is also a multi-layer perceptron, with hidden layer setting [32]. The input real or generated data are split into independent and dependent variables; the independent variable x_i of the real data or x'_j of the generated data is fed into attention module 2 and mapped by the network to the output ŷ_i or ŷ'_j. In order to incorporate the difference in the regression relationships of the real and generated data into the result of the discriminator, the loss function L_A2 of attention module 2 is designed as

L_A2 = γ[(1/M)∑_{i=1}^M ‖ŷ_i − y_i‖₂² − (1/N)∑_{j=1}^N ‖ŷ'_j − y'_j‖₂²]

where γ is the regression coefficient of attention module 2.
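A numerical reading of L_A2, hedged because the original formula survives only as an image: the loss is taken here as the γ-scaled difference between the regression losses on real and on generated data, consistent with the surrounding text:

```python
import numpy as np

def attention2_loss(yhat_real, y_real, yhat_fake, y_fake, gamma):
    """L_A2 = gamma * (regression loss on real data - regression loss on
    generated data). Minimising it pushes the module's feature to fit the real
    regression relation while magnifying the regression difference against the
    generated data; the exact sign convention is inferred from the text."""
    mse_real = np.mean((np.asarray(yhat_real) - np.asarray(y_real)) ** 2)
    mse_fake = np.mean((np.asarray(yhat_fake) - np.asarray(y_fake)) ** 2)
    return float(gamma * (mse_real - mse_fake))
```

With a perfect fit on real data and a regression error of 4 on a generated sample, γ = 0.5 yields L_A2 = 0.5·(0 − 4) = −2.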
Minimizing L_A2 makes the last hidden layer of attention module 2 extract the feature carrying the maximum regression information. The output ŷ_i or ŷ'_j of attention module 2 is then spliced with the corresponding input argument (the independent variable x_i of the real data or x'_j of the generated data) and fed into the subsequent discriminator multi-layer perceptron. Finally, the loss function L_D of the discriminator with attention module 2 is

L_D = −E_{x∼P_r}[D(x)] + E_{x∼P_g}[D(x)] + λ·E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖₂ − 1)²]

where the first term is the expected discriminator output on real data, with D(·) the output of the discriminator and x ∼ P_r meaning that sample x comes from the real data; the second term is the expected discriminator output on generated data; and the third term is the gradient penalty, in which λ is the gradient penalty factor, x̂ ∼ P_x̂ means that the sample comes from the space between the real and generated data, and ‖∇_x̂ D(x̂)‖₂ is the two-norm of the discriminator gradient.
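The discriminator objective above can be sketched as follows. To keep the sketch free of an autograd framework, it uses a toy linear critic whose input gradient is known analytically; a real implementation would differentiate the discriminator network at the interpolated points x̂:

```python
import numpy as np

rng = np.random.default_rng(1)

def critic_loss_wgan_gp(d_real, d_fake, grad_norms, lam):
    """L_D = -E_r[D(x)] + E_g[D(x)] + lam * E[(||grad D(x_hat)||_2 - 1)^2]."""
    gp = np.mean((np.asarray(grad_norms) - 1.0) ** 2)
    return float(-np.mean(d_real) + np.mean(d_fake) + lam * gp)

# For a linear critic D(x) = w.x + b the input gradient is w everywhere, so the
# penalty at interpolated points x_hat can be evaluated without autograd.
w, b = np.array([0.6, 0.8]), 0.0                 # ||w||_2 = 1 -> zero penalty
x_real = rng.standard_normal((4, 2))
x_fake = rng.standard_normal((4, 2))
eps = rng.uniform(size=(4, 1))
x_hat = eps * x_real + (1.0 - eps) * x_fake      # samples between real and fake
grad_norms = np.full(len(x_hat), np.linalg.norm(w))
loss = critic_loss_wgan_gp(x_real @ w + b, x_fake @ w + b, grad_norms, lam=10.0)
```

Because this critic already has unit gradient norm, the penalty term vanishes and the loss reduces to the Wasserstein terms alone.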
As shown in FIG. 4, a data-augmented regression modeling method for industrial processes is built on the above RA-GAN:
1. Collect raw data; after removing outliers and normalizing, obtain independent variables x_i and dependent variables y_i, i = 1~M.
2. Splice x_i and y_i to obtain the new data set D = [X, Y] = {[x_1, y_1], [x_2, y_2], …, [x_M, y_M]}, M ∈ Z.
3. Train one of the above RA-GANs unsupervised on D until it converges.
4. Use the trained RA-GAN to generate data D' = [X', Y'] = {[x'_1, y'_1], [x'_2, y'_2], …, [x'_N, y'_N]}, N ∈ Z.
5. Add the generated data D' to the real data set D to obtain the augmented data set {D, D'}.
6. Split the augmented data set into independent variables {X, X'} and dependent variables {Y, Y'} and use them to train the regression model.
7. Input the independent variables of the test data into the trained regression model to obtain the predicted dependent variables.
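The seven steps can be strung together as in the following skeleton. The `RAGAN` class, its `fit`/`sample` interface, and the placeholder sampler are hypothetical stand-ins, since the patent publishes no implementation:

```python
import numpy as np

class RAGAN:
    """Hypothetical stand-in for the trained RA-GAN of FIGS. 1-3; the fit/sample
    interface and the placeholder sampler are assumptions for illustration."""

    def __init__(self, data_dim, rng):
        self.data_dim, self.rng = data_dim, rng

    def fit(self, d_real, n_iter=12000):
        # Adversarial training of the generator, the discriminator and the two
        # attention modules would run here.
        return self

    def sample(self, n):
        # Placeholder sampler standing in for the trained generator.
        return self.rng.standard_normal((n, self.data_dim))

rng = np.random.default_rng(0)
d_real = rng.standard_normal((100, 12))        # M = 100 spliced samples [x_i, y_i]
gan = RAGAN(d_real.shape[1], rng).fit(d_real)  # step 3: train RA-GAN on D
d_fake = gan.sample(200)                       # step 4: generate D'
d_aug = np.vstack([d_real, d_fake])            # step 5: augmented set {D, D'}
X_aug, y_aug = d_aug[:, :-1], d_aug[:, -1]     # step 6: split for regression
```

Any downstream regressor (e.g. support vector regression, as in the case study below) would then be fitted on X_aug and y_aug.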
The performance of RA-GAN-based data-augmented regression modeling is illustrated below with a specific carbon dioxide absorption process. The carbon dioxide absorption tower is a key device in the urea synthesis industry; it removes carbon dioxide from the mixed gas to keep it from degrading the quality of the final product. However, the carbon dioxide content is difficult to measure in real time and requires off-line analysis by a mass spectrometer, so a soft sensor model for carbon dioxide must be built from a small sample.
Table 1: list of independent variables of carbon dioxide absorption process
The carbon dioxide absorption process has 11 independent variables in total, as listed in Table 1. We collected 300 samples containing the independent and dependent variables, of which 100 were used for training and the remaining 200 for testing. On the basis of a support vector regression model, data augmentation was performed with the Wasserstein GAN with gradient penalty (WGAN-GP) and with RA-GAN, respectively. WGAN-GP and RA-GAN were each run for 12000 iterations under the same learning rate and optimization algorithm; FIG. 5 shows the final results, with the root mean square error (RMSE) between the predictions of the regression model and the true values used as the comparison index.
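The comparison index used above can be computed as:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, the comparison index used in FIG. 5."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```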
FIG. 5 shows that WGAN-GP cannot improve the original regression model: because it does not consider the regression relationship among the generated variables, the data it generates add no new regression information. RA-GAN, by contrast, applies an attention mechanism to the variables during data generation and effectively preserves the regression relationships among them, so its augmentation effect is significant. In addition, as more generated data are added, the performance of the data-augmented regression model improves accordingly.

Claims (3)

1. A regression modeling method based on regression-attention generative adversarial network data augmentation, characterized by comprising the following steps:
(1) Collect raw data; after removing outliers and normalizing, obtain independent variables x_i and dependent variables y_i, i = 1~M.
(2) Splice x_i and y_i to obtain a new data set D = [X, Y] = {[x_1, y_1], [x_2, y_2], …, [x_M, y_M]}, M ∈ Z.
(3) Train one RA-GAN unsupervised on D until it converges.
(4) Use the trained RA-GAN to generate data D' = [X', Y'] = {[x'_1, y'_1], [x'_2, y'_2], …, [x'_N, y'_N]}, N ∈ Z.
(5) Add the generated data to the real data set D to obtain the augmented data set.
(6) Split the augmented data set into independent variables {X, X'} and dependent variables {Y, Y'} and use them to train the regression model.
(7) Input the independent variables of the test data into the trained regression model to obtain the predicted dependent variables.
2. The regression modeling method based on regression-attention generative adversarial network data augmentation according to claim 1, characterized in that the regression-attention generative adversarial network introduces two attention modules: attention module 1 in the generator and attention module 2 in the discriminator.
The generator is a multi-layer perceptron: random noise z is input to the input layer and the generated data D' are output by the output layer. The generated data are split into independent variables x'_j and dependent variables y'_j, j ∈ [1, 2, …, N]. Attention module 1 is a multi-layer perceptron connected to the generator; its input is the independent variable x'_j of the generated data and its output layer gives the predicted value ŷ'_j. To make ŷ'_j as close as possible to y'_j, the loss function of the generator adds a new loss term L_A1 and is updated as

L_G* = −E_{x∼P_g}[D(x)] + β·L_A1,    L_A1 = (1/N)∑_{j=1}^N ‖ŷ'_j − y'_j‖₂²

where D(·) is the output of the discriminator, x ∼ P_g means that sample x comes from the generator, and β is the regression coefficient of attention module 1.
The discriminator is also a multi-layer perceptron that judges whether the input is real data or generated data. Attention module 2, likewise a multi-layer perceptron, is arranged at the front end of the discriminator network. The input data D or D' are split into independent and dependent variables; the independent variables are fed into attention module 2 and mapped by the network to the outputs ŷ_i or ŷ'_j. The loss function of attention module 2 is

L_A2 = γ[(1/M)∑_{i=1}^M ‖ŷ_i − y_i‖₂² − (1/N)∑_{j=1}^N ‖ŷ'_j − y'_j‖₂²]

where γ is the regression coefficient of attention module 2. Then ŷ_i or ŷ'_j is spliced with the corresponding input argument x_i or x'_j and fed into the subsequent discriminator. The loss function of the discriminator with attention module 2 is

L_D = −E_{x∼P_r}[D(x)] + E_{x∼P_g}[D(x)] + λ·E_{x̂∼P_x̂}[(‖∇_x̂ D(x̂)‖₂ − 1)²]

where D(·) is the output of the discriminator, x ∼ P_r means that sample x comes from the real data, λ is the gradient penalty factor, x̂ ∼ P_x̂ means that the sample comes from the space between the real data and the generated data, and ‖∇_x̂ D(x̂)‖₂ is the two-norm of the discriminator gradient.
3. The regression modeling method based on regression-attention generative adversarial network data augmentation according to claim 1, characterized in that the network parameters of attention module 1 can also be fine-tuned with the real data D, the fine-tuning loss being

L_A1' = α·(1/M)∑_{i=1}^M ‖ŷ_i − y_i‖₂²

where M is the number of real samples and α is the fine-tuning coefficient. The loss function of the generator may then be L_G*' = L_G* + L_A1'.
CN202010994598.6A 2020-09-21 2020-09-21 Regression modeling method based on regression-attention generative adversarial network data augmentation Active CN112199637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010994598.6A CN112199637B (en) 2020-09-21 2020-09-21 Regression modeling method based on regression-attention generative adversarial network data augmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010994598.6A CN112199637B (en) 2020-09-21 2020-09-21 Regression modeling method based on regression-attention generative adversarial network data augmentation

Publications (2)

Publication Number Publication Date
CN112199637A true CN112199637A (en) 2021-01-08
CN112199637B CN112199637B (en) 2024-04-12

Family

ID=74015694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010994598.6A Active CN112199637B (en) 2020-09-21 2020-09-21 Regression modeling method based on regression-attention generative adversarial network data augmentation

Country Status (1)

Country Link
CN (1) CN112199637B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537247A (en) * 2021-08-13 2021-10-22 重庆大学 Data enhancement method for converter transformer vibration signal
CN117332048A (en) * 2023-11-30 2024-01-02 运易通科技有限公司 Logistics information query method, device and system based on machine learning

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180336471A1 (en) * 2017-05-19 2018-11-22 Mehdi Rezagholizadeh Semi-supervised regression with generative adversarial networks
CN110320162A (en) * 2019-05-20 2019-10-11 广东省智能制造研究所 A kind of semi-supervised high-spectral data quantitative analysis method based on generation confrontation network
CN111079509A (en) * 2019-10-23 2020-04-28 西安电子科技大学 Abnormal behavior detection method based on self-attention mechanism
CN111275647A (en) * 2020-01-21 2020-06-12 南京信息工程大学 Underwater image restoration method based on cyclic generation countermeasure network
CN111429340A (en) * 2020-03-25 2020-07-17 山东大学 Cyclic image translation method based on self-attention mechanism
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network
WO2020168731A1 (en) * 2019-02-19 2020-08-27 华南理工大学 Generative adversarial mechanism and attention mechanism-based standard face generation method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180336471A1 (en) * 2017-05-19 2018-11-22 Mehdi Rezagholizadeh Semi-supervised regression with generative adversarial networks
WO2020168731A1 (en) * 2019-02-19 2020-08-27 华南理工大学 Generative adversarial mechanism and attention mechanism-based standard face generation method
CN110320162A (en) * 2019-05-20 2019-10-11 广东省智能制造研究所 A kind of semi-supervised high-spectral data quantitative analysis method based on generation confrontation network
CN111079509A (en) * 2019-10-23 2020-04-28 西安电子科技大学 Abnormal behavior detection method based on self-attention mechanism
CN111275647A (en) * 2020-01-21 2020-06-12 南京信息工程大学 Underwater image restoration method based on cyclic generation countermeasure network
CN111429340A (en) * 2020-03-25 2020-07-17 山东大学 Cyclic image translation method based on self-attention mechanism
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Hongyu; Gu Zifeng: "A text-to-image generative adversarial network based on a self-attention mechanism", Journal of Chongqing University, no. 03 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537247A (en) * 2021-08-13 2021-10-22 重庆大学 Data enhancement method for converter transformer vibration signal
CN117332048A (en) * 2023-11-30 2024-01-02 运易通科技有限公司 Logistics information query method, device and system based on machine learning
CN117332048B (en) * 2023-11-30 2024-03-22 运易通科技有限公司 Logistics information query method, device and system based on machine learning

Also Published As

Publication number Publication date
CN112199637B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN109272156B (en) Ultra-short-term wind power probability prediction method
CN112199637A (en) 2021-01-08 Regression modeling method based on regression-attention generative adversarial network data augmentation
Fang et al. Structural pruning for diffusion models
US20220036231A1 (en) Method and device for processing quantum data
CN113011085A (en) Equipment digital twin modeling method and system
Duraipandian Adaptive algorithms for signature wavelet recognition in the musical sounds
CN109378014A (en) A kind of mobile device source discrimination and system based on convolutional neural networks
CN115809624B (en) Automatic analysis design method for integrated circuit microstrip line transmission line
CN114140398A (en) Few-sample defect detection method using defect-free image
CN117371543A (en) Enhanced soft measurement method based on time sequence diffusion probability model
CN113627597B (en) Method and system for generating countermeasure sample based on general disturbance
Jia et al. Federated domain adaptation for asr with full self-supervision
Yang et al. A Balanced Deep Transfer Network for Bearing Fault Diagnosis
Kashkin et al. HiFi-VC: High quality ASR-based voice conversion
Yuan et al. Multi-branch bounding box regression for object detection
CN110633516B (en) Method for predicting performance degradation trend of electronic device
CN112785088A (en) Short-term daily load curve prediction method based on DCAE-LSTM
CN116521564A (en) Software defect prediction method of multi-word embedded coding-gating fusion mechanism based on LSTM
CN115062551A (en) Wet physical process parameterization method based on time sequence neural network
CN115186584A (en) Width learning semi-supervised soft measurement modeling method integrating attention mechanism and adaptive composition
CN115169235A (en) Super surface unit structure inverse design method based on improved generation of countermeasure network
CN112231987B (en) Ionosphere forecasting method based on VMD and Elman neural network
Liu Construction and research of Guqin sound synthesis and plucking big data simulation model based on computer synthesis technology
Hu et al. Monthly Rainfall Prediction Based on VMD-GRA-Elman Model
CN111339646A (en) Temperature data enhancement method for full-automatic control

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zheng Junhua

Inventor after: Ge Zhiqiang

Inventor after: Jiang Xiaoyu

Inventor before: Ge Zhiqiang

Inventor before: Jiang Xiaoyu

CB03 Change of inventor or designer information