CN112990439A - Method for enhancing correlation of time series data under mine - Google Patents

Method for enhancing correlation of time series data under mine

Info

Publication number
CN112990439A
CN112990439A (application CN202110341123.1A)
Authority
CN
China
Prior art keywords
information
gate
time
time series
correlation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110341123.1A
Other languages
Chinese (zh)
Inventor
赵菊敏
李灯熬
李高飞
赵新平
李俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology
Priority to CN202110341123.1A
Publication of CN112990439A
Legal status: Pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/044 Recurrent networks, e.g. Hopfield networks
                            • G06N3/048 Activation functions
                            • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
                        • G06N3/08 Learning methods
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
                • G06Q10/00 Administration; Management
                    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
                • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
                    • G06Q50/02 Agriculture; Fishing; Forestry; Mining

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Animal Husbandry (AREA)
  • Agronomy & Crop Science (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of deep learning, in particular to a method for enhancing the correlation of time series data under a mine based on a long short-term memory (LSTM) neural network, which optimizes the results obtained by an LSTM model predicting a time series through an SGD algorithm. The invention treats the LSTM network as a special recurrent neural network (RNN): it inherits the RNN's distinctive advantage in time-series prediction problems while effectively resolving the gradient vanishing phenomenon the RNN suffers over long time steps, in which the gradient gradually shrinks until it disappears as time extends, thereby solving the problem of poor correlation of underground data.

Description

Method for enhancing correlation of time series data under mine
Technical Field
The invention relates to the technical field of deep learning, in particular to a method for enhancing the correlation of time series data under a mine based on a long short-term memory (LSTM) neural network.
Background
Coal is an important basic energy source in China: China's annual coal output has long accounted for over 40 percent of the total, and coal accounts for over 70 percent of energy resource consumption. Clearly, China is a major coal-resource country; coal is inseparable from the economic development of the whole country and provides powerful support for national pillar industries such as railway transportation and power generation. According to statistics, China's coal output keeps rising while the death rate has dropped greatly. However, although China has always paid attention to the hidden dangers behind coal safety accidents and taken effective measures to prevent them, the situation of coal mine production safety remains severe, and there is still a long way to go.
The data captured by sensors under a mine are complex and need to be supervised over long periods. Gas outburst, for example, is a complex, constantly changing process: the gas pressure, gas content, coal seam depth, ground stress and initial velocity of gas emission of the coal seam are all important factors in studying gas outburst, and gas time series data can be predicted well only by understanding the influence of these factors on the outburst. Although the traditional neural network has excellent nonlinear approximation capability, for time series data such as gas concentration a traditional neural network model cannot establish a connection with previous time steps when processing serialized data, so the learning efficiency is low and the accuracy insufficient; that is, the data correlation is poor.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a method for enhancing the correlation of mine underground time series data, which uses a long short-term memory neural network to solve the problem of poor correlation of underground data.
In order to solve the above technical problem, the invention adopts the following technical scheme: a method for enhancing the correlation of time series data under a mine, characterized in that the results obtained by an LSTM model predicting a time series are optimized through an SGD algorithm.
The invention has the beneficial effects that: the method uses the long short-term memory (LSTM) network as a special recurrent neural network (RNN), inherits the RNN's distinctive advantage in processing time-series prediction, and solves the problem of poor correlation of data in a mine, effectively resolving the gradient vanishing phenomenon that an RNN produces over long time steps, i.e., the gradient gradually shrinking until it disappears as time extends.
Drawings
FIG. 1 illustrates the working principle of the LSTM in the method for enhancing the correlation of time series data under a mine according to the present invention;
FIG. 2 is an expanded view of the LSTM in the method for enhancing the correlation of time series data under a mine according to the present invention.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1 and 2, a method for enhancing the correlation of time series data in a mine optimizes the results obtained by an LSTM model predicting a time series through the SGD algorithm.
Furthermore, the LSTM model comprises a storage unit for storing long-term information and three logic gate control units, namely an input gate, an output gate and a forgetting gate, used to control how much data flows through;
the logic gates are responsible for the memory module part of the whole network and modify the weight values at the edges.
Further, the LSTM model also includes the cell state of the memory cells, which is responsible for the memory function of the neurons in the network;
the input gate updates information, selectively writing it into the cell state and replacing old information or information that needs to be forgotten;
the forgetting gate selectively forgets information in the whole network;
the output gate stores the old time information in the hidden layer and predicts the next result at output.
Further, the working process of the memory cell C_t, input gate i_t, output gate O_t and forget gate f_t in the LSTM model at time t comprises:
f_t = σ(W_f·[h_{t-1}, x_t] + b_f)
i_t = σ(W_i·[h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C·[h_{t-1}, x_t] + b_C)
C_t = f_t·C_{t-1} + i_t·C̃_t
O_t = σ(W_o·[h_{t-1}, x_t] + b_o)
h_t = O_t·tanh(C_t)
where C̃_t denotes the candidate cell state; the input gate i_t, output gate O_t and forget gate f_t each consist of a sigmoid activation layer and an element-wise multiplication operation.
When the LSTM works, the previous output C_{t-1} and the current input x_t first pass through the sigmoid activation layer, which generates a number between 0 and 1; this number determines how much of the previous information C_{t-1} passes through, and the throughput depends directly on whether the input information at the current moment is valuable.
The LSTM then obtains the new information C_t of the current moment. This includes deciding, via the input gate's activation layer, which values to update; the C_{t-1} input at the previous moment is converted to the hidden layer state h_{t-1}, and the calculation is combined with the internal weights of the LSTM neural network model.
A tanh layer is used to produce the new candidate value C̃_t; the calculated result, added to the retained part of the previous information C_{t-1}, serves as the new information C_t of the current moment. In updating C_t, h_{t-1} is related to C_{t-1} through the activation function and gating unit by weighted summation.
Finally, an initial output is obtained through the sigmoid activation layer, C_t is processed through a tanh layer, and the two are multiplied to obtain the output of the network at the current moment t.
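For illustration, the working process above can be written as a minimal NumPy sketch of a single LSTM cell step. The function name, the weight layout W, b and the dimensions are illustrative assumptions, not part of the claimed method.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, C_prev, W, b):
    # One LSTM step at time t. W and b hold the parameters of the
    # forget (f), input (i), candidate (C) and output (o) layers;
    # each layer reads the concatenation [h_{t-1}, x_t].
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])      # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])      # input gate
    C_tilde = np.tanh(W["C"] @ z + b["C"])  # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde      # new cell state
    o_t = sigmoid(W["o"] @ z + b["o"])      # output gate
    h_t = o_t * np.tanh(C_t)                # hidden state / output
    return h_t, C_t

Iterating this step over a sequence while carrying (h_t, C_t) forward reproduces the recurrence that fig. 2 unrolls over time.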
Further, the SGD algorithm comprises taking the partial derivative of each sample's loss function with respect to θ to obtain the corresponding gradient, and updating θ:
θ = θ − η·∂L(θ; x(i), y(i))/∂θ
where η is the learning rate and (x(i), y(i)) is the i-th training sample.
Has the advantages that: the method uses the long short-term memory (LSTM) network as a special recurrent neural network (RNN), inherits the RNN's distinctive advantage in processing time-series prediction, and solves the problem of poor correlation of data in a mine, effectively resolving the gradient vanishing phenomenon that an RNN produces over long time steps, i.e., the gradient gradually shrinking until it disappears as time extends.
Example one
A method for enhancing the correlation of time series data in a mine comprises two parts: the first part provides an LSTM model which, unlike an RNN, does not suffer from gradient vanishing or gradient explosion when processing time series data; the second part optimizes the results obtained by the LSTM model predicting the time series through the SGD algorithm.
1. Long short-term memory (LSTM) neural network model
Compared with the RNN, the structurally more complex LSTM neural network model handles long-term dependence better: its time-series processing capability is enhanced and its learning capability for prediction is improved. To prevent the gradient vanishing phenomenon, the LSTM creates a "retention effect" between output and feedback, ensuring that a stable, persistent error signal exists within the structure. Specifically, an LSTM unit comprises a storage unit for storing long-term information and simultaneously uses three logic gate control units, the input gate, the output gate and the forgetting gate, to control the data flow. The logic gate units are independent and do not transmit their own activations to other neurons; instead, they are responsible for the memory module part of the whole network, modifying the weight magnitudes at the edges. The working principle of the LSTM is shown in fig. 1.
In fig. 1, it can be seen that the central Cell State is responsible for the memory function of the neurons in the entire network and corresponds to the brain. Around it, three gate units can be seen:
The Input Gate is used to update information, selectively writing it into the cell state and replacing old information that needs to be forgotten.
The Forget Gate selectively forgets information in the whole network; since the information at some moments is unimportant for the new content, selective discarding is necessary.
The Output Gate stores the old time information in the hidden layer and predicts the next result at output.
The input gate and the forgetting gate both act on the cell state, while the output gate acts on the hidden layer. In fact, the LSTM is a special RNN model: the interior of an ordinary RNN hidden layer is very simple, with the current output obtained from the current input and the previous output. Unlike the general RNN model, the hidden layer of the LSTM is relatively complex. An expanded view of the model is shown in fig. 2.
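For contrast with the simple RNN hidden layer just described, the entire update of an ordinary RNN cell fits in one line; the function name and weight layout are illustrative assumptions.

import numpy as np

def rnn_cell_step(x_t, h_prev, W, b):
    # An ordinary RNN cell: the new hidden state depends only on the
    # previous hidden state and the current input, with no gating.
    return np.tanh(W @ np.concatenate([h_prev, x_t]) + b)

The absence of gates is what makes the plain RNN simple, and also what lets its gradients shrink away over long time spans.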
The structure of the rectangular box at time t in FIG. 2 is the complete working process of the LSTM, which can be expressed by equations (1)-(6):
f_t = σ(W_f·[h_{t-1}, x_t] + b_f)    (1)
i_t = σ(W_i·[h_{t-1}, x_t] + b_i)    (2)
C̃_t = tanh(W_C·[h_{t-1}, x_t] + b_C)    (3)
C_t = f_t·C_{t-1} + i_t·C̃_t    (4)
O_t = σ(W_o·[h_{t-1}, x_t] + b_o)    (5)
h_t = O_t·tanh(C_t)    (6)
In the above formulas, f_t, i_t, O_t and the memory cell C_t constitute a memory block of the LSTM. f_t, i_t and O_t are the three gate control units of the LSTM, called the forget gate, the input gate and the output gate, respectively. Each plays its own role, selectively controlling the flow of information in the network, and usually consists of a sigmoid activation layer and an element-wise multiplication operation.
The working process of the LSTM is as follows: first, the previous output C_{t-1} and the current input x_t pass through the sigmoid activation layer, which generates a number between 0 and 1; the size of this number determines the throughput of the previous information C_{t-1}, and the throughput depends directly on whether the input information at the current moment is valuable. The network then needs to obtain the new information C_t of the current moment, which is divided into three steps:
1) The input gate's activation layer decides which values are to be updated; here the C_{t-1} input at the previous moment is again converted to the hidden layer state h_{t-1}, and the calculation is then combined with the internal weights of the LSTM neural network model.
2) A tanh layer is used to produce the new candidate value C̃_t; the calculated result, added to the retained part of the previous information C_{t-1}, serves as the new information C_t of the current moment. In updating C_t, h_{t-1} is related to C_{t-1} through the activation function and gating unit by weighted summation.
3) An initial output is obtained through the sigmoid activation layer, C_t is processed through a tanh layer, and the two are multiplied to obtain the output of the network at the current moment t.
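As a sketch of how part one of the method might look in practice, the following PyTorch model predicts the next value of a univariate underground time series (for example, a gas concentration reading) from a sliding window of past values; the layer sizes, window convention and class name are illustrative assumptions rather than values prescribed by the invention.

import torch
import torch.nn as nn

class SeriesLSTM(nn.Module):
    # A single-layer LSTM followed by a linear head that maps the
    # hidden state of the last time step to the next-value prediction.
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):               # x: (batch, window, 1)
        out, _ = self.lstm(x)           # hidden states h_t for every step
        return self.head(out[:, -1])    # predict from the final h_t

Such a model would then be trained with a stochastic gradient descent optimizer (e.g., torch.optim.SGD), which is the role of part two of the method, described next.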
2. Stochastic gradient descent method
Stochastic gradient descent (SGD) is a method for optimizing a function and finding its minimum. In traditional neural network models, the model is often trained with the gradient descent (GD) method, so called because the optimization resembles a descent process from larger to smaller values.
The gradient descent method is widely applied in traditional neural network models, but it also has certain defects: first, when the data set is large, computing the exact derivative of the current function f(x) is computationally expensive and greatly affects efficiency; second, if a relatively poor local optimum is encountered during the descent, where the derivative is 0, the gradient descent method stops working and the result is only a local minimum rather than the global minimum. SGD does not have this problem; it updates θ by taking the partial derivative of each sample's loss function with respect to θ to obtain the corresponding gradient:
θ = θ − η·∂L(θ; x(i), y(i))/∂θ
where η is the learning rate and (x(i), y(i)) is the i-th training sample.
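The per-sample update above can be sketched in a few lines of Python; the squared-error loss, the linear model and the learning rate lr are illustrative assumptions used only to make the update rule concrete.

import numpy as np

def sgd_epoch(theta, X, y, lr=0.01):
    # One SGD pass over the data: for each sample, differentiate that
    # sample's squared-error loss with respect to theta and step downhill.
    for i in np.random.permutation(len(X)):
        pred = X[i] @ theta                 # model output for sample i
        grad = 2.0 * (pred - y[i]) * X[i]   # d/dtheta of (pred - y_i)^2
        theta = theta - lr * grad           # theta <- theta - eta * grad
    return theta

Because each step uses only a single sample's gradient, the updates are noisy; this noise is precisely what lets SGD step past the shallow local minima that stall full-batch gradient descent.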
the above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (5)

1. A method for enhancing the correlation of time series data under a mine, characterized in that results obtained by an LSTM model predicting a time series are optimized through an SGD algorithm.
2. The method for enhancing the correlation of time series data under a mine according to claim 1, characterized in that the LSTM model comprises a storage unit for storing long-term information and three logic gate control units, namely an input gate, an output gate and a forgetting gate, used to control how much data flows through;
the logic gates are responsible for the memory module part of the whole network and modify the weight values at the edges.
3. The method for enhancing the correlation of time series data in a mine according to claim 2, characterized in that the LSTM model further comprises the cell state of the memory cells, which is responsible for the memory function of the neurons in the network;
the input gate updates information, selectively writing it into the cell state and replacing old information or information that needs to be forgotten;
the forgetting gate selectively forgets information in the whole network;
the output gate stores the old time information in the hidden layer and predicts the next result at output.
4. The method for enhancing the correlation of time series data under a mine according to claim 3, characterized in that the working process of the memory cell C_t, input gate i_t, output gate O_t and forget gate f_t in the LSTM model at time t comprises:
f_t = σ(W_f·[h_{t-1}, x_t] + b_f)
i_t = σ(W_i·[h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C·[h_{t-1}, x_t] + b_C)
C_t = f_t·C_{t-1} + i_t·C̃_t
O_t = σ(W_o·[h_{t-1}, x_t] + b_o)
h_t = O_t·tanh(C_t)
where C̃_t is the candidate cell state; the input gate i_t, output gate O_t and forget gate f_t each consist of a sigmoid activation layer and an element-wise multiplication operation;
when the LSTM works, the previous output C_{t-1} and the current input x_t first pass through the sigmoid activation layer, which generates a number between 0 and 1; this number determines how much of the previous information C_{t-1} passes through, and the throughput depends directly on whether the input information at the current moment is valuable;
the LSTM obtains the new information C_t of the current moment, including deciding, via the input gate's activation layer, which values to update; the C_{t-1} input at the previous moment is converted to the hidden layer state h_{t-1}, and the calculation is combined with the internal weights of the LSTM neural network model;
a tanh layer is used to produce the new candidate value C̃_t; the calculated result, added to the retained part of the previous information C_{t-1}, serves as the new information C_t of the current moment; in updating C_t, h_{t-1} is related to C_{t-1} through the activation function and gating unit by weighted summation;
an initial output is obtained through the sigmoid activation layer, C_t is processed through a tanh layer, and the two are multiplied to obtain the output of the network at the current moment t.
5. The method for enhancing the correlation of time series data under a mine according to claim 1, characterized in that the SGD algorithm comprises updating θ by taking the partial derivative of each sample's loss function with respect to θ to obtain the corresponding gradient:
θ = θ − η·∂L(θ; x(i), y(i))/∂θ
CN202110341123.1A 2021-03-30 2021-03-30 Method for enhancing correlation of time series data under mine Pending CN112990439A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110341123.1A CN112990439A (en) 2021-03-30 2021-03-30 Method for enhancing correlation of time series data under mine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110341123.1A CN112990439A (en) 2021-03-30 2021-03-30 Method for enhancing correlation of time series data under mine

Publications (1)

Publication Number Publication Date
CN112990439A 2021-06-18

Family

ID=76339128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110341123.1A Pending CN112990439A (en) 2021-03-30 2021-03-30 Method for enhancing correlation of time series data under mine

Country Status (1)

Country Link
CN (1) CN112990439A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325514A1 (en) * 2018-04-24 2019-10-24 Alibaba Group Holding Limited Credit risk prediction method and device based on lstm model
CN110147875A (en) * 2019-05-07 2019-08-20 西安交通大学 A kind of shield machine auxiliary cruise method based on LSTM neural network
CN110942101A (en) * 2019-11-29 2020-03-31 湖南科技大学 Rolling bearing residual life prediction method based on depth generation type countermeasure network
CN111079906A (en) * 2019-12-30 2020-04-28 燕山大学 Cement product specific surface area prediction method and system based on long-time and short-time memory network
CN112381591A (en) * 2020-12-04 2021-02-19 四川长虹电器股份有限公司 Sales prediction optimization method based on LSTM deep learning model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHENG ZHAO et al.: "LSTM network: a deep learning approach for short-term traffic forecast", IET Intelligent Transport Systems *
李伟山 et al.: "Application and design of LSTM in a coal mine gas prediction and early-warning system", Journal of Xi'an University of Science and Technology *
李高盛 et al.: "Research on short-term passenger flow prediction of urban bus stops based on LSTM", Journal of Highway and Transportation Research and Development *
陈杰浩 et al.: "Optimization of advertisement click-through rate prediction based on deep belief networks", Journal of Software *

Similar Documents

Publication Publication Date Title
Park et al. History matching and uncertainty quantification of facies models with multiple geological interpretations
Feng et al. Modeling non-linear displacement time series of geo-materials using evolutionary support vector machines
Wang et al. On a new method of estimating shear wave velocity from conventional well logs
Mohammadian et al. A case study of petrophysical rock typing and permeability prediction using machine learning in a heterogenous carbonate reservoir in Iran
US20230203925A1 (en) Porosity prediction method based on selective ensemble learning
Daolun et al. Physics-constrained deep learning for solving seepage equation
CN111079978B (en) Coal and gas outburst prediction method based on logistic regression and reinforcement learning
Bagheripour et al. Fuzzy ruling between core porosity and petrophysical logs: Subtractive clustering vs. genetic algorithm–pattern search
CN111058840A (en) Organic carbon content (TOC) evaluation method based on high-order neural network
Li et al. Leveraging LSTM for rapid intensifications prediction of tropical cyclones
Asadi et al. Development of optimal fuzzy models for predicting the strength of intact rocks
US20240303540A1 (en) Prediction device, prediction method, and recording medium
Liu Potential for evaluation of interwell connectivity under the effect of intraformational bed in reservoirs utilizing machine learning methods
Alguliyev et al. History matching of petroleum reservoirs using deep neural networks
Rai et al. Fast parameter estimation of generalized extreme value distribution using neural networks
Zhang et al. Real-time prediction of logging parameters during the drilling process using an attention-based Seq2Seq model
Ren et al. Research on the rate of penetration prediction method based on stacking ensemble learning
CN112990439A (en) Method for enhancing correlation of time series data under mine
CN109558615B (en) Oil-gas exploration decision tree analysis method and system
CN113705878B (en) Method and device for determining water yield of horizontal well, computer equipment and storage medium
Du et al. A data-driven model for production prediction of strongly heterogeneous reservoir under uncertainty
Juda et al. An attempt to boost posterior population expansion using fast machine learning algorithms
Tripathi et al. Deep Learning–Based Production Forecasting and Data Assimilation in Unconventional Reservoir
Siwik et al. Hybridization of isogeometric finite element method and evolutionary multi-agent system as a tool-set for multiobjective optimization of liquid fossil fuel reserves exploitation with minimizing groundwater contamination
Sreenivasan et al. Automatic sucker rod pump fault diagnostics by transfer learning using googlenet integrated machine learning classifiers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618