CN114661754B - Water pollution unsupervised early warning method based on fractional guide regularization network - Google Patents
- Publication number
- CN114661754B (application CN202210067624.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- network
- self
- training
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/214: Pattern recognition; analysing; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F16/245: Information retrieval of structured data, e.g. relational data; querying; query processing
- G06F16/29: Information retrieval of structured data; geographical information databases
- G06N3/02: Computing arrangements based on biological models; neural networks
Abstract
The invention discloses an unsupervised water-pollution early-warning method based on a score-guided regularization network, comprising the following steps: step S1, establish a data set; step S2, construct a self-encoding (auto-encoder) neural network; step S3, calculate the loss value of the training data; step S4, train the self-encoding neural network; step S5, identify the data to be tested with the trained model. The method designs a score-guided regularized scoring network that learns the difference between normal and abnormal data, a difference that is progressively reinforced during training. Integrating the scoring network into the auto-encoder structure removes the usual restriction that an auto-encoder be trained only on normal data, and the scoring network can likewise be integrated into other unsupervised anomaly-detection methods.
Description
Technical Field
The invention relates to the field of water-pollution early warning, and in particular to an unsupervised water-pollution early-warning method based on a score-guided regularization network.
Background
At present, sudden water-pollution incidents caused by intentional or accidental chemical leakage occur continually. Frequent pollution events, and the serious harm that accompanies them, have drawn broad public attention and have driven the development and application of pollution early-warning and response technology.
China deployed its national automatic water-quality monitoring system in 2008, enabling water-quality data for a water area to be acquired automatically. As the state attaches growing importance to river water-quality management, monitoring infrastructure has become increasingly complete, data accumulation increasingly rich, and big-data-supported water-environment management increasingly mature. Recent environmental research makes full use of disciplines such as satellite remote sensing, numerical information, simulation modelling, and multimedia imagery, supplemented by newer means such as the Internet, big data, and artificial intelligence, to describe changes in large-scale environmental systems quantitatively, so as to explain and forecast environmental dynamics and disasters accurately.
Because water-pollution data are nonlinear, non-stationary, and ambiguous, they are difficult to fit with shallow machine-learning methods or generic deep-learning methods well enough for predictive early warning. Moreover, most existing unsupervised methods require that the input data be normal, whereas in practice normal and abnormal data are usually mixed together. The unsupervised water-pollution early-warning method based on a score-guided regularization network is therefore proposed.
Disclosure of Invention
The invention aims to overcome the above shortcomings by providing an unsupervised water-pollution early-warning method based on a score-guided regularization network. A score-guided regularized scoring network is designed that learns the difference between normal and abnormal data, a difference that is progressively reinforced during training. The scoring network is integrated into a self-encoding (auto-encoder) neural network structure, removing the restriction that the auto-encoder be fed only normal data, and it can also be integrated into other unsupervised anomaly-detection methods, thereby effectively solving the problems identified in the background section.
To this end, the invention provides the following technical solution: an unsupervised water-pollution early-warning method based on a score-guided regularization network, comprising the following steps:
Step S1: establish a data set. Generate the data set from the hourly data of the water-quality monitoring micro-stations using a sliding window, label the data in each window according to the standard limits of the basic items of the surface-water environmental quality standard, and divide the data set into a training set and a test set in a fixed proportion, the training set carrying no labels;
Step S2: construct a self-encoding neural network comprising an encoder, a decoder, and a scoring network. The encoder and the decoder each consist of a fully connected layer and a ReLU layer; the scoring network evaluates an exceedance score for the data; the decoder and the scoring network are each connected after the encoder;
Step S3: calculate the loss value of the training data, which is required during construction. The loss has two parts: one is the reconstruction loss of the self-encoding neural network, which ensures that the network produces a suitable hidden vector; the other is score-guided regularization, which separates normal data from exceedance data so that the scores of normal data approach 0 while the scores of exceedance data approach the upper score limit;
Step S4: train the self-encoding neural network constructed in step S2 with the data set built in step S1, using the loss function of step S3 during training;
Step S5: after training, identify the data to be tested with the model: the data pass through the encoder and then the scoring network to obtain a score, and the score is used to judge whether the data exceed the standard.
Here, the surface-water environmental quality standard refers to GB 3838-2002, Environmental Quality Standards for Surface Water.
Further, in step S1, the construction of the data set comprises the following steps:
Step S11: handle outliers and missing values in the raw time-series data, detecting outliers with the interquartile range (IQR) of the box plot and filling missing data by linear interpolation;
Step S12: sample the time series with a sliding window to obtain equal-length samples;
Step S13: apply Z-score normalization to the data obtained in S12 using the formula
x̂_i = (x_i − μ) / σ,
where x_i is the original datum, x̂_i the normalized datum, μ the mean of the data, and σ the standard deviation of the data.
Further, in step S12, the sliding-window length is 27, the first 24 data points serving as input and the last 3 as output.
Further, in step S1, the data set is divided into a training set and a test set in the proportion 70% / 30%.
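The preprocessing of step S1 (steps S11 to S13, with the window length and split ratio above) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the helper name `preprocess`, the stride-1 window, and the synthetic input series are all assumptions.

```python
import numpy as np

def preprocess(series, window=27, train_frac=0.7):
    """Illustrative sketch of step S1 (hypothetical helper, stride-1 windows)."""
    x = series.astype(float).copy()

    # S11: detect outliers with the box-plot interquartile-range (IQR) rule,
    # then treat them as missing and fill all gaps by linear interpolation.
    q1, q3 = np.percentile(x[~np.isnan(x)], [25, 75])
    iqr = q3 - q1
    bad = np.isnan(x) | (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)
    idx = np.arange(len(x))
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])

    # S12: slide a length-27 window over the series
    # (the first 24 points of each window are input, the last 3 are output).
    windows = np.stack([x[i:i + window] for i in range(len(x) - window + 1)])

    # S13: Z-score normalization, (x - mu) / sigma.
    windows = (windows - windows.mean()) / windows.std()

    # Split 70% / 30% into training and test sets.
    n_train = int(train_frac * len(windows))
    return windows[:n_train], windows[n_train:]

rng = np.random.default_rng(0)
hourly = np.sin(np.arange(200) / 5.0) + rng.normal(0, 0.1, 200)
train, test = preprocess(hourly)
print(train.shape, test.shape)
```

With a 200-point series, the stride-1 window yields 174 samples, 121 of which land in the training set.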
Further, in step S2, the self-encoding neural network comprises:
An encoder:
the first layer is a fully connected layer with input dimension 24 and output dimension 20;
the second layer is a ReLU layer;
A decoder:
the first layer is a fully connected layer with input dimension 20 and output dimension 24;
the second layer is a ReLU layer;
A scoring network:
the first layer is a fully connected layer with input dimension 20 and output dimension 10;
the second layer is a fully connected layer with input dimension 10 and output dimension 1.
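The listed dimensions can be checked with a minimal, untrained forward pass; the random weights below are purely illustrative (biases are omitted for brevity), and only the tensor shapes reflect the description:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Untrained random weights with the dimensions listed above (biases omitted).
W_enc = 0.1 * rng.normal(size=(24, 20))   # encoder: fully connected, 24 -> 20
W_dec = 0.1 * rng.normal(size=(20, 24))   # decoder: fully connected, 20 -> 24
W_s1 = 0.1 * rng.normal(size=(20, 10))    # scoring network: 20 -> 10
W_s2 = 0.1 * rng.normal(size=(10, 1))     # scoring network: 10 -> 1

x = rng.normal(size=(5, 24))      # a batch of five 24-point input windows
h = relu(x @ W_enc)               # encoder output: the hidden vector
x_rec = relu(h @ W_dec)           # decoder output: the reconstruction
score = (h @ W_s1) @ W_s2         # scoring-network output: one score each

print(h.shape, x_rec.shape, score.shape)
```

The decoder and the scoring network both consume the same 20-dimensional hidden vector, which is what lets the scoring head be bolted onto an ordinary auto-encoder.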
Further, in step S3, a loss value is calculated by a loss function, which includes the following two parts:
(1) The reconstruction loss of the self-encoding neural network:
L_RE = (1/N) Σ_{i=1..N} ‖x_i − x̂_i‖₂²,
where x_i is the input data, x̂_i the data reconstructed by the self-encoding neural network, and N the number of data items; that is, the L2 norm is taken as the loss value of the self-encoding neural network;
(2) The score-guided regularization:
L_SE = (1/N) Σ_{i=1..N} [ I(e_i ≤ ε)·|s_i − μ0| + γ·I(e_i > ε)·max(0, a − s_i) ],
where s_i is the score produced by the scoring network, e_i = ‖x_i − x̂_i‖₂² the reconstruction error of sample i, I(·) the indicator function, a the upper score limit (set to 6), μ0 a very small random positive number that keeps the scores of normal data close to 0, ε the threshold separating normal data from exceedance data, and γ a hyperparameter adjusting the loss weight of the exceedance data;
The final loss function is the weighted sum of the two parts:
L = L_RE + θ·L_SE,
where θ is a hyperparameter adjusting the relative weight of the two loss terms.
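A sketch of the two-part loss follows. The exact functional form of the regularization term is inferred from the description (pull normal scores toward μ0, pull exceedance scores toward a = 6), and the helper name, signature, and default hyperparameter values are assumptions:

```python
import numpy as np

def score_guided_loss(x, x_rec, s, theta=1.0, a=6.0, mu0=1e-3, gamma=1.0):
    """Two-part loss L = L_RE + theta * L_SE (illustrative reading)."""
    # Part (1): mean squared L2 reconstruction error.
    err = np.sum((x - x_rec) ** 2, axis=1)
    l_re = err.mean()

    # Boundary between normal and exceedance samples: the 90th percentile
    # of the reconstruction errors (as recomputed each epoch in step S4).
    eps = np.percentile(err, 90)
    normal = err <= eps

    # Part (2): pull normal scores toward mu0 (close to 0) and suspected
    # exceedance scores toward the upper limit a = 6, weighted by gamma.
    l_se = np.mean(np.where(normal,
                            np.abs(s - mu0),
                            gamma * np.maximum(0.0, a - s)))
    return l_re + theta * l_se

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 24))
x_rec = x + 0.1 * rng.normal(size=x.shape)
loss = score_guided_loss(x, x_rec, s=rng.normal(size=32))
print(float(loss) > 0.0)
```

Because no label is ever consulted, the boundary ε comes entirely from the model's own reconstruction errors, which is what keeps the method unsupervised.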
Further, in step S4, the training process is as follows:
Step S41: before each training epoch, feed the data set through the current self-encoding neural network, compute the L2-norm reconstruction errors, and find their 90th percentile;
Step S42: input the data set to the self-encoding neural network in batches for training, using the 90th percentile obtained in step S41 as the boundary between normal values and exceedance values;
Step S43: calculate the loss value of the current batch of data;
Step S44: back-propagate the gradients and optimize the network parameters with the Adam optimizer, with a learning rate of 0.0001;
Step S45: repeat the above process until training ends.
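Steps S41 to S43 can be outlined as below. The `reconstruct` stand-in replaces the real auto-encoder so the per-epoch thresholding logic can run on its own, and the Adam update of step S44 is deliberately omitted; everything else (batch size, percentile) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 24))   # placeholder training windows

def reconstruct(x):
    # Stand-in for the current auto-encoder (identity plus noise), so the
    # per-epoch thresholding logic can run without a trained network.
    return x + 0.05 * rng.normal(size=x.shape)

for epoch in range(2):
    # S41: feed the whole data set through the current network, take the
    # per-sample L2 errors, and use their 90th percentile as the boundary.
    err = np.sum((data - reconstruct(data)) ** 2, axis=1)
    eps = np.percentile(err, 90)

    # S42/S43: iterate in batches; samples whose error exceeds eps count as
    # exceedance candidates when the batch loss is computed.
    for start in range(0, len(data), 32):
        batch = data[start:start + 32]
        batch_err = np.sum((batch - reconstruct(batch)) ** 2, axis=1)
        is_exceedance = batch_err > eps
        # S44 (omitted): back-propagate and apply an Adam update, lr = 1e-4.

print(float(np.mean(err > eps)))
```

By construction, roughly 10% of the samples fall above the boundary each epoch, so the regularization term always has some exceedance candidates to push toward the upper score limit.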
Further, in step S5, the judgment process is as follows:
Step S51: the input data pass through the encoder to obtain a hidden vector;
Step S52: the hidden vector obtained in step S51 is input into the scoring network to obtain the data score;
Step S53: the score is used to judge whether the data exceed the standard; the higher the score, the more likely the data are exceedance data.
Compared with the prior art, the invention has the following beneficial effects: the method designs a score-guided regularized scoring network that learns the difference between normal and abnormal data, a difference progressively reinforced during training; integrating the scoring network into the self-encoding neural network structure removes the restriction to normal-data input; and because the scoring network can be integrated into different unsupervised anomaly-detection methods, the cost of developing other anomaly-detection methods is reduced.
Detailed Description
The technical solutions are described clearly and completely below through examples, which are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Embodiment 1
The unsupervised water-quality-pollution early-warning method based on a score-guided regularization network comprises the following steps:
Step S1: establish a data set. Generate the data set from the hourly data of the water-quality monitoring micro-stations using a sliding window, label the data in each window according to the standard limits of the basic items of the surface-water environmental quality standard, and divide the data set into a training set and a test set in the proportion 70% / 30%. Following the unsupervised approach, the training set carries no labels, and the training data are Z-score normalized to avoid overfitting;
Step S2: construct a self-encoding neural network comprising an encoder and a decoder, each consisting of a fully connected layer and a ReLU layer; construct a scoring network for evaluating the exceedance score of the data: the greater the probability that the data exceed the standard, the higher the score, and the more likely a water-pollution warning must be issued; the scoring network and the decoder are both connected after the encoder;
Step S3: calculate the loss value of the training data, which is required during construction. The loss has two parts: one is the reconstruction loss of the self-encoding neural network, which ensures that the network produces a suitable hidden vector; the other is score-guided regularization, which separates normal data from exceedance data so that the scores of normal data approach 0 while the scores of exceedance data approach the upper score limit;
Step S4: train the self-encoding neural network constructed in step S2 with the data set built in step S1, using the loss function of step S3 during training;
Step S5: after training, identify the data to be tested with the model: the data pass through the encoder and then the scoring network to obtain a score, and the score is used to judge whether the data exceed the standard.
In step S1, the construction of the data set comprises the following steps:
Step S11: handle outliers and missing values in the raw time-series data, detecting outliers with the interquartile range (IQR) of the box plot and filling missing data by linear interpolation;
Step S12: sample the time series with a sliding window to obtain equal-length samples; the window length is 27, with the first 24 data points as input and the last 3 as output;
Step S13: apply Z-score normalization to the data obtained in S12 using the formula
x̂_i = (x_i − μ) / σ,
where x_i is the original datum, x̂_i the normalized datum, μ the mean of the data, and σ the standard deviation of the data.
Embodiment 2
This embodiment differs from Embodiment 1 in the following respects:
In this embodiment, in step S2, the self-encoding neural network comprises:
An encoder:
the first layer is a fully connected layer with input dimension 24 and output dimension 20;
the second layer is a ReLU layer;
A decoder:
the first layer is a fully connected layer with input dimension 20 and output dimension 24;
the second layer is a ReLU layer;
A scoring network:
the first layer is a fully connected layer with input dimension 20 and output dimension 10;
the second layer is a fully connected layer with input dimension 10 and output dimension 1.
In this embodiment, in step S3, the loss value is calculated by a loss function comprising the following two parts:
(1) The reconstruction loss of the self-encoding neural network:
L_RE = (1/N) Σ_{i=1..N} ‖x_i − x̂_i‖₂²,
where x_i is the input data, x̂_i the data reconstructed by the self-encoding neural network, and N the number of data items; that is, the L2 norm is taken as the loss value of the self-encoding neural network;
(2) The score-guided regularization:
L_SE = (1/N) Σ_{i=1..N} [ I(e_i ≤ ε)·|s_i − μ0| + γ·I(e_i > ε)·max(0, a − s_i) ],
where s_i is the score produced by the scoring network, e_i = ‖x_i − x̂_i‖₂² the reconstruction error of sample i, I(·) the indicator function, a the upper score limit (set to 6), μ0 a very small random positive number that keeps the scores of normal data close to 0, ε the threshold separating normal data from exceedance data, and γ a hyperparameter adjusting the loss weight of the exceedance data;
The final loss function is the weighted sum of the two parts:
L = L_RE + θ·L_SE,
where θ is a hyperparameter adjusting the relative weight of the two loss terms.
Embodiment 3
This embodiment differs from the foregoing embodiments in the following respects:
In this embodiment, in step S4, the training process is as follows:
Step S41: before each training epoch, feed the data set through the current self-encoding neural network, compute the L2-norm reconstruction errors, and find their 90th percentile;
Step S42: input the data set to the self-encoding neural network in batches for training, using the 90th percentile obtained in step S41 as the boundary between normal values and exceedance values;
Step S43: calculate the loss value of the current batch of data;
Step S44: back-propagate the gradients and optimize the network parameters with the Adam optimizer, with a learning rate of 0.0001;
Step S45: repeat the above process until training ends.
In this embodiment, in step S5, the judgment process is as follows:
Step S51: the input data pass through the encoder to obtain a hidden vector;
Step S52: the hidden vector obtained in step S51 is input into the scoring network to obtain the data score;
Step S53: the score is used to judge whether the data exceed the standard; the higher the score, the more likely the data are exceedance data.
The unsupervised water-quality-pollution early-warning method based on a score-guided regularization network provided by the invention designs a score-guided regularized scoring network that learns the difference between normal and abnormal data, a difference progressively reinforced during training. Integrating the scoring network into the self-encoding neural network structure removes the restriction of the auto-encoder to normal-data input. Water-quality data are acquired automatically from the water-quality monitoring micro-stations and fed into the trained self-encoding neural network, which outputs a data score; the score is used to judge whether the data exceed the standard, and an early warning is then issued for any water area whose data exceed the standard.
The foregoing description is only an embodiment of the invention and does not limit its scope; any equivalent structure or equivalent process, and any direct or indirect application in other related technical fields, likewise falls within the protection scope of the invention.
Claims (7)
1. An unsupervised water-pollution early-warning method based on a score-guided regularization network, characterized by comprising the following steps:
Step S1: establish a data set: generate the data set from the hourly data of the water-quality monitoring micro-stations using a sliding window, label the data in each window according to the standard limits of the basic items of the surface-water environmental quality standard, and divide the data set into a training set and a test set in a fixed proportion, the training set carrying no labels;
Step S2: construct a self-encoding neural network comprising an encoder, a decoder, and a scoring network, the encoder and the decoder each consisting of a fully connected layer and a ReLU layer, the scoring network evaluating an exceedance score for the data, and the decoder and the scoring network each being connected after the encoder;
Step S3: calculate the loss value of the training data, which is required during construction, the loss comprising two parts: one part is the reconstruction loss of the self-encoding neural network, ensuring that the network produces a suitable hidden vector; the other part is score-guided regularization, separating normal data from exceedance data so that the scores of normal data approach 0 while the scores of exceedance data approach the upper score limit;
The loss value is calculated by a loss function comprising the following two parts:
(1) The reconstruction loss of the self-encoding neural network:
L_RE = (1/N) Σ_{i=1..N} ‖x_i − x̂_i‖₂²,
where x_i is the input data, x̂_i the data reconstructed by the self-encoding neural network, and N the number of data items; that is, the L2 norm is taken as the loss value of the self-encoding neural network;
(2) The score-guided regularization:
L_SE = (1/N) Σ_{i=1..N} [ I(e_i ≤ ε)·|s_i − μ0| + γ·I(e_i > ε)·max(0, a − s_i) ],
where s_i is the score produced by the scoring network, e_i = ‖x_i − x̂_i‖₂² the reconstruction error of sample i, I(·) the indicator function, a the upper score limit (set to 6), μ0 a very small random positive number that keeps the scores of normal data close to 0, ε the threshold separating normal data from exceedance data, and γ a hyperparameter adjusting the loss weight of the exceedance data;
The final loss function is the weighted sum of the two parts:
L = L_RE + θ·L_SE,
where θ is a hyperparameter adjusting the relative weight of the two loss functions;
Step S4: training the self-coding neural network constructed in the step S2 by using the data set constructed in the step S1, wherein a loss function in the step S3 is used in the training process;
step S5: after training, the data to be tested are identified by using the model, the data sequentially pass through the encoder and the scoring network, a score is obtained, and whether the data exceeds the standard is judged by using the score.
2. The method for unsupervised early warning of water pollution based on score-guided regularized network according to claim 1, wherein in step S1, the process of establishing a data set comprises the following steps:
Step S11: performing outlier and missing value processing on the original time sequence data, detecting outlier by using a quartile range pair of the box graph, and processing missing data by using a linear interpolation method;
Step S12: sampling the time sequence by using a sliding window to obtain data sets with equal length;
Step S13: the data in S12 is subjected to Z-score normalization processing using the following formula:
wherein x i represents the original data, Represents normalized data, μ represents the mean value of the data, and σ represents the standard deviation of the data.
3. The unsupervised water-pollution early-warning method based on a score-guided regularization network according to claim 2, characterized in that in step S12 the sliding-window length is 27, the first 24 data points serving as input and the last 3 as output.
4. The water pollution unsupervised early warning method based on the score-guided regularized network as claimed in claim 1, wherein in step S1, the data set is divided into a training set and a testing set according to the proportion of 70% and 30%.
5. The method for unsupervised early warning of water pollution based on fractional guide regularization network of claim 1, wherein in step S2, the self-coding neural network comprises:
An encoder:
the first layer is a fully connected layer with input dimension 24 and output dimension 20;
the second layer is a ReLU layer;
A decoder:
the first layer is a fully connected layer with input dimension 20 and output dimension 24;
the second layer is a ReLU layer;
A scoring network:
the first layer is a fully connected layer with input dimension 20 and output dimension 10;
the second layer is a fully connected layer with input dimension 10 and output dimension 1.
6. The water quality pollution unsupervised early warning method based on the score-guided regularized network as claimed in claim 1, wherein in step S4, the training process is as follows:
Step S41: before training of each round is started, inputting a data set into a current self-coding neural network, calculating an L2 norm, and then finding out 90 quantiles in the L2 norm;
step S42: inputting the data set in batches to the self-coding neural network for training, and using the 90 quantiles obtained in the step S41 as boundaries of normal values and superscript values;
Step S43: calculating a loss value of the current batch data;
Step S44: gradient back propagation, optimizing network parameters by using an Adam optimizer, wherein the step size is 0.0001;
step S45: the above process is repeated until the training is finished.
7. The water quality pollution unsupervised early warning method based on the score-guided regularized network as claimed in claim 1, wherein in step S5, the determination process is as follows:
step S51: the input data is processed by an encoder to obtain a hidden vector;
Step S52: inputting the hidden vectors obtained in the step S51 into a scoring network to obtain data scores;
Step S53: whether the data exceeds the standard is judged by the score, and the higher the score is, the higher the possibility that the data is exceeding the standard is indicated.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210067624.XA (CN114661754B) | 2022-01-20 | 2022-01-20 | Water pollution unsupervised early warning method based on fractional guide regularization network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114661754A (en) | 2022-06-24 |
| CN114661754B (en) | 2024-05-03 |
Family
ID=82026052
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210067624.XA | Active CN114661754B (en) | 2022-01-20 | 2022-01-20 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN114661754B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110009030A * | 2019-03-29 | 2019-07-12 | 华南理工大学 | Sewage treatment fault-diagnosis method based on a stacking meta-learning strategy |
| WO2020192166A1 * | 2019-03-24 | 2020-10-01 | 北京工业大学 | Method for soft measurement of dioxin emission concentration in the municipal solid-waste incineration process |
| CN112149353A * | 2020-09-24 | 2020-12-29 | 南京大学 | Method for identifying DNAPL pollutant distribution in an underground aquifer based on a convolutional neural network |
| CN113092684A * | 2021-04-07 | 2021-07-09 | 青岛理工大学 | Air-quality inference method based on space-time matrix decomposition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210319033A1 (en) * | 2020-04-09 | 2021-10-14 | Microsoft Technology Licensing, Llc | Learning to rank with alpha divergence and entropy regularization |
Non-Patent Citations (2)
Title |
---|
Robust Regression with Data-Dependent Regularization Parameters and Autoregressive Temporal Correlations; Na Wang; Environmental Modeling & Assessment; 2018-04-23; 779-786 *
Identifying Parameters of a Fractional-Order Groundwater Pollution Model via a Variable-Step-Size Gradient Regularization Algorithm; Xing Liying; Journal of Lanzhou Jiaotong University; 2017-06-15; 92-96 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||