CN115630101A - Hydrological parameter intelligent monitoring and water resource big data management system - Google Patents


Info

Publication number
CN115630101A
Authority
CN
China
Prior art keywords
neural network
output
parameter
input
network model
Prior art date
Legal status
Granted
Application number
CN202211301484.4A
Other languages
Chinese (zh)
Other versions
CN115630101B (en)
Inventor
王林涛
丁唯峰
李文轩
孙宁怡
马从国
孙娜
陈帅
周恒瑞
李亚洲
柏小颖
秦小芹
金德飞
王建国
马海波
丁晓红
王苏琪
黄凤芝
夏奥运
宗佳文
Current Assignee
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202211301484.4A priority Critical patent/CN115630101B/en
Publication of CN115630101A publication Critical patent/CN115630101A/en
Application granted granted Critical
Publication of CN115630101B publication Critical patent/CN115630101B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2462 Approximate or statistical queries
    • G06F16/248 Presentation of query results
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/252 Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00 Information sensed or collected by the things
    • G16Y20/10 Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • G16Y40/00 IoT characterised by the purpose of the information processing
    • G16Y40/10 Detection; Monitoring
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40 Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Fuzzy Systems (AREA)
  • Environmental & Geological Engineering (AREA)
  • Toxicology (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a hydrological parameter intelligent monitoring and water resource big data management system, which comprises a water regime detection and control subsystem and an Internet of Things water resource big data management subsystem, realizing intelligent detection and adjustment of hydrological parameters and water resource big data management. Aiming at the low degree of automation of traditional hydrological data monitoring and water resource management, the system uses Internet of Things, wireless communication and intelligent control technologies to acquire hydrological data automatically through a number of fixed monitoring nodes, and then sends the acquired data in real time through a wireless communication unit to a cloud platform for processing, analysis, storage and sharing, guaranteeing timely early warning and automatic adjustment of hydrological data and realizing centralized monitoring and management of distributed hydrological data.

Description

Hydrological parameter intelligent monitoring and water resource big data management system
Technical Field
The invention relates to the technical field of hydrologic data monitoring and management, in particular to a hydrologic parameter intelligent monitoring and water resource big data management system.
Background
Hydrological monitoring not only provides a powerful data basis for flood-control and disaster-prevention research, but also plays a vital role in making sustainable water resource utilization decisions. China is vast, and its rivers, lakes, reservoirs, channels, groundwater and drinking-water resources are widely distributed and criss-crossing, forming a distinctive water resource system. In recent years, however, abnormal weather has been frequent; when strong natural rainfall occurs, the water level and flow of water resources also rise, greatly threatening river banks and reservoir dams, and some regions even face the danger of dam breach, endangering people's lives and property, so the need for hydrological monitoring is ever more urgent. By closely combining the Internet of Things, artificial intelligence, big data, cloud services and hydrological control, an Internet of Things intelligent water resource monitoring and information management system can be applied to scenes such as remote monitoring of rivers, lakes, reservoirs, channels, groundwater and drinking-water resources, online analysis of water level, flow and water quality, rainfall telemetry and remote video monitoring, so that key hydrological data such as water level and flow, reservoir level and precipitation are grasped in time, and the storage, query, statistical analysis and multi-form display of hydrological monitoring data are realized.
Disclosure of Invention
The invention discloses a hydrological parameter intelligent monitoring and water resource big data management system, which aims at the problem of low automation degree of traditional hydrological data monitoring and water resource management, utilizes the technologies of Internet of things, wireless communication and intelligent control, automatically acquires hydrological data by arranging a plurality of fixed monitoring nodes, and then sends the acquired data to a cloud platform in real time through a wireless communication unit for processing, analyzing, storing and sharing, thereby providing guarantee for timely early warning and automatic adjustment of hydrological data and realizing centralized monitoring and management of distributed hydrological data.
In order to solve the problems, the invention adopts the following technical scheme:
the hydrological parameter intelligent monitoring and water resource big data management system is characterized by comprising a hydrological regime detection and control subsystem and a water resource big data management subsystem of the Internet of things, and the hydrological parameter intelligent detection, adjustment and water resource big data management are realized.
The invention further adopts the technical improvement scheme that:
the water regime detection and control subsystem comprises an Elman neural network-NARX neural network model, a fuzzy recurrent neural network-NARX neural network controller, an AANN auto-associative neural network model, a PI controller-NARX neural network controller, a PI controller, a parameter detection module and a parameter prediction module, wherein a plurality of groups of upstream water rainfall, water flow and water level sensor outputs are used as the input of the parameter prediction module, a water level set value, a parameter prediction module output and an AANN auto-associative neural network model output are respectively used as the corresponding input of the Elman neural network-NARX neural network model, the difference between the output of the Elman neural network-NARX neural network model and the output of the AANN auto-associative neural network model is used as a water level error, the water level error and an error change rate are used as the input of the fuzzy recurrent neural network-NARX neural network controller, each downstream water flow sensor group is used as the corresponding parameter detection module input, the output of the fuzzy recurrent neural network-NARX neural network controller and the corresponding PI controller output and the corresponding parameter detection module output are used as water flow errors, the water flow error is used as the corresponding water flow input of the NARX neural network controller, and the corresponding water pumping device of the NARX neural network controller; the water level sensor group of each downstream water area is used as the input of a corresponding parameter detection module, the difference between a water level set value and the output of the parameter detection module is used as the water level difference, the water level difference and the change rate of the water level difference are used as the input of a corresponding PI controller, the output of a plurality of parameter 
detection modules is used as the corresponding input of an AANN auto-associative neural network model, and the water situation detection and control subsystem realizes the detection and control of the water level and the water flow of multiple areas. The water regime detection and control subsystem is shown in figure 1.
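The downstream level loops described above pair a PI controller with the NARX controller. A minimal discrete form of the PI part can be sketched as follows; the gains, time step and the first-order tank plant in the usage example are illustrative assumptions, not values from the patent:

```python
class PIController:
    """Discrete PI control law: u = Kp*e + Ki*integral(e). The water level
    difference e and its accumulation drive the actuator command."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement        # water level difference
        self.integral += error * self.dt      # accumulated error term
        return self.kp * error + self.ki * self.integral

# toy closed loop: a leaky tank regulated toward a level set value of 1.0
pi = PIController(kp=2.0, ki=1.0, dt=0.1)
level = 0.0
for _ in range(600):
    u = pi.step(1.0, level)                   # controller output (inflow command)
    level += 0.1 * (u - 0.5 * level)          # hypothetical tank dynamics
```

In the patent's scheme this PI output is further combined with the neural network controllers rather than acting alone; the sketch only shows the PI law itself.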
The invention further adopts the technical improvement scheme that:
the parameter detection module is composed of a plurality of noise reduction self-coding neural network models, an adaptive AP (access point) clustering device, a plurality of PSO wavelet self-adaptive neural network models and an ESN neural network model, measured parameters output by a plurality of groups of parameter sensors within a period of time are respectively used as the input of the corresponding noise reduction self-coding neural network models, the output of the plurality of noise reduction self-coding neural network models is used as the input of the adaptive AP clustering device, the adaptive AP clustering device outputs the output values of different types of noise reduction self-coding neural network models respectively as the input of the corresponding PSO wavelet self-adaptive neural network models, the output of the plurality of PSO wavelet self-adaptive neural network models is used as the corresponding input of the ESN neural network models, and the output of the ESN neural network models is used as the output of the parameter detection module; the parameter detection module is shown in fig. 2.
The invention further adopts the technical improvement scheme that:
the parameter prediction module comprises a parameter detection module, a TDL beat-to-beat delay line A, a metabolism GM (1, 1) trend model, a NARX neural network model A, a NARX neural network model B, a TDL beat-to-beat delay line C, a TDL beat-to-beat delay line D and a BAM neural network-ANFIS adaptive neural fuzzy inference model of interval hesitation fuzzy numbers, wherein the output of the parameter detection module is used as the input of the TDL beat-to-beat delay line A, the output of the TDL beat-to-beat delay line A is used as the input of the metabolism GM (1, 1) trend model, the difference between the output of the TDL beat-to-beat delay line A and the output of the metabolism GM (1, 1) trend model are respectively used as the input of the NARX neural network model A and the NARX neural network model B, the output of the NARX neural network model A and the NARX neural network model B are respectively used as the input of a TDL beat-to-beat delay line B and a TDL beat-to-beat delay line C, the output of the BAM neural network-ANFIS adaptive neural fuzzy inference model of the TDL beat-to-beat delay line B, the TDL beat-to-beat delay line C and the interval hesitation fuzzy number is respectively used as the corresponding input of the BAM neural network-ANFIS adaptive neural fuzzy inference model of the interval hesitation fuzzy number, the 4 parameters output by the BAM neural network-ANFIS adaptive neural fuzzy inference model of the interval hesitation fuzzy number are respectively a, B, C and D, the interval number [ a, B ] of the a and B is used as the minimum value of the detected parameter, the interval number [ C, D ] of the C and D are used as the maximum value of the detected parameter, the interval number [ a, B ] and the interval number [ C, D ] are used as the interval hesitation number of the detected parameter, the BAM neural network-ANFIS self-adaptive neural fuzzy inference model of the interval hesitation fuzzy numbers outputs the 
interval hesitation fuzzy numbers of the detected parameters. The parameter prediction module is shown in fig. 2.
The invention further adopts the technical improvement scheme that:
the noise reduction self-coding neural network-NARX neural network model, the BAM neural network-ANFIS adaptive neural fuzzy inference model, the BAM neural network-NARX neural network model, the Elman neural network-NARX neural network model, the fuzzy recurrent neural network-NARX neural network controller and the PI controller-NARX neural network controller are characterized in that the noise reduction self-coding neural network is connected with the NARX neural network model in series, the BAM neural network is connected with the ANFIS adaptive neural fuzzy inference model in series, the BAM neural network is connected with the NARX neural network model in series, the Elman neural network is connected with the NARX neural network model in series, the fuzzy recurrent neural network is connected with the NARX neural network controller in series, and the PI controller is connected with the NARX neural network controller in series.
The invention further adopts the technical improvement scheme that:
the big data management subsystem of water resource of thing networking includes hydrology parameter measurement end, hydrology gateway, the on-the-spot control end, hydrology parameter cloud platform and hydrology monitoring cell-phone APP, hydrology parameter measurement end is responsible for gathering the hydrology parameter information who is detected the waters, there is the hydrology to detect in the on-the-spot control end and control subsystem, realize hydrology parameter measurement end, hydrology parameter control end, the on-the-spot control end, the two-way communication of hydrology parameter cloud platform and hydrology monitoring cell-phone APP through hydrology gateway, realize the intelligent regulation of hydrology parameter. The water resource big data management subsystem of the internet of things is shown in figure 3.
The invention further adopts the technical improvement scheme that:
the hydrological parameter measuring terminal comprises a water level, water flow, rainfall, pH value, water temperature and dissolved oxygen sensor group for collecting hydrological parameters, a corresponding signal conditioning circuit, an STM32 microprocessor, a GPS module, a camera and a GPRS wireless transmission module; the hydrological parameter measurement terminal is shown in figure 4.
Compared with the prior art, the invention has the following obvious advantages:
1. Aiming at the uncertainty and randomness of problems such as accuracy error, interference and measurement anomaly of hydrological parameter sensors during measurement, the parameter prediction module converts the sensor output values into the interval hesitant fuzzy number form of the BAM neural network-ANFIS adaptive neuro-fuzzy inference model, effectively handling the fuzziness, dynamics and uncertainty of hydrological parameter measurement and improving the objectivity and reliability of hydrological parameter sensor detection.
2. The BAM neural network of the BAM neural network-ANFIS fuzzy neural network model is an associative memory neural network model whose output serves as the input of the ANFIS fuzzy neural network model. The BAM neural network realizes bidirectional hetero-association: hydrological parameter input sample data pairs summarized in advance are stored in a bidirectional associative memory matrix, and when new hydrological parameter information is input, the BAM neural network recalls and associates the corresponding output result in parallel as the input of the ANFIS fuzzy neural network model. The bidirectional associative memory network of the BAM neural network is a two-layer nonlinear feedback neural network with functions such as associative memory, distributed storage and self-learning; when an input signal is applied to one layer, the other layer obtains the hydrological parameter output signal.
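The bidirectional associative recall described above can be sketched with the classical outer-product BAM construction on bipolar patterns; the stored pattern pairs below are invented purely for illustration:

```python
import numpy as np

def train_bam(pairs):
    """Store bipolar pattern pairs (x, y) in the bidirectional associative
    memory matrix M = sum over pairs of outer(x, y)."""
    return sum(np.outer(x, y) for x, y in pairs)

def bipolar(z):
    """Threshold to {-1, +1}; ties at zero go to +1."""
    return np.where(z >= 0, 1, -1)

def bam_recall(M, x, steps=5):
    """Bounce activations between the two layers until the pair stabilizes:
    layer Y sees M^T x, layer X sees M y, i.e. bidirectional recall."""
    for _ in range(steps):
        y = bipolar(M.T @ x)
        x = bipolar(M @ y)
    return x, y

# two stored pattern pairs (illustrative bipolar codes, not patent data)
x1, y1 = np.array([1, 1, -1, -1]), np.array([1, 1])
x2, y2 = np.array([1, -1, 1, -1]), np.array([1, -1])
M = train_bam([(x1, y1), (x2, y2)])
xr, yr = bam_recall(M, np.array([-1, 1, -1, -1]))  # noisy version of x1
```

Feeding a corrupted version of a stored input still recalls the associated output pair, which is the "parallel recall" property the passage relies on.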
3. The PSO adaptive wavelet neural network model uses a wavelet function as the transfer function of the hidden layer and, by adaptively adjusting the parameters of the wavelet function, can effectively extract the time-frequency characteristics of the input signal. It has the advantages of a stable structure, a simple algorithm, strong global search capability, fast convergence and strong generalization capability. It avoids the blindness of BP network structural design, the restriction of linearly distributed network weight coefficients and the convexity requirement on the learning objective function, so that problems such as local optima are fundamentally avoided during network training; the algorithm concept is simple, the convergence speed is high, the function learning capability is strong, and any nonlinear function can be approximated with high precision.
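A wavelet hidden layer of this kind can be sketched as follows. The patent does not name the mother wavelet, so the Morlet wavelet here, and all the numbers, are assumptions for illustration:

```python
import numpy as np

def morlet(t):
    """Morlet mother wavelet, a common hidden-layer transfer function in
    wavelet neural networks (the specific wavelet used is not stated)."""
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2)

def wavelet_hidden_layer(x, w, a, b):
    """Hidden layer of a wavelet network: unit j outputs psi((w_j . x - b_j)/a_j),
    where scale a_j and shift b_j are the adaptively adjusted wavelet
    parameters mentioned in the text."""
    z = (w @ x - b) / a
    return morlet(z)

x = np.array([0.3, 0.5, 0.2])           # illustrative sensor features
w = np.ones((4, 3)) * 0.5               # input-to-hidden weights
a = np.array([1.0, 2.0, 0.5, 1.5])      # dilation (scale) parameters
b = np.array([0.0, 0.1, -0.1, 0.2])     # translation (shift) parameters
h = wavelet_hidden_layer(x, w, a, b)
```

A linear output layer over `h` would complete the network; the scales and shifts are the parameters the PSO search adjusts.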
4. The invention adopts the PSO adaptive wavelet neural network, which avoids the gradient descent method's requirements that the activation function be differentiable and that its derivative be computed; the iterative formula of each particle during the search is simple, so the calculation speed is higher than that of gradient descent. By adjusting the parameters in the iterative formula, local extrema can be escaped and global optimization carried out, simply and effectively improving the training speed of the network. The adaptive wavelet neural network model trained by the PSO algorithm has smaller error, faster convergence and stronger generalization capability; the model features a simple algorithm, stable structure, fast convergence, strong global optimization capability, high identification precision and strong generalization capability.
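The particle iteration described above can be sketched as a plain PSO minimizer: only evaluations of the loss are needed, never its derivative, which is exactly the point about avoiding the differentiability requirement of gradient descent. The coefficients are conventional defaults, not values from the patent:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm search over R^dim: velocities blend inertia, attraction
    to each particle's personal best and attraction to the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

In the patent's setting the particles would encode the wavelet network's weights, scales and shifts, with `f` the training error on the sensor data.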
5. The invention adopts a metabolic GM(1,1) trend model for long-time-span prediction of the input parameters. The metabolic GM(1,1) trend model predicts a future value from the input parameter values; each newly predicted future value of an input parameter is added to the original series of the metabolic GM(1,1) trend model while the datum at the beginning of the input series is correspondingly removed before re-modelling, and then the next future value of the input parameter is predicted. Proceeding by analogy, the future values of the output parameter of the metabolic GM(1,1) trend model are predicted. This method, called an equal-dimension grey-number successive-supplement model, can realize prediction over a longer time and grasp the change trend of the input parameter values more accurately.
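The equal-dimension rolling ("metabolic") GM(1,1) procedure described above can be sketched as follows; the sample series in the usage is invented for illustration:

```python
import numpy as np

def gm11_predict_next(x0):
    """Fit a GM(1,1) grey model to series x0 and predict the next value."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # mean generating sequence
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # developing coefficient a, grey input b
    # time response: x1_hat(k+1) = (x0[0] - b/a) * exp(-a*k) + b/a
    x1_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_last = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_last                     # inverse AGO gives the x0 forecast

def metabolic_gm11(x0, steps):
    """Equal-dimension rolling forecast: append each prediction and drop the
    oldest point so the modelling window keeps the same length."""
    window = list(x0)
    preds = []
    for _ in range(steps):
        nxt = gm11_predict_next(np.array(window, dtype=float))
        preds.append(nxt)
        window.append(nxt)
        window.pop(0)
    return preds

x0 = np.array([2.0, 2.2, 2.42, 2.66, 2.93])      # illustrative growing series
preds = metabolic_gm11(x0, 3)
```

Each call refits the model on the rolled window, which is exactly the "add the new value, remove the oldest" metabolism the passage describes.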
Drawings
FIG. 1 is a diagram of the water regime detection and control subsystem of the present invention;
FIG. 2 is a diagram of a parameter detection module and a parameter prediction module according to the present invention;
FIG. 3 is a water resource big data management subsystem of the Internet of things of the invention;
FIG. 4 is a hydrographic parameter measurement terminal according to the present invention;
FIG. 5 is a hydrologic parameter control terminal of the present invention;
FIG. 6 is a hydrologic gateway of the present invention;
fig. 7 shows the field control terminal software structure according to the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the attached figures 1 to 7:
1. The water regime detection and control subsystem is designed through the following steps:
1. Constructing the parameter detection module. The parameter detection module consists of several noise reduction self-coding neural network-NARX neural network models, an adaptive AP (affinity propagation) clusterer, several PSO (particle swarm optimization) wavelet adaptive neural network models and an ESN (echo state network) neural network model. The measured parameters output by several groups of parameter sensors over a period of time are respectively the inputs of the corresponding noise reduction self-coding neural network-NARX neural network models; the outputs of these models are the input of the adaptive AP clusterer; the adaptive AP clusterer outputs the values of the different classes of noise reduction self-coding neural network-NARX neural network models respectively as the inputs of the corresponding PSO wavelet adaptive neural network models; the outputs of the several PSO wavelet adaptive neural network models are the corresponding inputs of the ESN neural network model, and the output of the ESN neural network model is the output of the parameter detection module.
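The final fusion stage of this module is the ESN. A minimal reservoir can be sketched as follows; the sizes, weight ranges and spectral radius are illustrative assumptions rather than values given in the patent:

```python
import numpy as np

class EchoStateNetwork:
    """Minimal ESN reservoir: a fixed random recurrent network whose state is
    driven by the inputs; only a linear readout would be trained."""
    def __init__(self, n_in, n_res, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # rescale so the largest eigenvalue magnitude is < 1 (echo-state property)
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.state = np.zeros(n_res)

    def step(self, u):
        # recurrent update: the new state mixes the current input (here the
        # PSO wavelet model outputs) with the reservoir's fading memory
        self.state = np.tanh(self.W_in @ u + self.W @ self.state)
        return self.state

esn = EchoStateNetwork(n_in=3, n_res=20)
for _ in range(10):
    s = esn.step(np.array([0.1, 0.4, 0.2]))   # three wavelet-model outputs
```

A linear readout (e.g. ridge regression on collected states) would then produce the parameter detection module's final output.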
(1) Design of noise reduction self-coding neural network-NARX neural network model
In the noise reduction self-coding neural network-NARX neural network model, the output of the noise reduction self-coding neural network serves as the input of the NARX neural network model. The noise reduction self-coding neural network is a dimension-reduction method that converts high-dimensional data into low-dimensional data by training a multilayer neural network with a small central layer. A noise reduction self-coding neural network (denoising auto-encoder, DAE) is a typical three-layer neural network, with an encoding process between the input layer and the hidden layer and a decoding process between the hidden layer and the output layer. The network obtains a coded representation of the input data through the encoding operation (the encoder) and reconstructs the input data from the hidden layer through the decoding operation (the decoder); the hidden-layer data are the dimension-reduced data. A reconstruction error function is then defined to measure the learning effect of the noise reduction self-coding neural network, and constraints can be added to this error function to generate various types of noise reduction self-coding neural networks. The encoder, decoder and loss function are as follows:
an encoder: h = δ(Wx + b) (1)

a decoder: x̂ = δ(W'h + b') (2)

a loss function: L(x, x̂) = ||x − x̂||² (3)

The training process of the AE is similar to that of a BP neural network: W and W' are the weight matrices, b and b' the biases, h the output value of the hidden layer, x the input vector, x̂ the output (reconstruction) vector, and δ the excitation function, typically a Sigmoid or tanh function. The noise reduction self-coding neural network is divided into an encoding process, from the input layer to the hidden layer, and a decoding process, from the hidden layer to the output layer. Its objective is to make the input and output as close as possible under the error function; by back-propagating to minimize the error function, the optimal weights and biases of the self-coding network are obtained, in preparation for establishing a deep self-coding network model. Based on this encoding-decoding principle, the noisy measurement data are encoded and then decoded, an error function is constructed between the decoded data and the original data, and minimizing this error function through back propagation yields the optimal network weights and biases. The measurement data are corrupted by adding noise, and the corrupted data are then fed into the neural network as the input layer. The reconstruction result of the noise reduction self-coding neural network is close to the original measurement data; in this way, disturbances of the measured parameters can be eliminated and a stable structure obtained. The original measurement parameters are input into the encoder to obtain the feature expression and then mapped to the output layer through the decoder. The current output of the NARX neural network model depends not only on the past NARX model output y(t−n) but also on the current output vector X(t) of its noise reduction self-coding neural network, the delay order of that output vector, and so on.
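Equations (1) to (3) and the corrupt-then-reconstruct training just described can be sketched as a tiny numpy denoising auto-encoder; the layer sizes, noise level and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """Three-layer DAE: corrupt the input, encode h = d(Wx + b), decode
    x_hat = d(W'h + b'), and minimize ||x - x_hat||^2 against the CLEAN x."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(0, 0.1, (n_hidden, n_in))   # encoder weights
        self.b = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_in, n_hidden))  # decoder weights W'
        self.b2 = np.zeros(n_in)

    def train_step(self, x_clean, noise_std=0.1, lr=0.5):
        x_noisy = x_clean + rng.normal(0, noise_std, x_clean.shape)
        h = sigmoid(self.W @ x_noisy + self.b)          # encoder
        x_hat = sigmoid(self.W2 @ h + self.b2)          # decoder
        err = x_hat - x_clean
        # backprop of the squared-error loss through sigmoid layers (s * (1-s))
        d_out = 2 * err * x_hat * (1 - x_hat)
        d_hid = (self.W2.T @ d_out) * h * (1 - h)
        self.W2 -= lr * np.outer(d_out, h)
        self.b2 -= lr * d_out
        self.W -= lr * np.outer(d_hid, x_noisy)
        self.b -= lr * d_hid
        return float(np.sum(err ** 2))

dae = DenoisingAutoencoder(n_in=6, n_hidden=3)
x_clean = np.linspace(0.2, 0.8, 6)                      # one clean measurement vector
losses = [dae.train_step(x_clean) for _ in range(800)]
```

Because the target is the clean vector while the input is its noisy corruption, the learned hidden code becomes robust to measurement disturbance, which is the role this stage plays in the parameter detection module.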
The output of the noise reduction self-coding neural network is transmitted to the hidden layer through the time-delay layer; the hidden layer processes the signals output by the noise reduction self-coding neural network and passes them to the output layer; the output layer linearly weights the hidden-layer output signals to obtain the final neural network output; and the time-delay layer delays the signals fed back by the NARX neural network model and the signals output by the input layer's noise reduction self-coding neural network before transmitting them to the hidden layer. The NARX neural network model has characteristics such as nonlinear mapping capability, good robustness and adaptability, and is suitable for further processing the high-frequency fluctuation parts of the parameter sensors' output predicted values. The NARX (Nonlinear AutoRegressive with eXogenous input) neural network model is a dynamic feed-forward neural network: a nonlinear autoregressive network with external input that has multi-step time-delay dynamic characteristics and is connected through feedback into several layers of closed loops. It is the most widely applied regressive dynamic neural network for nonlinear dynamic systems, and its performance is generally superior to that of a fully recurrent neural network.
The NARX neural network model mainly comprises an input layer, a hidden layer, an output layer and input/output delay layers; before application, the delay orders of the input and output and the number of hidden-layer neurons are generally determined in advance. The current output of the NARX neural network model depends not only on the past model output y(t−n) but also on the current output vector X(t) of its particle swarm optimization adaptive wavelet neural network, the delay order of that output vector, and so on. The output of the particle swarm optimization adaptive wavelet neural network is transmitted to the hidden layer through the time-delay layer; the hidden layer processes these signals and passes them to the output layer; the output layer linearly weights the hidden-layer output signals to obtain the final neural network output; and the time-delay layer delays the signals fed back by the NARX neural network model and the output signals of the input layer's particle swarm optimization adaptive wavelet neural network before transmitting them to the hidden layer. The NARX neural network model has characteristics such as nonlinear mapping capability, good robustness and adaptability.
x (t) represents the input of the NARX neural network model, i.e., the noise-reduced self-encoding neural network output; m represents the delay order of the external input; y (t) is the output of the neural network, i.e., the predicted value of the noise-reduced self-encoded neural network output for the next time period; n is the output delay order; s is the number of hidden layer neurons; the output of the jth implicit element can thus be found as:
$$h_j(t) = f\Big(\sum_{i=1}^{m+n} w_{ji}\,u_i(t) + b_j\Big),\qquad j = 1,2,\ldots,s$$

where u(t) collects the delayed inputs x(t-1), ..., x(t-m) and the delayed outputs y(t-1), ..., y(t-n), w_ji is the connection weight between the ith input and the jth hidden neuron, and b_j is the bias value of the jth hidden neuron.
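As a hedged illustration of the hidden-unit formula above, the following sketch (all names, sizes, and the tanh choice of f are illustrative assumptions, not taken from the patent) computes one forward step of a NARX model: the tapped delay lines of past inputs x(t-1)...x(t-m) and past outputs y(t-1)...y(t-n) are concatenated, passed through the hidden layer, and linearly weighted at the output:

```python
import numpy as np

def narx_step(x_hist, y_hist, W, b, v):
    """One NARX forward step (sketch).

    x_hist: delayed external inputs  x(t-1) ... x(t-m)
    y_hist: delayed fed-back outputs y(t-1) ... y(t-n)
    W:      (s, m+n) hidden-layer weights w_ji
    b:      (s,)     hidden-layer biases  b_j
    v:      (s,)     linear output-layer weights
    """
    u = np.concatenate([x_hist, y_hist])   # tapped-delay-line input vector u(t)
    h = np.tanh(W @ u + b)                 # h_j = f(sum_i w_ji u_i + b_j)
    return float(v @ h)                    # linear weighting at the output layer
```

Run in closed loop, each new output would be pushed into `y_hist` before the next step, which is what makes the network recurrent.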
(2) Adaptive AP clusterer design
The adaptive AP clusterer completes clustering through message passing among data objects. It mainly takes the similarity between data points (using the negative Euclidean distance as the criterion) as its basis, iteratively updates two messages, the responsibility (attraction) and the availability (attribution), and finally finds the optimal clustering result. For a data set X = {x_1, x_2, ..., x_n} of N data points, the similarity between any two points is:

$$s(i,k) = -\lVert x_i - x_k\rVert^{2}$$
The values on the main diagonal of s(i,k) are replaced by a bias (preference) parameter p; a larger p indicates a larger probability that the corresponding point is selected as a representative point. The final cluster number therefore changes as p changes; without prior knowledge, p is generally set to the median of s(i,k). R(i,k) is defined as the responsibility of candidate representative point k for data point i, and A(i,k) as the availability, i.e., the degree to which data point i supports k as a representative point. The larger R(i,k) + A(i,k), the greater the likelihood that point k becomes a cluster center (exemplar). The adaptive AP clusterer algorithm proceeds as follows:
A. Initialize the responsibility R(i,k) and the availability A(i,k) as zero matrices with the same dimensions as the similarity matrix s(i,k).
B. Let p = -50 and the damping factor lamda = 0.5; repeatedly update R(i,K) and A(i,K) until the stopping condition is reached, and record the cluster number as K1.
C. Let p = p - 10 and repeatedly update R(i,K) and A(i,K) until the stopping condition is reached, obtaining a series of cluster numbers K2, K3, ..., Kl (empirically lmax = 10).
D. In steps B and C, if the algorithm is detected to oscillate and fail to converge, increase lamda (value range 0.5-0.9) in steps of 0.1 to eliminate the oscillation until the algorithm converges.
E. Evaluate the clustering quality for the cluster numbers obtained in steps B and C using the silhouette coefficient index; the larger the index, the better the clustering quality, and the corresponding cluster number K is the optimal cluster number.
F. The adaptive AP clusterer improves the accuracy and speed of the original AP clusterer mainly by adaptively adjusting its bias parameter and damping factor. The algorithm uses the silhouette coefficient as the index of clustering validity and clustering quality, uses the oscillation degree as the index for judging whether the algorithm converges after oscillation occurs, adaptively finds the optimal combination of bias parameter and damping factor, and finally obtains the optimal clustering result.
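The message-passing loop of steps A-E can be sketched as follows. This is a minimal implementation of standard affinity propagation (negative squared Euclidean similarity, median preference p, fixed damping factor `lamda`), not the patent's exact adaptive variant; all numerical choices are illustrative:

```python
import numpy as np

def affinity_propagation(X, preference=None, lamda=0.5, max_iter=200):
    """Minimal affinity-propagation sketch with damping factor lamda."""
    n = len(X)
    S = -np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # s(i,k) = -||xi-xk||^2
    if preference is None:
        preference = np.median(S)          # default bias parameter p = median of s
    np.fill_diagonal(S, preference)
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    rows = np.arange(n)
    for _ in range(max_iter):
        # responsibility: r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, idx] = S[rows, idx] - second
        R = lamda * R + (1 - lamda) * Rnew         # damped update
        # availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        colsum = Rp.sum(axis=0)
        Anew = np.minimum(0, colsum[None, :] - Rp)
        np.fill_diagonal(Anew, colsum - R.diagonal())
        A = lamda * A + (1 - lamda) * Anew
    exemplars = np.flatnonzero(R.diagonal() + A.diagonal() > 0)
    labels = np.argmax(S[:, exemplars], axis=1)
    return exemplars, labels
```

On two well-separated groups of points, the loop settles on one exemplar per group; sweeping `preference` downward, as steps B-C describe, would reduce the number of clusters.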
(3) PSO (particle swarm optimization) adaptive wavelet neural network model design
The adaptive wavelet neural network of the PSO adaptive wavelet neural network is a feed-forward network, proposed on the basis of wavelet theory, that replaces the usual nonlinear Sigmoid function with a nonlinear wavelet basis and combines it with an artificial neural network; the transfer function of the hidden layer is a wavelet function, and by adaptively adjusting the parameters of the wavelet function the time-frequency characteristics of the input parameters can be extracted more effectively. The wavelet function serves as the excitation function of the neurons, and the dilation and translation factors of the wavelet and the connection weights are adaptively adjusted during the optimization of an error energy function. The input signal of the wavelet neural network can be expressed as a one-dimensional vector x_i (i = 1, 2, ..., n) of input parameters, and the output signal is denoted y_k (k = 1, 2, ..., m); the output-layer value of the wavelet neural network is computed as:

$$y_k = \sum_{j=1}^{l} \omega_{jk}\,\psi\!\left(\frac{\sum_{i=1}^{n}\omega_{ij}x_i - b_j}{a_j}\right),\qquad k = 1,2,\ldots,m$$

where omega_ij is the connection weight between input-layer node i and hidden-layer node j, psi(.) is the wavelet basis function, b_j is the translation factor of the wavelet basis function, a_j is its scale (dilation) factor, and omega_jk is the connection weight between hidden-layer node j and output-layer node k. In this patent, the correction algorithm for the weights and thresholds of the adaptive wavelet neural network uses a gradient correction method to update the network weights and wavelet-basis parameters, so that the output of the wavelet neural network continuously approaches the desired output. Adopting the PSO adaptive wavelet neural network avoids the gradient-descent requirements that the activation function be differentiable and that function derivatives be computed; since the iterative formula for each particle's search is simple, computation is much faster than gradient descent, and by adjusting the parameters of the iterative formula, local extrema can be escaped effectively. A population of random particles is initialized and the optimal solution is then found through iteration. In each iteration, a particle updates itself by tracking two "extrema": the first is the best solution found by the particle itself, called the individual extremum pbest; the other is the best solution found so far by the whole population, called the global extremum gbest. The parameters of the PSO adaptive wavelet neural network are taken as the particle position vector X, the mean-square-error energy function is set as the objective function for optimization, and iteration proceeds according to the basic formulas of the particle swarm optimization algorithm to seek the optimal solution. The PSO adaptive wavelet neural network training algorithm is as follows:
A. initializing a network structure, determining the number of neurons of a network hidden layer, and determining the dimension D of a target search space.
B. Determining the number m of the particles, initializing position vectors and velocity vectors of the particles, substituting the position vectors and the velocity vectors of the particles into an algorithm iterative formula for updating, performing optimization calculation by taking an error energy function as a target function, and recording the optimal position pbest searched by each particle so far and the optimal position gbest searched by the whole particle swarm so far.
C. Map the optimal position gbest searched so far by the whole particle swarm to the network weights and thresholds for this learning pass, and compute the fitness of the particles using the error energy function.
D. If the error energy function value is within the error range allowed by the actual problem, finish the iteration; otherwise return to step B and continue iterating.
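Steps A-D can be sketched with a toy particle swarm optimizing the parameters of a single-hidden-unit wavelet network. The Morlet-type basis, swarm size, inertia weight w = 0.7, and acceleration constants c1 = c2 = 1.5 are all assumed illustrative choices, not the patent's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def wnn(params, x):
    """One-hidden-unit wavelet network: y = w_out * psi((w_in*x - b)/a)."""
    w_in, a, b, w_out = params
    t = (w_in * x - b) / a
    return w_out * np.cos(1.75 * t) * np.exp(-t**2 / 2)   # Morlet-type wavelet basis

def energy(params, x, y):
    return float(np.mean((wnn(params, x) - y) ** 2))      # mean-square error energy

def pso_train(x, y, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = 4                                    # particle = (w_in, a, b, w_out)
    P = rng.uniform(-2, 2, (n_particles, dim))
    P[:, 1] = np.abs(P[:, 1]) + 0.5            # scale factor a must stay positive
    V = np.zeros_like(P)
    pbest = P.copy()
    pval = np.array([energy(p, x, y) for p in P])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (gbest - P)
        P = P + V
        P[:, 1] = np.clip(np.abs(P[:, 1]), 0.1, None)
        f = np.array([energy(p, x, y) for p in P])
        improved = f < pval
        pbest[improved], pval[improved] = P[improved], f[improved]
        gbest = pbest[pval.argmin()].copy()    # global extremum gbest
    return gbest, float(pval.min())
```

Fitting the network to samples of a wavelet-shaped target (which the network can represent exactly), the swarm should drive the error energy close to zero without any gradient computation.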
(4) ESN neural network model design
The ESN (Echo State Network) is a novel dynamic neural network that retains all the advantages of dynamic neural networks and, because it introduces the concept of a reserve pool (reservoir), adapts better to nonlinear system identification than ordinary dynamic neural networks. The reserve pool converts the interconnection part of a traditional dynamic neural network into a randomly connected pool, and the whole learning process becomes learning how to connect to that pool. The "reserve pool" is in fact a randomly generated, large-scale recursive structure in which the interconnection of neurons is sparse; SD usually denotes the percentage of interconnected neurons among the total number of neurons N. The state equation of the ESN neural network model is:
$$x(n+1) = f\big(W\,x(n) + W^{in}u(n+1) + W^{back}y(n)\big)$$
$$y(n+1) = f^{out}\big(W^{out}[x(n+1);\,u(n+1)]\big) + \varepsilon$$

where W is the weight matrix of the reserve pool; W^in is the input weight matrix; W^back is the feedback weight matrix; x(n) represents the internal state of the neural network; W^out is the connection weight matrix among the core reserve pool of the ESN neural network model, the network input, and the network output; epsilon is the output deviation of the neural network (it may also represent noise); f = [f_1, f_2, ..., f_n] are the n activation functions of the neurons inside the "reserve pool", each f_i being a hyperbolic tangent function; and f^out denotes the output functions of the ESN neural network model.
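A minimal echo-state sketch consistent with the description above (sparse random reservoir with interconnection percentage SD, tanh units). Two simplifying assumptions are made: the feedback matrix W_back is omitted, and the readout W_out is trained by ridge regression; reservoir size and spectral radius are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

class ESN:
    """Minimal echo-state-network sketch (no W_back feedback term)."""
    def __init__(self, n_in=1, n_res=100, sd=0.1, rho=0.9):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input weight matrix
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= rng.random((n_res, n_res)) < sd                # SD: sparse interconnection
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))     # echo-state scaling
        self.W = W
        self.n_res = n_res

    def _states(self, U):
        x, out = np.zeros(self.n_res), []
        for u in U:
            x = np.tanh(self.W @ x + self.W_in @ u)  # x(n+1) = f(W x(n) + W_in u(n+1))
            out.append(x.copy())
        return np.array(out)

    def fit(self, U, Y, washout=50, ridge=1e-6):
        X = self._states(U)[washout:]                # discard the initial transient
        Yw = Y[washout:]
        # ridge-regression solution for the readout W_out
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(self.n_res), X.T @ Yw)

    def predict(self, U):
        return self._states(U) @ self.W_out
```

Trained to predict a sine wave one step ahead, the linear readout over the reservoir states reproduces the target almost exactly; only W_out is learned, which is the characteristic ESN training shortcut.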
2. Parameter prediction module design
The parameter prediction module comprises a parameter detection module, TDL tapped delay lines A, B, and C, a metabolic GM(1,1) trend model, NARX neural network models A and B, and a BAM neural network-ANFIS adaptive neuro-fuzzy inference model of interval hesitant fuzzy numbers. The output of the parameter detection module is the input of TDL tapped delay line A; the output of TDL delay line A is the input of the metabolic GM(1,1) trend model; the difference between the output of TDL delay line A and the output of the metabolic GM(1,1) trend model, and the output of the metabolic GM(1,1) trend model itself, are the respective inputs of NARX neural network model A and NARX neural network model B; the outputs of NARX models A and B are the respective inputs of TDL delay lines B and C; and the outputs of TDL delay lines B and C are the corresponding inputs of the BAM neural network-ANFIS adaptive neuro-fuzzy inference model of interval hesitant fuzzy numbers. That model outputs 4 parameters a, b, c, and d: the interval number [a, b] serves as the minimum value of the detected parameter and the interval number [c, d] as its maximum value; together, [a, b] and [c, d] form the interval hesitant fuzzy number of the detected parameter, which the BAM neural network-ANFIS adaptive neuro-fuzzy inference model outputs.
(1) GM (1, 1) trend model design for metabolism
The GM(1,1) grey prediction method has advantages over traditional statistical prediction methods: it need not determine whether the predicted variable follows a normal distribution, needs no large-sample statistics, and need not change the prediction model whenever the output of the ESN neural network model changes. It establishes a unified differential-equation model through an accumulated generating technique, restores the accumulated ESN model outputs to their original values to obtain the prediction result, and achieves high prediction accuracy. The essence of establishing the GM(1,1) trend model is to apply one accumulation to the original input data so that the generated sequence exhibits a regular pattern, then obtain a fitted curve by establishing a differential-equation model in order to predict the trend of the ESN neural network model output. The invention adopts a metabolic GM(1,1) trend model to predict the time span of the ESN model output trend: the metabolic GM(1,1) trend model predicts the trend value of the output parameter at a future time from the ESN neural network model values; after each predicted trend value is used, it is appended to the original sequence of ESN model output parameters while the oldest datum of the sequence is removed, the model is rebuilt, and further future trend values are predicted. Proceeding by analogy, the trend values of the output parameters at future times are predicted. This approach is called a metabolism model; it enables long-term trend prediction of the output parameters and a more accurate grasp of how the output parameter values change.
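The metabolic GM(1,1) procedure described above can be sketched as a standard GM(1,1) one-step forecaster wrapped in a rolling "metabolism" window; the window size and test series below are illustrative assumptions:

```python
import numpy as np

def gm11_predict(x0):
    """One-step-ahead GM(1,1) forecast from sequence x0 (sketch)."""
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                       # mean-generated background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters a, b
    n = len(x0)
    xhat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a  # accumulated response
    return float(xhat(n) - xhat(n - 1))                # restore by first difference

def metabolic_gm11(x0, steps, window=6):
    """Rolling forecast: append each prediction, drop the oldest datum, remodel."""
    seq = list(x0[-window:])
    preds = []
    for _ in range(steps):
        p = gm11_predict(np.array(seq))
        preds.append(p)
        seq = seq[1:] + [p]                            # metabolize the modelling window
    return preds
```

On a smooth exponential-like series, the one-step forecast is nearly exact, and the rolling window keeps the model anchored to recent data, which is the point of the metabolism variant.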
(2) BAM neural network-ANFIS adaptive neural fuzzy inference model design of interval hesitation fuzzy numbers
In the BAM neural network-ANFIS adaptive neuro-fuzzy inference model of interval hesitant fuzzy numbers, the BAM neural network output serves as the input of the ANFIS adaptive neuro-fuzzy inference model. In the topological structure of the BAM neural network model, the initial pattern x(t) at the network input end is weighted by the matrix W_1 and reaches the output end y; after the nonlinear transformation of the output-node transfer characteristic f_y, it is weighted by the matrix W_2 and returns to the input end x, where it undergoes the nonlinear transformation of the input-end transfer characteristic f_x and becomes the output of input end x. This process repeats, so the state-transition equations of the BAM neural network model are as in equation (8):

$$y(t+1) = f_y\big(W_1\,x(t)\big),\qquad x(t+1) = f_x\big(W_2\,y(t+1)\big)\tag{8}$$
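The bidirectional recall loop of equation (8) can be sketched with the classical discrete BAM. Bipolar patterns, hard sign activations, Hebbian weights W = sum_k y_k x_k^T, and W_2 = W_1^T are standard-textbook assumptions here, not necessarily the patent's exact network:

```python
import numpy as np

def bipolar_sign(v):
    return np.where(v >= 0, 1, -1)        # hard-limiting transfer characteristic

def bam_store(pairs):
    """Hebbian outer-product storage of bipolar pattern pairs (x_k, y_k)."""
    return sum(np.outer(y, x) for x, y in pairs)

def bam_recall(W, x, iters=10):
    """Iterate the BAM state-transition pair until (x, y) stabilises."""
    for _ in range(iters):
        y = bipolar_sign(W @ x)           # y(t+1) = f_y(W1 x(t))
        x = bipolar_sign(W.T @ y)         # x(t+1) = f_x(W2 y(t+1)), with W2 = W1^T
    return x, y
```

Recall from a stored pattern, or from a slightly corrupted version of it, settles back onto the stored pair, which is the associative-memory behaviour the BAM contributes to the inference model.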
The ANFIS adaptive neural inference model organically combines neural networks and fuzzy control: it can exploit the advantages of both while compensating for their respective shortcomings. The fuzzy membership functions and fuzzy rules of the ANFIS adaptive fuzzy system are obtained by learning from a large amount of known data; the defining characteristic of the ANFIS adaptive neural inference model is that it is a data-driven modeling method rather than one given arbitrarily from experience or intuition. This is particularly important for systems whose characteristics are not yet fully understood or are very complex. The main operation steps of the ANFIS adaptive neural inference model are as follows:
Layer 1: fuzzify the input data; the corresponding output of each node is:

$$O_{i,j}^{1} = \mu_{A_{ij}}(x_i),\qquad j = 1,2,\ldots,n$$

where n is the number of membership functions for each input, and Gaussian membership functions are adopted.
Layer 2: carry out the rule operations and output the applicability (firing strength) of each rule; the rule operation of the ANFIS adaptive neural inference model uses multiplication:

$$O_j^{2} = w_j = \prod_{i}\mu_{A_{ij}}(x_i)$$
Layer 3: normalizing the applicability of each rule:
$$O_j^{3} = \bar{w}_j = \frac{w_j}{\sum_{k} w_k}$$
layer 4: the transfer function of each node is a linear function and represents a local linear model, and the output of each self-adaptive node i is as follows:
$$O_i^{4} = \bar{w}_i f_i,\qquad f_i = p_i x_1 + q_i x_2 + \cdots + r_i$$
layer 5: the single node of the layer is a fixed node, and the output of the ANFIS self-adaptive neural inference model is as follows:
$$O^{5} = \sum_{i}\bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}$$
the condition parameters for determining the shape of the membership function and the conclusion parameters of the inference rule in the ANFIS adaptive neural inference model can be trained through a learning process. The parameters are adjusted by an algorithm combining a linear least square estimation algorithm and gradient descent. In each iteration of the ANFIS self-adaptive neural inference model, firstly, an input signal is transmitted to a layer 4 along the forward direction of a network, and at the moment, a least square estimation algorithm is adopted to adjust network parameters under a fixed condition parameter; the signal continues to propagate forward along the network to the output layer (i.e., layer 5). The ANFIS self-adaptive neural inference model reversely propagates the obtained error signals along the network, and the condition parameters are updated by a gradient method. By adjusting the given condition parameters in the ANFIS adaptive neural inference model in this way, the global optimum point of the conclusion parameters can be obtained, so that the dimension of the search space in the gradient method can be reduced, and the convergence speed of the parameters of the ANFIS adaptive neural inference model can be improved.
3. Water regime detection and control subsystem design
The water regime detection and control subsystem comprises an Elman neural network-NARX neural network model, a fuzzy recurrent neural network-NARX neural network controller, an AANN auto-associative neural network model, a PI controller-NARX neural network controller, a PI controller, a parameter detection module and a parameter prediction module;
(1) Elman neural network-NARX neural network model design
The output of the Elman neural network of the Elman neural network-NARX neural network model serves as the input of the NARX neural network model. The Elman neural network can be regarded as a forward neural network with local memory units and local feedback connections; besides the hidden layer, it has a special context (association) layer. That layer receives the feedback signals from the hidden layer, and every hidden-layer node has a corresponding context-layer node connection. The context layer takes the hidden-layer state at the previous instant together with the current network input as inputs to the hidden layer, which is equivalent to state feedback. The transfer function of the hidden layer is generally a Sigmoid function, the output layer is a linear function, and the context layer is also linear. To effectively solve the approximation-accuracy problem in water level parameter adjustment, the function of the context layer is strengthened. The current output of the NARX neural network model depends not only on the past NARX model outputs y(t-n) but also on the current output vector X(t) of the Elman neural network, the delay order of the Elman output vector, and so on. The output of the Elman neural network is passed through the time-delay layer to the hidden layer; the hidden layer processes these signals and passes them to the output layer, which linearly weights the hidden-layer outputs to obtain the final network output; the delay layer delays the signals fed back by the NARX neural network model and the input-layer Elman network outputs before passing them to the hidden layer.
(2) Fuzzy recurrent neural network-NARX neural network controller design
The output of the fuzzy recurrent neural network of the fuzzy recurrent neural network-NARX neural network controller serves as the input of the NARX neural network controller. The fuzzy recurrent neural network consists of 4 layers: n input nodes, each input node corresponding to m condition nodes (m denotes the number of rules), nm rule nodes, and 1 output node. Layer I introduces the input into the network; layer II fuzzifies the input, using Gaussian membership functions; layer III corresponds to fuzzy inference; layer IV corresponds to the defuzzification operation. Using
$I_i^{(k)}$ and $O_i^{(k)}$ to represent the input and output of the ith node of the kth layer, the signal transfer process inside the network and the input-output relationship between the layers can be described as follows. Layer I: the input layer; each input node of this layer is directly connected to an input variable, and the input and output of the network are expressed as:
$$I_i^{(1)} = x_i^{(1)}(N),\qquad O_i^{(1)} = I_i^{(1)}(N)$$
where $I_i^{(1)}$ and $O_i^{(1)}$ are the input and output of the ith node of the network input layer, and N represents the number of iterations. Layer II: the membership-function layer; the nodes of this layer fuzzify the input variables, each node representing a membership function, with the Gaussian function adopted as the membership function; the input and output of the network are expressed as:
$$O_{ij}^{(2)} = \exp\!\left(-\frac{\big(O_i^{(1)} - m_{ij}\big)^{2}}{\sigma_{ij}^{2}}\right)$$
where m_ij and sigma_ij respectively represent the mean center and the width of the jth Gaussian function of the ith linguistic variable of layer II, and m is the number of linguistic variables corresponding to each input node. Layer III: the fuzzy-inference layer, i.e., the rule layer; dynamic feedback is added so that the network has better learning efficiency, the feedback link introduces an internal variable h_k, and a sigmoid function is selected as the activation function of the feedback-link internal variable. The input and output of the network are expressed as:
$$O_j^{(3)} = h_j(N)\prod_{i} O_{ij}^{(2)}(N),\qquad h_j(N) = \frac{1}{1 + \exp\!\big(-\sum_{k}\omega_{jk}\,O_k^{(3)}(N-1)\big)}$$
where omega_jk is the connection weight of the recursive part; the neurons of this layer represent the antecedents of the fuzzy logic rules, and the nodes of this layer perform the product operation on the layer-II outputs and the layer-III feedback quantity. $O_j^{(3)}$ is the output of the third layer, and m represents the number of rules in the fully connected case. The feedback link mainly computes the value of the internal variable and the activation strength of the membership function corresponding to that internal variable; the activation strength is related to the matching degree of the rule nodes of layer III. The internal variables introduced by the feedback link involve two kinds of nodes: receiving nodes, which compute the internal variable by weighted summation to realize the defuzzification function (the internal variables represent the results of fuzzy inference over hidden rules), and feedback nodes, which adopt a sigmoid function as the fuzzy membership function to realize fuzzification of the internal variables. Layer IV: the defuzzification layer, i.e., the output layer; the node of this layer performs a summation operation on its inputs. The input and output of the network are expressed as:
$$O^{(4)} = \sum_{j}\lambda_j\,O_j^{(3)}$$
where lambda_j is the connection weight of the output layer. The recurrent neural network can approximate highly nonlinear dynamic systems, and adding the recursion of the internal variable clearly reduces both the training error and the test error of the neural network. The fuzzy recurrent neural network of this patent trains the network weights using a gradient-descent algorithm with cross-validation; internal variables are introduced in the feedback link, the rule-layer outputs are weighted, summed, and defuzzified to form the feedback quantity, and this feedback quantity together with the membership-function-layer output serves as the rule-layer input at the next instant. The output of the fuzzy recurrent neural network thus contains the rule-layer activation strengths and the output history, strengthening its ability to adapt to nonlinear dynamic systems. The current output of the NARX neural network model depends not only on the past NARX model outputs y(t-n) but also on the current output vector X(t) of the fuzzy recurrent neural network, the delay order of the fuzzy recurrent neural network output vector, and so on. The output of the fuzzy recurrent neural network is passed through the time-delay layer to the hidden layer; the hidden layer processes these signals and passes them to the output layer, which linearly weights the hidden-layer outputs to obtain the final network output; the delay layer delays the signals fed back by the NARX neural network model and the input-layer fuzzy recurrent neural network outputs before passing them to the hidden layer.
(3) AANN auto-associative neural network model design
The AANN auto-associative neural network model is a feed-forward auto-associative neural network (AANN) with a special structure comprising an input layer, several hidden layers, and an output layer. First, the data information output by several water level parameter detection modules is compressed through the input layer, mapping layer, and bottleneck layer, extracting from the high-dimensional parameter space of the detection-module outputs the most representative low-dimensional subspace, which effectively filters the noise and measurement errors in the output data. The outputs are then decompressed through the bottleneck layer, demapping layer, and output layer, restoring the previously compressed information to each input value and thereby reconstructing the output data of the water level parameter detection modules. To achieve compression of the outputs of the several water level parameter detection modules, the number of bottleneck-layer nodes of the AANN model is markedly smaller than that of the input layer; and to prevent a trivial one-to-one mapping between the detection-module outputs and the AANN output layer, all layers except the output layer (whose excitation function is linear) adopt nonlinear excitation functions.
In essence, the first hidden layer of the AANN auto-associative neural network model is called the mapping layer, whose node transfer function may be an S-shaped function or another similar nonlinear function. The second hidden layer is called the bottleneck layer; its dimension is the smallest in the network, and its transfer function may be linear or nonlinear. The bottleneck layer avoids the easily realized trivial mapping in which output equals input one-to-one: it forces the network to encode and compress the output signals of the several water level parameter detection modules, which are then decoded and decompressed after the bottleneck layer to produce estimates of the detection-module output signals. The third (last) hidden layer is called the demapping layer, whose node transfer function is generally a nonlinear S-shaped function; the AANN auto-associative neural network is trained with the error back-propagation algorithm.
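The mapping/bottleneck/demapping structure can be sketched as follows. The layer sizes are illustrative assumptions, and training by error back-propagation is omitted; only the constrained forward structure that forces compression is shown:

```python
import numpy as np

rng = np.random.default_rng(2)

def _layer(n_in, n_out):
    return rng.normal(0.0, 0.5, (n_out, n_in)), np.zeros(n_out)

class AANN:
    """Five-layer auto-associative network structure (sketch).

    input -> mapping (tanh) -> bottleneck (tanh) -> demapping (tanh) -> output (linear)
    The bottleneck is deliberately narrower than the input, forcing compression.
    """
    def __init__(self, n_in=6, n_map=8, n_bottleneck=2):
        self.layers = [_layer(n_in, n_map), _layer(n_map, n_bottleneck),
                       _layer(n_bottleneck, n_map), _layer(n_map, n_in)]

    def forward(self, x):
        h = x
        for i, (W, b) in enumerate(self.layers):
            z = W @ h + b
            h = z if i == len(self.layers) - 1 else np.tanh(z)  # linear output layer
            if i == 1:
                self.code = h          # compressed bottleneck representation
        return h
```

After training on sensor records, `forward` would return a denoised reconstruction of the detection-module outputs, while `code` holds the low-dimensional subspace representation described above.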
(4) PI controller-NARX neural network controller design
The PI controller output of the PI controller-NARX neural network controller serves as the input of the NARX neural network controller. The current output of the NARX neural network model depends not only on the past NARX model outputs y(t-n) but also on the current output vector X(t) of the PI controller, the delay order of the PI controller output vector, and so on. The output of the PI controller is passed through the time-delay layer to the hidden layer; the hidden layer processes these signals and passes them to the output layer, which linearly weights the hidden-layer outputs to obtain the final neural network output; the delay layer delays the signals fed back by the NARX neural network model and the input-layer PI controller outputs before passing them to the hidden layer.
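A hedged sketch of the PI part that feeds the NARX controller; the first-order plant, gains, and time step below are all illustrative assumptions, not the patent's tuning:

```python
class PIController:
    """Discrete-time PI controller; its output u(t) would feed the NARX controller."""
    def __init__(self, kp, ki, dt=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint, measured):
        e = setpoint - measured
        self.integral += e * self.dt           # accumulate the integral term
        return self.kp * e + self.ki * self.integral

def simulate(steps=200, setpoint=1.0):
    """Closed loop against a toy first-order water-level plant y' = y + 0.1*(u - y)."""
    pi, y = PIController(kp=1.0, ki=0.1), 0.0
    for _ in range(steps):
        u = pi.step(setpoint, y)
        y = y + 0.1 * (u - y)
    return y
```

The integral term removes the steady-state error that a pure proportional controller would leave, so the simulated level settles on the setpoint.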
2. Internet of Things water resource big data management subsystem
The Internet of Things water resource big data management subsystem includes hydrological parameter measurement terminals, a field monitoring end, hydrological parameter control ends, a hydrological gateway, a hydrological parameter cloud platform, and a hydrological monitoring mobile phone APP. The hydrological parameter measurement terminals are responsible for collecting the hydrological parameter information of the monitored waters, and the field monitoring end contains the water regime detection and control subsystem. Through the hydrological gateway, bidirectional communication among the hydrological parameter measurement terminals, hydrological parameter control ends, field monitoring end, hydrological parameter cloud platform, and hydrological monitoring mobile phone APP is realized, achieving intelligent adjustment of the hydrological parameters. The hydrological monitoring mobile phone APP accesses the hydrological parameter cloud platform through the 5G network to realize remote monitoring of the hydrological parameters; the water regime detection and control subsystem is realized as in claim 1.
1. Design of overall system function
The Internet of Things water resource big data management subsystem realizes the detection and adjustment of hydrological parameters. The system's many hydrological parameter measurement terminals build a wireless monitoring network in a self-organizing manner to realize bidirectional wireless communication among the parameter measurement terminals, the hydrological parameter control ends, the hydrological gateway, the field monitoring end, the hydrological parameter cloud platform, and the hydrological monitoring mobile phone APP. The measurement terminals send the detected hydrological parameters through the hydrological gateway to the field monitoring end and the hydrological parameter cloud platform, where the water regime detection and control subsystem of the field monitoring end processes the hydrological parameters and adjusts them intelligently; real-time monitoring of the hydrological parameters is achieved by accessing the hydrological parameter cloud platform from the hydrological monitoring mobile phone APP. The Internet of Things water resource big data management subsystem is shown in figure 3.
2. Design of hydrological parameter measurement terminal
A large number of hydrological parameter measurement terminals based on a wireless sensor network serve as hydrological parameter sensing terminals; through the self-organizing wireless network, the measurement terminals and the hydrological parameter control ends exchange information with the field monitoring end, the hydrological gateway, and the hydrological parameter cloud platform. Each hydrological parameter measurement terminal comprises a sensor group for acquiring parameters that affect the hydrology (water level, water flow velocity, rainfall, pH value, water temperature, and dissolved oxygen sensors, among others), the corresponding signal conditioning circuits, an STM32 microprocessor, a GPS module, a camera, and a GPRS wireless transmission module. Using high-precision GPS positioning, a geographical map of the river is established and the coordinates of the lead fish are monitored in real time; the rotation angles of the camera and searchlight are controlled automatically to monitor upstream and downstream of the section, and the video monitoring can track the position of the lead fish automatically in real time or manually. The software of the hydrological parameter measurement terminal mainly realizes wireless communication and the acquisition and preprocessing of hydrological parameters. The software is designed in the C language, which gives high compatibility, greatly improves the efficiency of software design and development, and enhances the reliability, readability, and portability of the program code. The structure of the hydrological parameter measurement terminal is shown in figure 4.
3. Hydrological parameter control end design
The hydrological parameter control end comprises an STM32 single-chip microcomputer, a GPRS wireless transmission module and a water pumping device. The hydrological condition detection and control subsystem designed in the field monitoring end realizes detection and adjustment of the hydrological parameters. The control-end software is written in C, which offers high compatibility, improves the efficiency of software design and development, and enhances the reliability, readability and portability of the program code. The structure of the hydrological parameter control end is shown in figure 5.
4. Hydrological gateway design
The hydrological gateway comprises a GPRS wireless transmission module, an NB-IoT module, an STM32 single-chip microcomputer and an RS232 interface. The GPRS wireless transmission module forms a self-organizing communication network with the hydrological parameter measurement terminals and the hydrological parameter control end; the NB-IoT module realizes bidirectional data interaction between the gateway and the hydrological parameter cloud platform, the hydrological parameter control end, the field monitoring end and the hydrological monitoring mobile phone APP; and the RS232 interface connects the gateway to the field monitoring end for local information interaction. The hydrological gateway is shown in fig. 6.
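The gateway's role as a bridge between the three links can be sketched as follows. This is a minimal illustrative model only: the link names, class name and queue-based framing are assumptions for the sketch, not the patented implementation.

```python
from collections import deque

# Illustrative sketch of the hydrological gateway routing frames
# bidirectionally between its three links (GPRS self-organizing network,
# NB-IoT uplink to the cloud platform, RS232 to the field monitoring end).
class HydrologicalGateway:
    LINKS = ("gprs", "nb_iot", "rs232")

    def __init__(self):
        # One outbound queue per attached link.
        self.outboxes = {link: deque() for link in self.LINKS}

    def receive(self, source, message, destination):
        """Forward a frame arriving on `source` to the queue of `destination`."""
        if source not in self.LINKS or destination not in self.LINKS:
            raise ValueError("unknown link")
        self.outboxes[destination].append((source, message))

    def drain(self, link):
        """Return and clear all frames queued for `link`."""
        frames = list(self.outboxes[link])
        self.outboxes[link].clear()
        return frames
```

For example, a water level frame arriving over GPRS from a measurement terminal would be queued on the NB-IoT link for the cloud platform, while a setpoint from the cloud platform travels the opposite way.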
5. Field monitoring end software
The field monitoring end is an industrial control computer. It mainly performs acquisition and intelligent adjustment of the hydrological parameters, and exchanges information with the hydrological parameter control end, the hydrological parameter cloud platform and the hydrological monitoring mobile phone APP. Its main functions are communication parameter setting, water resource data analysis and management, and the hydrological condition detection and control subsystem. The management software is developed with Microsoft Visual C++ 6.0, and the communication program is designed by calling the system's MSComm communication control. The functions of the field monitoring end software are shown in figure 7.
The technical means disclosed in the present invention are not limited to those disclosed in the above embodiments, but also include technical schemes formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (7)

1. The hydrological parameter intelligent monitoring and water resource big data management system is characterized by comprising a hydrological condition detection and control subsystem and an Internet of Things water resource big data management subsystem, realizing intelligent detection and adjustment of hydrological parameters and big data management of water resources;
the hydrological condition detection and control subsystem comprises an Elman neural network-NARX neural network model, a fuzzy recurrent neural network-NARX neural network controller, an AANN auto-associative neural network model, PI controller-NARX neural network controllers, PI controllers, parameter detection modules and a parameter prediction module.
2. The hydrological parameter intelligent monitoring and water resource big data management system according to claim 1, wherein: the outputs of the rainfall, water flow and water level sensor groups of a plurality of upstream water areas serve as the inputs of a parameter prediction module; the water level set value, the output of the parameter prediction module and the output of the AANN auto-associative neural network model serve as the corresponding inputs of the Elman neural network-NARX neural network model; the difference between the output of the Elman neural network-NARX neural network model and the output of the AANN auto-associative neural network model is the water level error, and the water level error and its rate of change serve as the inputs of the fuzzy recurrent neural network-NARX neural network controller. The water flow sensor group of each downstream water area serves as the input of a corresponding parameter detection module; the difference between the output of the fuzzy recurrent neural network-NARX neural network controller, the output of the corresponding PI controller and the output of the corresponding parameter detection module is the water flow error; the water flow error serves as the input of the corresponding PI controller-NARX neural network controller, whose output is the control quantity of the corresponding water pumping device. The water level sensor group of each downstream water area serves as the input of a corresponding parameter detection module; the difference between the water level set value and the output of that parameter detection module is the water level difference, and the water level difference and its rate of change serve as the inputs of the corresponding PI controller. The outputs of the parameter detection modules serve as the corresponding inputs of the AANN auto-associative neural network model, and the hydrological condition detection and control subsystem thereby realizes detection and control of the water level and water flow of multiple areas.
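The signal flow of claim 2 can be sketched in code. This is a minimal stand-in in which every neural model is replaced by a simple linear combination; all coefficients, function names and the linear forms are illustrative assumptions, not the patented models.

```python
def pi_controller(error, state, kp=0.8, ki=0.1):
    """Discrete PI controller; `state` accumulates the integral term."""
    state["integral"] += error
    return kp * error + ki * state["integral"]

def control_step(level_setpoint, predicted, aann_out, flow_measured,
                 level_measured, prev_level_error, pi_state):
    # Elman-NARX model stand-in: fuses setpoint, prediction-module output
    # and AANN auto-associative model output (weights are assumptions).
    elman_narx_out = 0.5 * level_setpoint + 0.3 * predicted + 0.2 * aann_out
    # Water level error and its rate of change feed the fuzzy controller.
    level_error = elman_narx_out - aann_out
    error_rate = level_error - prev_level_error
    fuzzy_out = 0.6 * level_error + 0.4 * error_rate  # fuzzy RNN-NARX stand-in
    # Per-area loop: PI on the level difference, then the flow error
    # drives the PI-NARX controller commanding the water pumping device.
    pi_out = pi_controller(level_setpoint - level_measured, pi_state)
    flow_error = fuzzy_out - pi_out - flow_measured
    pump_command = 0.9 * flow_error                   # PI-NARX stand-in
    return pump_command, level_error
```

Calling `control_step` once per sampling period for each downstream water area reproduces the cascade structure of the claim: the upstream prediction shapes the level error, and the downstream flow error closes the pump control loop.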
3. The hydrological parameter intelligent monitoring and water resource big data management system according to claim 1, wherein: the parameter detection module is composed of denoising autoencoding neural network-NARX neural network models, an adaptive AP (affinity propagation) clusterer, PSO wavelet adaptive neural network models and an ESN neural network model. The values output by a parameter sensor over a period of time serve as the inputs of the corresponding denoising autoencoding neural network-NARX neural network models; the outputs of these models serve as the input of the adaptive AP clusterer; the output values of the different clusters of denoising autoencoding neural network-NARX neural network models serve as the inputs of the corresponding PSO wavelet adaptive neural network models; the outputs of the PSO wavelet adaptive neural network models serve as the corresponding inputs of the ESN neural network model; and the output of the ESN neural network model is the output of the parameter detection module.
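The four-stage pipeline of claim 3 can be sketched as follows. Each stage is deliberately simplified: a moving-average smoother stands in for the denoising autoencoder-NARX models, a greedy distance-based grouping stands in for affinity propagation, and a cluster-weighted average stands in for the PSO wavelet networks and the ESN fusion. All of these substitutions are assumptions for illustration.

```python
import numpy as np

def denoise(window):
    """Denoising autoencoder-NARX stand-in: a 3-point moving average."""
    w = np.asarray(window, dtype=float)
    kernel = np.ones(3) / 3.0
    return np.convolve(w, kernel, mode="valid")

def ap_cluster(values, tol=0.5):
    """Adaptive AP clusterer stand-in: greedy grouping by distance to mean."""
    clusters = []
    for v in values:
        for c in clusters:
            if abs(v - np.mean(c)) <= tol:
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters

def detect(window):
    """Full pipeline: denoise -> cluster -> per-cluster model -> fusion."""
    smooth = denoise(window)
    clusters = ap_cluster(smooth)
    # PSO wavelet adaptive NN stand-in: one representative per cluster.
    reps = [float(np.mean(c)) for c in clusters]
    # ESN stand-in: fusion weighted by cluster size.
    weights = np.array([len(c) for c in clusters], dtype=float)
    return float(np.dot(reps, weights) / weights.sum())
```

On a steady sensor window the pipeline simply returns the measured value; a transient spike is first attenuated by the smoother and then isolated into its own cluster, which limits its influence on the fused output.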
4. The hydrological parameter intelligent monitoring and water resource big data management system according to claim 1, wherein: the parameter prediction module comprises a parameter detection module, TDL tapped delay lines A, B, C and D, a metabolic GM(1,1) trend model, NARX neural network models A and B, and a BAM neural network-ANFIS adaptive neuro-fuzzy inference model of interval hesitant fuzzy numbers. The output of the parameter detection module serves as the input of TDL tapped delay line A; the output of TDL tapped delay line A serves as the input of the metabolic GM(1,1) trend model; the difference between the output of TDL tapped delay line A and the output of the metabolic GM(1,1) trend model, and the output of the metabolic GM(1,1) trend model, serve respectively as the inputs of NARX neural network model A and NARX neural network model B; the outputs of NARX neural network model A and NARX neural network model B serve respectively as the inputs of TDL tapped delay line B and TDL tapped delay line C; the outputs of TDL tapped delay line B and TDL tapped delay line C serve as the corresponding inputs of the BAM neural network-ANFIS adaptive neuro-fuzzy inference model of interval hesitant fuzzy numbers; the four parameters output by this model are a, b, c and d, and the model outputs the interval hesitant fuzzy number of the detected parameter.
5. The hydrological parameter intelligent monitoring and water resource big data management system according to claim 4, wherein: a and b form the interval [a, b], the minimum value of the detected parameter; c and d form the interval [c, d], the maximum value of the detected parameter; and the intervals [a, b] and [c, d] together form ([a, b], [c, d]), the interval hesitant fuzzy number of the detected parameter.
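The trend-model core of claims 4-5 can be illustrated with a standard GM(1,1) forecaster and a simple assembly of the interval hesitant fuzzy number ([a, b], [c, d]). The GM(1,1) fitting below is the classic grey-model derivation; the fixed ±5% margin that replaces the four-output BAM-ANFIS stage is purely an illustrative assumption.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Classic GM(1,1): fit on sequence x, forecast `steps` ahead."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                              # accumulated (1-AGO) sequence
    z = 0.5 * (x1[1:] + x1[:-1])                   # background values
    # Least-squares fit of the grey differential equation x(k) + a*z(k) = b.
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    n = len(x) + steps - 1
    x1_hat = (x[0] - b / a) * np.exp(-a * n) + b / a
    x1_prev = (x[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_hat - x1_prev                        # restore by differencing

def metabolic_gm11(series, window=4):
    """Metabolic update: refit on the newest `window` points only."""
    return gm11_forecast(series[-window:])

def interval_hesitant(series, margin=0.05):
    """Assemble ([a, b], [c, d]): fuzzified minimum and maximum (margin assumed)."""
    f = metabolic_gm11(series)
    lo = min(min(series), f)
    hi = max(max(series), f)
    return ((lo * (1 - margin), lo * (1 + margin)),
            (hi * (1 - margin), hi * (1 + margin)))
```

The "metabolic" variant keeps the model responsive by discarding the oldest sample each time a new measurement arrives, so the exponential trend is always fitted to the most recent window.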
6. The hydrological parameter intelligent monitoring and water resource big data management system according to claim 1, wherein: the Internet of Things water resource big data management subsystem uses the hydrological parameter measurement ends to collect the hydrological parameter information of the monitored water areas, contains the hydrological condition detection and control subsystem in the field monitoring end, realizes bidirectional communication among the hydrological parameter measurement end, the hydrological parameter control end, the field monitoring end, the hydrological parameter cloud platform and the hydrological monitoring mobile phone APP through the hydrological gateway, and thereby realizes intelligent adjustment of the hydrological parameters.
7. The hydrological parameter intelligent monitoring and water resource big data management system according to claim 6, wherein: the hydrological parameter measurement end comprises a sensor group for collecting the water level, water flow, rainfall, pH value, water temperature and dissolved oxygen hydrological parameters, the corresponding signal conditioning circuits, an STM32 microprocessor, a GPS module, a camera and a GPRS wireless transmission module.
CN202211301484.4A 2022-10-24 2022-10-24 Hydrologic parameter intelligent monitoring and water resource big data management system Active CN115630101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211301484.4A CN115630101B (en) 2022-10-24 2022-10-24 Hydrologic parameter intelligent monitoring and water resource big data management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211301484.4A CN115630101B (en) 2022-10-24 2022-10-24 Hydrologic parameter intelligent monitoring and water resource big data management system

Publications (2)

Publication Number Publication Date
CN115630101A true CN115630101A (en) 2023-01-20
CN115630101B CN115630101B (en) 2023-10-20

Family

ID=84907119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211301484.4A Active CN115630101B (en) 2022-10-24 2022-10-24 Hydrologic parameter intelligent monitoring and water resource big data management system

Country Status (1)

Country Link
CN (1) CN115630101B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116644832A (en) * 2023-04-07 2023-08-25 自然资源部第二海洋研究所 Optimized layout determining method for bay water quality monitoring station
CN116859830A (en) * 2023-03-27 2023-10-10 福建天甫电子材料有限公司 Production management control system for electronic grade ammonium fluoride production
CN117572770A (en) * 2023-11-15 2024-02-20 淮阴工学院 Control method of intelligent valve positioner and Internet of things system thereof
CN118430698A (en) * 2024-04-25 2024-08-02 淮阴工学院 Intelligent water quality monitoring method and aquaculture Internet of things system
CN118430698B (en) * 2024-04-25 2024-10-22 淮阴工学院 Intelligent water quality monitoring method and aquaculture Internet of things system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201229194Y (en) * 2008-06-19 2009-04-29 北京矿咨信矿业技术研究有限公司 Automatic monitoring system for seepage line of tailing dam
CN104715282A (en) * 2015-02-13 2015-06-17 浙江工业大学 Data prediction method based on improved PSO-BP neural network
CN105139274A (en) * 2015-08-16 2015-12-09 东北石油大学 Power transmission line icing prediction method based on quantum particle swarm and wavelet nerve network
CN105142177A (en) * 2015-08-05 2015-12-09 西安电子科技大学 Complex neural network channel prediction method
CN108345738A (en) * 2018-02-06 2018-07-31 广州地理研究所 A kind of self rating method of middle Storm flood of small basins confluence Runoff Model parameter
US20180237487A1 (en) * 2002-08-20 2018-08-23 Opsanitx Llc Lectin compositions and methods for modulating an immune response to an antigen
CN109492792A (en) * 2018-09-28 2019-03-19 昆明理工大学 A method of it is predicted based on particle group optimizing wavelet neural network powerline ice-covering
CN113903395A (en) * 2021-10-28 2022-01-07 聊城大学 BP neural network copy number variation detection method and system for improving particle swarm optimization
US11355223B1 (en) * 2015-02-06 2022-06-07 Brain Trust Innovations I, Llc Baggage system, RFID chip, server and method for capturing baggage data
CN115016276A (en) * 2022-06-17 2022-09-06 淮阴工学院 Intelligent moisture regulation and environmental parameter Internet of things big data system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180237487A1 (en) * 2002-08-20 2018-08-23 Opsanitx Llc Lectin compositions and methods for modulating an immune response to an antigen
CN201229194Y (en) * 2008-06-19 2009-04-29 北京矿咨信矿业技术研究有限公司 Automatic monitoring system for seepage line of tailing dam
US11355223B1 (en) * 2015-02-06 2022-06-07 Brain Trust Innovations I, Llc Baggage system, RFID chip, server and method for capturing baggage data
CN104715282A (en) * 2015-02-13 2015-06-17 浙江工业大学 Data prediction method based on improved PSO-BP neural network
CN105142177A (en) * 2015-08-05 2015-12-09 西安电子科技大学 Complex neural network channel prediction method
CN105139274A (en) * 2015-08-16 2015-12-09 东北石油大学 Power transmission line icing prediction method based on quantum particle swarm and wavelet nerve network
CN108345738A (en) * 2018-02-06 2018-07-31 广州地理研究所 A kind of self rating method of middle Storm flood of small basins confluence Runoff Model parameter
CN109492792A (en) * 2018-09-28 2019-03-19 昆明理工大学 A method of it is predicted based on particle group optimizing wavelet neural network powerline ice-covering
CN113903395A (en) * 2021-10-28 2022-01-07 聊城大学 BP neural network copy number variation detection method and system for improving particle swarm optimization
CN115016276A (en) * 2022-06-17 2022-09-06 淮阴工学院 Intelligent moisture regulation and environmental parameter Internet of things big data system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHANGWEI CAI et al.: "Optimizing floating centroids method neural network classifier using dynamic multilayer particle swarm optimization", GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference, page 394
YE-QUN WANG et al.: "Dropout topology-assisted bidirectional learning particle swarm optimization for neural architecture search", GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion, page 93
TENG FEIDA: "Establishment and Application of a Rapid Water Resources Assessment System for Baicheng City, a Typical Area of the Songnen Plain", China Master's Theses Full-text Database, Basic Sciences, no. 08, pages 013-16
JIANG BAIQUAN: "Application of Artificial Neural Networks in Water Environment Quality Assessment and Prediction", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 02, pages 027-210
ZHAO HANBO: "Research on GMS-based Prediction of Groundwater Environmental Impact in a Mining Area", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 02, pages 027-466

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116859830A (en) * 2023-03-27 2023-10-10 福建天甫电子材料有限公司 Production management control system for electronic grade ammonium fluoride production
CN116859830B (en) * 2023-03-27 2024-01-26 福建天甫电子材料有限公司 Production management control system for electronic grade ammonium fluoride production
CN116644832A (en) * 2023-04-07 2023-08-25 自然资源部第二海洋研究所 Optimized layout determining method for bay water quality monitoring station
CN117572770A (en) * 2023-11-15 2024-02-20 淮阴工学院 Control method of intelligent valve positioner and Internet of things system thereof
CN117572770B (en) * 2023-11-15 2024-05-17 淮阴工学院 Control method of intelligent valve positioner and Internet of things system thereof
CN118430698A (en) * 2024-04-25 2024-08-02 淮阴工学院 Intelligent water quality monitoring method and aquaculture Internet of things system
CN118430698B (en) * 2024-04-25 2024-10-22 淮阴工学院 Intelligent water quality monitoring method and aquaculture Internet of things system

Also Published As

Publication number Publication date
CN115630101B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN111832814B (en) Air pollutant concentration prediction method based on graph attention mechanism
CN115630101B (en) Hydrologic parameter intelligent monitoring and water resource big data management system
CN112116080A (en) CNN-GRU water quality prediction method integrated with attention mechanism
CN113126676B (en) Livestock and poultry house breeding environment parameter intelligent control system
CN115016276B (en) Intelligent water content adjustment and environment parameter Internet of things big data system
CN115905938B (en) Storage tank safety monitoring method and system based on Internet of things
CN115687995A (en) Big data environmental pollution monitoring method and system
CN113281465A (en) Livestock and poultry house breeding environment harmful gas detection system
CN115343784A (en) Local air temperature prediction method based on seq2seq-attention model
CN112766603A (en) Traffic flow prediction method, system, computer device and storage medium
CN115128978A (en) Internet of things environment big data detection and intelligent monitoring system
CN111292121A (en) Garden load prediction method and system based on garden image
CN117221352A (en) Internet of things data acquisition and intelligent big data processing method and cloud platform system
CN115659201A (en) Gas concentration detection method and monitoring system for Internet of things
CN117306608A (en) Foundation pit big data acquisition and intelligent monitoring method and Internet of things system thereof
CN117232817A (en) Intelligent big data monitoring method of electric valve and Internet of things system
CN115616163A (en) Gas accurate preparation and concentration measurement system
Lin et al. An RBF‐based model with an information processor for forecasting hourly reservoir inflow during typhoons
CN116821784A (en) ST-Informar-based ship traffic flow long-sequence space-time prediction method
CN115016275B (en) Intelligent feeding and livestock house big data Internet of things system
CN114254828B (en) Power load prediction method based on mixed convolution feature extractor and GRU
CN114995248A (en) Intelligent maintenance and environmental parameter big data internet of things system
Nayak et al. Water quality time-series modeling and forecasting techniques
CN116108361B (en) Intelligent oil quality detection method and monitoring system
Zheng et al. Wind Electricity Power Prediction Based on CNN-LSTM Network Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant