CN115630101A - Hydrological parameter intelligent monitoring and water resource big data management system - Google Patents
- Publication number: CN115630101A
- Application number: CN202211301484.4A
- Authority: CN (China)
- Prior art keywords: neural network, output, parameter, input, hydrological
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/2462 — Information retrieval; query processing; special types of queries: approximate or statistical queries
- G06F16/248 — Information retrieval; querying: presentation of query results
- G06F16/252 — Integrating or interfacing systems between a database management system and a front-end application
- G06N3/08 — Computing arrangements based on biological models; neural networks: learning methods
- G16Y20/10 — IoT: information sensed or collected by the things relating to the environment, e.g. temperature, or relating to location
- G16Y40/10 — IoT characterised by the purpose of the information processing: detection; monitoring
- Y02A10/40 — Technologies for adaptation to climate change at coastal zones or river basins: controlling or monitoring, e.g. of flood or hurricane; forecasting, e.g. risk assessment or mapping
Description
Technical Field
The present invention relates to the technical field of hydrological data monitoring and management, and in particular to a system for intelligent monitoring of hydrological parameters and big-data management of water resources.
Background Art
Hydrological monitoring not only provides a strong data basis for research on flood control and disaster prevention, but also plays a vital role in decisions on the sustainable use of water resources. China has a vast territory in which numerous rivers, lakes, reservoirs, channels, groundwater and drinking-water sources criss-cross to form a distinctive water-resource system. In recent years, however, abnormal weather has become frequent: heavy precipitation can make water levels and flows surge, severely threatening river embankments and reservoir dams, and in some areas dikes and dams have been at risk of breaching, endangering people's lives and property. The need for hydrological monitoring is therefore all the more urgent. By tightly integrating the Internet of Things, artificial intelligence, big data and cloud services with hydrological monitoring, an IoT-based intelligent water-resource monitoring and information management system can be applied to scenarios such as remote monitoring of the water level and flow of rivers, lakes, reservoirs, channels, groundwater and drinking-water sources, online water-quality analysis, rainfall and water-regime telemetry, and remote video surveillance. It keeps operators up to date on hydrological data such as water level and flow, reservoir level and precipitation, and supports the storage, querying, statistical analysis and varied display of hydrological monitoring data.
Summary of the Invention
The present invention discloses a system for intelligent monitoring of hydrological parameters and big-data management of water resources. To address the low degree of automation in traditional hydrological data monitoring and water-resource management, the invention uses IoT, wireless communication and intelligent control technology: multiple fixed monitoring nodes automatically acquire hydrological data, and a wireless communication unit sends the collected data in real time to a cloud platform for processing, analysis, storage and sharing. This provides a basis for timely early warning and automatic regulation of hydrological parameters, and realizes centralized monitoring and management of distributed hydrological data.
To solve the above problems, the present invention adopts the following technical solution:
A system for intelligent monitoring of hydrological parameters and big-data management of water resources, characterized in that the system comprises a water-regime detection and control subsystem and an IoT water-resource big-data management subsystem, which together realize intelligent detection and regulation of hydrological parameters and big-data management of water resources.
A further technical improvement of the present invention is as follows:
The water-regime detection and control subsystem comprises an Elman neural network-NARX neural network model, a fuzzy recurrent neural network-NARX neural network controller, an AANN auto-associative neural network model, PI controller-NARX neural network controllers, PI controllers, parameter detection modules and a parameter prediction module. The outputs of several groups of upstream rainfall, water-flow and water-level sensors serve as the inputs of the parameter prediction module. The water-level setpoint, the parameter prediction module output and the AANN auto-associative neural network model output serve as the corresponding inputs of the Elman neural network-NARX neural network model. The difference between the Elman neural network-NARX neural network model output and the AANN auto-associative neural network model output is the water-level error; the water-level error and its rate of change are the inputs of the fuzzy recurrent neural network-NARX neural network controller. Each downstream water-flow sensor group feeds the corresponding parameter detection module. The difference between the sum of the fuzzy recurrent neural network-NARX neural network controller output and the corresponding PI controller output, on the one hand, and the corresponding parameter detection module output, on the other, is the water-flow error; this error is the input of the corresponding PI controller-NARX neural network controller, whose output is the control quantity of the corresponding pumping device. Each downstream water-level sensor group likewise feeds a corresponding parameter detection module; the difference between the water-level setpoint and that module's output is the water-level difference, and the water-level difference and its rate of change are the inputs of the corresponding PI controller. The outputs of the several parameter detection modules serve as the corresponding inputs of the AANN auto-associative neural network model. The subsystem thus realizes detection and control of water level and water flow over multiple areas. The water-regime detection and control subsystem is shown in Figure 1.
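As a minimal illustration of how the flow error in the loop above is formed, the sketch below implements a discrete PI controller and combines its output with a stand-in value for the fuzzy recurrent neural network-NARX controller output. The gains, sample time and numeric values are illustrative assumptions, not taken from the patent.

```python
class PI:
    """Discrete PI controller (illustrative gains, not from the patent)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        # accumulate the integral term, then return the PI control signal
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# flow error as wired above: (fuzzy controller output + PI output)
# minus the flow measured by the parameter detection module
pi = PI(kp=1.2, ki=0.4, dt=1.0)
fuzzy_narx_out = 10.0      # stand-in for the fuzzy RNN-NARX controller output
measured_flow = 9.0        # stand-in for the parameter detection module output
level_difference = 1.0     # water-level setpoint minus detected water level
flow_error = (fuzzy_narx_out + pi.step(level_difference)) - measured_flow
```

The resulting `flow_error` would then drive the PI controller-NARX neural network controller that commands the pumping device.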
A further technical improvement of the present invention is as follows:
The parameter detection module consists of several denoising autoencoder neural network-NARX neural network models, an adaptive AP (affinity propagation) clusterer, several PSO-optimized adaptive wavelet neural network models and an ESN neural network model. The measurement values collected over a period of time by each group of parameter sensors serve as the input of the corresponding denoising autoencoder neural network-NARX neural network model; the outputs of these models serve as the input of the adaptive AP clusterer; the clusterer groups the model outputs by type, and the output values of each type serve as the input of the corresponding PSO-optimized adaptive wavelet neural network model; the outputs of the several wavelet models serve as the corresponding inputs of the ESN neural network model, whose output is the output of the parameter detection module. The parameter detection module is shown in Figure 2.
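The data flow of the parameter detection module can be sketched as below. The four callables are deliberately trivial stand-ins (a moving-average "denoiser", a threshold "clusterer", mean fusion) for the DAE-NARX models, the adaptive AP clusterer, the PSO-optimized wavelet networks and the ESN combiner; only the wiring, not the models, comes from the patent.

```python
import numpy as np

def detect_parameter(sensor_windows, denoise, cluster, fuse, combine):
    """Pipeline of the parameter detection module: per-sensor denoising,
    clustering of the denoised outputs, per-cluster fusion, final combiner."""
    denoised = [denoise(w) for w in sensor_windows]   # one DAE-NARX per sensor group
    labels = cluster(denoised)                        # group outputs by type
    fused = [fuse([d for d, lab in zip(denoised, labels) if lab == k])
             for k in sorted(set(labels))]            # one fusion net per type
    return combine(fused)                             # ESN-style final output

# toy stand-ins for the four learned components
denoise = lambda w: float(np.mean(w))
cluster = lambda ds: [0 if d < 5.0 else 1 for d in ds]
fuse = combine = lambda ds: float(np.mean(ds))

value = detect_parameter([[1, 2, 3], [2, 3, 4], [9, 10, 11]],
                         denoise, cluster, fuse, combine)
```

With real models plugged in, `value` would be the module's detected parameter.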
A further technical improvement of the present invention is as follows:
The parameter prediction module comprises a parameter detection module, TDL tapped delay line A, a metabolic GM(1,1) trend model, NARX neural network model A, NARX neural network model B, TDL tapped delay lines B, C and D, and a BAM neural network-ANFIS adaptive neuro-fuzzy inference model with interval hesitant fuzzy number output. The parameter detection module output is the input of tapped delay line A, whose output is the input of the metabolic GM(1,1) trend model. The difference between the output of delay line A and the output of the metabolic GM(1,1) trend model, and the trend-model output itself, serve respectively as the inputs of NARX neural network models A and B; the outputs of models A and B serve respectively as the inputs of tapped delay lines B and C. The outputs of delay lines B and C and the fed-back output of the BAM neural network-ANFIS adaptive neuro-fuzzy inference model serve as the corresponding inputs of that BAM neural network-ANFIS model. The four parameters output by the BAM neural network-ANFIS model are a, b, c and d: a and b form the interval number [a, b], the minimum of the detected parameter; c and d form the interval number [c, d], its maximum; together, ([a, b], [c, d]) is the interval hesitant fuzzy number of the detected parameter, which the BAM neural network-ANFIS model outputs. The parameter prediction module is shown in Figure 2.
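The packaging of the four outputs a, b, c, d into an interval hesitant fuzzy number can be sketched as follows. The ordering check a ≤ b ≤ c ≤ d, the `consistent` helper and the numeric values are illustrative assumptions, not specified in the patent.

```python
def interval_hesitant_fuzzy(a, b, c, d):
    """Pack four model outputs into ([a, b], [c, d]): [a, b] bounds the
    detected parameter's minimum, [c, d] bounds its maximum."""
    if not (a <= b <= c <= d):
        raise ValueError("expected a <= b <= c <= d")
    return ((a, b), (c, d))

def consistent(ihfn, value):
    """A reading is consistent with the prediction when it lies between the
    low end of the minimum interval and the high end of the maximum interval."""
    (lo, _), (_, hi) = ihfn
    return lo <= value <= hi

# hypothetical water-level prediction in metres
level = interval_hesitant_fuzzy(2.1, 2.3, 2.8, 3.0)
```

Representing each prediction as a pair of intervals rather than a point value is what lets the module express the measurement uncertainty described later in the advantages.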
A further technical improvement of the present invention is as follows:
The denoising autoencoder neural network-NARX neural network model, BAM neural network-ANFIS adaptive neuro-fuzzy inference model, BAM neural network-NARX neural network model, Elman neural network-NARX neural network model, fuzzy recurrent neural network-NARX neural network controller and PI controller-NARX neural network controller are each characterized by a series connection: the denoising autoencoder neural network in series with the NARX neural network model, the BAM neural network in series with the ANFIS adaptive neuro-fuzzy inference model, the BAM neural network in series with the NARX neural network model, the Elman neural network in series with the NARX neural network model, the fuzzy recurrent neural network in series with the NARX neural network controller, and the PI controller in series with the NARX neural network controller.
A further technical improvement of the present invention is as follows:
The IoT water-resource big-data management subsystem comprises hydrological parameter measurement terminals, a hydrological gateway, an on-site monitoring terminal, hydrological parameter control terminals, a hydrological parameter cloud platform and a hydrological monitoring mobile app. The measurement terminals collect hydrological parameter information from the monitored waters, and the on-site monitoring terminal hosts the water-regime detection and control subsystem. Through the hydrological gateway, two-way communication is established among the measurement terminals, the control terminals, the on-site monitoring terminal, the cloud platform and the mobile app, realizing intelligent regulation of hydrological parameters. The IoT water-resource big-data management subsystem is shown in Figure 3.
A further technical improvement of the present invention is as follows:
Each hydrological parameter measurement terminal comprises a sensor group for water level, water flow, rainfall, pH, water temperature and dissolved oxygen, together with the corresponding signal conditioning circuits, an STM32 microprocessor, a GPS module, a camera and a GPRS wireless transmission module. The hydrological parameter measurement terminal is shown in Figure 4.
Compared with the prior art, the present invention has the following clear advantages:
1. Hydrological parameter measurement is subject to uncertainty and randomness arising from sensor accuracy errors, interference and measurement anomalies. The invention uses the parameter prediction module to convert the sensor output values into the interval hesitant fuzzy numbers produced by the BAM neural network-ANFIS adaptive neuro-fuzzy inference model. This representation effectively handles the fuzziness, dynamics and uncertainty of hydrological parameter measurement, and improves the objectivity and credibility of the parameters detected by the hydrological sensors.
2. In the BAM neural network-ANFIS fuzzy neural network model of the invention, the BAM neural network is an associative-memory neural network whose output serves as the input of the ANFIS model. The BAM neural network realizes bidirectional hetero-association: through its bidirectional associative-memory matrix it stores previously prepared pairs of hydrological-parameter input samples, and when new hydrological-parameter information arrives, it recalls the corresponding output in parallel as the ANFIS input. The bidirectional associative memory is a two-layer nonlinear feedback neural network with associative-memory, distributed-storage and self-learning capabilities: when an input signal is applied to one layer of the BAM network, the hydrological-parameter output signal is obtained from the other layer.
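The bidirectional associative memory described above can be illustrated with the classical discrete BAM: a Hebbian correlation matrix stores bipolar pattern pairs, and recall iterates between the two layers until stable. The patterns below are made-up stand-ins for coded hydrological states, not data from the patent.

```python
import numpy as np

def bam_train(X, Y):
    """Hebbian correlation matrix W = sum_i x_i y_i^T over bipolar pairs."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def bam_recall(W, x, steps=5):
    """Bidirectional recall: iterate x -> y -> x between the two layers."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sign(W.T @ x)   # forward pass to the second layer
        x = sign(W @ y)     # feedback pass to the first layer
    return x, y

# two stored pattern pairs (bipolar +/-1)
X = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
Y = [np.array([1, 1, -1]),     np.array([-1, 1, 1])]
W = bam_train(X, Y)
x_rec, y_rec = bam_recall(W, np.array([1, -1, 1, -1]))
```

Presenting the first stored x-pattern recalls its associated y-pattern, and the feedback pass reproduces the x-pattern, illustrating the two-way associative behaviour.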
3. In the PSO-optimized adaptive wavelet neural network model of the invention, the transfer function of the hidden layer is a wavelet function. By adaptively adjusting the wavelet parameters, the network extracts the time-frequency features of the input signal more effectively. The model has a stable structure, a simple algorithm, strong global search capability, fast convergence and strong generalization; it predicts the input signal effectively, is little affected by noise, is stable, converges quickly and identifies with high accuracy. It avoids the blind trial-and-error of BP network structure design; because the network weights enter linearly and the learning objective function is convex, the training process fundamentally avoids problems such as local optima. The algorithm is conceptually simple, converges quickly, has strong function-learning capability, and can approximate arbitrary nonlinear functions with high accuracy.
4. The invention adopts a PSO-trained adaptive wavelet neural network, which avoids the gradient-descent requirements that the activation function be differentiable and that derivatives be computed; since each particle's search iteration formula is simple, computation is much faster than gradient descent. By tuning the parameters in the iteration formula, the search can also escape local extrema and optimize globally, simply and effectively speeding up network training. The PSO-trained adaptive wavelet neural network model has smaller error, faster convergence and stronger generalization. With wavelet functions as the hidden-layer excitation functions, the PSO algorithm adjusts the weights, dilation parameters and translation parameters; the resulting model combines a simple algorithm, stable structure, fast convergence, strong global optimization capability, high identification accuracy and strong generalization.
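The particle-iteration formula referred to in advantage 4 is the standard PSO velocity/position update. The sketch below applies it to a smooth test function; in the invention's setting the objective would instead be the wavelet network's training error over its weights, dilation and translation parameters. Swarm size, inertia and acceleration coefficients are conventional assumed values.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimization of f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# sanity check on a convex bowl with minimum at (1, 2)
g, best = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
```

Note that the update uses only function values, never gradients, which is exactly why differentiability of the activation function is not required.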
5. The invention uses the metabolic GM(1,1) trend model to predict input parameters over a long time span. The metabolic GM(1,1) trend model predicts future values from the input-parameter values; each predicted future value is appended to the model's original data sequence while the oldest datum at the head of the sequence is removed, the model is rebuilt, and the next future value is predicted. Repeating this process yields the future values of the model's output parameters. This approach, known as the equal-dimension grey-number replacement model, supports longer-horizon prediction and captures the trend of the input parameter values more accurately.
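The rolling procedure of advantage 5 can be sketched with the standard GM(1,1) construction (accumulated generating operation, least-squares grey coefficients, time-response function) wrapped in an equal-dimension replacement loop. The series values and window length are illustrative.

```python
import numpy as np

def gm11_next(x0):
    """Fit GM(1,1) to the series x0 and predict its next value."""
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = np.asarray(x0[1:], dtype=float)
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # grey development coefficients
    f = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response
    return f(n) - f(n - 1)                    # inverse AGO restores x0's scale

def metabolic_gm11(x0, steps):
    """Metabolic (equal-dimension replacement) GM(1,1): after each prediction,
    append it and drop the oldest point so the window length stays constant."""
    window = list(x0)
    preds = []
    for _ in range(steps):
        nxt = gm11_next(window)
        preds.append(nxt)
        window = window[1:] + [nxt]
    return preds

pred = gm11_next([100.0, 110.0, 121.0, 133.1])          # roughly geometric data
future = metabolic_gm11([100.0, 110.0, 121.0, 133.1], 3)
```

Because each prediction re-enters the modelling window, the rolling variant tracks a drifting trend better than a single fixed-window fit.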
Brief Description of the Drawings
Figure 1 shows the water-regime detection and control subsystem of the invention;
Figure 2 shows the parameter detection module and parameter prediction module of the invention;
Figure 3 shows the IoT water-resource big-data management subsystem of the invention;
Figure 4 shows the hydrological parameter measurement terminal of the invention;
Figure 5 shows the hydrological parameter control terminal of the invention;
Figure 6 shows the hydrological gateway of the invention;
Figure 7 shows the software structure of the on-site control terminal of the invention.
Detailed Description of Embodiments
The technical solution of the present invention is further described below with reference to Figures 1-7:
1. Steps of the water-regime detection and control subsystem:
1. Build the parameter detection module. The module consists of several denoising autoencoder neural network-NARX neural network models, an adaptive AP clusterer, several PSO-optimized adaptive wavelet neural network models and an ESN neural network model. The measurement values collected over a period of time by each group of parameter sensors serve as the input of the corresponding denoising autoencoder neural network-NARX neural network model; the outputs of these models serve as the input of the adaptive AP clusterer; the output values of each type grouped by the clusterer serve as the input of the corresponding PSO-optimized adaptive wavelet neural network model; the outputs of the several wavelet models serve as the corresponding inputs of the ESN neural network model, whose output is the output of the parameter detection module.
(1) Design of the denoising autoencoder-NARX neural network model
In the denoising autoencoder-NARX neural network model, the output of the denoising autoencoder serves as the input of the NARX neural network model. The denoising autoencoder is a dimensionality reduction method that trains a multilayer neural network with a small central layer to transform high-dimensional data into low-dimensional data. The denoising autoencoder (DAE) is a typical three-layer neural network, with an encoding process between the input layer and the hidden layer and a decoding process between the hidden layer and the output layer. The DAE obtains an encoded representation through the encoding operation on the input data (the encoder) and obtains the reconstructed input data through the decoding operation on the hidden-layer output (the decoder); the hidden-layer data are the dimensionality-reduced data. A reconstruction error function is then defined to measure the learning effect of the DAE. Constraints can be added to the error function to generate various types of denoising autoencoders. The encoder, decoder, and loss function are as follows:
Encoder: h = δ(Wx + b) (1)
Decoder: x̂ = δ(W′h + b′) (2)
Loss function: L(x, x̂) = ‖x − x̂‖² (3)
The training process of the autoencoder is similar to that of a BP neural network. W and W′ are weight matrices, b and b′ are bias vectors, h is the output of the hidden layer, x is the input vector, x̂ is the output vector, and δ is the activation function, typically the Sigmoid or tanh function. The denoising autoencoder comprises an encoding process (input layer to hidden layer) and a decoding process (hidden layer to output layer). Its goal is to make the input and output as close as possible by minimizing the error function through backpropagation, obtaining the optimal weights and biases of the autoencoder network in preparation for building a deep autoencoder model. According to the encoding and decoding principle, the noisy measurement data are encoded and decoded, the error function is constructed from the decoded data and the original data, and backpropagation minimizes this error function to obtain the optimal network weights and biases. The measurement data are corrupted by adding noise, and the corrupted data are fed into the neural network as the input layer. The reconstruction result of the DAE should approximate the original measurement data; in this way, measurement disturbances are eliminated and a stable representation is obtained. The original measurement parameters are corrupted with noise to form the disturbed input, which is fed into the encoder to obtain the feature expression and then mapped to the output layer through the decoder.
The NARX neural network model (Nonlinear Auto-Regression with eXternal input neural network) is a dynamic recurrent neural network: a nonlinear autoregressive network with external input, possessing multi-step time-delay dynamics, with feedback connections closing several layers of the network. It is the most widely used recurrent dynamic neural network in nonlinear dynamic systems, and its performance is generally superior to that of fully recurrent neural networks. The NARX model mainly consists of an input layer, a hidden layer, an output layer, and input/output delay layers; before application, the input and output delay orders and the number of hidden-layer neurons are generally determined in advance. The current output of the NARX model depends not only on the past NARX model outputs y(t−n) but also on the current DAE output vector X(t) and the delay order of the DAE output vector. The DAE output of the NARX model is passed through the delay layer to the hidden layer; the hidden layer processes the signal and passes it to the output layer, which linearly weights the hidden-layer outputs to obtain the final neural network output signal; the delay layer delays both the signal fed back by the NARX model and the DAE output from the input layer before sending them to the hidden layer. The NARX model has nonlinear mapping capability, good robustness, and adaptability, making it suitable for further processing the multiple high-frequency fluctuation components of the sensor output predictions. Let x(t) denote the input of the NARX model (the DAE output), m the delay order of the external input, y(t) the network output (the predicted DAE output for the next period), n the output delay order, and s the number of hidden-layer neurons. The output of the j-th hidden unit is then:
h_j = f(Σ_i w_ji·x_i + b_j)
In the above formula, w_ji is the connection weight between the i-th input and the j-th hidden neuron, and b_j is the bias of the j-th hidden neuron; the inputs x_i range over the delayed external inputs x(t), …, x(t−m) and the fed-back outputs y(t−1), …, y(t−n).
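The encode-corrupt-decode cycle described above can be sketched as follows. This is an illustrative sketch only: the layer sizes, noise level, and random initialization are assumed values for demonstration, and the backpropagation training step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions: an 8-dimensional sensor vector compressed to 3 hidden units.
n_in, n_hid = 8, 3
W  = rng.normal(0, 0.1, (n_hid, n_in))   # encoder weights W
b  = np.zeros(n_hid)                     # encoder bias b
W2 = rng.normal(0, 0.1, (n_in, n_hid))   # decoder weights (W' in the text)
b2 = np.zeros(n_in)                      # decoder bias (b' in the text)

def dae_forward(x, noise_std=0.1):
    x_noisy = x + rng.normal(0, noise_std, x.shape)  # corrupt the measurement
    h = sigmoid(W @ x_noisy + b)                     # encoder: h = δ(Wx + b)
    x_hat = sigmoid(W2 @ h + b2)                     # decoder: x̂ = δ(W'h + b')
    loss = np.sum((x - x_hat) ** 2)                  # reconstruction error ||x − x̂||²
    return h, x_hat, loss

x = rng.uniform(0, 1, n_in)
h, x_hat, loss = dae_forward(x)
```

The hidden vector h is the dimensionality-reduced representation that would then be fed, through the tapped delay line, into the NARX model.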
(2) Design of the adaptive AP clusterer
The adaptive AP clusterer performs clustering through "message passing" between data objects. It is based on the similarity between data points (using the negative Euclidean distance as the measure) and cyclically updates two kinds of messages, the responsibility and the availability, to find the optimal clustering result. Given a data set X = {x1, x2, …, xn} of N data points, the similarity between any two data points is:
simi(i, k) = −‖xi − xk‖², i ≠ k
The values on the main diagonal of simi(i, k) are replaced by the preference (bias) parameter p; the larger p is, the greater the probability that the corresponding point is selected as an exemplar. Therefore, the final number of clusters changes as p changes; without prior knowledge, p is generally set to the median of simi(i, k). R(i, k) is defined as the degree to which candidate exemplar k attracts data point i, and A(i, k) as the degree to which data point i supports k as an exemplar. The larger R(i, k) + A(i, k) is, the more likely point k is to be a cluster center (exemplar). The adaptive AP clusterer algorithm proceeds as follows:
A. Initialize the responsibility R(i, k) and the availability A(i, k) as zero matrices of the same shape as the similarity matrix simi(i, k).
B. Set p = −50 and lamda = 0.5, and cyclically update R(i, k) and A(i, k) until the stopping condition is reached; record the resulting number of clusters as K1.
C. Set p = p − 10 and cyclically update R(i, k) and A(i, k) until the stopping condition is reached, obtaining a series of cluster numbers K2, K3, …, Kl (empirically, lmax = 10).
D. In steps B and C, if the algorithm is detected to oscillate and fail to converge, lamda (value range 0.5-0.9) is increased in steps of 0.1 to eliminate the oscillation until the algorithm converges.
E. The silhouette coefficient is used to evaluate the clustering quality and the cluster numbers obtained in steps B and C; the larger the index, the better the clustering quality, and the corresponding number of clusters K is the optimal number of clusters.
F. The adaptive AP clusterer improves the accuracy and speed of the algorithm mainly by adaptively adjusting the preference parameter and the damping factor of the original AP clusterer. The algorithm uses the silhouette coefficient as the index of clustering validity and quality, uses the degree of oscillation to judge whether the algorithm converges after oscillation, adaptively adjusts and obtains the optimal combination of preference parameter and damping factor, and finally obtains the optimal clustering result.
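The message-passing core of steps A-F above can be sketched as follows. This sketch fixes a single preference p (the median of the similarities) and a single damping factor rather than implementing the full adaptive schedule over p and lamda, and it omits the silhouette-coefficient evaluation; the cluster data are illustrative assumptions.

```python
import numpy as np

def affinity_propagation(X, p=None, lam=0.5, iters=200):
    """Minimal AP message passing: negative squared Euclidean similarity,
    preference p on the diagonal, damping factor lam."""
    n = len(X)
    S = -np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # simi(i, k)
    if p is None:
        p = np.median(S)              # common choice without prior knowledge
    np.fill_diagonal(S, p)
    R = np.zeros((n, n)); A = np.zeros((n, n))
    rows = np.arange(n)
    for _ in range(iters):
        # Responsibility: R(i,k) = S(i,k) - max_{k'!=k}[A(i,k') + S(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, idx] = S[rows, idx] - second
        R = lam * R + (1 - lam) * Rnew          # damped update
        # Availability: A(i,k) = min(0, R(k,k) + sum_{i' not in {i,k}} max(0, R(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())      # keep R(k,k) unclamped
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()             # A(k,k) = sum_{i'!=k} max(0, R(i',k))
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = lam * A + (1 - lam) * Anew          # damped update
    exemplars = np.where(np.diag(R + A) > 0)[0]
    labels = np.argmax(S[:, exemplars], axis=1) if len(exemplars) else None
    return exemplars, labels

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.05, (5, 2)),     # toy cluster near (0, 0)
                 rng.normal(10, 0.05, (5, 2))])   # toy cluster near (10, 10)
exemplars, labels = affinity_propagation(pts)
```

In the adaptive variant of steps B-D, this inner loop would be rerun for each candidate p, raising lam when oscillation is detected.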
(3) Design of the PSO-optimized adaptive wavelet neural network model
The adaptive wavelet neural network of the PSO-optimized model is a feedforward network proposed on the basis of wavelet theory, in which a nonlinear wavelet basis replaces the commonly used nonlinear Sigmoid function and is combined with an artificial neural network. The transfer function of the hidden layer is a wavelet function; by adaptively adjusting the parameters of the wavelet function, the time-frequency characteristics of the input parameters can be extracted more effectively. The wavelet function serves as the activation function of the neurons, and the dilation and translation factors of the wavelet, as well as the connection weights, are adaptively adjusted during the optimization of the error energy function. Suppose the input signal of the wavelet neural network is expressed as a one-dimensional vector x_i (i = 1, 2, …, n) of input parameters and the output signal as y_k (k = 1, 2, …, m). The output-layer value of the wavelet neural network is computed as:
y_k = Σ_j ω_jk·ψ((Σ_i ω_ij·x_i − b_j)/a_j)
In the formula, ω_ij is the connection weight between input-layer node i and hidden-layer node j, ψ is the wavelet basis function, b_j is the translation factor of the wavelet basis function, a_j is the dilation factor of the wavelet basis function, and ω_jk is the connection weight between hidden-layer node j and output-layer node k. In this patent, the correction algorithm for the weights and thresholds of the adaptive wavelet neural network uses a gradient correction method to update the network weights and wavelet basis function parameters, so that the output of the wavelet neural network continuously approaches the desired output. Using the PSO-optimized adaptive wavelet neural network avoids the requirement of the gradient descent method that the activation function be differentiable, as well as the computation of the function derivatives; moreover, the iteration formula for each particle's search is simple, so the computation is much faster than gradient descent. Furthermore, by adjusting the parameters in the iteration formula, the algorithm can escape local extrema well. A swarm of random particles is initialized, and the optimal solution is found through iteration. In each iteration, a particle updates itself by tracking two "extrema": the first is the optimal solution pbest found by the particle itself, called the individual extremum; the other is the optimal solution currently found by the whole population, called the global extremum gbest. To apply the PSO-optimized adaptive wavelet neural network, the parameters of the adaptive wavelet neural network are first arranged as the particle position vector X, the mean-square error energy function is set as the objective function for optimization, and the basic formulas of the particle swarm optimization algorithm are iterated to find the optimal solution. The PSO training algorithm of the adaptive wavelet neural network is as follows:
A. Initialize the network structure, determine the number of neurons in the hidden layer, and determine the dimension D of the target search space.
B. Determine the number of particles m and initialize the position and velocity vectors of the particles; substitute the position and velocity vectors into the iteration formulas of the algorithm for updating; perform the optimization calculation with the error energy function as the objective function; and record the best position pbest found so far by each particle and the best position gbest found so far by the whole swarm.
C. Map the best position gbest found so far by the whole swarm to the network weights and thresholds for learning, and perform the optimization calculation with the error energy function as the particle fitness.
D. If the value of the error energy function is within the error range allowed by the actual problem, the iteration is complete; otherwise, return to step B and continue iterating.
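Steps A-D above can be sketched as a generic particle swarm optimizer. A simple quadratic stands in for the mean-square error energy function here (in the patent's scheme the position vector would hold the wavelet network's weights, dilation, and translation factors); the swarm size, inertia weight, and acceleration coefficients are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim, m=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: velocity update with inertia w and acceleration
    coefficients c1 (toward pbest) and c2 (toward gbest)."""
    X = rng.uniform(-5, 5, (m, dim))        # particle position vectors
    V = np.zeros((m, dim))                  # particle velocity vectors
    pbest = X.copy()                        # each particle's best position
    pbest_val = np.apply_along_axis(f, 1, X)
    g = pbest[np.argmin(pbest_val)].copy()  # global best position gbest
    for _ in range(iters):
        r1, r2 = rng.random((m, dim)), rng.random((m, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        vals = np.apply_along_axis(f, 1, X)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = X[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Stand-in "error energy" surface with minimum at (1.5, 1.5, 1.5, 1.5).
best, best_val = pso_minimize(lambda v: np.sum((v - 1.5) ** 2), dim=4)
```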
(4) Design of the ESN neural network model
The ESN neural network model (Echo State Network, ESN) is a new type of dynamic neural network that has all the advantages of dynamic neural networks; moreover, because the echo state network introduces the concept of a "reservoir", the method adapts better to nonlinear system identification than ordinary dynamic neural networks. The "reservoir" transforms the intermediate connection part of a traditional dynamic neural network into a randomly connected reservoir, and the whole learning process is in fact the process of learning how to connect to the reservoir. The reservoir is a randomly generated large-scale recurrent structure in which the interconnections between neurons are sparse; SD usually denotes the percentage of interconnected neurons among the total of N neurons. The state equations of the ESN neural network model are:
x(n+1) = f(W_in·u(n+1) + W·x(n) + W_back·y(n))
y(n+1) = f_out(W_out·(x(n+1), u(n+1)) + v(n+1))
In the formulas, W is the reservoir weight matrix, W_in is the input weight matrix, W_back is the feedback weight matrix, x(n) denotes the internal state of the neural network, W_out is the connection weight matrix between the reservoir, the network input, and the network output of the ESN model, v(n) is the output deviation of the neural network and can represent noise, f = [f1, f2, …, fn] are the n activation functions of the neurons inside the reservoir, fi is the hyperbolic tangent function, and f_out denotes the ε output functions of the ESN model.
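The reservoir update described above can be sketched as follows. The reservoir size, sparsity SD, spectral-radius scaling, and input signal are illustrative assumptions, and W_out is left random here (in practice it is the only trained matrix, typically fitted by linear regression on collected reservoir states).

```python
import numpy as np

rng = np.random.default_rng(2)

N, K, L = 50, 1, 1          # reservoir size, number of inputs, number of outputs
SD = 0.1                    # sparsity: fraction of nonzero reservoir links

W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < SD)   # sparse reservoir W
W *= 0.9 / max(abs(np.linalg.eigvals(W)))                  # spectral radius < 1 (echo state property)
W_in = rng.uniform(-0.5, 0.5, (N, K))                      # input weights
W_back = rng.uniform(-0.5, 0.5, (N, L))                    # output feedback weights
W_out = rng.normal(0, 0.1, (L, N + K))                     # output weights (untrained here)

def esn_step(x, u, y):
    # x(n+1) = f(W_in u(n+1) + W x(n) + W_back y(n)),  f = tanh
    x_new = np.tanh(W_in @ u + W @ x + W_back @ y)
    # y(n+1) = f_out(W_out [x(n+1); u(n+1)]),  f_out = identity here
    y_new = W_out @ np.concatenate([x_new, u])
    return x_new, y_new

x, y = np.zeros(N), np.zeros(L)
for n in range(100):
    u = np.array([np.sin(0.1 * n)])    # toy input signal
    x, y = esn_step(x, u, y)
```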
2. Design of the parameter prediction module
The parameter prediction module consists of the parameter detection module, tapped delay line (TDL) A, a metabolic GM(1,1) trend model, NARX neural network model A, NARX neural network model B, TDL B, TDL C, TDL D, and a BAM neural network-ANFIS adaptive neuro-fuzzy inference model with interval hesitant fuzzy numbers. The output of the parameter detection module serves as the input of TDL A; the output of TDL A serves as the input of the metabolic GM(1,1) trend model; the difference between the output of TDL A and the output of the metabolic GM(1,1) trend model, and the output of the metabolic GM(1,1) trend model, serve respectively as the inputs of NARX neural network model A and NARX neural network model B; the outputs of NARX models A and B serve respectively as the inputs of TDL B and TDL C; and the outputs of TDL B, TDL C, and the BAM neural network-ANFIS adaptive neuro-fuzzy inference model serve respectively as the corresponding inputs of the BAM neural network-ANFIS adaptive neuro-fuzzy inference model. The four parameters output by the BAM neural network-ANFIS model are a, b, c, and d; a and b form the interval number [a, b] as the minimum value of the detected parameter, and c and d form the interval number [c, d] as the maximum value of the detected parameter; the interval numbers [a, b] and [c, d] form ([a, b], [c, d]) as the interval hesitant fuzzy number of the detected parameter, which the BAM neural network-ANFIS adaptive neuro-fuzzy inference model outputs.
(1) Design of the metabolic GM(1,1) trend model
The GM(1,1) grey prediction method has many advantages over traditional statistical prediction methods: it does not need to determine whether the predictor variables follow a normal distribution, does not need large sample statistics, and does not need to change the prediction model whenever the ESN model output changes. Through the accumulated generating technique, a unified differential equation model is established; the accumulated ESN model outputs are restored to the original values to obtain the prediction result, and the differential equation model has higher prediction accuracy. The essence of establishing a GM(1,1) trend model is to apply one accumulated generation to the input raw data so that the generated sequence exhibits a certain regularity; by establishing a differential equation model, a fitting curve is obtained to predict the trend of the ESN model output. The present invention adopts the metabolic GM(1,1) trend model to predict the output trend of the ESN model over a long time span. With the metabolic GM(1,1) trend model, the output trend parameter values at future times can be predicted from the ESN model values. After each output trend parameter value is predicted by the above method, the predicted future trend value is appended to the original sequence of ESN model output parameters, the datum at the beginning of the sequence is removed accordingly for remodeling, and further future output parameter trend values are then predicted. By analogy, the output parameter trend values at future times are predicted. This method is called the metabolic model; it realizes trend prediction of the output parameters over a longer time and grasps the change trend of the output parameter values more accurately.
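The metabolic GM(1,1) procedure described above (accumulate, fit the grey differential equation, forecast, then append the prediction and drop the oldest value) can be sketched as follows. The input series is an illustrative assumption of roughly 5% growth per step.

```python
import numpy as np

def gm11_predict(x0):
    """One-step-ahead GM(1,1) forecast from series x0 (all values > 0)."""
    x0 = np.asarray(x0, float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])               # mean sequence of consecutive x1 values
    B = np.column_stack([-z1, np.ones(n - 1)])  # grey differential equation: x0(k) = -a z1(k) + b
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response: x̂1(k+1) = (x0(1) - b/a) e^{-ak} + b/a; forecast by differencing
    xh1 = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return xh1(n) - xh1(n - 1)

def metabolic_gm11(x0, steps):
    """Rolling ('metabolic') forecast: append each prediction, drop the oldest value."""
    window = list(x0)
    out = []
    for _ in range(steps):
        pred = gm11_predict(window)
        out.append(pred)
        window = window[1:] + [pred]            # metabolism: fixed-length window
    return out

series = [10.0, 10.5, 11.03, 11.58, 12.16]      # roughly 5% growth per step
preds = metabolic_gm11(series, steps=3)
```

Because GM(1,1) fits an exponential trend, the forecasts here continue the ~5% growth; each new prediction enters the window so the model is rebuilt at every step.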
(2) Design of the BAM neural network-ANFIS adaptive neuro-fuzzy inference model with interval hesitant fuzzy numbers
In the BAM neural network-ANFIS adaptive neuro-fuzzy inference model with interval hesitant fuzzy numbers, the BAM neural network output serves as the input of the ANFIS adaptive neuro-fuzzy inference model. In the topology of the BAM neural network model, the initial pattern at the network input is x(t); it is weighted by the weight matrix W1 and arrives at the output terminal y; after the nonlinear transformation of the transfer characteristic fy of the output node and weighting by the matrix W2, it returns to the input terminal x; after the nonlinear transformation of the transfer characteristic fx of the output node at terminal x, it becomes the output of the input terminal x. This process is repeated; the state transition equation of the BAM neural network model is given in formula (8).
The ANFIS adaptive neuro-fuzzy inference model organically combines neural networks and fuzzy control, exploiting the advantages of both while compensating for their respective shortcomings. The fuzzy membership functions and fuzzy rules in the ANFIS adaptive neuro-fuzzy system are obtained by learning from a large amount of known data. The most distinctive feature of the ANFIS model is that its modeling method is based on data rather than on experience or arbitrary intuition, which is especially important for systems whose characteristics are not yet fully understood or are very complex. The main operation steps of the ANFIS adaptive neuro-fuzzy inference model are as follows:
Layer 1: the input data are fuzzified; the output corresponding to each node can be expressed as:
O1_ij = μ_ij(x_i) = exp(−(x_i − c_ij)²/(2σ_ij²)), j = 1, 2, …, n
where n is the number of membership functions for each input; the membership functions are Gaussian membership functions, with c_ij and σ_ij the center and width of the j-th membership function of the i-th input.
Layer 2: performs the rule operations and outputs the firing strength (applicability) of each rule; the rule operation of the ANFIS model uses multiplication.
Layer 3: normalizes the firing strength of each rule:
w̄_i = w_i / Σ_j w_j
Layer 4: the transfer function of each node is a linear function representing a local linear model; the output of each adaptive node i is:
O4_i = w̄_i·f_i = w̄_i(p_i·x_1 + q_i·x_2 + r_i)
where p_i, q_i, and r_i are the conclusion parameters of rule i.
Layer 5: the single node of this layer is a fixed node, and the output of the ANFIS adaptive neuro-fuzzy inference model is:
O5 = Σ_i w̄_i·f_i
The condition parameters that determine the shape of the membership functions and the conclusion parameters of the inference rules in the ANFIS model can be trained through a learning process. The parameters are adjusted by an algorithm combining linear least-squares estimation with gradient descent. In each iteration of the ANFIS model, the input signals are first propagated forward along the network up to Layer 4; at this point the condition parameters are fixed, and the least-squares estimation algorithm adjusts the conclusion parameters; the signals then continue forward along the network to the output layer (Layer 5). The ANFIS model backpropagates the obtained error signal along the network and updates the condition parameters with the gradient method. Adjusting the given condition parameters of the ANFIS model in this way yields the global optimum of the conclusion parameters, which not only reduces the dimensionality of the search space in the gradient method but also speeds up the convergence of the ANFIS model parameters.
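The five ANFIS layers described above can be sketched as a single forward pass. The two-input grid of Gaussian memberships and all numeric parameter values are illustrative assumptions, and the hybrid least-squares/gradient training is omitted.

```python
import numpy as np

# Two inputs, two Gaussian membership functions each -> 4 rules (grid partition).
c = np.array([[0.0, 1.0], [0.0, 1.0]])      # membership centers c_ij (assumed values)
s = np.array([[0.5, 0.5], [0.5, 0.5]])      # membership widths σ_ij (assumed values)
# Conclusion parameters p, q, r of each rule's local linear model (assumed values).
P = np.array([[ 1.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0],
              [ 1.0, 1.0, 0.5],
              [-1.0, 0.0, 1.0]])

def anfis_forward(x1, x2):
    # Layer 1: fuzzify with Gaussian memberships μ = exp(-(x-c)²/(2σ²))
    mu1 = np.exp(-(x1 - c[0]) ** 2 / (2 * s[0] ** 2))
    mu2 = np.exp(-(x2 - c[1]) ** 2 / (2 * s[1] ** 2))
    # Layer 2: rule firing strengths by multiplication
    w = np.array([mu1[i] * mu2[j] for i in range(2) for j in range(2)])
    # Layer 3: normalize the firing strengths
    wn = w / w.sum()
    # Layer 4: local linear models f_i = p_i x1 + q_i x2 + r_i
    f = P @ np.array([x1, x2, 1.0])
    # Layer 5: weighted sum gives the model output
    return float(wn @ f)

y = anfis_forward(0.3, 0.7)
```

The output is a convex combination of the four local linear models, weighted by how strongly each rule fires.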
3. Design of the water regime detection and control subsystem
The water regime detection and control subsystem includes an Elman neural network-NARX neural network model, a fuzzy recurrent neural network-NARX neural network controller, an AANN auto-associative neural network model, a PI controller-NARX neural network controller, a PI controller, the parameter detection module, and the parameter prediction module.
(1) Design of the Elman neural network-NARX neural network model
In the Elman neural network-NARX neural network model, the Elman neural network output serves as the input of the NARX neural network model. The Elman neural network can be regarded as a forward neural network with local memory units and local feedback connections; besides the hidden layer, there is a special context (association) layer. This layer receives feedback signals from the hidden layer, and each hidden-layer node is connected to a corresponding context-layer node. The context layer takes the hidden-layer state at the previous moment, together with the current network input, as the input to the hidden layer, which is equivalent to state feedback. The transfer function of the hidden layer is generally the Sigmoid function, while the output layer and the context layer are linear functions. To effectively solve the approximation accuracy problem in water level parameter regulation, the role of the context layer is strengthened. The current output of the NARX model depends not only on the past NARX model outputs y(t−n) but also on the current Elman network output vector X(t) and the delay order of the Elman network output vector. The Elman network output of the NARX model is passed through the delay layer to the hidden layer; the hidden layer processes the signal and passes it to the output layer, which linearly weights the hidden-layer outputs to obtain the final neural network output signal; the delay layer delays both the signal fed back by the NARX model and the Elman network output from the input layer before sending them to the hidden layer.
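The Elman structure described above (hidden layer fed by the current input plus the previous hidden state held in the context layer, with a linear output layer) can be sketched as follows; the layer sizes, random weights, and input signal are assumed values, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_hid, n_out = 2, 5, 1
W_ih = rng.normal(0, 0.5, (n_hid, n_in))    # input -> hidden weights
W_ch = rng.normal(0, 0.5, (n_hid, n_hid))   # context -> hidden weights (state feedback)
W_ho = rng.normal(0, 0.5, (n_out, n_hid))   # hidden -> output weights (linear)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elman_step(u, context):
    # Hidden layer sees the current input and the previous hidden state (context layer).
    h = sigmoid(W_ih @ u + W_ch @ context)
    y = W_ho @ h          # linear output layer; the context layer is a linear copy of h
    return h, y

context = np.zeros(n_hid)
for t in range(20):
    u = np.array([np.sin(0.2 * t), np.cos(0.2 * t)])   # toy input sequence
    context, y = elman_step(u, context)
```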
(2) Fuzzy recurrent neural network-NARX neural network controller design
The output of the fuzzy recurrent neural network serves as the input of the NARX neural network controller. The fuzzy recurrent neural network consists of four layers: an input layer, a membership-function layer, a rule layer, and an output layer. The network contains n input nodes, each of which corresponds to m condition nodes (m being the number of rules), n·m rule nodes, and one output node. Layer I feeds the inputs into the network; Layer II fuzzifies the inputs, using Gaussian membership functions; Layer III performs fuzzy inference; Layer IV performs defuzzification. Denoting the input and output of the i-th node in the k-th layer accordingly, the signal transmission inside the network and the input-output relationships between layers can be described as follows. Layer I (input layer): each input node of this layer is directly connected to an input variable, and the layer's input and output are expressed as:
where the two quantities in the formula are the input and output of the i-th node of the network input layer, and N denotes the number of iterations. Layer II (membership-function layer): the nodes of this layer fuzzify the input variables. Each node represents one membership function, with Gaussian basis functions used as the membership functions; the layer's input and output are expressed as:
where m_ij and σ_ij denote the center and width of the j-th Gaussian basis function of the i-th linguistic variable in Layer II, and m is the total number of linguistic variables for the corresponding input node. Layer III (fuzzy inference layer, i.e., the rule layer): dynamic feedback is added so that the network learns more efficiently. The feedback link introduces an internal variable h_k, with a sigmoid function chosen as the activation function of the internal variable. The layer's input and output are expressed as:
where ω_jk is the connection weight of the recursive part. The neurons of this layer represent the antecedents of the fuzzy logic rules; each node performs a product (∏) operation on the Layer II outputs and the Layer III feedback, yielding the Layer III output, and m denotes the number of rules under full connection. The feedback link mainly computes the values of the internal variables and the activation strengths of their corresponding membership functions; this activation strength is related to the matching degree of the rule nodes in Layer III. The internal variables introduced by the feedback link involve two types of nodes: carry-over nodes and feedback nodes. A carry-over node computes the internal variable by weighted summation, realizing defuzzification; the internal variable represents the fuzzy-inference result of the hidden rules. A feedback node uses a sigmoid function as the fuzzy membership function to fuzzify the internal variable. Layer IV (defuzzification layer, i.e., the output layer): the node of this layer sums its inputs. The layer's input and output are expressed as:
where λ_j is the connection weight of the output layer. A recurrent neural network can approximate highly nonlinear dynamic systems, and adding the internal variables markedly reduces both the training error and the test error. The fuzzy recurrent neural network of this patent trains the network weights with a gradient-descent algorithm combined with cross-validation. By introducing internal variables in the feedback link, the rule-layer outputs are weighted, summed, and defuzzified to form the feedback quantity, which is then fed, together with the membership-function-layer outputs, into the rule layer at the next time step. The output of the fuzzy recurrent neural network thus contains historical information about the rule-layer activation strengths and outputs, strengthening its ability to adapt to nonlinear dynamic systems. The current output of the NARX neural network model depends not only on its past outputs y(t-n) but also on the current fuzzy recurrent neural network output vector X(t) and the delay order of that output vector. The fuzzy recurrent neural network output is passed to the hidden layer through a time-delay layer; the hidden layer processes the signal and passes it to the output layer, which linearly weights the hidden-layer outputs to produce the final network output. The time-delay layer delays both the signal fed back from the NARX model and the fuzzy recurrent network output from the input layer before sending them to the hidden layer.
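The four-layer forward pass described above can be sketched as follows. This is an illustrative sketch only: the class name, parameter initialization, and the exact form of the feedback modulation are assumptions made for readability, since the patent's formulas are not reproduced here. It shows the named ingredients: Gaussian fuzzification (Layer II), a product-operation rule layer modulated by a sigmoid-activated internal feedback variable (Layer III), and weighted-sum defuzzification (Layer IV).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FuzzyRecurrentNet:
    """Sketch of one forward pass of the 4-layer fuzzy recurrent network:
    input -> Gaussian membership -> recurrent rule layer -> weighted sum."""
    def __init__(self, n_inputs, n_rules, seed=0):
        rng = np.random.default_rng(seed)
        self.m = rng.uniform(-1, 1, (n_inputs, n_rules))  # Gaussian centers m_ij
        self.sigma = np.ones((n_inputs, n_rules))         # Gaussian widths sigma_ij
        self.omega = rng.normal(0.0, 0.1, n_rules)        # recurrent weights omega_jk
        self.lam = rng.normal(0.0, 0.1, n_rules)          # output weights lambda_j
        self.h = np.zeros(n_rules)                        # internal feedback variable h_k

    def forward(self, x):
        # Layer II: Gaussian membership of each input to each rule
        mu = np.exp(-((x[:, None] - self.m) ** 2) / (2.0 * self.sigma ** 2))
        # Layer III: product (AND) over inputs, modulated by the sigmoid of the
        # recurrently weighted internal variable
        fire = mu.prod(axis=0) * sigmoid(self.omega * self.h)
        self.h = fire                       # feedback link: carry rule firings to t+1
        # Layer IV: weighted-sum defuzzification
        return float(self.lam @ fire)

net = FuzzyRecurrentNet(n_inputs=2, n_rules=4)
y = net.forward(np.array([0.3, -0.2]))
```

Because `self.h` is updated on every call, repeated calls with the same input generally give different outputs, which is the recurrent behaviour the text attributes to the internal variable.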
(3) AANN auto-associative neural network model design
The AANN model is an auto-associative neural network (AANN), a feedforward network with a special structure comprising an input layer, a number of hidden layers, and an output layer. First, the input, mapping, and bottleneck layers compress the data output by the multiple water level parameter detection modules, extracting from the high-dimensional parameter space of the module outputs the most representative low-dimensional subspace while effectively filtering out the noise and measurement errors in those outputs. The bottleneck, demapping, and output layers then decompress this information, restoring the compressed representation to the individual input values and thereby reconstructing the output data of each water level parameter detection module. To compress the module output information, the number of bottleneck-layer nodes is made significantly smaller than that of the input layer; and to prevent a trivial one-to-one mapping between the module outputs and the AANN output layer, all layers except the output layer (which uses a linear activation function) use nonlinear activation functions. In essence, the first hidden layer of the AANN model is called the mapping layer, whose node transfer function may be a sigmoid or another similar nonlinear function. The second hidden layer is the bottleneck layer, whose dimension is the smallest in the network and whose transfer function may be linear or nonlinear; the bottleneck layer prevents the easily realized identity mapping between output and input, forcing the network to encode and compress the detection-module output signals, which are then decoded and decompressed after the bottleneck to produce estimates of the module output signals. The third (last) hidden layer is called the demapping layer, whose node transfer function is usually a nonlinear sigmoid. The AANN is trained with the error backpropagation algorithm.
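The compress-then-reconstruct structure described above is essentially an autoencoder with a narrow bottleneck. The sketch below is a minimal forward pass under assumed layer sizes (6 sensor channels, 10-node mapping/demapping layers, 3-node bottleneck) and tanh nonlinearities standing in for the "S-type" functions; backpropagation training is omitted.

```python
import numpy as np

class AANN:
    """Sketch of the auto-associative structure: input -> mapping (nonlinear)
    -> bottleneck (smallest layer) -> demapping (nonlinear) -> linear output,
    with the output the same size as the input so the net reconstructs it."""
    def __init__(self, n_in=6, n_map=10, n_bottleneck=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_map, n_in))          # mapping layer
        self.W2 = rng.normal(0.0, 0.1, (n_bottleneck, n_map))  # bottleneck (compression)
        self.W3 = rng.normal(0.0, 0.1, (n_map, n_bottleneck))  # demapping layer
        self.W4 = rng.normal(0.0, 0.1, (n_in, n_map))          # linear output layer

    def forward(self, x):
        z = np.tanh(self.W2 @ np.tanh(self.W1 @ x))  # encode: compress the sensor vector
        return self.W4 @ np.tanh(self.W3 @ z)        # decode: reconstruct an estimate of x

aann = AANN()
x = np.ones(6)          # stand-in for the six detection-module readings
x_hat = aann.forward(x)
```

After backpropagation training on clean operating data, the reconstruction `x_hat` tracks the true sensor values while the bottleneck filters out noise and measurement error, which is the data-validation role the text assigns to the AANN.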
(4) PI controller-NARX neural network controller design
The output of the PI controller serves as the input of the NARX neural network controller. The current output of the NARX neural network model depends not only on its past outputs y(t-n) but also on the current PI controller output vector X(t) and the delay order of that output vector. The PI controller output is passed to the hidden layer through a time-delay layer; the hidden layer processes the signal and passes it to the output layer, which linearly weights the hidden-layer outputs to produce the final network output. The time-delay layer delays both the signal fed back from the NARX model and the PI controller output from the input layer before sending them to the hidden layer.
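For reference, a discrete-time PI controller of the kind whose output feeds the NARX network here can be written in a few lines. The gains, sample time, and class name below are illustrative assumptions, not values from the patent.

```python
class PIController:
    """Minimal discrete-time PI controller: u = Kp*e + Ki*integral(e)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0  # accumulated integral of the error

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt           # rectangular integration
        return self.kp * error + self.ki * self.integral

pi = PIController(kp=2.0, ki=0.5, dt=1.0)
u = pi.update(setpoint=1.0, measurement=0.2)  # error = 0.8, so u = 2.0*0.8 + 0.5*0.8 = 2.0
```

In the scheme above, successive values of `u` (current and delayed) would form the external input X(t) of the NARX model, which adds nonlinear dynamic compensation on top of the linear PI law.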
II. Water resources big data management subsystem of the Internet of Things
The IoT water resources big data management subsystem comprises a hydrological parameter measurement terminal, an on-site monitoring terminal, a hydrological parameter control terminal, a hydrological gateway, a hydrological parameter cloud platform, and a hydrological monitoring mobile APP. The measurement terminal collects hydrological parameter information from the monitored water area, and the on-site monitoring terminal hosts the water regime detection and control subsystem. The hydrological gateway provides two-way communication among the measurement terminal, the control terminal, the on-site monitoring terminal, the cloud platform, and the mobile APP, enabling intelligent regulation of the hydrological parameters. The mobile APP accesses the cloud platform over the 5G network for remote monitoring of the hydrological parameters; the water regime detection and control subsystem is implemented in claim 1.
1. Design of the overall system function
The IoT water resources big data management subsystem of the present invention detects and adjusts hydrological parameters. The system's multiple hydrological parameter measurement terminals self-organize into a wireless monitoring network that provides two-way wireless communication among the measurement terminals, the hydrological parameter control terminal, the hydrological gateway, the on-site monitoring terminal, the hydrological parameter cloud platform, and the hydrological monitoring mobile APP. The measurement terminals send the detected hydrological parameters through the gateway to the on-site monitoring terminal and the cloud platform; the water regime detection and control subsystem of the on-site monitoring terminal processes the parameters and adjusts them intelligently, and the mobile APP provides real-time monitoring of the parameters by accessing the cloud platform. The IoT water resources big data management subsystem is shown in Figure 3.
2. Design of the hydrological parameter measurement terminal
A large number of hydrological parameter measurement terminals based on wireless sensor networks serve as sensing terminals. The measurement terminals and the hydrological parameter control terminal exchange information with the on-site monitoring terminal, the hydrological gateway, and the hydrological parameter cloud platform through a self-organizing wireless network. Each measurement terminal comprises a sensor group for the parameters affecting hydrology (water level, flow velocity, rainfall, pH, water temperature, and dissolved oxygen) with the corresponding signal conditioning circuits, an STM32 microprocessor, a GPS module, a camera, and a GPRS wireless transmission module. Relying on high-precision GPS positioning, the system builds a geographic map of the river channel, monitors the coordinates of the lead fish (sounding weight) in real time, automatically controls the rotation angles of the camera and searchlight, and monitors the section upstream and downstream; the self-tracking video surveillance can follow the lead fish position in real time or track it manually. The terminal software mainly implements wireless communication and the acquisition and preprocessing of hydrological parameters. The software is written in C, whose high compatibility greatly improves development efficiency and enhances the reliability, readability, and portability of the program code. The structure of the hydrological parameter measurement terminal is shown in Figure 4.
3. Design of the hydrological parameter control terminal
The hydrological parameter control terminal comprises an STM32 microcontroller, a GPRS wireless transmission module, and a pumping device. The water regime detection and control subsystem designed in the on-site monitoring terminal detects and adjusts the hydrological parameters. The software is written in C, whose high compatibility greatly improves development efficiency and enhances the reliability, readability, and portability of the program code. The structure of the hydrological parameter control terminal is shown in Figure 5.
4. Hydrological gateway design
The hydrological gateway comprises a GPRS wireless transmission module, an NB-IoT module, an STM32 microcontroller, and an RS232 interface. The GPRS module implements the self-organizing communication network between the gateway and the hydrological parameter measurement and control terminals; the NB-IoT module provides two-way data exchange between the gateway and the hydrological parameter cloud platform, the hydrological parameter control terminal, the on-site monitoring terminal, and the hydrological monitoring mobile APP; and the RS232 interface connects to the on-site monitoring terminal for information exchange between the gateway and that terminal. The hydrological gateway is shown in Figure 6.
5. On-site monitoring terminal software
The on-site monitoring terminal is an industrial control computer. It mainly collects hydrological parameters and adjusts them intelligently, and exchanges information with the parameter measurement terminals, the hydrological parameter control terminal, the hydrological parameter cloud platform, and the hydrological monitoring mobile APP. Its main functions are communication parameter setting, water resources data analysis and management, and the water regime detection and control subsystem. The management software was developed with Microsoft Visual C++ 6.0, using the system's MSComm communication control to implement the communication program. The functions of the on-site monitoring software are shown in Figure 7.
The technical means disclosed in the solution of the present invention are not limited to those disclosed in the above embodiments; they also include technical solutions formed by any combination of the above technical features. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present invention, and such improvements and refinements are likewise regarded as falling within the protection scope of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211301484.4A CN115630101B (en) | 2022-10-24 | 2022-10-24 | Hydrologic parameter intelligent monitoring and water resource big data management system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211301484.4A CN115630101B (en) | 2022-10-24 | 2022-10-24 | Hydrologic parameter intelligent monitoring and water resource big data management system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115630101A true CN115630101A (en) | 2023-01-20 |
CN115630101B CN115630101B (en) | 2023-10-20 |
Family
ID=84907119
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211301484.4A Active CN115630101B (en) | 2022-10-24 | 2022-10-24 | Hydrologic parameter intelligent monitoring and water resource big data management system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115630101B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116644832A (en) * | 2023-04-07 | 2023-08-25 | 自然资源部第二海洋研究所 | Optimized layout determining method for bay water quality monitoring station |
CN116859830A (en) * | 2023-03-27 | 2023-10-10 | 福建天甫电子材料有限公司 | Production management control system for electronic grade ammonium fluoride production |
CN117572770A (en) * | 2023-11-15 | 2024-02-20 | 淮阴工学院 | Control method of intelligent valve positioner and its Internet of Things system |
CN118430698A (en) * | 2024-04-25 | 2024-08-02 | 淮阴工学院 | Intelligent water quality monitoring method and aquaculture Internet of things system |
CN118629577A (en) * | 2024-06-04 | 2024-09-10 | 江苏省肿瘤医院 | A method for generating nursing plans for lung cancer patients at home based on adaptive neuro-fuzzy reasoning |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201229194Y (en) * | 2008-06-19 | 2009-04-29 | 北京矿咨信矿业技术研究有限公司 | Automatic monitoring system for seepage line of tailing dam |
CN104715282A (en) * | 2015-02-13 | 2015-06-17 | 浙江工业大学 | Data prediction method based on improved PSO-BP neural network |
CN105142177A (en) * | 2015-08-05 | 2015-12-09 | 西安电子科技大学 | Complex neural network channel prediction method |
CN105139274A (en) * | 2015-08-16 | 2015-12-09 | 东北石油大学 | Power transmission line icing prediction method based on quantum particle swarm and wavelet nerve network |
CN108345738A (en) * | 2018-02-06 | 2018-07-31 | 广州地理研究所 | Self-calibration method for storm-flood runoff routing model parameters in small and medium-sized watersheds |
US20180237487A1 (en) * | 2002-08-20 | 2018-08-23 | Opsanitx Llc | Lectin compositions and methods for modulating an immune response to an antigen |
CN109492792A (en) * | 2018-09-28 | 2019-03-19 | 昆明理工大学 | Power line icing prediction method based on particle swarm optimized wavelet neural network |
CN113903395A (en) * | 2021-10-28 | 2022-01-07 | 聊城大学 | An improved particle swarm optimization-based BP neural network copy number variation detection method and system |
US11355223B1 (en) * | 2015-02-06 | 2022-06-07 | Brain Trust Innovations I, Llc | Baggage system, RFID chip, server and method for capturing baggage data |
CN115016276A (en) * | 2022-06-17 | 2022-09-06 | 淮阴工学院 | Intelligent moisture regulation and environmental parameter Internet of things big data system |
Non-Patent Citations (5)
Title |
---|
CHANGWEI CAI et al.: "Optimizing floating centroids method neural network classifier using dynamic multilayer particle swarm optimization", GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference, pages 394 *
YE-QUN WANG et al.: "Dropout topology-assisted bidirectional learning particle swarm optimization for neural architecture search", GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 93 *
TENG FEIDA: "Establishment and application of a rapid water resources evaluation system for Baicheng City, a typical area of the Songnen Plain", China Master's Theses Full-text Database, Basic Sciences, no. 08, pages 013-16 *
JIANG BAIQUAN: "Application of artificial neural networks in water environment quality evaluation and prediction", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 02, pages 027-210 *
ZHAO HANBO: "GMS-based prediction of groundwater environmental impact in a mining area", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 02, pages 027-466 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116859830A (en) * | 2023-03-27 | 2023-10-10 | 福建天甫电子材料有限公司 | Production management control system for electronic grade ammonium fluoride production |
CN116859830B (en) * | 2023-03-27 | 2024-01-26 | 福建天甫电子材料有限公司 | Production management control system for electronic grade ammonium fluoride production |
CN116644832A (en) * | 2023-04-07 | 2023-08-25 | 自然资源部第二海洋研究所 | Optimized layout determining method for bay water quality monitoring station |
CN117572770A (en) * | 2023-11-15 | 2024-02-20 | 淮阴工学院 | Control method of intelligent valve positioner and its Internet of Things system |
CN117572770B (en) * | 2023-11-15 | 2024-05-17 | 淮阴工学院 | Control method of intelligent valve positioner and Internet of Things system thereof |
CN118430698A (en) * | 2024-04-25 | 2024-08-02 | 淮阴工学院 | Intelligent water quality monitoring method and aquaculture Internet of things system |
CN118430698B (en) * | 2024-04-25 | 2024-10-22 | 淮阴工学院 | A water quality intelligent monitoring method and aquaculture Internet of Things system |
CN118629577A (en) * | 2024-06-04 | 2024-09-10 | 江苏省肿瘤医院 | A method for generating nursing plans for lung cancer patients at home based on adaptive neuro-fuzzy reasoning |
CN118629577B (en) * | 2024-06-04 | 2025-02-11 | 江苏省肿瘤医院 | A method for generating nursing plans for lung cancer patients at home based on adaptive neuro-fuzzy reasoning |
Also Published As
Publication number | Publication date |
---|---|
CN115630101B (en) | 2023-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115630101B (en) | Hydrologic parameter intelligent monitoring and water resource big data management system | |
CN113126676B (en) | Livestock and poultry house breeding environment parameter intelligent control system | |
CN115016276B (en) | Intelligent water content adjustment and environment parameter Internet of things big data system | |
CN113301127B (en) | Livestock feed detection system | |
CN114418183B (en) | Livestock and poultry health signs big data IoT detection system | |
CN114839881B (en) | Intelligent garbage cleaning and environmental parameter big data Internet of things system | |
Song et al. | Application of artificial intelligence based on synchrosqueezed wavelet transform and improved deep extreme learning machine in water quality prediction | |
CN115755219B (en) | Real-time correction method and system for flood forecasting errors based on STGCN | |
Guo et al. | A combined model based on sparrow search optimized BP neural network and Markov chain for precipitation prediction in Zhengzhou City, China | |
CN117232817A (en) | Intelligent big data monitoring method of electric valve and Internet of things system | |
CN115616163A (en) | Gas precise preparation and concentration measurement system | |
CN114397043A (en) | Multi-point temperature intelligent detection system | |
CN117200454A (en) | Intelligent big data monitoring method and Internet of Things system for power distribution devices | |
CN115905938B (en) | Storage tank safety monitoring method and system based on Internet of Things | |
CN115330082A (en) | PM2.5 concentration prediction method of LSTM network based on attention mechanism | |
CN115687995A (en) | Big data environmental pollution monitoring method and system | |
CN116632834A (en) | Short-term power load prediction method based on SSA-BiGRU-Attention | |
CN112911533B (en) | A temperature detection system based on mobile app | |
CN115659201A (en) | Internet of things gas concentration detection method and monitoring system | |
CN117590746A (en) | Pressure detection and intelligent control method and its cloud platform system | |
CN115511062B (en) | Multi-parameter detection system for inspection robots | |
CN117221352A (en) | Internet of things data acquisition and intelligent big data processing method and cloud platform system | |
CN117214179A (en) | Intelligent detection method for big product defect data and cloud platform system thereof | |
CN117306608A (en) | Foundation pit big data acquisition and intelligent monitoring method and Internet of things system thereof | |
CN117236378A (en) | Intelligent monitoring method and cloud platform system for building safety big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20241203 Address after: 230000 b-1018, Woye Garden commercial office building, 81 Ganquan Road, Shushan District, Hefei City, Anhui Province Patentee after: HEFEI WISDOM DRAGON MACHINERY DESIGN Co.,Ltd. Country or region after: China Address before: 223400 8th floor, Anton building, 10 Haian Road, Lianshui, Huaian, Jiangsu Patentee before: HUAIYIN INSTITUTE OF TECHNOLOGY Country or region before: China |