CN117221352A - Internet of things data acquisition and intelligent big data processing method and cloud platform system - Google Patents

Internet of things data acquisition and intelligent big data processing method and cloud platform system

Info

Publication number
CN117221352A
CN117221352A (Application CN202311200350.8A)
Authority
CN
China
Prior art keywords
neural network
network model
pso
output
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311200350.8A
Other languages
Chinese (zh)
Inventor
苏浩
吴鹏
贾树森
季新然
周红标
马从国
王建国
秦小芹
李亚洲
孙娜
金德飞
马海波
周恒瑞
赵宏亮
黄凤芝
徐健翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202311200350.8A priority Critical patent/CN117221352A/en
Publication of CN117221352A publication Critical patent/CN117221352A/en
Pending legal-status Critical Current

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses an Internet of Things data acquisition and intelligent big data processing method and a cloud platform system. Parameters collected by a parameter acquisition terminal are uploaded to a cloud platform through a gateway node, and the data provided by the cloud platform are supplied to a client APP. The invention effectively absorbs the request pressure caused by sudden access peaks in the data traffic, improving the data transmission rate, smoothing the high-concurrency traffic that arises during data acquisition, and thereby relieving the pressure imposed by the complex acquisition structure and massive data of the Internet of Things.

Description

Internet of things data acquisition and intelligent big data processing method and cloud platform system
Technical Field
The invention relates to the technical field of automated equipment for data acquisition and intelligent data processing, and in particular to an Internet of Things big data acquisition and intelligent processing method and a cloud platform system.
Background
Using the Internet of Things and a cloud platform for data acquisition and processing allows large amounts of information to be extracted for practical applications, business processes, product development, and the like; such logically integrated systems are becoming the key to future informatization work. Collecting, processing, and analyzing Internet of Things data helps consumers and organizations obtain valuable information and make better decisions in time. Data acquisition and processing technology covers data collection, storage, analysis, and monitoring, and is becoming a field that demands intensive study. Using a cloud database as the data storage space effectively increases storage capacity; the cloud database can also absorb the request pressure caused by sudden access peaks, improve the data transmission rate, smooth the high-concurrency traffic present during data acquisition, and effectively relieve the pressure of the complex acquisition structure and massive data of the Internet of Things.
Disclosure of Invention
The invention effectively absorbs the request pressure caused by sudden access peaks in the data traffic, improving the data transmission rate, smoothing the high-concurrency traffic that arises during data acquisition, and thereby relieving the pressure imposed by the complex acquisition structure and massive data of the Internet of Things.
In order to solve the problems, the invention adopts the following technical scheme:
1. the data acquisition and intelligent big data processing method of the Internet of things comprises the following steps:
1. Construct a sensor array parameter processing module. The module comprises a CNN convolutional neural network model of the noise reduction self-encoder-PSO, a GRNN neural network model of the noise reduction self-encoder-PSO, a NARX neural network model of the noise reduction self-encoder-PSO, a NARX neural network model of PSO-recursive fuzzy neural network model of PSO, a NARX neural network model of PSO-GRNN neural network model of PSO, a NARX neural network model of PSO-DRNN neural network model of PSO, NARX neural network models of PSO, integration loops, BiGRU neural network model-ARIMA time series models, TDL beat delayers, and a BiGRU neural network model-fuzzy wavelet neural network model of dynamic triangular fuzzy numbers. The sensor array parameter processing module is shown in fig. 1.
2. The sensor array output values serve, respectively, as the corresponding inputs of the CNN convolutional neural network model of the noise reduction self-encoder-PSO, the GRNN neural network model of the noise reduction self-encoder-PSO, and the NARX neural network model of the noise reduction self-encoder-PSO. The outputs of these three models serve, respectively, as the corresponding inputs of the NARX neural network model of PSO-recursive fuzzy neural network model of PSO, the NARX neural network model of PSO-GRNN neural network model of PSO, and the NARX neural network model of PSO-DRNN neural network model of PSO. The outputs of these three cascaded models serve, respectively, as the corresponding inputs of BiGRU neural network model-ARIMA time series model 1, BiGRU neural network model-ARIMA time series model 2, and BiGRU neural network model-ARIMA time series model 3. The output of BiGRU neural network model-ARIMA time series model 1 serves as a corresponding input of BiGRU neural network model-ARIMA time series model 2; the output of model 2 serves as a corresponding input of model 3; and the output of model 3 serves as the input of TDL beat delayer 3. The difference between the output time series of the NARX neural network model of PSO-recursive fuzzy neural network model of PSO and the NARX neural network model of PSO-GRNN neural network model of PSO, and the difference between the output time series of the NARX neural network model of PSO-GRNN neural network model of PSO and the NARX neural network model of PSO-DRNN neural network model of PSO, serve, respectively, as the corresponding inputs of NARX neural network model 1 of PSO and NARX neural network model 2 of PSO. The output of NARX neural network model 1 of PSO serves as the input of integration loop 1 and as a corresponding input of TDL beat delayer 1, and the output of integration loop 1 serves as a corresponding input of TDL beat delayer 1. The output of NARX neural network model 2 of PSO serves as the input of integration loop 2 and as a corresponding input of TDL beat delayer 2, and the output of integration loop 2 serves as a corresponding input of TDL beat delayer 2. The outputs of TDL beat delayer 1, TDL beat delayer 2, and TDL beat delayer 3 serve, respectively, as the corresponding inputs of the BiGRU neural network model-fuzzy wavelet neural network model of dynamic triangular fuzzy numbers. The output of that model serves as the input of TDL beat delayer 4, and the output of TDL beat delayer 4 serves, together with the outputs of TDL beat delayers 1-3, as a corresponding input of the BiGRU neural network model-fuzzy wavelet neural network model of dynamic triangular fuzzy numbers. The five parameters output by this model are i, j, k, l, and m, forming the dynamic triangular fuzzy number [(i, j), k, (l, m)] output by the parameter sensor array; this value serves as the output of the sensor array parameter processing module;
3. Construct an Internet of Things data acquisition and intelligent big data processing subsystem. The subsystem comprises a CNN convolutional neural network model of the noise reduction self-encoder-PSO, a noise reduction self-encoder-ARIMA time series model, integration loops, TDL beat delayers, a NARX neural network model of BiGRU neural network model-PSO, the sensor array parameter processing module, and a recursive fuzzy neural network model of BiGRU neural network model-PSO of dynamic triangular fuzzy numbers. The Internet of Things data acquisition and intelligent big data processing subsystem is shown in fig. 2.
4. The output values of the temperature and humidity sensors serve, respectively, as the corresponding inputs of the CNN convolutional neural network model of the noise reduction self-encoder-PSO and the noise reduction self-encoder-ARIMA time series model. The output of the CNN convolutional neural network model of the noise reduction self-encoder-PSO serves as a corresponding input of TDL beat delayer 1 and as the input of integration loop 1, and the output of integration loop 1 serves as a corresponding input of TDL beat delayer 1. The output of the noise reduction self-encoder-ARIMA time series model serves as a corresponding input of TDL beat delayer 2 and as the input of integration loop 2, and the output of integration loop 2 serves as a corresponding input of TDL beat delayer 2. The output of the parameter sensor array serves as the input of the sensor array parameter processing module, whose output serves as the corresponding input of TDL beat delayer 3. The outputs of TDL beat delayer 1 and TDL beat delayer 2 serve, respectively, as the corresponding inputs of the NARX neural network model of BiGRU neural network model-PSO. The outputs of the NARX neural network model of BiGRU neural network model-PSO and of TDL beat delayer 3 serve, respectively, as the corresponding inputs of the recursive fuzzy neural network model of BiGRU neural network model-PSO of dynamic triangular fuzzy numbers. The output of that recursive fuzzy neural network model serves as the input of TDL beat delayer 4, and the output of TDL beat delayer 4 serves as a corresponding input of the recursive fuzzy neural network model. The five parameters output by the recursive fuzzy neural network model of BiGRU neural network model-PSO of dynamic triangular fuzzy numbers are a, b, c, d, and e, forming the dynamic triangular fuzzy number [(a, b), c, (d, e)], which is the environment-corrected output value of the parameter sensor array; the dynamic triangular fuzzy number [(a, b), c, (d, e)] is the output of the Internet of Things data acquisition and intelligent big data processing subsystem;
5. The integration loop is formed by connecting two integral operators S in series: the input of the first integral operator serves as the input of the integration loop, and the junction between the two operators and the output of the second operator serve as the outputs of the integration loop.
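As a minimal sketch of the intended behavior of this block (an assumption, since the patent gives no equations for it), the loop below accumulates its input twice in discrete time and exposes both the first-stage tap (the junction between the operators) and the second-stage tap as outputs; all names are illustrative.

```python
import numpy as np

def integration_loop(u: np.ndarray, dt: float = 1.0):
    """Two integral operators S in series.

    Returns both taps: s1 (after the first integrator, i.e. the junction
    between the two operators) and s2 (after the second). Discrete
    accumulation stands in for continuous integration here.
    """
    s1 = np.cumsum(u) * dt   # output of the first integral operator
    s2 = np.cumsum(s1) * dt  # output of the second integral operator
    return s1, s2

# usage: both taps would then feed a TDL beat delayer
u = np.array([1.0, 0.5, -0.2, 0.3])
s1, s2 = integration_loop(u)
```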
2. Cloud platform system for data acquisition and intelligent big data processing of Internet of things:
The parameter acquisition terminal collects the parameters and uploads them to the cloud platform through the gateway node; the cloud platform supplies the data to the client APP. Through the data provided by the cloud platform, the client APP can monitor the collected parameters in real time and adjust the external equipment attached to the control terminal. The parameter acquisition terminal and the control terminal are responsible, respectively, for collecting parameter information and controlling the external equipment. Bidirectional communication among the parameter acquisition terminal, the control terminal, the field monitoring terminal, the cloud platform, and the client APP is realized through the gateway node, achieving parameter acquisition and adjustment of the environmental equipment. The structure of the cloud platform system for Internet of Things data acquisition and intelligent big data processing is shown in fig. 3.
Compared with the prior art, the invention has the following obvious advantages:
1. To address the uncertainty and randomness introduced during sensor array parameter acquisition by sensor accuracy errors, interference, and abnormal measurement environments, the invention expresses the sensor outputs of the array as dynamic interval numbers, effectively handling the fuzziness and uncertainty of the measured parameters and improving the objectivity and credibility of the fused sensor values of the array.
2. The recursive fuzzy neural network model introduces internal variables into the fuzzy rule layer, giving the static network dynamic characteristics. The rule activation at time K combines the activation value calculated from the current input with the contributions of all earlier rule activation values; the feedback connections of the network include a set of "structure" units that memorize past states of the hidden layer, and the network input, together with these memorized states, forms the input of the hidden-layer units at the next moment. This property gives part of the recursive network a dynamic memory function, making it well suited to building time series prediction models.
3. The GRNN neural network model has strong nonlinear mapping capability, a flexible network structure, and high fault tolerance and robustness. It outperforms the RBF network in approximation capability and learning speed, converges to an optimized regression surface as more samples accumulate, and can handle unstable data with good prediction results even when the sample size is small. The model also offers strong generalization capability, high prediction accuracy, a stable algorithm, fast convergence, few tuning parameters, resistance to local minima, and fast prediction.
4. The invention adopts an ARIMA time series model to integrate the original time series variables of the input parameters, covering trend factors, periodic factors, and random errors. Non-stationary sequences are converted into stationary random sequences with zero mean through differencing and similar data transformations, and the input parameter data are fitted and predicted through repeated identification, model diagnosis, and comparative selection of an ideal model. The method combines the advantages of the autoregressive and moving average approaches, is unconstrained by data type, is widely applicable, and gives good short-term predictions on the input data.
5. The BiGRU neural network model consists of two recurrent GRU layers with opposite directions of information transfer: the first layer transfers information in time order (the forward recurrent layer) and the second in reverse time order (the backward recurrent layer). The model obtains a forward hidden state through the forward layer and a backward hidden state through the backward layer, then concatenates the two to form the final hidden state output by the BiGRU. By combining forward and backward passes, the BiGRU can capture both the in-order and reverse-order information features of the input simultaneously, improving the accuracy and reliability of the detected parameter sensor information.
Drawings
FIG. 1 is a sensor array parameter processing module of the present invention;
FIG. 2 is a schematic diagram of a big data processing subsystem for data acquisition and intellectualization of the Internet of things according to the present invention;
FIG. 3 is a cloud platform system for data acquisition and intelligent big data processing of the Internet of things;
FIG. 4 is a functional diagram of a parameter acquisition terminal according to the present invention;
FIG. 5 is a functional diagram of a control terminal according to the present invention;
FIG. 6 is a functional diagram of a gateway node of the present invention;
FIG. 7 is a functional diagram of the field monitor software of the present invention.
Detailed Description
For a better explanation of the present invention, the technical solution is described in detail below with reference to figs. 1 to 7. The following examples illustrate the invention, but the invention is not limited to them.
1. Sensor array parameter processing module design
The structure of the sensor array parameter processing module is shown in fig. 1. The module comprises a CNN convolutional neural network model of the noise reduction self-encoder-PSO, a GRNN neural network model of the noise reduction self-encoder-PSO, a NARX neural network model of the noise reduction self-encoder-PSO, a NARX neural network model of PSO-recursive fuzzy neural network model of PSO, a NARX neural network model of PSO-GRNN neural network model of PSO, a NARX neural network model of PSO-DRNN neural network model of PSO, NARX neural network models of PSO, integration loops, BiGRU neural network model-ARIMA time series models, TDL beat delayers, and a BiGRU neural network model-fuzzy wavelet neural network model of dynamic triangular fuzzy numbers;
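The TDL beat delayer that recurs throughout this module is a tapped delay line. A minimal sketch (illustrative; the number of taps is an assumption, since the patent does not fix it):

```python
from collections import deque

class TDLBeatDelayer:
    """Tapped delay line: stores the last `taps` samples and exposes
    them as a vector (the 'beats' of past values)."""
    def __init__(self, taps: int):
        self.buf = deque([0.0] * taps, maxlen=taps)

    def step(self, x: float) -> list:
        out = list(self.buf)  # delayed values available before the update
        self.buf.append(x)
        return out

tdl = TDLBeatDelayer(taps=3)
for x in [1.0, 2.0, 3.0, 4.0]:
    delayed = tdl.step(x)
```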
1. CNN convolutional neural network model design of noise reduction self-encoder-PSO
The CNN convolutional neural network model of the noise reduction self-encoder-PSO is formed by connecting a noise reduction self-encoder in series with the PSO-optimized CNN convolutional neural network model; the output of the noise reduction self-encoder serves as the input of the PSO-optimized CNN convolutional neural network model. The noise reduction self-encoder (DAE, denoising autoencoder) is a dimension-reduction method that converts the high-dimensional data of the sensor array into low-dimensional data by training a multi-layer neural network with a small central layer. The DAE is a typical three-layer neural network, with an encoding process between the input layer and the hidden layer and a decoding process between the hidden layer and the output layer. The autoencoder obtains the coded representation (encoder) by encoding the output data of the sensor array and obtains the reconstructed input data (decoder) by decoding the output of the hidden layer, which is the dimension-reduced data. A reconstruction error function is then defined to measure the learning effect of the autoencoder; constraints can be added to this error function to generate various types of autoencoders. The encoder, decoder, and loss function are as follows:
Encoder: h = δ(Wx + b) (1)

Decoder: x̂ = δ(W′h + b′) (2)

Loss function: L(x, x̂) = ‖x − x̂‖² (3)
The DAE is trained much like a BP neural network. W and W′ are weight matrices, b and b′ are biases, h is the output value of the hidden layer, x is the input vector, x̂ is the output vector, and δ is an excitation function, typically a Sigmoid or tanh function. The noise reduction self-encoder trains a sparse self-encoding network by adding noise to the sensor array output data; because of the noise, the self-encoding network learns more robust data features. The self-encoding network divides into an encoding process (input layer to hidden layer) and a decoding process (hidden layer to output layer). Its goal is to make the input and output as close as possible, as measured by the error function, and to obtain the optimal weights and biases of the self-encoding network by minimizing that error function through back-propagation, in preparation for building a deep self-encoding network model. In the noise reduction process, certain values in the raw sensor array data are set to 0 with random probability to obtain noisy data; following the encoding and decoding principle of the self-encoding network, the noisy sensor array data yield encoded and then decoded data; finally an error function is constructed from the decoded data and the raw data, and the optimal network weights and biases are obtained by minimizing it through back-propagation. In other words, the raw sensor array data are corrupted by adding noise, and the corrupted data are fed to the network's input layer; the reconstruction produced by the noise reduction self-encoder should remain similar to the raw sensor array data. In this way disturbances of the sensor array can be eliminated and a stable structure obtained: the raw sensor array input is noised to obtain a corrupted input, the corrupted input is passed through the encoder to obtain a feature representation, and the feature representation is mapped to the output layer through the decoder.
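A minimal numpy sketch of the denoising autoencoder described above (equations (1)-(3)); the layer sizes, corruption probability, and learning rate are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = 8, 3                                    # array width, central layer (assumed)
W  = rng.normal(0, 0.1, (n_hid, n_in)); b  = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_in, n_hid)); b2 = np.zeros(n_in)

def dae_step(x, p_corrupt=0.3, lr=0.1):
    global W, b, W2, b2
    x_noisy = x * (rng.random(n_in) > p_corrupt)      # zero values with random probability
    h     = sigmoid(W @ x_noisy + b)                  # encoder, eq. (1)
    x_hat = sigmoid(W2 @ h + b2)                      # decoder, eq. (2)
    err   = x_hat - x                                 # reconstruct the CLEAN input
    loss  = np.sum(err ** 2)                          # loss, eq. (3)
    d_out = 2 * err * x_hat * (1 - x_hat)             # back-propagate the error
    d_hid = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h);       b2 -= lr * d_out
    W  -= lr * np.outer(d_hid, x_noisy); b  -= lr * d_hid
    return loss

for _ in range(100):                                  # minimize by repeated gradient steps
    loss = dae_step(rng.random(n_in))
```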
The CNN convolutional neural network model consists of an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer. The convolutional layers mainly perform feature extraction on the output of the noise reduction self-encoder: the data from the input layer are convolved with the convolution kernels of the convolutional layer, with the mathematical expression:
x_j^l = f( Σ_{i∈M_j} x_i^{l−1} * k_ij^l + b_j^l ) (4)

where x_j^l is the j-th feature map of layer l; x_i^{l−1} is the i-th feature map of the previous layer; k_ij^l is the corresponding weight matrix (convolution kernel) of layer l; b_j^l is the corresponding bias term; and f(·) is the activation function, typically the ReLU function. A convolutional layer is usually followed by a pooling layer, which filters information and reduces the number of parameters participating in computation to prevent overfitting. The calculation formula is:
x_j^l = f( β_j^l · down(x_j^{l−1}) + b_j^l ) (5)

where β_j^l is the weight of the j-th feature map of layer l; down(·) is the pooling function, which may be max pooling, average pooling, random pooling, and so on; and M_l is the pooling window size of layer l, the window being M_l × M_l. The PSO-optimized CNN convolutional neural network model uses the particle swarm algorithm to optimize the CNN as follows. A. Initialize the CNN structure and determine the number of hidden-layer neurons. B. Determine the dimension D of the target search space from the CNN structure: D = (number of input parameters + 1) × number of hidden-layer neurons + number of translation parameters + number of scaling parameters. C. Determine the number of particles m and initialize the position and velocity vectors of the particles. D. Substitute the particle position and velocity vectors into the algorithm's iteration formula for updating, and perform the optimization with the error energy function as the objective; record the best position pbest found so far by each particle and the best position gbest found so far by the whole swarm. E. Map the swarm's best position gbest to the CNN weights and thresholds for self-learning, using the error energy function as the particle fitness. F. If the error energy function value is within the allowed error range, end the iteration; otherwise, return to the algorithm and continue iterating.
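A compact sketch of the particle swarm optimization loop in steps A-F, treating the network's flattened weight/threshold vector as the particle position; the inertia and acceleration coefficients are conventional values assumed here, not specified by the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_optimize(error_energy, dim, n_particles=20, iters=100, tol=1e-4):
    """error_energy(theta) evaluates a weight/threshold vector theta
    (of dimension D from step B) on the training data."""
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([error_energy(p) for p in pos])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                      # assumed PSO coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([error_energy(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
        if gbest_val < tol:                        # step F: within allowed error
            break
    return gbest, gbest_val

# usage with a toy error energy
theta_best, e_best = pso_optimize(lambda th: np.sum(th ** 2), dim=10)
```

The same loop serves the GRNN, NARX, DRNN, and recursive fuzzy models below; only the decoding of the particle vector into the particular network's weights and thresholds changes.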
2. GRNN neural network model design of noise reduction self-encoder-PSO
The GRNN neural network model of the noise reduction self-encoder-PSO is formed by connecting a noise reduction self-encoder in series with the PSO-optimized GRNN neural network model; the output of the noise reduction self-encoder serves as the input of the PSO-optimized GRNN neural network model. The GRNN neural network model is built on mathematical statistics and has a clear theoretical basis; its network structure and connection weights are fixed once the learning samples are determined, and only one variable, the smoothing parameter, needs to be set during training. GRNN learning depends entirely on the data samples. The model has strong nonlinear mapping, a flexible network structure, and high fault tolerance and robustness, and is particularly suited to fast function approximation and to processing unstable data. Because the GRNN has few manually adjusted parameters and its learning depends entirely on the data samples, the influence of subjective human assumptions on the prediction results is reduced to a minimum. The GRNN predicts well from small samples, trains quickly, is robust, and is essentially untroubled by multicollinearity in the input data. The GRNN consists of an input layer, a pattern layer, a summation layer, and an output layer. The input vector X of the network is n-dimensional and the output vector Y is k-dimensional: X = {x₁, x₂, …, x_n}ᵀ and Y = {y₁, y₂, …, y_k}ᵀ. The number of pattern-layer neurons equals the number m of training samples, each neuron corresponding to one training sample; the transfer function p_i of pattern-layer neuron i is:
p_i = exp{−[(x − x_i)ᵀ(x − x_i)] / (2σ²)}, (i = 1, 2, …, m) (6)
The outputs of the pattern-layer neurons enter the summation layer, whose functions divide into two types:

S_D = Σ_{i=1}^{m} p_i (7)

S_Nj = Σ_{i=1}^{m} y_ij p_i (8)

where y_ij is the j-th element of the output vector of the i-th training sample. According to the GRNN algorithm, the estimate of the j-th element of the network output vector Y is:

y_j = S_Nj / S_D, (j = 1, 2, …, k) (9)
the output result of the GRNN neural network model can be converged on an optimal regression plane, has strong prediction capability and high learning speed, is mainly used for solving the function approximation problem and has high parallelism in the aspect of structure. A GRNN neural network model of PSO optimizes the GRNN neural network model algorithm for particle swarm is as follows, A, initializing the GRNN neural network model structure to determine the number of neurons of the hidden layer of the network. B. And determining the dimension D of the target search space according to the GRNN neural network model structure. D= (number of input parameters+1) ×number of hidden layer neurons+number of pan parameters+number of telescopic parameters. C. And determining the number m of the particles, and initializing a position vector and a speed vector of the particles. D. And carrying the position vector and the speed vector of the particles into an algorithm iteration formula to update, and carrying out optimization calculation by taking the error energy function as an objective function. The optimum position pbest searched so far for each particle and the optimum position gbest searched so far for the whole particle group are recorded. E. And the whole particle swarm is searched to an optimal position gbest so far, mapped to a DRNN neural network model weight and a threshold value for self-learning, and the adaptability of the particle is calculated by taking an error energy function as a particle. F. If the error energy function value is within the allowable error range, finishing iteration; otherwise, the algorithm is switched back to continue the iteration.
3. NARX neural network model design for noise reduction self-encoder-PSO
The NARX neural network model of the noise reduction self-encoder-PSO is formed by connecting a noise reduction self-encoder in series with the PSO-optimized NARX neural network model; the output of the noise reduction self-encoder serves as the input of the PSO-optimized NARX neural network model. The NARX neural network model is a dynamic feedforward neural network: a nonlinear autoregressive network with exogenous input. It has the dynamic characteristic of multi-step time delay and is connected through feedback across several layers of a closed network, making it the most widely applied dynamic neural network for nonlinear dynamic systems, with performance generally superior to a fully recurrent neural network. A typical NARX neural network model consists of an input layer, a hidden layer, an output layer, and the input and output delays; the delay orders of the input and output and the number of hidden-layer neurons are generally fixed in advance of application. The current output of the NARX model depends not only on the past outputs y(t−n) but also on the current input vector X(t) and its delay order. The input signals of the NARX neural network model pass through the time-delay layer to the hidden layer; the hidden layer processes them and passes the result to the output layer, which linearly weights the hidden-layer outputs to give the final network output; the time-delay layer delays both the signal fed back from the network and the signal from the input layer before passing them to the hidden layer. The NARX model has nonlinear mapping capability, good robustness, and adaptivity, making it suitable for predicting the output of the sensor array. Let x(t) denote the external input of the NARX model and m its delay order; y(t) the network output and n the output delay order; and s the number of hidden-layer neurons. The output of the j-th hidden unit is then:
h_j = f( Σ_{i=0}^{m−1} w_ji x(t−i) + Σ_{i=0}^{n} w′_ji y(t−i) + b_j ) (10)

where w_ji is the connection weight between the i-th input and the j-th hidden neuron and b_j is the bias value of the j-th hidden neuron. The value of the network output y(t+1) is:
y(t+1) = f[y(t), y(t−1), …, y(t−n), x(t), x(t−1), …, x(t−m+1); W] (11)

The steps for optimizing the NARX neural network combination model with the particle swarm algorithm (PSO) are as follows:
A. Initialize the PSO: set the population size and number of iterations, and randomly generate the initial particles and initial particle velocities. B. Build the NARX neural network model from the parameters corresponding to each particle vector and compute each individual's fitness value through the fitness function f(x). C. Compare the computed fitness value with the particle's own best value fPBest; if it is smaller than fPBest, replace the previous best solution with the new fitness value and the previous particle with the new particle. D. Compare each particle's best fitness value with the best fitness value of all particles, fGBest; if it is smaller than fGBest, substitute the particle's best fitness value for the original global best and save the particle's current state. E. Judge whether the fitness value meets the requirement; if not, perform a new round of computation, moving the particles to generate new particles (i.e., new solutions), and return to step B. If the fitness value meets the requirement, end the computation.
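A small sketch of the NARX recursion in equations (10)-(11), with the trained weight matrices treated as given (shapes and names are illustrative assumptions); in closed-loop prediction the model's own outputs are fed back through the delay line.

```python
import numpy as np

def narx_predict(x_seq, y_init, W_x, W_y, b_h, W_o, b_o, steps):
    """x_seq: external input sequence; y_init: seed outputs y(t), ..., y(t-n).
    W_x: (s, m) input-delay weights, W_y: (s, n+1) output-delay weights,
    b_h: (s,), W_o: (s,), b_o: scalar -- assumed trained parameters."""
    m, n1 = W_x.shape[1], W_y.shape[1]
    y_hist = list(y_init)
    preds = []
    for t in range(steps):
        x_del = x_seq[t:t + m][::-1]                      # x(t), ..., x(t-m+1)
        y_del = np.array(y_hist[-n1:][::-1])              # y(t), ..., y(t-n)
        h = np.tanh(W_x @ x_del + W_y @ y_del + b_h)      # hidden units, eq. (10)
        y_next = W_o @ h + b_o                            # linear output layer, eq. (11)
        y_hist.append(y_next)
        preds.append(y_next)
    return np.array(preds)
```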
4. NARX neural network model of PSO-recursive fuzzy neural network model design of PSO
The NARX neural network model of PSO-recursive fuzzy neural network model of PSO is formed by connecting the PSO-optimized NARX neural network model in series with the PSO-optimized recursive fuzzy neural network model; the output of the PSO-optimized NARX model serves as the input of the PSO-optimized recursive fuzzy neural network model. The recursive fuzzy neural network is a multi-input multi-output topology consisting of 4 layers: an input layer, a membership function layer, a rule layer, and an output layer. The network has n input nodes, each corresponding to m condition nodes (m being the number of rules), nm rule nodes, and 5 output nodes. Layer I passes the inputs into the network; layer II fuzzifies the inputs, using Gaussian membership functions; layer III performs the fuzzy inference; layer IV performs the defuzzification. Writing u_i^(k) and O_i^(k) for the input and output of the i-th node of the k-th layer, the signal flow within the network and the input-output relations between the layers can be described as follows. Layer I: each input node is connected directly to an input variable, and the input and output of this layer are:
O_i^(1) = u_i^(1) = x_i(N) (12)

where u_i^(1) and O_i^(1) are the input and output of the i-th node of the network input layer, and N denotes the number of iterations.
Layer II: the nodes of the membership function layer fuzzify the input variables. Each node represents a membership function, taken to be a Gaussian basis function; the input and output of this layer are:

O_ij^(2) = exp{−(u_i^(2) − m_ij)² / σ_ij²} (13)

where m_ij and σ_ij are, respectively, the mean center and the width of the j-th Gaussian basis function of the i-th linguistic variable of layer II, and m is the number of linguistic variables corresponding to each input node.
Layer III: the fuzzy inference layer, i.e. the rule layer, adds dynamic feedback to give the network better learning efficiency. The feedback link introduces an internal variable h_k, and the sigmoid function is chosen as the activation function of the feedback link's internal variable. The input and output of this layer are:

O_j^(3) = Φ(h_j(N)) · Π_i O_ij^(2)(N), with h_j(N) = Σ_k ω_jk O_k^(3)(N−1) (14)

where ω_jk is the connection weight of the recursive part. The neurons of this layer represent the antecedents of the fuzzy logic rules; each rule node performs the Π operation on the layer-II outputs and the layer-III feedback quantities, O_j^(3) is the output of the third layer, and m denotes the number of rules under full connection. The feedback link mainly computes the values of the internal variables and the activation strength of their corresponding membership functions; this activation strength is related to the matching of the layer-3 rule nodes. The internal variables introduced by the feedback link involve two kinds of nodes: receiving nodes and feedback nodes. The receiving nodes compute the internal variables by weighted summation, realizing the defuzzification function; the internal variables represent the fuzzy inference results of the hidden rules. The feedback nodes adopt the sigmoid function as the fuzzy membership function, realizing the fuzzification of the internal variables; the number of receiving nodes equals the number of feedback nodes, which equals the number of rule-layer nodes. The feedback quantity is connected to layer 3 as an input of the fuzzy rule layer, so the output of the feedback nodes carries the historical information of the rule activation strengths.
Layer IV: the defuzzification layer, i.e. the output layer. The nodes of this layer sum their inputs; the input and output of the network are:

O^(4) = Σ_j λ_j O_j^(3) (15)

where λ_j is the connection weight between the j-th rule node and the output node. The recursive fuzzy neural network model can approximate highly nonlinear dynamic systems; adding the internal variables markedly reduces both training error and test error, the network's prediction performance is superior to that of fuzzy neural networks with self-feedback recursion and dynamic modeling, the learning capability of the network is strengthened, and the dynamic characteristics of the recursive fuzzy network are reflected more fully. The model introduces the internal variables in the feedback link, takes the weighted sum of the rule-layer outputs as the feedback quantity, and feeds the feedback quantity together with the membership-function-layer outputs into the rule layer at the next moment, so the network output contains the historical information of rule-layer activation strengths and outputs, strengthening the network's ability to adapt to nonlinear dynamic systems. The particle swarm optimization of the PSO-optimized recursive fuzzy neural network model proceeds as follows. A. Initialize the recursive fuzzy neural network structure and determine the number of hidden-layer neurons. B. Determine the dimension D of the target search space from the network structure: D = (number of input parameters + 1) × number of hidden-layer neurons + number of translation parameters + number of scaling parameters. C. Determine the number of particles m and set the relevant parameters; initialize the position and velocity vectors of the particles. D. Substitute the particle position and velocity vectors into the algorithm's iteration formula for updating, and perform the optimization with the error energy function as the objective; record the best position pbest found so far by each particle and the best position gbest found so far by the whole swarm. E. Map the swarm's best position gbest to the recursive fuzzy neural network weights and thresholds for self-learning, using the error energy function as the particle fitness. F. If the error energy function value is within the error range allowed by the practical problem, end the iteration; otherwise, return to the algorithm and continue iterating.
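A sketch of one forward step through the four layers (equations (12)-(15)) with the feedback internal variable; the layer sizes and the exact form of equation (14) follow the reconstruction above and are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rfnn_step(x, O3_prev, m_c, sigma, omega, lam):
    """x: (n,) inputs; O3_prev: (M,) rule outputs from the previous step.
    m_c, sigma: (n, M) Gaussian centers/widths; omega: (M, M) recursive
    weights; lam: (M,) output weights. All shapes are illustrative."""
    O2 = np.exp(-((x[:, None] - m_c) ** 2) / (sigma ** 2))  # layer II, eq. (13)
    h = omega @ O3_prev                                     # internal variables (feedback link)
    O3 = sigmoid(h) * np.prod(O2, axis=0)                   # layer III, eq. (14)
    y = lam @ O3                                            # layer IV, eq. (15)
    return y, O3                                            # O3 feeds back at the next moment
```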
5. NARX neural network model of PSO-GRNN neural network model design of PSO
The NARX neural network model of PSO-GRNN neural network model of PSO is formed by connecting the PSO-optimized NARX neural network model in series with the PSO-optimized GRNN neural network model; the output of the PSO-optimized NARX model serves as the input of the PSO-optimized GRNN model. For the design of both models, refer to the relevant design processes of this patent.
6. NARX neural network model of PSO-DRNN neural network model design of PSO
The NARX neural network model of PSO-DRNN neural network model of PSO is formed by connecting the PSO-optimized NARX neural network model in series with the PSO-optimized DRNN neural network model; the output of the PSO-optimized NARX model serves as the input of the PSO-optimized DRNN model. The DRNN neural network model is a dynamic recurrent neural network with feedback and the ability to adapt to time-varying characteristics; it reflects the dynamic variation of the PSO-optimized NARX model's output more directly and vividly and can accurately predict that output value. The DRNN consists of an input layer, a hidden layer acting as the recurrent layer, and an output layer. In the DRNN, let I = [I₁(t), I₂(t), …, I_n(t)] be the input vector, where I_i(t) is the input of the i-th input-layer neuron at time t; let X_j(t) be the output of the j-th recurrent-layer neuron, S_j(t) the sum of inputs of the j-th recurrent neuron, f(·) a function of S, and O(t) the output of the DRNN. The DRNN output is then:
the DRNN neural network model of the PSO comprises structural determination of the DRNN neural network model and optimization of a PSO algorithm, and the method comprises the following basic steps: step1, giving M groups of input and output samples of a DRNN neural network model as a training set, and normalizing the original data; step2, determining a DRNN neural network model structure according to the number of input and output parameters, so as to determine the length of PSO algorithm particles; step3, encoding indirect weights and thresholds among all neurons in the DRNN neural network model structure into individuals represented by real numbers, and if the network comprises N optimized weights and thresholds, each individual represents an initialized particle swarm by an N-dimensional vector formed by N weight and threshold parameters; step4, taking the sum of absolute values of the prediction errors as an individual fitness value, and obtaining an individual extremum and a global extremum according to the fitness value; step5, judging whether the global extremum meets PSO ending conditions, if so, exiting PSO optimizing, and turning to Step6; if not, updating the speed and the position of each particle, and turning to Step4; step6, decoding particles corresponding to the global extremum, and taking the particles as initial weights and thresholds of the DRNN neural network model; step7, giving the optimal initial weight and the threshold value obtained in Step6 to the DRNN neural network model, training and determining the network model, and fusing the multi-point oil gas change speed by using the trained neural network model.
7. BiGRU neural network model-ARIMA time series model design
The BiGRU neural network model-ARIMA time series model is formed by connecting the BiGRU neural network model in series with the ARIMA time series model; the output of the BiGRU neural network model serves as the input of the ARIMA time series model. The GRU neural network model can learn the dependency information within the long-period sequence data fed into it. A GRU is composed of an update gate and a reset gate: the update gate expresses how strongly the hidden-layer output of the neuron at the previous moment influences the hidden layer at the current moment (the larger the update gate value, the larger the influence), while the reset gate expresses how much of the previous moment's hidden-layer output is ignored (the larger the reset gate value, the less information is ignored). In a unidirectional GRU, states always propagate forward in sequence order; however, the input signal at a given moment is associated both with earlier inputs and with inputs at future moments, and relating both the past and the future influence factors to the current prediction makes it easier to extract deep features of the input signal. The bidirectional gated recurrent unit (BiGRU) neural network model is composed of two GRU recurrent layers with opposite directions of information transfer: layer 1 transfers information in time order and layer 2 in reverse time order. The BiGRU obtains a forward hidden state through the forward recurrent layer and a backward hidden state through the backward recurrent layer, then concatenates the two to give the final hidden state output by the BiGRU. The hidden state h_t of the BiGRU at the current moment is determined jointly by the current input x_t, the forward-propagated hidden output at time t−1, and the backward-propagated hidden output at time t−1; the BiGRU can be viewed as a combination of two unidirectional GRUs, with the hidden output at time t obtained as the weighted sum of the forward hidden output and the backward hidden output. The specific training process of the GRU neural network model is as follows. 1) The current input x_t and the previous output h_{t−1} pass through the update gate to output a value in [0, 1], where 0 means information is completely discarded and 1 means completely retained; the calculation is given by formula (17). 2) x_t and h_{t−1} enter the sigmoid layer of the reset gate, which outputs a value in [0, 1], while the tanh layer creates a new candidate vector h̃_t; the calculations are given by formulas (18) and (19). 3) With the update gate as the weight vector, the GRU output h_t is obtained as the weighted average of the candidate vector and the previous output vector; the calculation is given by formula (20):
r_t = σ(W_r · [h_{t−1}, x_t]) (17)
z_t = σ(W_z · [h_{t−1}, x_t]) (18)
h̃_t = tanh(W_h · [z_t ⊙ h_{t−1}, x_t]) (19)

h_t = (1 − r_t) ⊙ h_{t−1} + r_t ⊙ h̃_t (20)

where r_t denotes the update gate vector, z_t the reset gate vector, σ the activation function, x_t the input vector at time t, h_t the output vector at time t, [·,·] the concatenation of two vectors, and ⊙ the element-wise multiplication of matrices. The ARIMA time series model organically combines the autoregressive model (AR) and the moving average model (MA) into a comprehensive prediction method, regarded as the most elaborate and advanced model among time series prediction methods. In practice, because a raw data sequence often shows some trend or cyclic characteristic, it fails the stationarity requirement that the ARMA time series model places on the series, and differencing is a convenient and effective way to remove the data trend. The model built on the differenced data sequence is called the ARIMA time series model, denoted {x_t} ~ ARIMA(p, d, q), where p and q are the orders of the model and d is the number of differencing passes. When d is 0, the ARIMA model reduces to an ARMA model, defined as:
x_t = b₁x_{t−1} + … + b_p x_{t−p} + ε_t + a₁ε_{t−1} + … + a_q ε_{t−q} (21)
where {x_t} is the output data sequence of the BiGRU neural network model, and {ε_t} ~ WN(0, σ²).
Building an ARIMA time series model mainly involves model identification, parameter estimation, and model diagnosis. Model prediction mainly comprises preprocessing the time series and preliminarily determining the model orders; after order determination, the unknown parameters in the model are estimated by combining the time series observations with the values of p, d, and q. Model diagnosis mainly targets the significance test of the whole model and of the parameters within it. Building a model is a continuous optimization process; the AIC and BIC criteria are commonly used for model optimization, i.e., the smaller the value of the minimum-information criterion, the more suitable the model, with the BIC criterion improving on the AIC criterion's deficiency for large-sample sequences.
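A compact sketch of one GRU step following equations (17)-(20) and the gate naming used above (r_t as the update gate, z_t as the reset gate); weight shapes are illustrative and biases are omitted. A BiGRU runs two such cells over the sequence in opposite directions and concatenates their hidden states.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_r, W_z, W_h):
    """One GRU step. Each W maps the concatenation [h_prev, x_t]
    to the hidden dimension."""
    hx = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W_r @ hx)                                        # update gate, eq. (17)
    z_t = sigmoid(W_z @ hx)                                        # reset gate, eq. (18)
    h_cand = np.tanh(W_h @ np.concatenate([z_t * h_prev, x_t]))    # candidate, eq. (19)
    return (1.0 - r_t) * h_prev + r_t * h_cand                     # output, eq. (20)

def bigru(x_seq, h0, Wf, Wb):
    """Wf/Wb: (W_r, W_z, W_h) triples for the forward and backward layers."""
    hf, hb = h0.copy(), h0.copy()
    fwd, bwd = [], []
    for x in x_seq:                        # layer 1: time order
        hf = gru_step(x, hf, *Wf); fwd.append(hf)
    for x in x_seq[::-1]:                  # layer 2: reverse time order
        hb = gru_step(x, hb, *Wb); bwd.append(hb)
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd[::-1])]
```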
8. BiGRU neural network model-fuzzy wavelet neural network model design of dynamic triangle fuzzy number
The BiGRU neural network model-fuzzy wavelet neural network model of dynamic triangular fuzzy numbers is formed by connecting the BiGRU neural network model in series with the fuzzy wavelet neural network model; the output of the BiGRU neural network model serves as the input of the fuzzy wavelet neural network model. The fuzzy wavelet neural network model uses a fuzzy neural network for fuzzy inference and combines it with the multi-resolution analysis property of wavelets, taking a wavelet function as the excitation function of the network's neurons to construct a new fuzzy wavelet network model. The model comprises two parts: a fuzzy neural network (FNN) and a wavelet neural network (WNN). The fuzzy neural network contains 4 basic layers: the first layer is the input layer, with one neuron per input vector component; each neuron of the second layer represents a linguistic variable value; each neuron of the third layer represents a fuzzy rule; and the fourth layer is the normalization layer. Meanwhile, the input of the fuzzy neural network also serves as the input of the wavelet neural network, and each fuzzy rule corresponds to one wavelet network. The wavelet basis functions are a family obtained by translating the wavelet function; wavelet networks generated from different scale functions capture features of different time and frequency domains, and different fuzzy inferences select the corresponding wavelet networks. Wavelets have the property of multi-resolution analysis: with a wavelet function as the neuron excitation function, the dilation and translation of each neuron can be adjusted, low scale parameters can be chosen to learn smooth functions, and higher scales can learn locally singular functions with better accuracy than an ANN with the same number of neurons. The fuzzy wavelet network model is realized with 5 basic layers: input, fuzzification, inference, wavelet network layer, and defuzzification layer; the numbers of nodes in the layers are n, n×M, M, and 5, respectively. Once the number of inputs n and the number of rules M are determined, the structure of the FWN model is determined. Let the input of the fuzzy wavelet neural network be X = [x₁, x₂, …, x_n]; T_i the number of wavelets corresponding to the i-th rule; w_ik the weight coefficients; and ψ a wavelet function. The output of the linear combination of the local-model wavelet network corresponding to rule i is:

ŷ_i = Σ_{k=1}^{T_i} w_ik ψ((x − b_ik)/a_ik) (22)
the first layer is an input layer: each node of the layer is directly connected with each component x of the input vector j Connection is performed, and the input value X= [ X ] 1 ,x 2 ,…x n ]Pass on to the next layer; the second layer calculates membership function values corresponding to each input variable; the third layer calculates the applicability of each rule; the fourth layer is wavelet network layer output mainly used for output compensation; the fifth layer is a control signal output layer, also called an anti-fuzzy layer, and outputs by the fuzzy wavelet neural network:
the five parameters output by the BiGRU neural network model-fuzzy wavelet neural network model of the dynamic triangle fuzzy number are i, j, k, l, m respectively to form the dynamic triangle fuzzy number [ (i, j), k, (l, m) ] of the parameter sensor array output, and the BiGRU neural network model-fuzzy wavelet neural network model output of the dynamic triangle fuzzy number is output as a sensor array parameter processing module;
2. internet of things data acquisition and intelligent big data processing subsystem design
The structure of the Internet of Things data acquisition and intelligent big data processing subsystem is shown in fig. 2. The subsystem comprises a CNN convolutional neural network model of the noise reduction self-encoder-PSO, a noise reduction self-encoder-ARIMA time series model, integration loops, TDL beat delayers, a NARX neural network model of BiGRU neural network model-PSO, the sensor array parameter processing module, and a recursive fuzzy neural network model of BiGRU neural network model-PSO of dynamic triangular fuzzy numbers. For the models involved in this subsystem, refer to the design methods of the corresponding models in the sensor array parameter processing module of this patent.
1. Noise reduction self-encoder-ARIMA time series model design
The noise reduction self-encoder-ARIMA time series model is formed by connecting a noise reduction self-encoder in series with the ARIMA time series model; the output of the noise reduction self-encoder serves as the input of the ARIMA time series model. For the design of the noise reduction self-encoder and the ARIMA time series model, refer to the relevant design processes of this patent.
2. NARX neural network model design of BiGRU neural network model-PSO
The NARX neural network model of BiGRU neural network model-PSO is formed by connecting the BiGRU neural network model in series with the PSO-optimized NARX neural network model; the output of the BiGRU neural network model serves as the input of the PSO-optimized NARX neural network model. For the design of the BiGRU neural network model and the PSO-optimized NARX neural network model, refer to the relevant model design processes of this patent.
3. Recursive fuzzy neural network model design of BiGRU neural network model-PSO of dynamic triangle fuzzy number
The recursive fuzzy neural network model of BiGRU neural network model-PSO of dynamic triangular fuzzy numbers is formed by connecting the BiGRU neural network model in series with the PSO-optimized recursive fuzzy neural network model; the output of the BiGRU neural network model serves as the input of the PSO-optimized recursive fuzzy neural network model. The five parameters output by this model are a, b, c, d, and e, forming the dynamic triangular fuzzy number [(a, b), c, (d, e)], which is the environment-corrected output value of the parameter sensor array; the dynamic triangular fuzzy number [(a, b), c, (d, e)] is the output of the Internet of Things data acquisition and intelligent big data processing subsystem. For the design of the BiGRU neural network model and the PSO-optimized recursive fuzzy neural network model, refer to the relevant model design processes of this patent.
3. Cloud platform system design for data acquisition and intelligent big data processing of Internet of things
The parameter acquisition terminal collects the parameters and uploads them to the cloud platform through the gateway node; the cloud platform supplies the data to the client APP. Through the data provided by the cloud platform, the client APP can monitor the collected parameters in real time and adjust the external equipment attached to the control terminal. The parameter acquisition terminal and the control terminal are responsible, respectively, for collecting parameter information and controlling the external equipment. Bidirectional communication among the parameter acquisition terminal, the control terminal, the field monitoring terminal, the cloud platform, and the client APP is realized through the gateway node, achieving parameter acquisition and adjustment of the environmental equipment. The structure of the cloud platform system for Internet of Things data acquisition and intelligent big data processing is shown in fig. 3.
1. Parameter acquisition terminal design
A large number of parameter acquisition terminals based on a CC2530 self-organizing communication network serve as the parameter sensor array and environmental parameter sensing terminals. Each parameter acquisition terminal comprises sensors for acquiring the parameter sensor array output values and the ambient temperature, humidity, wind speed and illuminance, the corresponding signal conditioning circuits, an STM32 microprocessor and a CC2530 module; the terminal exchanges information with the gateway node through the self-organizing communication network of the CC2530 module. The terminal software mainly realizes self-organizing network communication as well as parameter acquisition and preprocessing. The software is written in C, which offers high compatibility, improves the efficiency of software design and development, and enhances the reliability, readability and portability of the program code. The structure of the parameter acquisition terminal is shown in fig. 4.
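The terminal firmware is written in C according to the text; the short Python sketch below merely illustrates the kind of acquisition preprocessing such firmware might perform. The median-filter width and the plausibility limits are assumptions.

```python
# Sketch of sensor-sample preprocessing: median filter plus range check.
from statistics import median

LIMITS = {"temp_c": (-40.0, 85.0), "humidity": (0.0, 100.0)}  # assumed limits

def preprocess(samples, key, width=5):
    """Median-filter a burst of raw samples, then check plausibility."""
    filtered = median(samples[-width:])
    lo, hi = LIMITS[key]
    if not lo <= filtered <= hi:
        raise ValueError(f"{key} reading {filtered} outside {lo}..{hi}")
    return filtered

print(preprocess([21.4, 21.6, 99.0, 21.5, 21.5], "temp_c"))  # spike rejected
```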
2. Control terminal design
The control terminal exchanges information with the gateway node through the self-organizing communication network of its CC2530 module. It comprises 4 digital-to-analog conversion circuits, an STM32 microprocessor, a CC2530 module and 4 external device controllers, which drive a temperature regulation device, a humidity regulation device, a wind speed regulation device and an illuminance regulation device respectively. The structure of the control terminal is shown in fig. 5.
3. Gateway node design
The gateway node comprises a CC2530 module, an NB-IoT module, an STM32 microprocessor and an RS232 interface. The CC2530 module communicates with the parameter acquisition terminals and the control terminal over the self-organizing communication network; the NB-IoT module realizes bidirectional data interaction between the gateway and the cloud platform; and the RS232 interface connects to the field monitoring terminal to realize information interaction between the gateway node and the field monitoring terminal. The structure of the gateway node is shown in fig. 6.
4. Software function design of field monitoring end
The field monitoring terminal is an industrial control computer. It is mainly used to process the acquired parameters and to exchange information with the gateway node; its main functions are communication parameter setting, data analysis and data management, together with the Internet of things data acquisition and intelligent big data processing subsystem. The management software is developed with Microsoft Visual C++ 6.0, and the communication program is designed by calling the system's MSComm communication control. The functions of the field monitoring terminal software are shown in fig. 7.
The foregoing embodiments merely illustrate the technical concept and features of the present invention; they are intended to enable those skilled in the art to understand and implement the invention, not to limit its scope. All equivalent changes or modifications made according to the spirit of the present invention shall fall within the scope of the present invention.

Claims (5)

1. An Internet of things data acquisition and intelligent big data processing method, characterized by comprising the following steps:
Step 1: construct a sensor array parameter processing module
The sensor array parameter processing module comprises a denoising autoencoder-PSO-optimized CNN convolutional neural network model, a denoising autoencoder-PSO-optimized GRNN neural network model, a denoising autoencoder-PSO-optimized NARX neural network model, a PSO-optimized NARX neural network model-PSO-optimized recursive fuzzy neural network model, a PSO-optimized NARX neural network model-PSO-optimized GRNN neural network model, a PSO-optimized NARX neural network model-PSO-optimized DRNN neural network model, PSO-optimized NARX neural network models, an integral loop, BiGRU neural network model-ARIMA time series models, TDL tapped delay lines, and a dynamic-triangular-fuzzy-number BiGRU neural network model-fuzzy wavelet neural network model;
Step 2: construct an Internet of things data acquisition and intelligent big data processing subsystem
The Internet of things data acquisition and intelligent big data processing subsystem comprises a denoising autoencoder-PSO-optimized CNN convolutional neural network model, a denoising autoencoder-ARIMA time series model, an integral loop, TDL tapped delay lines, a BiGRU neural network model-PSO-optimized NARX neural network model, the sensor array parameter processing module, and a dynamic-triangular-fuzzy-number BiGRU neural network model-PSO-optimized recursive fuzzy neural network model.
2. The Internet of things data acquisition and intelligent big data processing method according to claim 1, characterized in that, in step 1, the sensor array output values serve as the corresponding inputs of the denoising autoencoder-PSO-optimized CNN convolutional neural network model, the denoising autoencoder-PSO-optimized GRNN neural network model and the denoising autoencoder-PSO-optimized NARX neural network model; the outputs of these three models serve, respectively, as the corresponding inputs of the PSO-optimized NARX neural network model-PSO-optimized recursive fuzzy neural network model, the PSO-optimized NARX neural network model-PSO-optimized GRNN neural network model and the PSO-optimized NARX neural network model-PSO-optimized DRNN neural network model; the outputs of these three models serve, respectively, as the corresponding inputs of BiGRU neural network model-ARIMA time series models 1, 2 and 3; the output of BiGRU neural network model-ARIMA time series model 1 also serves as a corresponding input of model 2, the output of model 2 also serves as a corresponding input of model 3, and the output of model 3 serves as the input of TDL tapped delay line 3; the time series difference between the outputs of the PSO-optimized NARX neural network model-PSO-optimized recursive fuzzy neural network model and the PSO-optimized NARX neural network model-PSO-optimized GRNN neural network model, and the time series difference between the outputs of the PSO-optimized NARX neural network model-PSO-optimized GRNN neural network model and the PSO-optimized NARX neural network model-PSO-optimized DRNN neural network model, serve respectively as the corresponding inputs of PSO-optimized NARX neural network models 1 and 2; the output of PSO-optimized NARX neural network model 1 serves as the input of integral loop 1 and a corresponding input of TDL tapped delay line 1, and the output of integral loop 1 serves as a corresponding input of TDL tapped delay line 1; the output of PSO-optimized NARX neural network model 2 serves as the input of integral loop 2 and a corresponding input of TDL tapped delay line 2, and the output of integral loop 2 serves as a corresponding input of TDL tapped delay line 2; the outputs of TDL tapped delay lines 1, 2 and 3 serve as the corresponding inputs of the dynamic-triangular-fuzzy-number BiGRU neural network model-fuzzy wavelet neural network model, whose output serves as the input of TDL tapped delay line 4; the output of TDL tapped delay line 4 serves as the corresponding inputs of the dynamic-triangular-fuzzy-number BiGRU neural network model-fuzzy wavelet neural network model, PSO-optimized NARX neural network model 1, PSO-optimized NARX neural network model 2 and BiGRU neural network model-ARIMA time series model 3; the five parameters output by the dynamic-triangular-fuzzy-number BiGRU neural network model-fuzzy wavelet neural network model are i, j, k, l and m, which form the dynamic triangular fuzzy value [(i, j), k, (l, m)]; this value is the output of the sensor array parameter processing module.
3. The Internet of things data acquisition and intelligent big data processing method according to claim 1, characterized in that, in step 2, the temperature and humidity sensor output values serve, respectively, as the corresponding inputs of the denoising autoencoder-PSO-optimized CNN convolutional neural network model and the denoising autoencoder-ARIMA time series model; the output of the denoising autoencoder-PSO-optimized CNN convolutional neural network model serves as corresponding inputs of TDL tapped delay line 1 and integral loop 1, and the output of integral loop 1 serves as a corresponding input of TDL tapped delay line 1; the output of the denoising autoencoder-ARIMA time series model serves as corresponding inputs of TDL tapped delay line 2 and integral loop 2, and the output of integral loop 2 serves as a corresponding input of TDL tapped delay line 2; the output of the parameter sensor array serves as the input of the sensor array parameter processing module, whose output serves as the corresponding input of TDL tapped delay line 3; the outputs of TDL tapped delay lines 1 and 2 serve as the corresponding inputs of the BiGRU neural network model-PSO-optimized NARX neural network model; the outputs of the BiGRU neural network model-PSO-optimized NARX neural network model and TDL tapped delay line 3 serve as the corresponding inputs of the dynamic-triangular-fuzzy-number BiGRU neural network model-PSO-optimized recursive fuzzy neural network model; the output of the dynamic-triangular-fuzzy-number BiGRU neural network model-PSO-optimized recursive fuzzy neural network model serves as the input of TDL tapped delay line 4, and the output of TDL tapped delay line 4 serves as the corresponding inputs of the dynamic-triangular-fuzzy-number BiGRU neural network model-PSO-optimized recursive fuzzy neural network model, TDL tapped delay line 3 and the BiGRU neural network model-PSO-optimized NARX neural network model; the five parameters output by the dynamic-triangular-fuzzy-number BiGRU neural network model-PSO-optimized recursive fuzzy neural network model are a, b, c, d and e, which form the dynamic triangular fuzzy value [(a, b), c, (d, e)]; this value is the environment-corrected output of the parameter sensor array and is the output of the Internet of things data acquisition and intelligent big data processing subsystem.
4. The Internet of things data acquisition and intelligent big data processing method according to claim 3, characterized in that the integral loop is formed by connecting 2 integration operators S in series; the input of the first integration operator S serves as the integral loop input, and the connection point of the two integration operators and the output of the second integration operator serve as the integral loop outputs.
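Read this way, the integral loop of claim 4 is two discrete integrators in series with both stages tapped: the junction yields the single integral and the second operator yields the double integral. A minimal sketch under that reading, using a cumulative-sum discretization (the patent does not specify one):

```python
# Sketch of the claim-4 integral loop: two integration operators S in series.
import numpy as np

def integral_loop(u, dt=1.0):
    """Return (first integral, second integral) of the input sequence u."""
    first = np.cumsum(u) * dt        # output tapped at the junction
    second = np.cumsum(first) * dt   # output of the second operator S
    return first, second

u = np.ones(5)                        # unit-step input
s1, s2 = integral_loop(u)
print(s1)  # [ 1.  2.  3.  4.  5.]  ~ t
print(s2)  # [ 1.  3.  6. 10. 15.]  ~ t^2/2
```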
5. An Internet of things big data cloud platform system, characterized in that the parameters acquired by the parameter acquisition terminals of the cloud platform system are uploaded to the cloud platform through the gateway node, and the cloud platform provides the data to the client APP; through the data information provided by the cloud platform, the client APP can monitor the acquired parameters in real time and adjust the external equipment of the control terminal; the parameter acquisition terminal and the control terminal are responsible for acquiring parameter information and controlling external equipment; two-way communication among the parameter acquisition terminal, the control terminal, the field monitoring terminal, the cloud platform and the client APP is realized through the gateway node, thereby realizing parameter acquisition and adjustment of the environmental equipment; and the system is loaded with computer program steps for realizing the Internet of things data acquisition and intelligent big data processing method according to any one of claims 1-4.
CN202311200350.8A 2023-09-15 2023-09-15 Internet of things data acquisition and intelligent big data processing method and cloud platform system Pending CN117221352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311200350.8A CN117221352A (en) 2023-09-15 2023-09-15 Internet of things data acquisition and intelligent big data processing method and cloud platform system

Publications (1)

Publication Number Publication Date
CN117221352A true CN117221352A (en) 2023-12-12

Family

ID=89045835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311200350.8A Pending CN117221352A (en) 2023-09-15 2023-09-15 Internet of things data acquisition and intelligent big data processing method and cloud platform system

Country Status (1)

Country Link
CN (1) CN117221352A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117793106A (en) * 2024-02-27 2024-03-29 广东云百科技有限公司 Intelligent gateway, internet of things data acquisition method and Internet of things system
CN117793106B (en) * 2024-02-27 2024-05-28 广东云百科技有限公司 Intelligent gateway, internet of things data acquisition method and Internet of things system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination