CN110866672A - Data processing method, device, terminal and medium - Google Patents

Data processing method, device, terminal and medium

Info

Publication number
CN110866672A
CN110866672A (application CN201910960852.8A)
Authority
CN
China
Prior art keywords
data
analysis result
analyzed
network model
weighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910960852.8A
Other languages
Chinese (zh)
Inventor
刘念慈 (Liu Nianci)
李世武 (Li Shiwu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Financial Assets Exchange LLC
Original Assignee
Chongqing Financial Assets Exchange LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Financial Assets Exchange LLC filed Critical Chongqing Financial Assets Exchange LLC
Priority to CN201910960852.8A priority Critical patent/CN110866672A/en
Publication of CN110866672A publication Critical patent/CN110866672A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The embodiment of the invention discloses a data processing method, a device, a terminal and a medium. The method comprises the following steps: acquiring data to be analyzed, inputting the data to be analyzed into a long short-term memory (LSTM) network model to obtain a first analysis result for the data to be analyzed, and detecting whether the monitoring index parameters corresponding to first prediction data match the monitoring index parameters of the data to be analyzed; if not, inputting the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, and calculating the similarity between the first analysis result and the second analysis result; and if the similarity is greater than a preset similarity, determining a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result. By implementing the method, data can be analyzed comprehensively by combining multiple analysis models, improving the accuracy of the data analysis result.

Description

Data processing method, device, terminal and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, an apparatus, a terminal, and a medium.
Background
Macroeconomic indicators, such as gross domestic product (GDP) and general public budget expenditure, are an important basis for measuring national economic development. Accurate prediction of economic indicators can effectively assist government decision-making and is of great practical significance.
At present, economic indicators are predicted mainly with linear methods whose prediction models are business-rule models: short-period predictions are made from practitioners' empirical formulas combined with traditional econometric models, using algorithms such as trend extrapolation models, distributed-lag models and seasonal adjustment models. These methods have simple principles, strong interpretability and easy logical self-consistency, but their disadvantages are equally evident: because they rely entirely on business-rule experience, derivation relationships among the indicators are difficult to discover, and prediction accuracy is low. How to predict macroeconomic indicators more accurately has therefore become an urgent problem to be solved.
Disclosure of Invention
The embodiments of the application provide a data processing method, a data processing device, a terminal and a medium, which perform comprehensive analysis on data by combining multiple analysis models and improve the accuracy of data analysis results.
In a first aspect, an embodiment of the present invention provides a data processing method, where the method includes:
acquiring data to be analyzed;
inputting the data to be analyzed into a long short-term memory (LSTM) network model to obtain a first analysis result for the data to be analyzed, where the first analysis result is obtained by the LSTM network model processing the data to be analyzed, the first analysis result includes first prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the first prediction data, and the monitoring index parameters include at least one of an average value, a variance and an average growth rate;
detecting whether the monitoring index parameters match the monitoring index parameters of the data to be analyzed;
if not, inputting the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, where the second analysis result is obtained by the deep neural network model processing the data to be analyzed, and the second analysis result includes second prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the second prediction data;
calculating a similarity between the first analysis result and the second analysis result;
and if the similarity is greater than a preset similarity, determining a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result.
In a second aspect, an embodiment of the present invention provides a data processing apparatus, where the apparatus includes:
an acquisition module, used for acquiring data to be analyzed;
an input module, used for inputting the data to be analyzed into a long short-term memory (LSTM) network model to obtain a first analysis result for the data to be analyzed, where the first analysis result includes first prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the first prediction data, the monitoring index parameters including at least one of an average value, a variance and an average growth rate;
a detection module, used for detecting whether the monitoring index parameters match the monitoring index parameters of the data to be analyzed;
the input module being further configured to, if not, input the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, where the second analysis result is obtained by the deep neural network model processing the data to be analyzed, and the second analysis result includes second prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the second prediction data;
a calculation module, used for calculating a similarity between the first analysis result and the second analysis result;
and a determination module, used for determining, if the similarity is greater than a preset similarity, a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result.
In a third aspect, an embodiment of the present invention provides a terminal, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, wherein the computer storage medium stores a computer program, and the computer program includes program instructions, which, when executed by a processor, cause the processor to execute the method according to the first aspect.
In the embodiment of the invention, the terminal acquires data to be analyzed and inputs it into a long short-term memory (LSTM) network model to obtain a first analysis result for the data to be analyzed, and detects whether the monitoring index parameters corresponding to the first prediction data match the monitoring index parameters of the data to be analyzed; if not, the terminal inputs the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, and calculates the similarity between the first analysis result and the second analysis result; and if the similarity is greater than a preset similarity, the terminal determines a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result. By implementing this method, data can be analyzed comprehensively by combining multiple analysis models, improving the accuracy of the data analysis result.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another data processing method provided by the embodiment of the invention;
FIG. 3 is a schematic structural diagram of a long short-term memory (LSTM) network model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a deep neural network model according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a recurrent neural network model according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
The data processing method provided by the embodiment of the invention is realized on a terminal, and the terminal comprises electronic equipment such as a smart phone, a tablet personal computer, a digital audio and video player, an electronic reader, a handheld game machine or vehicle-mounted electronic equipment.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present invention. As shown in fig. 1, the flow of the data processing method in this embodiment may include:
s101, the terminal obtains data to be analyzed.
In the embodiment of the invention, the data to be analyzed may be macroeconomic indicator data, such as gross domestic product (GDP) or general public budget expenditure, or other data with time-series characteristics, such as exchange-rate data, stock-price data or futures price movements.
In a specific implementation, the terminal may obtain macroeconomic indicator data in a target time period and use it as the data to be analyzed; the target time period may be the most recent year, month, week or the like, and may be preset by the user. The terminal may obtain the data to be analyzed from a preset database in which the corresponding macroeconomic indicator data are recorded, or it may directly receive data input by a user and use that input as the data to be analyzed.
S102, the terminal inputs the data to be analyzed into the long short-term memory (LSTM) network model to obtain a first analysis result for the data to be analyzed.
In the embodiment of the invention, after the terminal acquires the data to be analyzed, it can input the data into the trained LSTM network model, so that the trained model processes the data to obtain the first analysis result. The first analysis result comprises first prediction data obtained by analyzing the data to be analyzed and the monitoring index parameters corresponding to the first prediction data, the monitoring index parameters comprising at least one of an average value, a variance and an average growth rate.
For example, suppose the data to be analyzed are GDP figures for 2015-2018, specifically 69, 74, 82 and 90 (trillion yuan). The terminal inputs these data into the LSTM network model, which processes them with its built-in algorithm to obtain first prediction data, for example predicted GDP for 2019-2022 of 100, 110, 121 and 133 (trillion yuan). The terminal then calculates the corresponding monitoring index parameters from the first prediction data, obtaining an average value of 116, a variance of 151.5 and an average growth rate of about 10%. The terminal combines the first prediction data with these monitoring index parameters to obtain the first analysis result.
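As an illustration, the three monitoring index parameters from the example above can be computed as follows (a minimal sketch; the text does not give formulas, so the population variance is assumed here, since dividing by n reproduces the 151.5 of the example):

```python
def monitoring_index_parameters(values):
    # Average value of the series.
    n = len(values)
    mean = sum(values) / n
    # Population variance (dividing by n matches the 151.5 in the example).
    variance = sum((v - mean) ** 2 for v in values) / n
    # Average growth rate: mean of the period-over-period relative changes.
    growth = [(values[i] - values[i - 1]) / values[i - 1] for i in range(1, n)]
    avg_growth_rate = sum(growth) / len(growth)
    return mean, variance, avg_growth_rate
```

Applied to the predicted series 100, 110, 121, 133, this yields a mean of 116, a variance of 151.5 and an average growth rate of roughly 10%, matching the figures above.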
In a specific implementation, because the LSTM network model has requirements on the input format of the data to be analyzed, the terminal needs to preprocess the data after acquiring it and then input the preprocessed data into the model. The preprocessing includes data normalization, redundancy elimination, error correction and the like: data normalization converts the data to be analyzed into data the model can receive, redundancy elimination removes repeated parts of the data, and error correction removes data that do not conform to expected patterns. The terminal then inputs the preprocessed data to be analyzed into the LSTM network model.
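A minimal sketch of such a preprocessing pipeline is shown below; the concrete rules (dropping duplicate timestamps, discarding non-positive values, min-max normalization) are illustrative assumptions, since the text does not fix them:

```python
def preprocess(raw):
    # raw: list of (timestamp, value) pairs.
    # Redundancy elimination: keep the first value seen for each timestamp.
    seen, deduped = set(), []
    for ts, v in raw:
        if ts not in seen:
            seen.add(ts)
            deduped.append((ts, v))
    # Error correction: discard values that violate an expected law
    # (hypothetically, that the series must be strictly positive).
    cleaned = [(ts, v) for ts, v in deduped if v > 0]
    # Normalization: min-max rescaling into [0, 1] so the model can accept it.
    vals = [v for _, v in cleaned]
    lo, hi = min(vals), max(vals)
    return [(ts, (v - lo) / (hi - lo)) for ts, v in cleaned]
```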
It should be noted that before the terminal inputs the data to be analyzed into the long short-term memory (LSTM) network model, the model needs to be trained so that it outputs the required analysis result. The training may be divided into the following steps. 1. Configure the parameters of the LSTM network model, which specifically include the random-inactivation (dropout) retention probability, the fully connected layer activation function, the data time-series length, the learning rate, the forced maximum number of iterations, the learning-rate decay ratio, the iteration round at which learning-rate decay starts, the gradient-clipping threshold, the weight initialization mode, the regularization coefficient, the optimizer, the error threshold for stopping training, the proportion of the training set, the random seed, the number of hidden-layer neurons, the hidden-layer activation function, and the like. 2. Train the configured LSTM network model with sample data to obtain the trained model. Specifically, the sample data may be historical economic data comprising the economic data of the previous n days and of the following m days: the previous-n-days data are selected as the prediction input, input into the configured LSTM network model to obtain m output data, and the difference between the m output data and the following-m-days economic data is checked.
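The iterate-until-the-gap-is-small procedure in step 2 can be sketched generically as follows; `model_step` stands in for one configured gradient-descent update pass and is an assumed abstraction, not part of the patent:

```python
def train_until_converged(model_step, samples, threshold, max_iters=1000):
    # Iterate: each call to model_step performs one update pass over the
    # samples and returns the current gap between the m predicted values
    # and the actual next-m-days data.
    for it in range(max_iters):
        gap = model_step(samples)
        if gap < threshold:
            return it + 1  # iterations needed for the gap to fall below the threshold
    return max_iters  # forced maximum iteration count reached
```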
Through the above method, the long short-term memory (LSTM) network model undergoes multiple rounds of iteration, and its parameters are updated with gradient-descent-type algorithms (such as adaptive momentum and root-mean-square back-propagation). Through multi-round, multi-dimensional cross-validation, when the difference between the m output data and the economic data of the following m days is smaller than a preset threshold, the model at that point is determined to be the trained LSTM network model, where m and n are positive integers that may be preset by the user. Fig. 3 shows a schematic structural diagram of the LSTM network model. As can be seen from fig. 3, the network comprises a plurality of neuron cells H, each containing a forget gate, an input gate and an output gate. The forget gate combines the output value at the previous time with the sample characteristic value at the current time and outputs a probability value through a sigmoid function: the closer the probability is to 0, the more information from the previous time is discarded; the closer it is to 1, the more is retained. This probability measures how much historical information is forgotten at the current time. The input gate likewise combines the previous output value with the current sample characteristic value and outputs a probability through a sigmoid function: the closer to 0, the more current information is discarded; the closer to 1, the more is retained. This probability measures the input proportion of the sample characteristic value at the current time. The forget gate and the input gate jointly determine the state of the current neuron. The output gate combines the previous output value with the current sample characteristic value and outputs a probability through a sigmoid function: the closer to 0, the more of the current neuron's state is discarded; the closer to 1, the more of it is output. This probability measures the output proportion of the current neuron's state. I denotes the input of the model, O denotes the output, X(ti) denotes the model's input data, and 0 ≤ i ≤ n. In one implementation scenario, the LSTM network model is used to predict value-added tax data: the X(ti) input at I represent actual value-added tax data for different time periods, the input, forget and output gates operate on the received data to obtain prediction data for the value-added tax, and the prediction data are output at O.
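The gate behaviour described above can be sketched for a single scalar neuron as follows (illustrative only: a real LSTM uses vector states and learned weight matrices, and the weight dictionary here is a placeholder):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x_t, h_prev, c_prev, w):
    # Each gate combines the previous output h_prev with the current sample
    # characteristic value x_t and squashes the result through a sigmoid.
    f = sigmoid(w['f'][0] * x_t + w['f'][1] * h_prev + w['f'][2])    # forget gate
    i = sigmoid(w['i'][0] * x_t + w['i'][1] * h_prev + w['i'][2])    # input gate
    o = sigmoid(w['o'][0] * x_t + w['o'][1] * h_prev + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x_t + w['g'][1] * h_prev + w['g'][2])  # candidate state
    # The forget gate and input gate jointly determine the new cell state.
    c_t = f * c_prev + i * g
    # The output gate measures how much of the state is emitted.
    h_t = o * math.tanh(c_t)
    return h_t, c_t
```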
S103, the terminal detects whether the monitoring index parameter corresponding to the first prediction data is matched with the monitoring index parameter of the data to be analyzed.
In the embodiment of the invention, after the terminal inputs the data to be analyzed into the long short-term memory (LSTM) network model and obtains the first analysis result, it detects whether the monitoring index parameters corresponding to the first prediction data match the monitoring index parameters of the data to be analyzed.
In one implementation, the monitoring index parameters include an average value, and the terminal detects the match as follows: it calculates a first difference between the average value of the first prediction data and the average value of the data to be analyzed; if the first difference is smaller than a first preset difference, the terminal determines that the monitoring index parameters corresponding to the first prediction data match those of the data to be analyzed.
In one implementation, the monitoring index parameters include a variance, and the terminal calculates a second difference between the variance of the first prediction data and the variance of the data to be analyzed; if the second difference is smaller than a second preset difference, the terminal determines that the monitoring index parameters match.
In one implementation, the monitoring index parameters include an average growth rate, and the terminal calculates a third difference between the average growth rate of the first prediction data and that of the data to be analyzed; if the third difference is smaller than a third preset difference, the terminal determines that the monitoring index parameters match.
In one implementation, the monitoring index parameters include the average value, the variance and the average growth rate. The terminal first calculates the first difference between the average values; if it is smaller than the first preset difference, the terminal calculates the second difference between the variances; if that is smaller than the second preset difference, the terminal calculates the third difference between the average growth rates; and if that in turn is smaller than the third preset difference, the terminal determines that the monitoring index parameters corresponding to the first prediction data match those of the data to be analyzed.
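A sketch of the combined matching test follows; the default thresholds mirror the worked example and would in practice be preset by the user:

```python
def indices_match(pred, observed, thresholds=(70.0, 30.0, 0.10)):
    # Compare mean, variance and average growth rate in turn; the
    # prediction matches only if every difference is below its preset
    # threshold (default values follow the worked example).
    t_mean, t_var, t_growth = thresholds
    return (abs(pred['mean'] - observed['mean']) < t_mean
            and abs(pred['variance'] - observed['variance']) < t_var
            and abs(pred['avg_growth'] - observed['avg_growth']) < t_growth)
```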
For example, suppose the terminal calculates from the first prediction data an average value of 116, a variance of 151.5 and an average growth rate of 10%, with a first preset difference of 70, a second preset difference of 30 and a third preset difference of 10%. If the data to be analyzed have an average value of 80, a variance of 130 and an average growth rate of 12%, all three differences are below their thresholds, so the terminal determines that the monitoring index parameters corresponding to the first prediction data match those of the data to be analyzed. If instead the average value of the data to be analyzed is 30, the first difference (86) is greater than the first preset difference (70), so the terminal determines that the parameters do not match.
Further, if the terminal determines that the monitoring index parameter corresponding to the first prediction data matches with the monitoring index parameter of the data to be analyzed, the terminal may directly determine the first analysis result as a target analysis result for the data to be analyzed; if the monitoring index parameter corresponding to the first prediction data is not matched with the monitoring index parameter of the data to be analyzed, the terminal executes step S104.
And S104, if not, the terminal inputs the data to be analyzed into the deep neural network model so as to obtain a second analysis result aiming at the data to be analyzed.
In the embodiment of the invention, if the monitoring index parameters corresponding to the first prediction data do not match those of the data to be analyzed, the terminal inputs the data to be analyzed into the trained deep neural network model, so that the trained model processes the data to obtain a second analysis result comprising second prediction data obtained by analyzing the data to be analyzed and the monitoring index parameters corresponding to the second prediction data.
In a specific implementation, because the deep neural network model has requirements on the input format of the data to be analyzed, the terminal needs to preprocess the data after acquiring it and then input the preprocessed data into the deep neural network model, where the preprocessing includes data normalization, redundancy elimination, error correction and the like.
It should be noted that before the terminal inputs the data to be analyzed into the deep neural network model, the model needs to be trained so that it outputs the required analysis result. The training may be divided into the following steps. 1. Configure the parameters of the deep neural network model, which may specifically include the random-inactivation (dropout) retention probability, the fully connected layer activation function, the data time-series length, the learning rate, the forced maximum number of iterations, the learning-rate decay ratio, the iteration round at which learning-rate decay starts, the gradient-clipping threshold, the weight initialization mode, the regularization coefficient, the optimizer, the error threshold for stopping training, the proportion of the training set, the random seed, the number of hidden-layer neurons, the hidden-layer activation function, and the like. 2. Train the configured deep neural network model with sample data to obtain the trained model. Specifically, the sample data may be historical economic data comprising the economic data of the previous n days and of the following m days: the previous-n-days data are selected as the prediction input, input into the configured model to obtain m output data, and the difference between the m output data and the following-m-days economic data is checked. Through this method, the deep neural network model undergoes multiple rounds of iteration, and its parameters are updated with gradient-descent-type algorithms (such as adaptive momentum and root-mean-square back-propagation).
Through multi-round multi-dimensional cross test verification, when the difference between m output data and the economic data m days later is smaller than a preset threshold value, the deep neural network model at the moment is determined as the deep neural network model after training is completed, wherein m and n are positive integers and can be specifically preset by a user. As shown in fig. 4, it can be known from fig. 4 that the deep neural network model is composed of an input layer, a hidden layer, and an output layer, where the hidden layer may have s layers, each layer includes j parameters (h11, h12 … hsj) for enhancing the expression ability of the model and weighting the input parameters to express the relationship between the input parameters and the output parameters, the input layer includes a plurality of input ports for inputting (xi1, xi2 … xin) representing the input parameters, the output layer also includes a plurality of output ports for outputting (o1, o2 … or) representing the output parameters, and the deep neural network model can show the nonlinear relationship between the input parameters and the output function. In an implementation scenario, the deep neural network model is used for predicting value-added tax data, actual value-added tax data of specific different time periods (xi1, xi2 … xin) input by an input end are subjected to weighted summation processing by parameters (h11, h12 … hsj) of a hidden layer in the model to obtain output parameters (o1, o2 … on), the model adopts the output layer to output the output parameters, and the output parameters can be the value-added tax data predicted by the deep neural network model. 
The parameters (h11, h12 … hsj) of the hidden layer can be obtained through multiple training tests. During model training, the user may supply initial hidden-layer parameters and input the value-added tax data of the previous n days into the model to obtain predicted output parameters comprising m data. The difference between these m data and the actual value-added tax data of the following m days is then checked: if the difference is smaller than a preset difference, the current hidden-layer parameters are taken as the hidden-layer parameters of the trained deep neural network model; if the difference is greater than or equal to the preset difference, the hidden-layer parameters may be adjusted and the model retrained until the difference is smaller than the preset difference. The difference may be expressed by the Euclidean distance, the average value, or the variance between the two. For example, if the difference is the Euclidean distance, then when the Euclidean distance between the m data and the actual value-added tax data of the following m days is smaller than a preset Euclidean distance, the current hidden-layer parameters are taken as the hidden-layer parameters of the trained deep neural network model.
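The Euclidean-distance stopping test described above can be sketched as a single check; this is an illustrative helper (the function name and the preset bound are assumptions, not the patent's notation):

```python
import numpy as np

def training_converged(predicted, actual, preset_distance):
    """True when the Euclidean distance between the m predicted values and
    the actual value-added tax data of the following m days is below the
    preset Euclidean distance, i.e. training may stop."""
    gap = float(np.linalg.norm(np.asarray(predicted) - np.asarray(actual)))
    return gap < preset_distance
```

For example, predictions `[1.0, 2.0]` against actual data `[1.1, 2.1]` give a distance of about 0.14, so a preset distance of 0.5 would end training.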
And S105, the terminal calculates the similarity between the first analysis result and the second analysis result.
In the embodiment of the invention, after the terminal acquires the second analysis result for the data to be analyzed, it calculates the similarity between the first analysis result and the second analysis result.
In one implementation, the similarity may be calculated as follows: the terminal calculates the Euclidean distance between the first prediction data in the first analysis result and the second prediction data in the second analysis result, determines the target similarity corresponding to that Euclidean distance according to a correspondence between distance and similarity, and determines the target similarity as the similarity between the first analysis result and the second analysis result. The correspondence between distance and similarity may specifically be: if the distance is smaller than a first preset distance, the similarity is determined to be a first preset similarity; if the distance is between the first preset distance and a second preset distance, the similarity is determined to be a second preset similarity; and if the distance is greater than the second preset distance, the similarity is determined to be a third preset similarity, where the first preset distance, the second preset distance, the first preset similarity, the second preset similarity, and the third preset similarity may all be preset by the user. Alternatively, the similarity may be determined as the reciprocal of the distance, i.e., the greater the distance, the smaller the similarity.
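Both correspondences just described (the three-band threshold mapping and the reciprocal alternative) can be sketched as follows; the parameter names are illustrative, and the preset distances and similarities stand in for user-configured values:

```python
def similarity_from_distance(distance, d1, d2, s1, s2, s3):
    """Map a Euclidean distance to one of three preset similarity bands
    (d1 < d2 are the preset distances; s1, s2, s3 the preset similarities)."""
    if distance < d1:
        return s1          # closer than the first preset distance
    if distance <= d2:
        return s2          # between the two preset distances
    return s3              # farther than the second preset distance

def similarity_reciprocal(distance, eps=1e-9):
    """Alternative from the text: the greater the distance, the smaller
    the similarity (eps guards against division by zero)."""
    return 1.0 / (distance + eps)
```

For instance, with preset distances 1.0 and 2.0 and preset similarities 0.9, 0.6, 0.3, a distance of 0.5 maps to similarity 0.9.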
In one implementation, the similarity may also be determined from the monitoring index parameters corresponding to the first prediction data and the second prediction data. Specifically, the terminal may calculate a first difference between the average value of the first prediction data and the average value of the data to be analyzed and determine the similarity between the first analysis result and the second analysis result according to a correspondence between the first difference and the similarity; alternatively, the terminal may calculate a second difference between the variance of the first prediction data and the variance of the data to be analyzed and determine the similarity according to a correspondence between the second difference and the similarity; alternatively, the terminal may calculate a third difference between the average growth rate of the first prediction data and the average growth rate of the data to be analyzed and determine the similarity according to a correspondence between the third difference and the similarity. Or the terminal may sum the first difference, the second difference, and the third difference to obtain a target difference and determine the similarity between the first analysis result and the second analysis result according to a correspondence between the target difference and the similarity.
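The summed target-difference variant above can be sketched as follows. The three differences (mean, variance, average growth rate) follow the text; the final mapping from target difference to similarity is shown as a simple reciprocal form and is an assumption, since the text only states that a preset correspondence is used:

```python
import numpy as np

def average_growth_rate(series):
    """Mean period-over-period growth of a time series."""
    s = np.asarray(series, dtype=float)
    return float(np.mean((s[1:] - s[:-1]) / s[:-1]))

def target_difference(predicted, reference):
    """Sum of the mean, variance, and average-growth-rate gaps."""
    p = np.asarray(predicted, dtype=float)
    r = np.asarray(reference, dtype=float)
    d_mean = abs(p.mean() - r.mean())                            # first difference
    d_var = abs(p.var() - r.var())                               # second difference
    d_growth = abs(average_growth_rate(p) - average_growth_rate(r))  # third difference
    return d_mean + d_var + d_growth

def similarity_from_difference(diff):
    """Assumed correspondence: the larger the target difference,
    the smaller the similarity."""
    return 1.0 / (1.0 + diff)
```

Identical series give a target difference of zero and therefore the maximum similarity of 1.0 under this assumed mapping.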
And S106, if the similarity is greater than the preset similarity, the terminal determines a target analysis result aiming at the data to be analyzed according to the first analysis result and the second analysis result.
In the embodiment of the invention, after the terminal calculates the similarity between the first analysis result and the second analysis result, whether the similarity is greater than the preset similarity is detected, and if the similarity is greater than the preset similarity, the terminal determines the target analysis result aiming at the data to be analyzed according to the first analysis result and the second analysis result.
Specifically, the terminal obtains a first weighting coefficient corresponding to the long and short time sequence memory network model and a second weighting coefficient corresponding to the deep neural network model; weights the first analysis result with the first weighting coefficient to obtain a first weighting result; weights the second analysis result with the second weighting coefficient to obtain a second weighting result; and sums the first weighting result and the second weighting result to obtain the target analysis result. The first weighting coefficient may be determined by a first analysis accuracy corresponding to the long and short time sequence memory network model, and the second weighting coefficient by a second analysis accuracy corresponding to the deep neural network model. Either accuracy may be calculated by the terminal obtaining the number of analyses performed by the model and the number of correct analyses, and determining the ratio of the number of correct analyses to the total number of analyses as the accuracy; the higher the accuracy, the larger the corresponding weighting coefficient. For example, if the first and second analysis accuracies are 90% and 80%, respectively, the corresponding first and second weighting coefficients may be 0.6 and 0.4; if the first and second analysis accuracies are both 90%, the corresponding first and second weighting coefficients may be 0.5 and 0.5.
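The accuracy-weighted fusion above can be sketched generically (it applies equally to the three-model case later in this embodiment). The text derives the coefficients from a preset correspondence table (e.g. 90%/80% → 0.6/0.4); the simple proportional normalisation below is an illustrative assumption, not the patent's exact table:

```python
import numpy as np

def accuracy(correct_count, total_count):
    """Ratio of correct analyses to total analyses, as described above."""
    return correct_count / total_count

def weights_from_accuracies(accuracies):
    """Assumed mapping: coefficients proportional to accuracy, summing to 1,
    so a higher accuracy yields a larger weighting coefficient."""
    a = np.asarray(accuracies, dtype=float)
    return a / a.sum()

def fuse_results(results, weights):
    """Weight each model's prediction vector and sum into the target result."""
    r = np.asarray(results, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (r * w[:, None]).sum(axis=0)
```

With equal accuracies of 90% the coefficients come out as 0.5 and 0.5, matching the example in the text.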
In the embodiment of the invention, a terminal acquires data to be analyzed and inputs the data to be analyzed into a long and short time sequence memory network model so as to acquire a first analysis result aiming at the data to be analyzed, and the terminal detects whether a monitoring index parameter corresponding to first prediction data is matched with a monitoring index parameter of the data to be analyzed; if not, the terminal inputs the data to be analyzed into the deep neural network model to obtain a second analysis result aiming at the data to be analyzed, and the terminal calculates the similarity between the first analysis result and the second analysis result; and if the similarity is greater than the preset similarity, the terminal determines a target analysis result aiming at the data to be analyzed according to the first analysis result and the second analysis result. By implementing the method, the data can be comprehensively analyzed and processed by combining a plurality of analysis models, and the accuracy of the data analysis result is improved.
Fig. 2 is a schematic flow chart of another data processing method according to an embodiment of the present invention. As shown in the figure, the flow of the data processing method in this embodiment may include:
s201, the terminal obtains data to be analyzed.
In the embodiment of the present invention, the data to be analyzed may specifically be macro performance index data, such as gross domestic product and general public budget expenditure, or other data with time-series characteristics, such as exchange rate change data, stock change data, futures rise-and-fall data, and the like.
S202, the terminal inputs the data to be analyzed into the long and short time sequence memory network model so as to obtain a first analysis result aiming at the data to be analyzed.
In the embodiment of the invention, the first analysis result comprises first prediction data obtained by analyzing based on data to be analyzed and monitoring index parameters corresponding to the first prediction data, and the monitoring index parameters comprise at least one of an average value, a variance and an average growth rate.
S203, the terminal detects whether the monitoring index parameter corresponding to the first prediction data is matched with the monitoring index parameter of the data to be analyzed.
And S204, if not, the terminal inputs the data to be analyzed into the deep neural network model so as to obtain a second analysis result aiming at the data to be analyzed.
In the embodiment of the invention, if the monitoring index parameter corresponding to the first prediction data is not matched with the monitoring index parameter of the data to be analyzed, the terminal inputs the data to be analyzed into the trained deep neural network model, so that the trained deep neural network model processes the data to be analyzed to obtain a second analysis result, and the second analysis result comprises second prediction data obtained by analyzing based on the data to be analyzed and the monitoring index parameter corresponding to the second prediction data.
S205, the terminal calculates the similarity between the first analysis result and the second analysis result.
In the embodiment of the invention, after the terminal acquires the second analysis result for the data to be analyzed, it calculates the similarity between the first analysis result and the second analysis result.
And S206, if the similarity between the first analysis result and the second analysis result is smaller than the preset similarity, the terminal inputs the data to be analyzed into the recurrent neural network model to obtain a third analysis result aiming at the data to be analyzed.
In the embodiment of the invention, if the similarity between the first analysis result and the second analysis result is smaller than the preset similarity, the terminal inputs the data to be analyzed into the trained recurrent neural network model, so that the trained recurrent neural network model processes the data to be analyzed to obtain a third analysis result, and the third analysis result comprises third prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the third prediction data.
In specific implementation, because the recurrent neural network model imposes requirements on the input format of the data to be analyzed, the terminal needs to preprocess the data to be analyzed after acquiring it and then input the preprocessed data into the recurrent neural network model, where the preprocessing may include data normalization, redundancy elimination, error correction, and the like.
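An illustrative preprocessing sketch for the three steps named above follows. The text does not specify the exact procedures, so the choices here are assumptions: duplicate removal for redundancy elimination, 3-sigma clipping for error correction, and min-max scaling for normalization:

```python
import numpy as np

def preprocess(series):
    """Redundancy elimination, error correction, and normalization
    (illustrative choices; the patent does not fix the exact methods)."""
    s = np.asarray(series, dtype=float)
    # redundancy elimination: drop consecutive duplicate readings
    s = s[np.insert(np.diff(s) != 0, 0, True)]
    # error correction: clip values beyond 3 standard deviations
    mu, sigma = s.mean(), s.std()
    if sigma > 0:
        s = np.clip(s, mu - 3 * sigma, mu + 3 * sigma)
    # normalization: min-max scale into [0, 1]
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)
```

For example, `preprocess([1.0, 1.0, 2.0, 3.0, 4.0])` drops the duplicate reading and scales the remaining four values into the unit interval.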
It should be noted that before the terminal inputs the data to be analyzed into the recurrent neural network model, the recurrent neural network model needs to be trained so that it outputs the required analysis result. The specific training process for the recurrent neural network model may be divided into the following steps: 1. Configure the parameters of the recurrent neural network model. The parameters may specifically include a random-inactivation (dropout) retention probability, a fully connected layer activation function, a data time-series length, a learning rate, a forced maximum iteration number, a learning-rate decay ratio, the iteration round at which learning-rate decay begins, a gradient-clipping threshold, a weight initialization mode, a regularization term coefficient, an optimizer, an error threshold for stopping training, the proportion of the training set, a random seed, the number of hidden-layer neurons, a hidden-layer activation function, and the like. 2. Train the configured recurrent neural network model with sample data to obtain the trained model. Specifically, the sample data may be historical economic data comprising economic data of the previous n days and economic data of the following m days. The economic data of the previous n days is selected as input data and fed into the configured recurrent neural network model to obtain m output data, and the difference between the m output data and the actual economic data of the following m days is checked.
Through the above method, multiple rounds of iteration are performed on the recurrent neural network model, and its parameters are updated by gradient-descent-type algorithms (such as adaptive momentum estimation and root-mean-square back propagation). Through multiple rounds of multi-dimensional cross-validation, when the difference between the m output data and the economic data of the following m days is smaller than a preset threshold, the recurrent neural network model at that moment is determined as the trained recurrent neural network model, where m and n are positive integers that may be preset by the user. As shown in fig. 5, the recurrent neural network model includes an input layer, a hidden layer, and an output layer. Unlike the deep neural network model, a feedback loop is added to the neurons of the hidden layer of the recurrent neural network model to implement time-series transmission; that is, the recurrent neural network model can capture the time-series relationship between the input parameters. Specifically, I represents the input end of the recurrent neural network model, O represents the output end, and x(ti) represents the input data of the model, where 0 ≤ i ≤ n. In one implementation scenario, the recurrent neural network model is used for predicting value-added tax data: x(ti) input at the input end I specifically represents actual value-added tax data of different time periods, and the hidden layer H of the recurrent neural network model operates on the received value-added tax data to obtain the predicted value-added tax data, which is output at the output end O.
The parameters of the hidden layer H can be obtained through multiple training tests. During model training, the user may supply initial hidden-layer parameters and input the value-added tax data of the previous n days into the model to obtain predicted output parameters comprising m data. The difference between these m data and the actual value-added tax data of the following m days is then checked: if the difference is smaller than a preset difference, the current parameters of the hidden layer H are taken as the hidden-layer parameters of the trained recurrent neural network model; if the difference is greater than or equal to the preset difference, the parameters of the hidden layer H may be adjusted and the model retrained until the difference is smaller than the preset difference. The difference may be expressed by the Euclidean distance, the average value, or the variance between the two.
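The recurrent structure described above, where the hidden layer H feeds its previous state back into itself so that the sequence x(t0) … x(tn) is processed in time order, can be sketched minimally as follows. Weight shapes and the tanh activation are illustrative assumptions, not the patent's configuration:

```python
import numpy as np

def rnn_forward(inputs, w_in, w_rec, w_out, hidden_size):
    """One forward pass of a simple recurrent network: the hidden state h
    carries the feedback loop, so each x(ti) is combined with the state
    produced by the preceding time step."""
    h = np.zeros(hidden_size)                  # initial hidden state
    outputs = []
    for x_t in inputs:                         # time-ordered pass over x(t0)..x(tn)
        h = np.tanh(w_in * x_t + w_rec @ h)    # feedback loop on hidden layer H
        outputs.append(float(w_out @ h))       # value emitted at output end O
    return outputs
```

Each scalar input produces one scalar output, and the hidden state threads the time-series relationship through the sequence.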
And S207, the terminal determines a target analysis result aiming at the data to be analyzed according to the first analysis result, the second analysis result and the third analysis result.
In the embodiment of the invention, after the terminal acquires the first analysis result, the second analysis result and the third analysis result, the target analysis result for the data to be analyzed is determined according to the first analysis result, the second analysis result and the third analysis result.
In specific implementation, the terminal acquires a first weighting coefficient corresponding to the long and short time sequence memory network model, a second weighting coefficient corresponding to the deep neural network model, and a third weighting coefficient corresponding to the recurrent neural network model; weights the first analysis result with the first weighting coefficient to obtain a first weighting result; weights the second analysis result with the second weighting coefficient to obtain a second weighting result; weights the third analysis result with the third weighting coefficient to obtain a third weighting result; and sums the first, second, and third weighting results to obtain the target analysis result. The terminal may obtain the three weighting coefficients as follows: it obtains a first analysis accuracy corresponding to the long and short time sequence memory network model and determines the first weighting coefficient corresponding to that accuracy according to a correspondence between accuracy and weighting coefficient; it obtains a second analysis accuracy corresponding to the deep neural network model and determines the second weighting coefficient in the same manner; and it obtains a third analysis accuracy corresponding to the recurrent neural network model and determines the third weighting coefficient in the same manner.
In the embodiment of the invention, a terminal acquires data to be analyzed and inputs the data to be analyzed into a long and short time sequence memory network model so as to acquire a first analysis result aiming at the data to be analyzed, and the terminal detects whether a monitoring index parameter corresponding to first prediction data is matched with a monitoring index parameter of the data to be analyzed; if not, inputting the data to be analyzed into the deep neural network model to obtain a second analysis result aiming at the data to be analyzed, and calculating the similarity between the first analysis result and the second analysis result by the terminal; if the similarity is smaller than the preset similarity, the terminal inputs the data to be analyzed into the recurrent neural network model so as to obtain a third analysis result aiming at the data to be analyzed, and a target analysis result aiming at the data to be analyzed is determined according to the first analysis result, the second analysis result and the third analysis result. By implementing the method, the data can be comprehensively analyzed and processed by combining a plurality of analysis models, and the accuracy of the data analysis result is improved.
A data processing apparatus according to an embodiment of the present invention will be described in detail with reference to fig. 6. It should be noted that the data processing apparatus shown in fig. 6 is used for executing the method of the embodiments of the present invention shown in figs. 1-2; for convenience of description, only the portions related to the embodiments of the present invention are shown, and for undisclosed technical details reference is made to the embodiments shown in figs. 1-2.
Referring to fig. 6, which is a schematic structural diagram of a data processing apparatus provided in the present invention, the data processing apparatus 60 may include: the device comprises an acquisition module 601, an input module 602, a detection module 603, a calculation module 604 and a determination module 605.
An obtaining module 601, configured to obtain data to be analyzed;
an input module 602, configured to input the data to be analyzed into a long and short time-series memory network model to obtain a first analysis result for the data to be analyzed, where the first analysis result is obtained by processing the data to be analyzed by the long and short time-series memory network model, the first analysis result includes first prediction data obtained by analyzing the data to be analyzed and a monitoring index parameter corresponding to the first prediction data, and the monitoring index parameter includes at least one of an average value, a variance, and an average growth rate;
a detecting module 603, configured to detect whether the monitoring index parameter matches with a monitoring index parameter of the data to be analyzed;
the input module 602 is further configured to, if not, input the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, where the second analysis result is obtained by processing the data to be analyzed by the deep neural network model, and the second analysis result includes second prediction data obtained by analyzing the data to be analyzed and a monitoring index parameter corresponding to the second prediction data;
a calculating module 604, configured to calculate a similarity between the first analysis result and the second analysis result;
a determining module 605, configured to determine a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result if the similarity is greater than a preset similarity.
In an implementation manner, the detecting module 603 is specifically configured to:
calculating a first difference between the average of the first prediction data and the average of the data to be analyzed;
if the first difference is smaller than a first preset difference, calculating a second difference between the variance of the first prediction data and the variance of the data to be analyzed;
if the second difference is smaller than a second preset difference, calculating a third difference between the average growth rate of the first prediction data and the average growth rate of the data to be analyzed;
and if the third difference is smaller than a third preset difference, determining that the monitoring index parameter corresponding to the first prediction data is matched with the monitoring index parameter of the data to be analyzed.
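The three-step matching test performed by the detecting module above can be sketched as sequential threshold checks; the function and parameter names are illustrative:

```python
import numpy as np

def indices_match(predicted, reference, d_mean_max, d_var_max, d_growth_max):
    """Check the mean, variance, and average-growth-rate gaps in turn
    against their preset differences; the monitoring index parameters
    match only if all three checks pass."""
    p = np.asarray(predicted, dtype=float)
    r = np.asarray(reference, dtype=float)
    if abs(p.mean() - r.mean()) >= d_mean_max:       # first difference check
        return False
    if abs(p.var() - r.var()) >= d_var_max:          # second difference check
        return False
    growth = lambda s: float(np.mean((s[1:] - s[:-1]) / s[:-1]))
    return abs(growth(p) - growth(r)) < d_growth_max  # third difference check
```

A mismatch at any step short-circuits the test, mirroring the conditional ordering in the module description.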
In an implementation manner, the calculating module 604 is specifically configured to:
calculating a Euclidean distance between first prediction data in the first analysis result and second prediction data in the second analysis result;
determining the target similarity corresponding to the Euclidean distance according to the corresponding relation between the distance and the similarity;
determining the target similarity as a similarity between the first analysis result and the second analysis result.
In an implementation manner, the determining module 605 is specifically configured to:
acquiring a first weighting coefficient corresponding to the long and short time sequence memory network model and a second weighting coefficient corresponding to the deep neural network model;
weighting the first analysis result by adopting the first weighting coefficient to obtain a first weighting result;
weighting the second analysis result by using the second weighting coefficient to obtain a second weighting result;
and summing the first weighting result and the second weighting result to obtain a target analysis result aiming at the data to be analyzed.
In one implementation, the determining module 605 is further configured to:
if the similarity between the first analysis result and the second analysis result is smaller than a preset similarity, inputting the data to be analyzed into a recurrent neural network model to obtain a third analysis result aiming at the data to be analyzed, wherein the third analysis result is obtained by processing the data to be analyzed by the recurrent neural network model, and the third analysis result comprises third prediction data obtained by analyzing based on the data to be analyzed and monitoring index parameters corresponding to the third prediction data;
and determining a target analysis result aiming at the data to be analyzed according to the first analysis result, the second analysis result and the third analysis result.
In an implementation manner, the determining module 605 is specifically configured to:
acquiring a first weighting coefficient corresponding to the long and short time sequence memory network model, a second weighting coefficient corresponding to the deep neural network model and a third weighting coefficient corresponding to the recurrent neural network model;
weighting the first analysis result by adopting the first weighting coefficient to obtain a first weighting result;
weighting the second analysis result by using the second weighting coefficient to obtain a second weighting result;
weighting the third analysis result by using the third weighting coefficient to obtain a third weighting result;
and summing the first weighting result, the second weighting result and the third weighting result to obtain a target analysis result aiming at the data to be analyzed.
In one implementation, the determining module 605 is further configured to:
acquiring a first analysis accuracy corresponding to the long and short time sequence memory network model, and determining a first weighting coefficient corresponding to the first analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient;
acquiring a second analysis accuracy corresponding to the deep neural network model, and determining a second weighting coefficient corresponding to the second analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient;
and acquiring a third analysis accuracy corresponding to the recurrent neural network model, and determining a third weighting coefficient corresponding to the third analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient.
In the embodiment of the present invention, the obtaining module 601 obtains data to be analyzed, the input module 602 inputs the data to be analyzed into the long and short time series memory network model to obtain a first analysis result for the data to be analyzed, and the detecting module 603 detects whether a monitoring index parameter corresponding to the first prediction data matches with a monitoring index parameter of the data to be analyzed; if not, the input module 602 inputs the data to be analyzed into the deep neural network model to obtain a second analysis result for the data to be analyzed, and the calculation module 604 calculates the similarity between the first analysis result and the second analysis result; if the similarity is greater than the preset similarity, the determining module 605 determines a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in fig. 7, the terminal includes: at least one processor 701, an input device 703, an output device 704, a memory 705, and at least one communication bus 702, where the communication bus 702 is used to implement connection and communication between these components. The input device 703 may be a control panel, a microphone, or the like, and the output device 704 may be a display screen or the like. The memory 705 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory, and may optionally be at least one storage device located remotely from the processor 701. The processor 701 may be combined with the apparatus described in fig. 6; the memory 705 stores a set of program codes, and the processor 701, the input device 703, and the output device 704 call the program codes stored in the memory 705 to perform the following operations:
an input device 703 for acquiring data to be analyzed;
the processor 701 is configured to input the data to be analyzed into a long and short time series memory network model to obtain a first analysis result for the data to be analyzed, where the first analysis result is obtained by processing the data to be analyzed by the long and short time series memory network model, the first analysis result includes first prediction data obtained by analyzing the data to be analyzed and a monitoring index parameter corresponding to the first prediction data, and the monitoring index parameter includes at least one of an average value, a variance, and an average growth rate;
a processor 701, configured to detect whether the monitoring index parameter matches with a monitoring index parameter of the data to be analyzed;
the processor 701 is configured to, if not, input the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, where the second analysis result is obtained by processing the data to be analyzed by the deep neural network model, and the second analysis result includes second prediction data obtained by analyzing the data to be analyzed and a monitoring index parameter corresponding to the second prediction data;
a processor 701 configured to calculate a similarity between the first analysis result and the second analysis result;
the processor 701 is configured to determine a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result if the similarity is greater than a preset similarity.
In one implementation, the processor 701 is specifically configured to:
calculating a first difference between the average of the first prediction data and the average of the data to be analyzed;
if the first difference is smaller than a first preset difference, calculating a second difference between the variance of the first prediction data and the variance of the data to be analyzed;
if the second difference is smaller than a second preset difference, calculating a third difference between the average growth rate of the first prediction data and the average growth rate of the data to be analyzed;
and if the third difference is smaller than a third preset difference, determining that the monitoring index parameter corresponding to the first prediction data is matched with the monitoring index parameter of the data to be analyzed.
In one implementation, the processor 701 is specifically configured to:
calculating a Euclidean distance between first prediction data in the first analysis result and second prediction data in the second analysis result;
determining the target similarity corresponding to the Euclidean distance according to the corresponding relation between the distance and the similarity;
determining the target similarity as a similarity between the first analysis result and the second analysis result.
In one implementation, the processor 701 is specifically configured to:
acquiring a first weighting coefficient corresponding to the long short-term memory network model and a second weighting coefficient corresponding to the deep neural network model;
weighting the first analysis result by adopting the first weighting coefficient to obtain a first weighting result;
weighting the second analysis result by using the second weighting coefficient to obtain a second weighting result;
and summing the first weighting result and the second weighting result to obtain a target analysis result aiming at the data to be analyzed.
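The two-model weighted fusion can be sketched as below. The coefficients `w1` and `w2` are assumed example values; per the later passages, the patent derives them from each model's analysis accuracy.

```python
# Weighted element-wise fusion of the two prediction sequences; the
# default weights are illustrative, not values from the patent.
def fuse_two(first, second, w1=0.6, w2=0.4):
    return [w1 * a + w2 * b for a, b in zip(first, second)]
```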
In one implementation, the processor 701 is specifically configured to:
if the similarity between the first analysis result and the second analysis result is smaller than a preset similarity, inputting the data to be analyzed into a recurrent neural network model to obtain a third analysis result aiming at the data to be analyzed, wherein the third analysis result is obtained by processing the data to be analyzed by the recurrent neural network model, and the third analysis result comprises third prediction data obtained by analyzing based on the data to be analyzed and monitoring index parameters corresponding to the third prediction data;
and determining a target analysis result aiming at the data to be analyzed according to the first analysis result, the second analysis result and the third analysis result.
In one implementation, the processor 701 is specifically configured to:
acquiring a first weighting coefficient corresponding to the long short-term memory network model, a second weighting coefficient corresponding to the deep neural network model, and a third weighting coefficient corresponding to the recurrent neural network model;
weighting the first analysis result by adopting the first weighting coefficient to obtain a first weighting result;
weighting the second analysis result by using the second weighting coefficient to obtain a second weighting result;
weighting the third analysis result by using the third weighting coefficient to obtain a third weighting result;
and summing the first weighting result, the second weighting result and the third weighting result to obtain a target analysis result aiming at the data to be analyzed.
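The three-model fusion follows the same pattern; again, the coefficients are illustrative placeholders rather than values from the patent, though choosing them to sum to 1 keeps the fused result on the same scale as the inputs.

```python
# Weighted element-wise fusion of the LSTM, DNN and RNN prediction
# sequences; default weights are assumed for illustration and sum to 1.
def fuse_three(first, second, third, w1=0.4, w2=0.35, w3=0.25):
    return [w1 * a + w2 * b + w3 * c
            for a, b, c in zip(first, second, third)]
```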
In one implementation, the processor 701 is specifically configured to:
acquiring a first analysis accuracy corresponding to the long short-term memory network model, and determining a first weighting coefficient corresponding to the first analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient;
acquiring a second analysis accuracy corresponding to the deep neural network model, and determining a second weighting coefficient corresponding to the second analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient;
and acquiring a third analysis accuracy corresponding to the recurrent neural network model, and determining a third weighting coefficient corresponding to the third analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient.
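One way to realize the accuracy-to-coefficient correspondence described above is a banded lookup table, sketched below. The bands and weights are invented for illustration; the patent does not disclose the actual correspondence.

```python
# Assumed accuracy-to-weight correspondence: each (floor, weight) pair
# means "accuracy >= floor maps to this weighting coefficient".
ACCURACY_BANDS = [
    (0.90, 0.5),
    (0.80, 0.3),
    (0.00, 0.2),
]

def weight_for(accuracy):
    # Bands are ordered from highest floor to lowest, so the first
    # match is the tightest band containing the accuracy.
    for floor, weight in ACCURACY_BANDS:
        if accuracy >= floor:
            return weight
    return ACCURACY_BANDS[-1][1]
```

The same lookup would be applied once per model to obtain the first, second, and third weighting coefficients.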
In the embodiment of the present invention, the input device 703 acquires data to be analyzed, and the processor 701 inputs the data to be analyzed into a long short-term memory network model to obtain a first analysis result for the data to be analyzed. The processor 701 then detects whether the monitoring index parameters corresponding to the first prediction data match the monitoring index parameters of the data to be analyzed; if not, the processor 701 inputs the data to be analyzed into the deep neural network model to obtain a second analysis result for the data to be analyzed, and calculates the similarity between the first analysis result and the second analysis result. If the similarity is greater than the preset similarity, the processor 701 determines a target analysis result for the data to be analyzed according to the first analysis result and the second analysis result.
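The end-to-end decision flow summarized above can be sketched as follows. The model arguments are stand-in callables for the trained LSTM, DNN, and RNN models, and the equal fusion weights and similarity threshold are assumed for illustration (the patent weights by model accuracy instead).

```python
# Sketch of the overall flow: LSTM first; fall back to the DNN when the
# monitoring indexes do not match; fall back further to the RNN when the
# two results disagree. Equal weights are illustrative only.
def analyze(data, lstm, dnn, rnn, sim_fn, match_fn, sim_threshold=0.8):
    first = lstm(data)
    if match_fn(first, data):
        return first  # LSTM result alone suffices
    second = dnn(data)
    if sim_fn(first, second) > sim_threshold:
        # Fuse the two agreeing results
        return [0.5 * a + 0.5 * b for a, b in zip(first, second)]
    third = rnn(data)
    # Fuse all three results
    return [(a + b + c) / 3 for a, b, c in zip(first, second, third)]
```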
The modules in the embodiment of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application-Specific Integrated Circuit).
It should be understood that, in the embodiment of the present invention, the processor 701 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The bus 702 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. Although FIG. 7 shows only one thick line for ease of illustration, this does not mean there is only one bus or only one type of bus.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer storage medium and, when executed, may include the processes of the above method embodiments. The computer storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and certainly cannot be taken to limit the scope of the invention; equivalent variations made according to the appended claims still fall within the scope of the present invention.

Claims (10)

1. A method of data processing, the method comprising:
acquiring data to be analyzed;
inputting the data to be analyzed into a long short-term memory network model to obtain a first analysis result for the data to be analyzed, wherein the first analysis result is obtained by processing the data to be analyzed by the long short-term memory network model, the first analysis result comprises first prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the first prediction data, and the monitoring index parameters comprise at least one of an average value, a variance and an average growth rate;
detecting whether the monitoring index parameters are matched with the monitoring index parameters of the data to be analyzed;
if not, inputting the data to be analyzed into a deep neural network model to obtain a second analysis result aiming at the data to be analyzed, wherein the second analysis result is obtained by processing the data to be analyzed by the deep neural network model, and the second analysis result comprises second prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the second prediction data;
calculating a similarity between the first analysis result and the second analysis result;
and if the similarity is greater than the preset similarity, determining a target analysis result aiming at the data to be analyzed according to the first analysis result and the second analysis result.
2. The method of claim 1, wherein the detecting whether the monitoring index parameter matches a monitoring index parameter of the data to be analyzed comprises:
calculating a first difference between the average of the first prediction data and the average of the data to be analyzed;
if the first difference is smaller than a first preset difference, calculating a second difference between the variance of the first prediction data and the variance of the data to be analyzed;
if the second difference is smaller than a second preset difference, calculating a third difference between the average growth rate of the first prediction data and the average growth rate of the data to be analyzed;
and if the third difference is smaller than a third preset difference, determining that the monitoring index parameter corresponding to the first prediction data is matched with the monitoring index parameter of the data to be analyzed.
3. The method of claim 1, wherein said calculating a similarity between said first analysis result and said second analysis result comprises:
calculating a Euclidean distance between first prediction data in the first analysis result and second prediction data in the second analysis result;
determining the target similarity corresponding to the Euclidean distance according to the corresponding relation between the distance and the similarity;
determining the target similarity as a similarity between the first analysis result and the second analysis result.
4. The method of claim 1, wherein determining a target analysis result for the data to be analyzed from the first analysis result and the second analysis result comprises:
acquiring a first weighting coefficient corresponding to the long short-term memory network model and a second weighting coefficient corresponding to the deep neural network model;
weighting the first analysis result by adopting the first weighting coefficient to obtain a first weighting result;
weighting the second analysis result by using the second weighting coefficient to obtain a second weighting result;
and summing the first weighting result and the second weighting result to obtain a target analysis result aiming at the data to be analyzed.
5. The method of claim 1, wherein after calculating the similarity between the first analysis result and the second analysis result, the method further comprises:
if the similarity between the first analysis result and the second analysis result is smaller than a preset similarity, inputting the data to be analyzed into a recurrent neural network model to obtain a third analysis result aiming at the data to be analyzed, wherein the third analysis result is obtained by processing the data to be analyzed by the recurrent neural network model, and the third analysis result comprises third prediction data obtained by analyzing based on the data to be analyzed and monitoring index parameters corresponding to the third prediction data;
and determining a target analysis result aiming at the data to be analyzed according to the first analysis result, the second analysis result and the third analysis result.
6. The method of claim 5, wherein determining a target analysis result for the data to be analyzed from the first analysis result, the second analysis result, and the third analysis result comprises:
acquiring a first weighting coefficient corresponding to the long short-term memory network model, a second weighting coefficient corresponding to the deep neural network model, and a third weighting coefficient corresponding to the recurrent neural network model;
weighting the first analysis result by adopting the first weighting coefficient to obtain a first weighting result;
weighting the second analysis result by using the second weighting coefficient to obtain a second weighting result;
weighting the third analysis result by using the third weighting coefficient to obtain a third weighting result;
and summing the first weighting result, the second weighting result and the third weighting result to obtain a target analysis result aiming at the data to be analyzed.
7. The method according to claim 6, wherein the obtaining of the first weighting coefficient corresponding to the long short-term memory network model, the second weighting coefficient corresponding to the deep neural network model, and the third weighting coefficient corresponding to the recurrent neural network model includes:
acquiring a first analysis accuracy corresponding to the long short-term memory network model, and determining a first weighting coefficient corresponding to the first analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient;
acquiring a second analysis accuracy corresponding to the deep neural network model, and determining a second weighting coefficient corresponding to the second analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient;
and acquiring a third analysis accuracy corresponding to the recurrent neural network model, and determining a third weighting coefficient corresponding to the third analysis accuracy according to the corresponding relation between the accuracy and the weighting coefficient.
8. A data processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring data to be analyzed;
the input module is used for inputting the data to be analyzed into a long short-term memory network model to obtain a first analysis result for the data to be analyzed, wherein the first analysis result comprises first prediction data obtained by analyzing the data to be analyzed and monitoring index parameters corresponding to the first prediction data;
the detection module is used for detecting whether the monitoring index parameters are matched with the monitoring index parameters of the data to be analyzed;
the input module is further configured to, if not, input the data to be analyzed into a deep neural network model to obtain a second analysis result for the data to be analyzed, where the second analysis result is obtained by processing the data to be analyzed by the deep neural network model, and the second analysis result includes second prediction data obtained by analyzing the data to be analyzed and a monitoring index parameter corresponding to the second prediction data;
the calculation module is used for calculating the similarity between the first analysis result and the second analysis result;
and the determining module is used for determining a target analysis result aiming at the data to be analyzed according to the first analysis result and the second analysis result if the similarity is greater than a preset similarity.
9. A terminal, comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-7.
CN201910960852.8A 2019-10-10 2019-10-10 Data processing method, device, terminal and medium Pending CN110866672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910960852.8A CN110866672A (en) 2019-10-10 2019-10-10 Data processing method, device, terminal and medium

Publications (1)

Publication Number Publication Date
CN110866672A true CN110866672A (en) 2020-03-06

Family

ID=69652823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910960852.8A Pending CN110866672A (en) 2019-10-10 2019-10-10 Data processing method, device, terminal and medium

Country Status (1)

Country Link
CN (1) CN110866672A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507212A (en) * 2020-04-03 2020-08-07 咪咕文化科技有限公司 Video focus area extraction method, device, equipment and storage medium
WO2023151277A1 (en) * 2022-02-08 2023-08-17 深圳前海微众银行股份有限公司 Data processing method and apparatus, device, and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073785A (en) * 2010-11-26 2011-05-25 哈尔滨工程大学 Daily gas load combination prediction method based on generalized dynamic fuzzy neural network
CN107590569A (en) * 2017-09-25 2018-01-16 山东浪潮云服务信息科技有限公司 A kind of data predication method and device
CN108108520A (en) * 2017-11-29 2018-06-01 海南电网有限责任公司电力科学研究院 A kind of transmission line of electricity damage to crops caused by thunder risk forecast model based on Artificial neural network ensemble
CN108399248A (en) * 2018-03-02 2018-08-14 郑州云海信息技术有限公司 A kind of time series data prediction technique, device and equipment
CN108416479A (en) * 2018-03-20 2018-08-17 易联众信息技术股份有限公司 A kind of construction method of the decision scheme Data Analysis Model based on GDP
CN108763277A (en) * 2018-04-10 2018-11-06 平安科技(深圳)有限公司 A kind of data analysing method, computer readable storage medium and terminal device
CN110024097A (en) * 2016-11-30 2019-07-16 Sk 株式会社 Semiconductors manufacture yield forecasting system and method based on machine learning
CN110110796A (en) * 2019-05-13 2019-08-09 哈尔滨工程大学 A kind of analysis method of the marine ships time series data based on deep learning
CN110297179A (en) * 2018-05-11 2019-10-01 宫文峰 Diesel-driven generator failure predication and monitoring system device based on integrated deep learning
JP2022085163A (en) * 2020-11-27 2022-06-08 ロベルト・ボッシュ・ゲゼルシャフト・ミト・ベシュレンクテル・ハフツング Data analysis device and data analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination