Disclosure of Invention
In view of the above problems, the present invention is directed to a financial prediction system based on blockchains and artificial intelligence.
The purpose of the invention is realized by the following technical scheme:
a financial prediction system based on blockchain and artificial intelligence comprises a data acquisition module, a data preprocessing module, a blockchain storage module and a financial prediction module. The data acquisition module is used for acquiring a financial time series and inputting the acquired financial time series into the data preprocessing module for processing. The data preprocessing module is used for removing noise data from the financial time series and transmitting the preprocessed financial time series to the blockchain storage module for storage. The financial prediction module is used for retrieving the financial time series from the blockchain storage module and predicting the trend of the financial data according to the financial time series; the financial prediction module adopts a BP (back propagation) neural network to predict the trend of the financial time series, and a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network adopted by the financial prediction module. The particle swarm algorithm is defined to update in the following way:
v_i(t+1) = ω_i(t)v_i(t) + c_1r_1(p′_i(t) - x_i(t)) + c_2r_2(g(t) - x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
In the formulas, ω_i(t) represents the inertia weight factor of particle i at the t-th iteration; v_i(t+1) and x_i(t+1) denote the step size and position of particle i at the (t+1)-th iteration; v_i(t) and x_i(t) denote the step size and position of particle i at the t-th iteration; c_1 and c_2 represent learning factors; r_1 and r_2 represent random numbers between 0 and 1; g(t) represents the global optimal solution; and p′_i(t) represents the local learning solution of particle i at the t-th iteration. The value of p′_i(t) is determined in the following manner:
Let P(t) denote the set of individual optimal solutions of the particles in the swarm at the t-th iteration, P(t) = {p_i(t), i = 1, 2, ..., N}, where p_i(t) represents the individual optimal solution of particle i at the t-th iteration and N represents the number of particles in the swarm. Let M(t) denote the local classification number of the particle swarm algorithm at the t-th iteration. Define the local detection coefficient of the individual optimal solution p_i(t) as ε_i(t); the expression of ε_i(t) is:
In the formula, x_l(t) represents the position of particle l at the t-th iteration; ρ_1(p_i(t), x_l(t)) represents a first value function; d(p_i(t), x_l(t)) represents the Euclidean distance between the individual optimal solution p_i(t) and the position x_l(t): when d(p_i(t), x_l(t)) < D, then ρ_1(p_i(t), x_l(t)) = 1, and when d(p_i(t), x_l(t)) ≥ D, then ρ_1(p_i(t), x_l(t)) = 0, where D is a given distance threshold;
wherein p_j(t) represents the individual optimal solution of particle j at the t-th iteration, and d(p_i(t), p_j(t)) represents the Euclidean distance between the individual optimal solutions p_i(t) and p_j(t); p_i(e) represents the individual optimal solution of particle i at the e-th iteration, and ρ_2(p_i(t), p_i(e)) represents a second value function: when p_i(t) = p_i(e), then ρ_2(p_i(t), p_i(e)) = 1, and when p_i(t) ≠ p_i(e), then ρ_2(p_i(t), p_i(e)) = 0; h_i(t) represents the fitness function value of particle i at the t-th iteration, and h_max(t) and h_min(t) respectively represent the maximum and minimum fitness function values of the particles in the swarm at the t-th iteration;
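As a sketch of the three ingredients that the local detection coefficient ε_i(t) combines (its exact expression is given by the patent's formula and is not reproduced here), the neighborhood density via ρ_1, the stagnation count via ρ_2, and the fitness normalized by h_min(t) and h_max(t) can be computed as follows; all function and variable names are illustrative:

```python
import math

def detection_terms(p_i, positions, pbest_history, h_i, h_min, h_max, D):
    """Ingredients of the local detection coefficient for one solution p_i(t).

    p_i           : individual optimal solution p_i(t)
    positions     : current positions x_l(t) of all particles
    pbest_history : individual optimal solutions p_i(e) of this particle
                    over past iterations e
    h_i           : fitness value h_i(t); h_min, h_max bound the swarm's fitness
    D             : given distance threshold
    """
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # rho_1: count particles whose position lies within distance D of p_i(t)
    density = sum(1 for x in positions if dist(p_i, x) < D)
    # rho_2: iterations at which the individual optimum equaled p_i(t),
    # i.e. how long the individual optimal solution has stagnated
    stagnation = sum(1 for p_e in pbest_history if p_e == p_i)
    # fitness normalized to [0, 1] via h_min(t) and h_max(t)
    norm_fitness = (h_i - h_min) / (h_max - h_min) if h_max > h_min else 0.0
    return density, stagnation, norm_fitness
```

How the patent weights these three terms inside ε_i(t) is fixed by its formula; the sketch only shows how each term is counted.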
The individual optimal solutions in the set P(t) are sorted from small to large according to the values of their local detection coefficients, and the first M(t) individual optimal solutions are selected as candidate local learning solutions; the selected M(t) candidate local learning solutions form a set P′(t), and each particle in the swarm takes the candidate local learning solution in P′(t) closest to it as its local learning solution, namely:
wherein p_r(t) represents the individual optimal solution of particle r at the t-th iteration, and d(x_i(t), p_r(t)) represents the Euclidean distance between the position x_i(t) and the individual optimal solution p_r(t).
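Assuming standard particle-swarm semantics for the symbols defined above, one update step of the improved rule can be sketched in Python; the function name and the default learning-factor values are illustrative assumptions, not taken from the patent:

```python
import random

def pso_update(x, v, w, p_local, g, c1=2.0, c2=2.0):
    """One particle-swarm update step for a single particle.

    x, v    : current position x_i(t) and step size (velocity) v_i(t)
    w       : inertia weight factor omega_i(t)
    p_local : local learning solution p'_i(t)
    g       : global optimal solution g(t)
    c1, c2  : learning factors (2.0 is a common choice, assumed here)
    """
    r1, r2 = random.random(), random.random()  # random numbers in [0, 1)
    v_next = [w * vd + c1 * r1 * (pd - xd) + c2 * r2 * (gd - xd)
              for vd, xd, pd, gd in zip(v, x, p_local, g)]
    x_next = [xd + vd for xd, vd in zip(x, v_next)]
    return x_next, v_next
```

The only change relative to the traditional rule is that `p_local` is the local learning solution p′_i(t) rather than the particle's own individual optimal solution.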
Preferably, the fitness function h of the particle swarm algorithm is defined as:
In the formula, y_u represents the output value of the u-th training sample, o_u represents the target value of the u-th training sample, and M represents the number of training samples.
Preferably, the expression of the inertia weight factor ω_i(t) of particle i at the t-th iteration is:
In the formula, ω_max and ω_min respectively represent the maximum and minimum inertia weight factors, t represents the current iteration number, T_max represents the maximum number of iterations, and δ_i(t) represents the inertia weight adjustment factor of particle i at the t-th iteration. Let x_i(t-2) denote the position of particle i at the (t-2)-th iteration and x_i(t-1) the position of particle i at the (t-1)-th iteration. Define α_i(t) as the advance attribute value of particle i at the t-th iteration and β_i(t) as the optimization attribute value of particle i at the t-th iteration, and
β_i(t) = ρ_5(p_i(t), p_i(t-1)), wherein d(x_i(t-1), g(t)) represents the Euclidean distance between the position x_i(t-1) and the global optimal solution g(t), d(x_i(t), g(t)) represents the Euclidean distance between the position x_i(t) and the global optimal solution g(t), d(x_i(t-1), x_i(t-2)) represents the Euclidean distance between the positions x_i(t-1) and x_i(t-2), and d(x_i(t), x_i(t-2)) represents the Euclidean distance between the positions x_i(t) and x_i(t-2); a third value function and a fourth value function are each defined piecewise over these distances, taking one value when the corresponding condition holds and the other value otherwise; p_i(t-1) represents the individual optimal solution of particle i at the (t-1)-th iteration, and ρ_5(p_i(t), p_i(t-1)) represents a fifth value function: when p_i(t) = p_i(t-1), then ρ_5(p_i(t), p_i(t-1)) = 1, and when p_i(t) ≠ p_i(t-1), then ρ_5(p_i(t), p_i(t-1)) = 0;
Let τ_i(t) denote the iteration number at or nearest before the t-th iteration at which the inertia weight adjustment factor of particle i was equal to -1; then the expression of δ_i(t) is:
wherein a sixth value function is defined with respect to a given threshold M(δ), taking one value when the corresponding condition holds and the other value otherwise.
The beneficial effects of the invention are as follows:
The trend of the financial time series is predicted by a BP neural network, and a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network used in the financial prediction module, improving the update mode of the traditional particle swarm. In the traditional update process, each particle learns locally from its own individual optimal solution; the preferred embodiment instead selects only a part of the better individual optimal solutions as the local learning solutions of the particles, defining the local classification number M(t) and a local detection coefficient for each individual optimal solution. Through the local detection coefficient, the probability that an individual optimal solution becomes a local learning solution is determined by jointly considering the density of particles in its local neighborhood, its stagnation count, and its fitness function value. When the particle density in the local neighborhood of an individual optimal solution is small, the search of that neighborhood is strengthened, i.e., the probability that the solution becomes a local learning solution is increased; when the stagnation count of an individual optimal solution is large, its position is more likely to be a local optimum, so the probability that it becomes a local learning solution is reduced; and when the fitness function value of an individual optimal solution is small, the solution is better, so the probability that it becomes a local learning solution is increased. The value of the local classification number increases with the iteration number: in the early stage of iteration, fewer representative individual optimal solutions are selected from the swarm as objects of local learning, which accelerates the convergence of the particle swarm algorithm, while in the later stage more representative individual optimal solutions are selected, which enhances the local search capability of the algorithm. In summary, in the update process of the particle swarm algorithm, the preferred embodiment introduces a local learning solution to replace the individual optimal solution in the traditional update formula as the object of local learning, and through the defined local classification number and local detection coefficients it selects a part of the better individual optimal solutions for the particles to learn from, thereby accelerating convergence while preserving the optimization capability of the algorithm.
An inertia weight adjustment factor is also introduced into the inertia weight factor of the particle swarm algorithm: when a particle returns during the update process, the convergence speed of the algorithm is affected, and when the individual optimal solution of a particle does not change during updating, the particle may have fallen into a local optimum, affecting the optimization performance. Aiming at these two situations, the inertia weight adjustment factor defined in the preferred embodiment introduces the advance attribute value and the optimization attribute value of the particle in the current iteration. The advance attribute value measures the update route of the particle: compared with the fitness function value, the Euclidean distance more effectively reflects the relationship between particle positions, and by comparing the distances between the particle's positions over three consecutive iterations and the current global optimal solution, it is judged whether the particle advances towards the current global optimal solution after the update. When the position of the particle in the current iteration is farther from the current global optimal solution than its position in the previous iteration, and closer to its positions in earlier iterations, the particle has exhibited a return phenomenon during the update, and its advance attribute value is increased by 1; that is, the advance attribute value records the particle's returns during updating. The optimization attribute value measures the optimization performance of the particle: when the individual optimal solution of the particle does not change after an iterative update, the particle may have fallen into a local optimum, and its optimization attribute value is increased by 1; that is, the optimization attribute value records the number of times the particle may have fallen into a local optimum. In conclusion, through the advance attribute value and the optimization attribute value, the inertia weight adjustment factor records the number of returns of the particle and the number of times it falls into a local optimum during iterative updating; when the sum of these two counts is smaller than the given threshold, the inertia weight adjustment factor of the particle equals 0, i.e., the particle is iteratively updated according to the traditional inertia weight factor value.
Detailed Description
The invention is further described with reference to the following examples.
Referring to fig. 1, the financial prediction system based on blockchain and artificial intelligence of the embodiment includes a data acquisition module, a data preprocessing module, a blockchain storage module, and a financial prediction module. The data acquisition module is configured to acquire a financial time series and input the acquired financial time series into the data preprocessing module for processing. The data preprocessing module is used for removing noise data from the financial time series and transmitting the preprocessed financial time series to the blockchain storage module for storage. The financial prediction module is used for retrieving the financial time series from the blockchain storage module and predicting the trend of the financial data according to the financial time series; the financial prediction module adopts a BP (back propagation) neural network to predict the trend of the financial time series, and a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network adopted by the financial prediction module. The particle swarm algorithm is defined to update in the following way:
v_i(t+1) = ω_i(t)v_i(t) + c_1r_1(p′_i(t) - x_i(t)) + c_2r_2(g(t) - x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
In the formulas, ω_i(t) represents the inertia weight factor of particle i at the t-th iteration; v_i(t+1) and x_i(t+1) denote the step size and position of particle i at the (t+1)-th iteration; v_i(t) and x_i(t) denote the step size and position of particle i at the t-th iteration; c_1 and c_2 represent learning factors; r_1 and r_2 represent random numbers between 0 and 1; g(t) represents the global optimal solution; and p′_i(t) represents the local learning solution of particle i at the t-th iteration. The value of p′_i(t) is determined in the following manner:
Let P(t) denote the set of individual optimal solutions of the particles in the swarm at the t-th iteration, P(t) = {p_i(t), i = 1, 2, ..., N}, where p_i(t) represents the individual optimal solution of particle i at the t-th iteration and N represents the number of particles in the swarm. Let M(t) denote the local classification number of the particle swarm algorithm at the t-th iteration. Define the local detection coefficient of the individual optimal solution p_i(t) as ε_i(t); the expression of ε_i(t) is:
In the formula, x_l(t) represents the position of particle l at the t-th iteration; ρ_1(p_i(t), x_l(t)) represents a first value function; d(p_i(t), x_l(t)) represents the Euclidean distance between the individual optimal solution p_i(t) and the position x_l(t): when d(p_i(t), x_l(t)) < D, then ρ_1(p_i(t), x_l(t)) = 1, and when d(p_i(t), x_l(t)) ≥ D, then ρ_1(p_i(t), x_l(t)) = 0, where D is a given distance threshold;
wherein p_j(t) represents the individual optimal solution of particle j at the t-th iteration, and d(p_i(t), p_j(t)) represents the Euclidean distance between the individual optimal solutions p_i(t) and p_j(t); p_i(e) represents the individual optimal solution of particle i at the e-th iteration, and ρ_2(p_i(t), p_i(e)) represents a second value function: when p_i(t) = p_i(e), then ρ_2(p_i(t), p_i(e)) = 1, and when p_i(t) ≠ p_i(e), then ρ_2(p_i(t), p_i(e)) = 0; h_i(t) represents the fitness function value of particle i at the t-th iteration, and h_max(t) and h_min(t) respectively represent the maximum and minimum fitness function values of the particles in the swarm at the t-th iteration;
The individual optimal solutions in the set P(t) are sorted from small to large according to the values of their local detection coefficients, and the first M(t) individual optimal solutions are selected as candidate local learning solutions; the selected M(t) candidate local learning solutions form a set P′(t), and each particle in the swarm takes the candidate local learning solution in P′(t) closest to it as its local learning solution, namely:
wherein p_r(t) represents the individual optimal solution of particle r at the t-th iteration, and d(x_i(t), p_r(t)) represents the Euclidean distance between the position x_i(t) and the individual optimal solution p_r(t).
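The selection procedure just described, sort the individual optimal solutions by local detection coefficient, keep the first M(t) as candidates, then assign each particle the candidate nearest its position, can be sketched as follows; the coefficient values `eps` are taken as given, since their expression is defined by the patent's formula:

```python
import math

def euclid(a, b):
    """Euclidean distance between two points given as sequences."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def local_learning_solutions(P, eps, X, M):
    """P   : individual optimal solutions p_i(t) of the swarm
    eps : local detection coefficient of each p_i(t) (computed elsewhere)
    X   : current positions x_i(t) of the particles
    M   : local classification number M(t)
    Returns the local learning solution p'_i(t) for every particle i."""
    # sort individual optimal solutions by coefficient, small to large
    order = sorted(range(len(P)), key=lambda i: eps[i])
    # the first M(t) solutions form the candidate set P'(t)
    candidates = [P[i] for i in order[:M]]
    # each particle learns from the candidate closest to its position
    return [min(candidates, key=lambda p: euclid(x, p)) for x in X]
```

Because M grows with the iteration number in the patent's scheme, the candidate set widens over time, trading early convergence speed for later local-search capability.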
The preferred embodiment adopts the particle swarm algorithm to optimize the initial weights and thresholds of the BP neural network used in the financial prediction module and improves the update mode of the traditional particle swarm. In the traditional update process, each particle learns locally from its own individual optimal solution; the preferred embodiment instead selects only a part of the better individual optimal solutions as the local learning solutions of the particles, defining the local classification number M(t) and a local detection coefficient for each individual optimal solution. Through the local detection coefficient, the probability that an individual optimal solution becomes a local learning solution is determined by jointly considering the density of particles in its local neighborhood, its stagnation count, and its fitness function value. When the particle density in the local neighborhood of an individual optimal solution is small, the search of that neighborhood is strengthened, i.e., the probability that the solution becomes a local learning solution is increased; when the stagnation count of an individual optimal solution is large, its position is more likely to be a local optimum, so the probability that it becomes a local learning solution is reduced; and when the fitness function value of an individual optimal solution is small, the solution is better, so the probability that it becomes a local learning solution is increased. The value of the local classification number increases with the iteration number: in the early stage of iteration, fewer representative individual optimal solutions are selected from the swarm as objects of local learning, which accelerates the convergence of the particle swarm algorithm, while in the later stage more representative individual optimal solutions are selected, which enhances the local search capability of the algorithm. In summary, in the update process of the particle swarm algorithm, the preferred embodiment introduces a local learning solution to replace the individual optimal solution in the traditional update formula as the object of local learning, and through the defined local classification number and local detection coefficients it selects a part of the better individual optimal solutions for the particles to learn from, thereby accelerating convergence while preserving the optimization capability of the algorithm.
Preferably, the fitness function h of the particle swarm algorithm is defined as:
In the formula, y_u represents the output value of the u-th training sample, o_u represents the target value of the u-th training sample, and M represents the number of training samples.
The smaller the value of the fitness function defined in the preferred embodiment, the better the optimization result of the particle.
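The expression of the fitness function h is given by the patent's formula. Based on the surrounding description (output values y_u, target values o_u, M training samples, smaller is better), a mean-squared-error form is a plausible sketch; it is shown here purely as an assumption:

```python
def fitness(outputs, targets):
    """Fitness of a candidate BP-network weight/threshold vector.

    outputs : y_u, the network output for each of the M training samples
    targets : o_u, the target value for each training sample

    Mean squared error is ASSUMED here; the patent defines the exact
    expression in its formula, but any smaller-is-better error measure
    over (y_u, o_u) fits the surrounding description.
    """
    M = len(outputs)
    return sum((y - o) ** 2 for y, o in zip(outputs, targets)) / M
```
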
Preferably, the expression of the inertia weight factor ω_i(t) of particle i at the t-th iteration is:
In the formula, ω_max and ω_min respectively represent the maximum and minimum inertia weight factors, t represents the current iteration number, T_max represents the maximum number of iterations, and δ_i(t) represents the inertia weight adjustment factor of particle i at the t-th iteration. Let x_i(t-2) denote the position of particle i at the (t-2)-th iteration and x_i(t-1) the position of particle i at the (t-1)-th iteration. Define α_i(t) as the advance attribute value of particle i at the t-th iteration and β_i(t) as the optimization attribute value of particle i at the t-th iteration, and
β_i(t) = ρ_5(p_i(t), p_i(t-1)), wherein d(x_i(t-1), g(t)) represents the Euclidean distance between the position x_i(t-1) and the global optimal solution g(t), d(x_i(t), g(t)) represents the Euclidean distance between the position x_i(t) and the global optimal solution g(t), d(x_i(t-1), x_i(t-2)) represents the Euclidean distance between the positions x_i(t-1) and x_i(t-2), and d(x_i(t), x_i(t-2)) represents the Euclidean distance between the positions x_i(t) and x_i(t-2); a third value function and a fourth value function are each defined piecewise over these distances, taking one value when the corresponding condition holds and the other value otherwise; p_i(t-1) represents the individual optimal solution of particle i at the (t-1)-th iteration, and ρ_5(p_i(t), p_i(t-1)) represents a fifth value function: when p_i(t) = p_i(t-1), then ρ_5(p_i(t), p_i(t-1)) = 1, and when p_i(t) ≠ p_i(t-1), then ρ_5(p_i(t), p_i(t-1)) = 0;
Let τ_i(t) denote the iteration number at or nearest before the t-th iteration at which the inertia weight adjustment factor of particle i was equal to -1; then the expression of δ_i(t) is:
wherein a sixth value function is defined with respect to a given threshold M(δ), taking one value when the corresponding condition holds and the other value otherwise.
In the preferred embodiment, an inertia weight adjustment factor is introduced into the inertia weight factor of the particle swarm algorithm. When a particle returns during the update process, the convergence speed of the algorithm is affected, and when the individual optimal solution of a particle does not change during updating, the particle may have fallen into a local optimum, affecting the optimization performance. Aiming at these two situations, the inertia weight adjustment factor defined in the preferred embodiment introduces the advance attribute value and the optimization attribute value of the particle in the current iteration. The advance attribute value measures the update route of the particle during iteration: compared with the fitness function value, the Euclidean distance more effectively reflects the relationship between particle positions, and by comparing the distances between the particle's positions over three consecutive iterations and the current global optimal solution, it is judged whether the particle advances towards the current global optimal solution after the update. When the position of the particle in the current iteration is farther from the current global optimal solution than its position in the previous iteration, and closer to its positions in earlier iterations, the particle has exhibited a return phenomenon during the update, and its advance attribute value is increased by 1; that is, the advance attribute value records the particle's returns during updating. The optimization attribute value measures the optimization performance of the particle: when the individual optimal solution of the particle does not change after an iterative update, the particle may have fallen into a local optimum, and its optimization attribute value is increased by 1; that is, the optimization attribute value records the number of times the particle may have fallen into a local optimum during iteration. In conclusion, through the advance attribute value and the optimization attribute value, the inertia weight adjustment factor records the number of returns of the particle and the number of times it falls into a local optimum during iterative updating; when the sum of these two counts is smaller than the given threshold, the inertia weight adjustment factor of the particle equals 0, i.e., the particle is iteratively updated according to the traditional inertia weight factor value.
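The exact expression of ω_i(t) is given by the patent's formula. As an illustrative sketch only, the following assumes a common linearly decreasing base weight between ω_max and ω_min and a threshold test on the sum of the advance and optimization attribute values, as described above; the adjustment applied once the threshold is reached is a placeholder, since the patent's expression is not reproduced here:

```python
def inertia_weight(t, T_max, alpha, beta, M_delta, w_max=0.9, w_min=0.4):
    """Inertia weight omega_i(t) with an adjustment factor delta_i(t).

    t, T_max : current and maximum iteration numbers
    alpha    : advance attribute value (counted returns of the particle)
    beta     : optimization attribute value (suspected local-optimum
               entrapments of the particle)
    M_delta  : the given threshold M(delta)

    The linearly decreasing base and the size of the adjustment are
    ASSUMPTIONS for illustration; only the threshold behavior (delta = 0,
    traditional weight, while alpha + beta stays below the threshold)
    is taken from the text.
    """
    base = w_max - (w_max - w_min) * t / T_max  # assumed base expression
    # below the threshold the adjustment factor is 0: traditional update
    delta = 0 if alpha + beta < M_delta else 1
    # placeholder adjustment: enlarge the weight to push the particle
    # out of its current region once returns/entrapments accumulate
    return base + delta * (w_max - base) * 0.5
```
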
Preferably, the data preprocessing module is configured to remove noise data from the financial time series. Let the financial time series to be processed be F; the financial data in the financial time series F are processed in sequence. Let f(k) denote the current financial data to be processed, namely the k-th financial data in the financial time series F, and let Δf(k) denote a given data threshold, where the value of Δf(k) is:
A reference data sequence F(k) corresponding to the financial data f(k) is determined according to the given data threshold Δf(k). Let the reference data sequence determined according to Δf(k) be F(k) = {f(k-l+1), f(k-l+2), ..., f(k-1)}, where f(k-l+1), f(k-l+2) and f(k-1) respectively represent the (k-l+1)-th, (k-l+2)-th and (k-1)-th financial data in the financial time series F, and (l-1) represents the number of financial data in the reference data sequence F(k);
Let f(a) and f(b) denote any two financial data in the reference data sequence F(k), where f(a) is the a-th financial data and f(b) the b-th financial data in the financial time series F, with a ≠ b; then f(a) and f(b) satisfy: |f(a) - f(b)| ≤ Δf(k).
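A minimal sketch of building the reference data sequence follows, assuming it is the longest contiguous run of data immediately preceding f(k) whose pairwise differences stay within the threshold; only the pairwise-threshold property and the contiguous layout are fixed by the text, so the helper name and index handling are illustrative:

```python
def reference_sequence(F, k, df_k):
    """Reference data sequence F(k) for the financial data F[k].

    Walks backwards from F[k-1], extending the window while every pair
    of values inside it satisfies |f(a) - f(b)| <= df_k.
    """
    seq = []
    for j in range(k - 1, -1, -1):
        candidate = [F[j]] + seq
        # the pairwise condition reduces to max - min <= df_k
        if max(candidate) - min(candidate) <= df_k:
            seq = candidate
        else:
            break
    return seq
```
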
Let f̄(k) represent the mean of the financial data in the reference data sequence F(k). Let F′(k) represent the first reference data subsequence of the financial data f(k), F′(k) = {f(k-m′), f(k-m′+1), ..., f(k)}, where f(k-m′) represents the (k-m′)-th financial data in the financial time series F and f(k-m′+1) represents the (k-m′+1)-th financial data in the financial time series F; the value of m′ is determined in the following manner:
(1) When the financial data f(k) satisfies f(k) ≥ f̄(k), the value of m′ is determined in the following manner: θ(k) represents the sequence detection function corresponding to the case where the financial data f(k) is greater than or equal to the mean f̄(k), f(k-s) represents the (k-s)-th financial data in the financial time series F, and a first comparison function corresponding to the financial data f(k-s) is defined piecewise, taking one value when the corresponding condition holds and the other value otherwise; the largest value of m that makes the sequence detection function θ(k) equal to 1 is selected as m′.
(2) When the financial data f(k) satisfies f(k) < f̄(k), the value of m′ is determined in the following manner: the sequence detection function corresponds to the case where the financial data f(k) is less than the mean f̄(k), and a second comparison function corresponding to the financial data f(k-s) is defined piecewise, taking one value when the corresponding condition holds and the other value otherwise; the largest value of m that makes the sequence detection function equal to 1 is recorded as m′.
Let F″(k) denote the second reference data subsequence of the financial data f(k); its elements are financial data taken from the financial time series F at the indices given by the corresponding expressions. The first detection coefficient of the financial data f(k) in the first reference data subsequence F′(k) and the second reference data subsequence F″(k) is defined as Y_1(k), and the expression of Y_1(k) is:
wherein Δf(k-m′) represents the standard deviation of the financial data f(k-m′) in the first reference data subsequence F′(k), Δf(k) represents the standard deviation of the financial data f(k) in the first reference data subsequence F′(k), the corresponding terms represent the standard deviations of the corresponding financial data in the second reference data subsequence F″(k), and ⌈·⌉ represents rounding up;
The second detection coefficient of the financial data f(k) in the first reference data subsequence F′(k) and the second reference data subsequence F″(k) is defined as Y_2(k), and the expression of Y_2(k) is:
In the formula, f̄′(k) represents the mean of the financial data in the first reference data subsequence F′(k), and f̄″(k) represents the mean of the financial data in the second reference data subsequence F″(k);
The anomaly detection function of the financial data f(k) in the first reference data subsequence F′(k) and the second reference data subsequence F″(k) is defined as Y(k), and the expression of Y(k) is:
When the value of the anomaly detection function satisfies Y(k) ≤ 0, the financial data f(k) is judged to be normal financial data, and its value remains unchanged; when Y(k) > 0, the financial data f(k) is judged to be abnormal data, and f(k) is reset according to the corresponding expression, where f(k-c) represents the (k-c)-th financial data in the financial time series F.
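The decision rule above can be sketched as follows; the expression of Y(k) (which combines the first and second detection coefficients) and the corrected value built from nearby data such as f(k-c) are both given by the patent's formulas, so they are treated as inputs here and the function name is illustrative:

```python
def denoise_value(f_k, Y_k, replacement):
    """Apply the anomaly decision to one financial datum.

    f_k         : the financial data f(k) under test
    Y_k         : value of the anomaly detection function Y(k),
                  computed elsewhere from the two detection coefficients
    replacement : corrected value built from nearby data such as f(k-c);
                  its exact formula is defined by the patent
    """
    if Y_k <= 0:          # normal financial data: keep the value unchanged
        return f_k
    return replacement    # abnormal data: substitute the corrected value
```
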
The preferred embodiment removes noise data from the financial time series by detecting the financial data in sequence and judging whether each is noise data. When a financial datum is detected, a given data threshold is used to determine its reference data sequence, in which the distance between any two financial data is smaller than or equal to the data threshold, ensuring the similarity of the financial data in the reference data sequence. According to the relationship between the financial datum to be detected and the mean of the financial data in the reference data sequence, part of the financial data in the reference data sequence together with the datum to be detected form its first reference data subsequence, ensuring the uniformity of the trend of that subsequence; part of the financial data in the middle of the first reference data subsequence is then selected to form the second reference data subsequence of the datum to be detected. When the financial datum to be detected is normal data, the first and second reference data subsequences thus determined have similar trends. According to this characteristic, a first and a second detection coefficient of the financial datum in the two subsequences are defined: the first detection coefficient judges the similarity of the trends of the two subsequences by comparing the standard deviation of the initial financial data of the first subsequence with that of the second subsequence, and the standard deviation of the ending financial data of the first subsequence (namely the standard deviation of the datum to be detected) with that of the second subsequence; the second detection coefficient judges the similarity of the trends by comparing the mean of the financial data in the first subsequence with the mean of the financial data in the second subsequence. An anomaly detection function corresponding to the financial datum to be detected is then defined, which compares the trend similarity between the first and second reference data subsequences through the first and second detection coefficients, thereby judging whether the datum is noise data. Considering that the trend similarity between the two subsequences decreases as the head and tail data of the second subsequence lie farther from the head and tail financial data of the first subsequence, the preferred embodiment introduces a sine-form correction coefficient into the anomaly detection function to correct the first detection coefficient, making the anomaly detection more flexible and thereby effectively improving the detection precision for noise data.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit its protection scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from their spirit and scope.